Coal
Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen. Coal is a type of fossil fuel, formed when dead plant matter decays into peat and is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times.
Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal.
The extraction and use of coal causes premature death and illness. The use of coal damages the environment, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide were emitted by burning coal in 2020, which is 40% of the total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of the worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020. Global coal use was 8.3 billion tonnes in 2022, and global coal demand is set to remain at record levels in 2023. To meet the Paris Agreement target of keeping global warming below 2 °C (3.6 °F), coal use needs to halve from 2020 to 2030, and "phasing down" coal was agreed upon in the Glasgow Climate Pact.
The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia.
The word originally took the form col in Old English, from Proto-Germanic *kula(n), which in turn is hypothesized to come from the Proto-Indo-European root *g(e)u-lo- "live coal". Germanic cognates include the Old Frisian kole, Middle Dutch cole, Dutch kool, Old High German chol, German Kohle and Old Norse kol, and the Irish word gual is also a cognate via the Indo-European root.
Coal is composed of macerals, minerals and water. Fossils and amber may be found in coal.
The conversion of dead vegetation into coal is called coalification. At various times in the geologic past, the Earth had dense forests in low-lying wetland areas. In these wetlands, the process of coalification began when dead plant matter was protected from biodegradation and oxidation, usually by mud or acidic water, and was converted into peat. This trapped the carbon in immense peat bogs that were eventually deeply buried by sediments. Then, over millions of years, the heat and pressure of deep burial caused the loss of water, methane and carbon dioxide and increased the proportion of carbon. The grade of coal produced depended on the maximum pressure and temperature reached, with lignite (also called "brown coal") produced under relatively mild conditions, and sub-bituminous coal, bituminous coal, or anthracite coal (also called "hard coal" or "black coal") produced in turn with increasing temperature and pressure.
Of the factors involved in coalification, temperature is much more important than either pressure or time of burial. Sub-bituminous coal can form at temperatures as low as 35 to 80 °C (95 to 176 °F) while anthracite requires a temperature of at least 180 to 245 °C (356 to 473 °F).
Although coal is known from most geologic periods, 90% of all coal beds were deposited in the Carboniferous and Permian periods, which represent just 2% of the Earth's geologic history. Paradoxically, this was during the Late Paleozoic icehouse, a time of global glaciation. However, the drop in global sea level accompanying the glaciation exposed continental shelves that had previously been submerged, and to these were added wide river deltas produced by increased erosion due to the drop in base level. These widespread areas of wetlands provided ideal conditions for coal formation. The rapid formation of coal ended with the coal gap in the Permian–Triassic extinction event, where coal is rare.
Favorable geography alone does not explain the extensive Carboniferous coal beds. Other factors contributing to rapid coal deposition were high oxygen levels, above 30%, that promoted intense wildfires and formation of charcoal that was all but indigestible by decomposing organisms; high carbon dioxide levels that promoted plant growth; and the nature of Carboniferous forests, which included lycophyte trees whose determinate growth meant that carbon was not tied up in heartwood of living trees for long periods.
One theory suggested that about 360 million years ago, some plants evolved the ability to produce lignin, a complex polymer that made their cellulose stems much harder and more woody. The ability to produce lignin led to the evolution of the first trees. But bacteria and fungi did not immediately evolve the ability to decompose lignin, so the wood did not fully decay but became buried under sediment, eventually turning into coal. About 300 million years ago, mushrooms and other fungi developed this ability, ending the main coal-formation period of Earth's history. Although some authors pointed to evidence of lignin degradation during the Carboniferous and suggested that climatic and tectonic factors were a more plausible explanation, reconstruction of ancestral enzymes by phylogenetic analysis corroborated the hypothesis that lignin-degrading enzymes appeared in fungi approximately 200 million years ago.
One likely tectonic factor was the Central Pangean Mountains, an enormous range running along the equator that reached its greatest elevation near this time. Climate modeling suggests that the Central Pangean Mountains contributed to the deposition of vast quantities of coal in the late Carboniferous. The mountains created an area of year-round heavy precipitation, with none of the dry season typical of a monsoon climate. Such year-round moisture is necessary for the preservation of peat in coal swamps.
Coal is known from Precambrian strata, which predate land plants. This coal is presumed to have originated from residues of algae.
Sometimes coal seams (also known as coal beds) are interbedded with other sediments in a cyclothem. Cyclothems are thought to have their origin in glacial cycles that produced fluctuations in sea level, which alternately exposed and then flooded large areas of continental shelf.
The woody tissue of plants is composed mainly of cellulose, hemicellulose, and lignin. Modern peat is mostly lignin, with a content of cellulose and hemicellulose ranging from 5% to 40%. Various other organic compounds, such as waxes and nitrogen- and sulfur-containing compounds, are also present. Lignin has a weight composition of about 54% carbon, 6% hydrogen, and 30% oxygen, while cellulose has a weight composition of about 44% carbon, 6% hydrogen, and 49% oxygen. Bituminous coal has a composition of about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. This implies that chemical processes during coalification must remove most of the oxygen and much of the hydrogen, leaving carbon, a process called carbonization.
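The scale of this change is visible in the oxygen-to-carbon weight ratios implied by these figures: roughly 49/44 ≈ 1.1 for cellulose and 30/54 ≈ 0.56 for lignin, but only 6.7/84.4 ≈ 0.08 for bituminous coal.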
Carbonization proceeds primarily by dehydration, decarboxylation, and demethanation. Dehydration removes water molecules from the maturing coal via reactions such as
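2 R–OH → R–O–R + H2O
R–CH2–CH(OH)–R → R–CH=CH–R + H2O
(representative examples of such reactions; the specific equations depend on the precursor molecule)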
Decarboxylation removes carbon dioxide from the maturing coal and proceeds by reactions such as
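R–COOH → R–H + CO2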
while demethanation proceeds by reactions such as
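2 R–CH3 → R–CH2–R + CH4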
In each of these formulas, R represents the remainder of a cellulose or lignin molecule to which the reacting groups are attached.
Dehydration and decarboxylation take place early in coalification, while demethanation begins only after the coal has already reached bituminous rank. The effect of decarboxylation is to reduce the percentage of oxygen, while demethanation reduces the percentage of hydrogen. Dehydration does both, and (together with demethanation) reduces the saturation of the carbon backbone (increasing the number of double bonds between carbon atoms).
As carbonization proceeds, aliphatic compounds (carbon compounds characterized by chains of carbon atoms) are replaced by aromatic compounds (carbon compounds characterized by rings of carbon atoms) and aromatic rings begin to fuse into polyaromatic compounds (linked rings of carbon atoms). The structure increasingly resembles graphene, the structural element of graphite.
Chemical changes are accompanied by physical changes, such as decrease in average pore size. The macerals (organic particles) of lignite are composed of huminite, which is earthy in appearance. As the coal matures to sub-bituminous coal, huminite begins to be replaced by vitreous (shiny) vitrinite. Maturation of bituminous coal is characterized by bitumenization, in which part of the coal is converted to bitumen, a hydrocarbon-rich gel. Maturation to anthracite is characterized by debitumenization (from demethanation) and the increasing tendency of the anthracite to break with a conchoidal fracture, similar to the way thick glass breaks.
As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade or rank increases successively from peat through lignite, sub-bituminous coal, and bituminous coal to anthracite.
There are several international standards for coal. The classification of coal is generally based on the content of volatiles. However, the most important distinction is between thermal coal (also known as steam coal), which is burnt to generate electricity via steam, and metallurgical coal (also known as coking coal), which is burnt at high temperature to make steel.
Hilt's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. For example, some of the coal seams of the Madrid, New Mexico coal field were partially converted to anthracite by contact metamorphism from an igneous sill while the remainder of the seams remained as bituminous coal.
The earliest recognized use is from the Shenyang area of China where by 4000 BC Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones ... which burn like logs", and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On Stones (Lap. 16) by the Greek scientist Theophrastus (c. 371–287 BC):
Among the materials that are dug because they are useful, those known as anthrakes [coals] are made of earth, and, once set on fire, they burn like charcoal [anthrakes]. They are found in Liguria ... and in Elis as one approaches Olympia by the mountain road; and they are used by those who work in metals.
Outcrop coal was used in Britain during the Bronze Age (3000–2000 BC), where it formed part of funeral pyres. In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD". Evidence of trade in coal, dated to about AD 200, has been found at the Roman settlement at Heronbridge, near Chester; and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal's use for iron-working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore.
No evidence exists of coal being of great importance in Britain before about AD 1000, the High Middle Ages. Coal came to be referred to as "seacoal" in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257–1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime-burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, still exist.
These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal", because it came from mines.
Cooking and home heating with coal (in addition to firewood or instead of it) has been done in various times and places throughout human history, especially in times and places where ground-surface coal was available and firewood was scarce, but a widespread reliance on coal for home hearths probably never existed until such a switch in fuels happened in London in the late sixteenth and early seventeenth centuries. Historian Ruth Goodman has traced the socioeconomic effects of that switch and its later spread throughout Britain and suggested that its importance in shaping the industrial adoption of coal has been previously underappreciated.
The development of the Industrial Revolution led to the large-scale use of coal, as the steam engine took over from the water wheel. In 1700, five-sixths of the world's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947 there were some 750,000 miners in Britain, but the last deep coal mine in the UK closed in 2015.
A grade between bituminous coal and anthracite was once known as "steam coal" as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as "sea coal" in the United States. Small "steam coal", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating.
Coal played an important role in industry in the 19th and 20th century. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity.
Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many households in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel.
The composition of coal is reported either as a proximate analysis (moisture, volatile matter, fixed carbon, and ash) or an ultimate analysis (ash, carbon, hydrogen, nitrogen, oxygen, and sulfur). The "volatile matter" does not exist by itself (except for some adsorbed methane) but designates the volatile compounds that are produced and driven off by heating the coal. A typical bituminous coal may have an ultimate analysis on a dry, ash-free basis of 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis.
The composition of ash, given in terms of oxides, varies widely; the major constituents are typically silica (SiO2), alumina (Al2O3), iron oxide (Fe2O3), and lime (CaO). Minor components typically include magnesia, titania, alkali oxides, and sulfur and phosphorus oxides.
Coke is a solid carbonaceous residue derived from coking coal (a low-ash, low-sulfur bituminous coal, also known as metallurgical coal), which is used in manufacturing steel and other iron products. Coke is made from coking coal by baking in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron.
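A representative overall reaction for this reduction step is Fe2O3 + 3 CO → 2 Fe + 3 CO2.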
Waste carbon dioxide is also produced (2 Fe2O3 + 3 C → 4 Fe + 3 CO2), together with pig iron, which is too rich in dissolved carbon and so must be treated further to make steel.
Coking coal should be low in ash, sulfur, and phosphorus, so that these do not migrate to the metal. The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some coke-making processes produce byproducts, including coal tar, ammonia, light oils, and coal gas.
Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.
Finely ground bituminous coal, known in this application as sea coal, is a constituent of foundry sand. While the molten metal is in the mould, the coal burns slowly, releasing reducing gases at pressure, and so preventing the metal from penetrating the pores of the sand. It is also contained in 'mould wash', a paste or liquid with the same function applied to the mould before casting. Sea coal can be mixed with the clay lining (the "bod") used for the bottom of a cupola furnace. When heated, the coal decomposes and the bod becomes slightly friable, easing the process of breaking open holes for tapping the molten metal.
Scrap steel can be recycled in an electric arc furnace; and an alternative to making iron by smelting is direct reduced iron, where any carbonaceous fuel can be used to make sponge or pelletised iron. To lessen carbon dioxide emissions hydrogen can be used as the reducing agent and biomass or waste as the source of carbon. Historically, charcoal has been used as an alternative to coke in a blast furnace, with the resultant iron being known as charcoal iron.
Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal.
During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO), while also releasing hydrogen gas (H2). Gasification has also been carried out in situ in underground coal seams, and was historically used to make town gas, which was piped to customers to burn for illumination, heating, and cooking.
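Representative overall reactions, simplifying the many simultaneous reactions in a real gasifier, are partial oxidation and the steam–carbon reaction:
2 C + O2 → 2 CO
C + H2O → CO + H2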
If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated:
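CO + H2O → CO2 + H2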
Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using CCS would emit slightly less than the oil process but at a high cost. State-owned China Energy Investment runs a coal liquefaction plant and plans to build two more.
Coal liquefaction may also refer to the cargo hazard when shipping coal.
Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important.
Because the slate of chemical products that can be made via coal gasification can in general also use feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal tended to increase when oil and natural gas prices were higher and during periods of high global economic growth that might have strained oil and gas production.
Coal-to-chemicals processes require substantial quantities of water. Much coal-to-chemicals production takes place in China, where coal-dependent provinces such as Shanxi are struggling to control the resulting pollution.
The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with a 40% efficiency, it takes an estimated 325 kg (717 lb) of coal to power a 100 W lightbulb for one year.
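This estimate follows from the figures given: a 100 W lightbulb consumes 100 W × 8,760 h ≈ 876 kWh of electricity per year; at 40% efficiency this requires 876 / 0.40 ≈ 2,190 kWh of thermal energy, and 2,190 kWh ÷ 6.7 kWh/kg ≈ 327 kg of coal, matching the estimate above to within rounding.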
In 2017, 27.6% of world energy was supplied by coal, and Asia used almost three-quarters of it.
Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers.
Coal burnt as a solid fuel in coal power stations to generate electricity is called thermal coal. Coal is also used to produce very high temperatures through combustion. Early deaths due to air pollution have been estimated at 200 per GW-year; however, they may be higher around power plants where scrubbers are not used, or lower if the plants are far from cities. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and electricity from lower carbon sources.
When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler (see also Pulverized coal-fired boiler). The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, turbine technology (e.g. supercritical steam generator) and the age of the plant.
A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity (just as natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator, which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants; however, the technology for carbon capture and storage (CCS) after gasification and before burning has so far proved too expensive to use with coal. Other ways to use coal are as coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However, these are not widely used because they are unprofitable.
In 2017, 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018 global installed capacity was 2 TW (of which 1 TW was in China), 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal; but China alone generates more than half of the world's coal-generated electricity.
Maximum use of coal was reached in 2013. In 2018 the capacity factor of coal-fired power stations averaged 51%; that is, they operated for about half of their available operating hours.
About 8000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. As of 2018 just over half is from underground mines. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining related deaths in China. Most coal mined is thermal coal (also called steam coal as it is used to make steam to generate electricity) but metallurgical coal (also called "metcoal" or "coking coal" as it is used to make coke to make iron) accounts for 10% to 15% of global coal use.
China mines almost half the world's coal, followed by India with about a tenth. Australia accounts for about a third of world coal exports, followed by Indonesia and Russia, while the largest importers are Japan and India. Russia is increasingly orienting its coal exports from Europe to Asia as Europe transitions to renewable energy and subjects Russia to sanctions over its invasion of Ukraine.
The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management.
In some countries, new onshore wind or solar generation already costs less than coal power from existing plants. However, for China this is forecast for the early 2020s and for southeast Asia not until the late 2020s. In India, building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables.
Of the countries which produce coal, China mines by far the most, almost half the world's coal, followed by India with less than 10%. China is also by far the largest consumer of coal, so international market trends depend on Chinese energy policy. Although the government effort to reduce air pollution in China means that the global long-term trend is to burn less coal, the short- and medium-term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries.
(Table: countries with annual coal production above 300 million tonnes. Table: countries with annual coal consumption above 500 million tonnes; shares based on data expressed in tonnes of oil equivalent.)
Exporters are at risk of a reduction in import demand from India and China.
The use of coal as fuel causes ill health and deaths. Mining and processing of coal causes air and water pollution. Coal-powered plants emit nitrogen oxides, sulfur dioxide, particulate pollution and heavy metals, which adversely affect human health. Extraction of coal bed methane is important for avoiding mining accidents, as methane accumulating in mines can cause explosions.
The deadly London smog of 1952 was caused primarily by the heavy use of coal. Globally coal is estimated to cause 800,000 premature deaths every year, mostly in India and China.
Burning coal is a major contributor to sulfur dioxide emissions, which creates PM2.5 particulates, the most dangerous form of air pollution.
Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, and lung cancer.
Annual health costs in Europe from use of coal to generate electricity are estimated at up to €43 billion.
In China, improvements to air quality and human health would increase with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal; there would also be a net economic benefit.
A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, "a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period."
Breathing in coal dust causes coalworker's pneumoconiosis or "black lung", so called because the coal dust literally turns the lungs black. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust.
Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, which contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium.
Around 10% of coal is ash. Coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics.
Coal mining, coal combustion wastes and flue gas are causing major environmental damage.
Water systems are affected by coal mining. For example, mining affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants contribute to the depletion of water resources.
One of the earliest known impacts of coal on the water cycle was acid rain. In 2014, approximately 100 Tg of sulfur, in the form of sulfur dioxide (SO2), was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4, which scatters solar radiation, hence its increase in the atmosphere exerts a cooling effect on the climate. This beneficially masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems.
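The overall conversion can be written as 2 SO2 + O2 + 2 H2O → 2 H2SO4, although the atmospheric chemistry proceeds through several intermediate steps.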
Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long-lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668 and is still burning in the 21st century.
The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts which if discharged to land, air or waterways can pollute the environment. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia was discharged to the marine environment.
Emission intensity is the greenhouse gas emitted over the life of a generator per unit of electricity generated. The emission intensity of coal power stations is high, as they emit around 1000 g of CO2eq for each kWh generated, while natural gas is medium-emission intensity at around 500 g CO2eq per kWh. The emission intensity of coal varies with type and generator technology and exceeds 1200 g per kWh in some countries.
Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels. In Centralia, Pennsylvania (a borough located in the Coal Region of the U.S.), an exposed vein of anthracite ignited in 1962 due to a trash fire in the borough landfill, located in an abandoned anthracite strip mine pit. Attempts to extinguish the fire were unsuccessful, and it continues to burn underground to this day. The Australian Burning Mountain was originally believed to be a volcano, but the smoke and ash come from a coal fire that has been burning for some 6,000 years.
At Kuh i Malik in Yagnob Valley, Tajikistan, coal deposits have been burning for thousands of years, creating vast underground labyrinths full of unique minerals, some of them very beautiful.
The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin in Wyoming and in western North Dakota is called porcelanite, which resembles the coal-burning waste "clinker" or volcanic "scoria". Clinker is rock that has been fused by the natural burning of coal. In the Powder River Basin approximately 27 to 54 billion tons of coal have burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark Expedition as well as explorers and settlers in the area.
The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, 40% of the total fossil fuel emissions, and more than a quarter of total emissions. Coal mining can emit methane, another greenhouse gas.
In 2016 world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, which is double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C hundreds, or possibly thousands, of coal-fired power plants will need to be retired early.
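As a rough cross-check against figures given earlier in this article: burning 1 kg of coal that is about 84% carbon releases about 0.84 × (44/12) ≈ 3.1 kg of CO2, while at 24 MJ/kg (6.7 kWh/kg) and 40% plant efficiency that kilogram yields about 2.7 kWh of electricity, i.e. roughly 1.1 to 1.2 kg of CO2 per kWh, on the order of a tonne per megawatt-hour.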
Local pollution standards include GB13223-2011 (China), India, the Industrial Emissions Directive (EU) and the Clean Air Act (United States).
Satellite monitoring is now used to crosscheck national data; for example, Sentinel-5 Precursor has shown that Chinese control of SO2 has only been partially successful. It has also revealed that low use of technology such as selective catalytic reduction (SCR) has resulted in high NO2 emissions in South Africa and India.
A few integrated gasification combined cycle (IGCC) coal-fired power plants have been built. Although they burn coal more efficiently and therefore emit less pollution, the technology has not generally proved economically viable for coal, except possibly in Japan, although this is controversial.
Carbon capture and storage, although still intensively researched and considered economically viable for some uses other than coal, has been tested at the Petra Nova and Boundary Dam coal-fired power plants and has been found to be technically feasible but not economically viable for use with coal, due to reductions in the cost of solar PV technology.
In 2018 US$80 billion was invested in coal supply but almost all for sustaining production levels rather than opening new mines. In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy.
China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However, two-fifths of China's coal power stations are estimated to be loss-making.
Air pollution from coal storage and handling costs the US almost US$200 for every extra ton stored, due to PM2.5. Coal pollution costs Europe €43 billion each year. Measures to cut air pollution benefit individuals financially and the economies of countries such as China.
Subsidies for coal in 2021 have been estimated at US$19 billion, not including electricity subsidies, and are expected to rise in 2022. As of 2019 G20 countries provide at least US$63.9 billion of government support per year for the production of coal, including coal-fired power: many subsidies are impossible to quantify but they include US$27.6 billion in domestic and international public finance, US$15.4 billion in fiscal support, and US$20.9 billion in state-owned enterprise (SOE) investments per year. In the EU state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021.
Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts.
Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for failing to cut its emissions at a faster rate, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation.
Opposition to coal pollution was one of the main reasons the modern environmental movement started in the 19th century.
In order to meet global climate goals and provide power to those who do not currently have it, coal power must be reduced from nearly 10,000 TWh to less than 2,000 TWh by 2040. Phasing out coal has short-term health and environmental benefits which exceed the costs, but some countries still favor coal, and there is much disagreement about how quickly it should be phased out. However, many countries, such as those in the Powering Past Coal Alliance, have already transitioned away from coal or are doing so; the largest transition announced so far is in Germany, which is due to shut down its last coal-fired power station between 2035 and 2038. Some countries use the idea of a "just transition", for example to use some of the benefits of transition to provide early pensions for coal miners. However, low-lying Pacific Islands are concerned that the transition is not fast enough and that they will be inundated by sea level rise, so they have called for OECD countries to completely phase out coal by 2030 and other countries by 2040. In 2020, although China built some plants, globally more coal power was retired than built; the UN Secretary General has also said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040. Phasing down coal was agreed at COP26 in the Glasgow Climate Pact. Vietnam is among the few coal-dependent developing countries that pledged to phase out unabated coal power by the 2040s or as early as possible thereafter.
Peak coal is the peak consumption or production of coal by a human community. The peak of coal's share in the global energy mix was in 2008, when coal accounted for 30% of global energy production. Coal consumption is declining in the United States and Europe, as well as in developed economies in Asia. However, consumption is still increasing in India and Southeast Asia, which compensates for the falls in other regions. Global coal consumption reached an all-time high in 2022 at 8.3 billion tonnes. Global coal demand is set to remain at record levels in 2023.
Coal-fired generation puts out about twice as much carbon dioxide—around a tonne for every megawatt hour generated—as electricity generated by burning natural gas at 500 kg of greenhouse gas per megawatt hour. In addition to generating electricity, natural gas is also popular in some countries for heating and as an automotive fuel.
The use of coal in the United Kingdom declined as a result of the development of North Sea oil and the subsequent dash for gas during the 1990s. In Canada some coal power plants, such as the Hearn Generating Station, switched from coal to natural gas. In 2017, coal power in the US provided 30% of the electricity, down from approximately 49% in 2008, due to plentiful supplies of low cost natural gas obtained by hydraulic fracturing of tight shale formations.
Some coal-mining regions are highly dependent on coal.
Some coal miners are concerned their jobs may be lost in the transition. A just transition from coal is supported by the European Bank for Reconstruction and Development.
The white rot fungus Trametes versicolor can grow on and metabolize naturally occurring coal. The bacterium Diplococcus has been found to degrade coal, raising its temperature.
Coal is the official state mineral of Kentucky, and the official state rock of Utah and West Virginia. These US states have a historic link to coal mining.
Some cultures hold that children who misbehave will receive only a lump of coal in their Christmas stockings from Santa Claus, instead of presents.
It is also customary and considered lucky in Scotland and the North of England to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.
"title": "History"
},
{
"paragraph_id": 33,
"text": "A grade between bituminous coal and anthracite was once known as \"steam coal\" as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as \"sea coal\" in the United States. Small \"steam coal\", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Coal played an important role in industry in the 19th and 20th century. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many homes in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The composition of coal is reported either as a proximate analysis (moisture, volatile matter, fixed carbon, and ash) or an ultimate analysis (ash, carbon, hydrogen, nitrogen, oxygen, and sulfur). The \"volatile matter\" does not exist by itself (except for some adsorbed methane) but designates the volatile compounds that are produced and driven off by heating the coal. A typical bituminous coal may have an ultimate analysis on a dry, ash-free basis of 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis.",
"title": "Chemistry"
},
{
"paragraph_id": 37,
"text": "The composition of ash, given in terms of oxides, varies:",
"title": "Chemistry"
},
{
"paragraph_id": 38,
"text": "Other minor components include:",
"title": "Chemistry"
},
{
"paragraph_id": 39,
"text": "Coke is a solid carbonaceous residue derived from coking coal (a low-ash, low-sulfur bituminous coal, also known as metallurgical coal), which is used in manufacturing steel and other iron products. Coke is made from coking coal by baking in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron.",
"title": "Chemistry"
},
{
"paragraph_id": 40,
"text": "Waste carbon dioxide is also produced ( 2 Fe 2 O 3 + 3 C ⟶ 4 Fe + 3 CO 2 {\\displaystyle {\\ce {2Fe2O3 + 3C -> 4Fe + 3CO2}}} ) together with pig iron, which is too rich in dissolved carbon so must be treated further to make steel.",
"title": "Chemistry"
},
{
"paragraph_id": 41,
"text": "Coking coal should be low in ash, sulfur, and phosphorus, so that these do not migrate to the metal. The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some coke-making processes produce byproducts, including coal tar, ammonia, light oils, and coal gas.",
"title": "Chemistry"
},
{
"paragraph_id": 42,
"text": "Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.",
"title": "Chemistry"
},
{
"paragraph_id": 43,
"text": "Finely ground bituminous coal, known in this application as sea coal, is a constituent of foundry sand. While the molten metal is in the mould, the coal burns slowly, releasing reducing gases at pressure, and so preventing the metal from penetrating the pores of the sand. It is also contained in 'mould wash', a paste or liquid with the same function applied to the mould before casting. Sea coal can be mixed with the clay lining (the \"bod\") used for the bottom of a cupola furnace. When heated, the coal decomposes and the bod becomes slightly friable, easing the process of breaking open holes for tapping the molten metal.",
"title": "Chemistry"
},
{
"paragraph_id": 44,
"text": "Scrap steel can be recycled in an electric arc furnace; and an alternative to making iron by smelting is direct reduced iron, where any carbonaceous fuel can be used to make sponge or pelletised iron. To lessen carbon dioxide emissions hydrogen can be used as the reducing agent and biomass or waste as the source of carbon. Historically, charcoal has been used as an alternative to coke in a blast furnace, with the resultant iron being known as charcoal iron.",
"title": "Chemistry"
},
{
"paragraph_id": 45,
"text": "Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal.",
"title": "Chemistry"
},
{
"paragraph_id": 46,
"text": "During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO), while also releasing hydrogen gas (H2). This used to be done in underground coal mines, and also to make town gas, which was piped to customers to burn for illumination, heating, and cooking.",
"title": "Chemistry"
},
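A minimal sketch of the main gasification chemistry, as a simplification (real gasifiers involve several competing reactions whose balance depends on temperature and pressure): the oxygen partially oxidizes the carbon, while the steam-carbon reaction yields further carbon monoxide together with the hydrogen:

$$\mathrm{C + \tfrac{1}{2}O_2 \longrightarrow CO}$$
$$\mathrm{C + H_2O \longrightarrow CO + H_2}$$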
{
"paragraph_id": 47,
"text": "If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated:",
"title": "Chemistry"
},
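For reference, the water gas shift reaction mentioned above is the standard equilibrium in which carbon monoxide and steam react to give carbon dioxide and additional hydrogen:

$$\mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2}$$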
{
"paragraph_id": 48,
"text": "Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using CCS would emit slightly less than the oil process but at a high cost. State owned China Energy Investment runs a coal liquefaction plant and plans to build 2 more.",
"title": "Chemistry"
},
{
"paragraph_id": 49,
"text": "Coal liquefaction may also refer to the cargo hazard when shipping coal.",
"title": "Chemistry"
},
{
"paragraph_id": 50,
"text": "Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important.",
"title": "Chemistry"
},
{
"paragraph_id": 51,
"text": "Because the slate of chemical products that can be made via coal gasification can in general also use feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal tended to increase for higher oil and natural gas prices and during periods of high global economic growth that might have strained oil and gas production.",
"title": "Chemistry"
},
{
"paragraph_id": 52,
"text": "Coal to chemical processes require substantial quantities of water. Much coal to chemical production is in China where coal dependent provinces such as Shanxi are struggling to control its pollution.",
"title": "Chemistry"
},
{
"paragraph_id": 53,
"text": "The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with a 40% efficiency, it takes an estimated 325 kg (717 lb) of coal to power a 100 W lightbulb for one year.",
"title": "Electricity generation"
},
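As a rough check of that figure, using the rounded values above (the small discrepancy comes from rounding): a 100 W bulb running for a year consumes 876 kWh of electricity, which at 40% plant efficiency requires 7,884 MJ of thermal energy, or about 330 kg of coal at 24 MJ/kg:

$$\frac{0.1\ \text{kW} \times 8760\ \text{h}}{0.40} \times \frac{3.6\ \text{MJ/kWh}}{24\ \text{MJ/kg}} \approx 329\ \text{kg}$$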
{
"paragraph_id": 54,
"text": "27.6% of world energy was supplied by coal in 2017 and Asia used almost three-quarters of it.",
"title": "Electricity generation"
},
{
"paragraph_id": 55,
"text": "Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers.",
"title": "Electricity generation"
},
{
"paragraph_id": 56,
"text": "Coal burnt as a solid fuel in coal power stations to generate electricity is called thermal coal. Coal is also used to produce very high temperatures through combustion. Early deaths due to air pollution have been estimated at 200 per GW-year, however they may be higher around power plants where scrubbers are not used or lower if they are far from cities. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and electricity from lower carbon sources.",
"title": "Electricity generation"
},
{
"paragraph_id": 57,
"text": "When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler (see also Pulverized coal-fired boiler). The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, turbine technology (e.g. supercritical steam generator) and the age of the plant.",
"title": "Electricity generation"
},
{
"paragraph_id": 58,
"text": "A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity (just like natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants; however the technology for carbon capture and storage (CCS) after gasification and before burning has so far proved to be too expensive to use with coal. Other ways to use coal are as coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However these are not widely used due to lack of profit.",
"title": "Electricity generation"
},
{
"paragraph_id": 59,
"text": "In 2017 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018 global installed capacity was 2TW (of which 1TW is in China) which was 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal; but China alone generates more than half of the world's coal-generated electricity.",
"title": "Electricity generation"
},
{
"paragraph_id": 60,
"text": "Maximum use of coal was reached in 2013. In 2018 coal-fired power station capacity factor averaged 51%, that is they operated for about half their available operating hours.",
"title": "Electricity generation"
},
{
"paragraph_id": 61,
"text": "About 8000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. As of 2018 just over half is from underground mines. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining related deaths in China. Most coal mined is thermal coal (also called steam coal as it is used to make steam to generate electricity) but metallurgical coal (also called \"metcoal\" or \"coking coal\" as it is used to make coke to make iron) accounts for 10% to 15% of global coal use.",
"title": "Coal industry"
},
{
"paragraph_id": 62,
"text": "China mines almost half the world's coal, followed by India with about a tenth. Australia accounts for about a third of world coal exports, followed by Indonesia and Russia, while the largest importers are Japan and India. Russia is increasingly orienting its coal exports from Europe to Asia as Europe transitions to renewable energy and subjects Russia to sanctions over its invasion of Ukraine.",
"title": "Coal industry"
},
{
"paragraph_id": 63,
"text": "The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management.",
"title": "Coal industry"
},
{
"paragraph_id": 64,
"text": "In some countries, new onshore wind or solar generation already costs less than coal power from existing plants. However, for China this is forecast for the early 2020s and for southeast Asia not until the late 2020s. In India, building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables.",
"title": "Coal industry"
},
{
"paragraph_id": 65,
"text": "Of the countries which produce coal, China mines by far the most, almost half the world's coal, followed by less than 10% by India. China is also by far the largest consumer of coal. Therefore, international market trends depend on Chinese energy policy. Although the government effort to reduce air pollution in China means that the global long-term trend is to burn less coal, the short and medium term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries.",
"title": "Coal industry"
},
{
"paragraph_id": 66,
"text": "Countries with annual production higher than 300 million tonnes are shown.",
"title": "Coal industry"
},
{
"paragraph_id": 67,
"text": "Countries with annual consumption higher than 500 million tonnes are shown. Shares are based on data expressed in tonnes oil equivalent.",
"title": "Coal industry"
},
{
"paragraph_id": 68,
"text": "Exporters are at risk of a reduction in import demand from India and China.",
"title": "Coal industry"
},
{
"paragraph_id": 69,
"text": "The use of coal as fuel causes ill health and deaths. Mining and processing of coal causes air and water pollution. Coal-powered plants emit nitrogen oxides, sulfur dioxide, particulate pollution and heavy metals, which adversely affect human health. Coal bed methane extraction is important to avoid mining accidents.",
"title": "Damage to human health"
},
{
"paragraph_id": 70,
"text": "The deadly London smog was caused primarily by the heavy use of coal. Globally coal is estimated to cause 800,000 premature deaths every year, mostly in India and China.",
"title": "Damage to human health"
},
{
"paragraph_id": 71,
"text": "Burning coal is a major contributor to sulfur dioxide emissions, which creates PM2.5 particulates, the most dangerous form of air pollution.",
"title": "Damage to human health"
},
{
"paragraph_id": 72,
"text": "Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer.",
"title": "Damage to human health"
},
{
"paragraph_id": 73,
"text": "Annual health costs in Europe from use of coal to generate electricity are estimated at up to €43 billion.",
"title": "Damage to human health"
},
{
"paragraph_id": 74,
"text": "In China, improvements to air quality and human health would increase with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal. And there would be a net economic benefit.",
"title": "Damage to human health"
},
{
"paragraph_id": 75,
"text": "A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, \"a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period.\"",
"title": "Damage to human health"
},
{
"paragraph_id": 76,
"text": "Breathing in coal dust causes coalworker's pneumoconiosis or \"black lung\", so called because the coal dust literally turns the lungs black. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust.",
"title": "Damage to human health"
},
{
"paragraph_id": 77,
"text": "Huge amounts of coal ash and other waste is produced annually. Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, that contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium.",
"title": "Damage to human health"
},
{
"paragraph_id": 78,
"text": "Around 10% of coal is ash. Coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics.",
"title": "Damage to human health"
},
{
"paragraph_id": 79,
"text": "Coal mining, coal combustion wastes and flue gas are causing major environmental damage.",
"title": "Damage to the environment"
},
{
"paragraph_id": 80,
"text": "Water systems are affected by coal mining. For example, mining affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants contribute to the depletion of water resources.",
"title": "Damage to the environment"
},
{
"paragraph_id": 81,
"text": "One of the earliest known impacts of coal on the water cycle was acid rain. In 2014, approximately 100 Tg/S of sulfur dioxide (SO2) was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4 which scatters solar radiation, hence its increase in the atmosphere exerts a cooling effect on the climate. This beneficially masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems.",
"title": "Damage to the environment"
},
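In simplified overall form (the actual atmospheric pathway proceeds through hydroxyl radical chemistry), the conversion of sulfur dioxide to sulfuric acid can be written as oxidation to sulfur trioxide followed by hydration:

$$\mathrm{2SO_2 + O_2 \longrightarrow 2SO_3}$$
$$\mathrm{SO_3 + H_2O \longrightarrow H_2SO_4}$$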
{
"paragraph_id": 82,
"text": "Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668 and is still burning in the 21st century.",
"title": "Damage to the environment"
},
{
"paragraph_id": 83,
"text": "The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts which if discharged to land, air or waterways can pollute the environment. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia was discharged to the marine environment.",
"title": "Damage to the environment"
},
{
"paragraph_id": 84,
"text": "Emission intensity is the greenhouse gas emitted over the life of a generator per unit of electricity generated. The emission intensity of coal power stations is high, as they emit around 1000 g of CO2eq for each kWh generated, while natural gas is medium-emission intensity at around 500 g CO2eq per kWh. The emission intensity of coal varies with type and generator technology and exceeds 1200 g per kWh in some countries.",
"title": "Damage to the environment"
},
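The order of magnitude of these figures can be recovered from values quoted earlier in this article (84.4% carbon on a dry, ash-free basis, roughly 24 MJ/kg, and about 40% plant efficiency). This back-of-envelope estimate ignores moisture, ash and upstream emissions, which is why real plants scatter around the quoted values rather than matching the estimate exactly:

$$\frac{0.844 \times \tfrac{44}{12}\ \text{kg CO}_2}{(24\ \text{MJ} \times 0.40)/(3.6\ \text{MJ/kWh})} \approx \frac{3.1\ \text{kg CO}_2}{2.67\ \text{kWh}} \approx 1160\ \text{g CO}_2\text{/kWh}$$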
{
"paragraph_id": 85,
"text": "Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels. In Centralia, Pennsylvania (a borough located in the Coal Region of the U.S.), an exposed vein of anthracite ignited in 1962 due to a trash fire in the borough landfill, located in an abandoned anthracite strip mine pit. Attempts to extinguish the fire were unsuccessful, and it continues to burn underground to this day. The Australian Burning Mountain was originally believed to be a volcano, but the smoke and ash come from a coal fire that has been burning for some 6,000 years.",
"title": "Damage to the environment"
},
{
"paragraph_id": 86,
"text": "At Kuh i Malik in Yagnob Valley, Tajikistan, coal deposits have been burning for thousands of years, creating vast underground labyrinths full of unique minerals, some of them very beautiful.",
"title": "Damage to the environment"
},
{
"paragraph_id": 87,
"text": "The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin in Wyoming and in western North Dakota is called porcelanite, which resembles the coal burning waste \"clinker\" or volcanic \"scoria\". Clinker is rock that has been fused by the natural burning of coal. In the Powder River Basin approximately 27 to 54 billion tons of coal burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark Expedition as well as explorers and settlers in the area.",
"title": "Damage to the environment"
},
{
"paragraph_id": 88,
"text": "The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, 40% of the total fossil fuel emissions, and more than a quarter of total emissions. Coal mining can emit methane, another greenhouse gas.",
"title": "Damage to the environment"
},
{
"paragraph_id": 89,
"text": "In 2016 world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, which is double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C hundreds, or possibly thousands, of coal-fired power plants will need to be retired early.",
"title": "Damage to the environment"
},
{
"paragraph_id": 90,
"text": "Local pollution standards include GB13223-2011 (China), India, the Industrial Emissions Directive (EU) and the Clean Air Act (United States).",
"title": "Pollution mitigation"
},
{
"paragraph_id": 91,
"text": "Satellite monitoring is now used to crosscheck national data, for example Sentinel-5 Precursor has shown that Chinese control of SO2 has only been partially successful. It has also revealed that low use of technology such as SCR has resulted in high NO2 emissions in South Africa and India.",
"title": "Pollution mitigation"
},
{
"paragraph_id": 92,
"text": "A few Integrated gasification combined cycle (IGCC) coal-fired power plants have been built with coal gasification. Although they burn coal more efficiently and therefore emit less pollution, the technology has not generally proved economically viable for coal, except possibly in Japan although this is controversial.",
"title": "Pollution mitigation"
},
{
"paragraph_id": 93,
"text": "Although still being intensively researched and considered economically viable for some uses other than with coal; carbon capture and storage has been tested at the Petra Nova and Boundary Dam coal-fired power plants and has been found to be technically feasible but not economically viable for use with coal, due to reductions in the cost of solar PV technology.",
"title": "Pollution mitigation"
},
{
"paragraph_id": 94,
"text": "In 2018 US$80 billion was invested in coal supply but almost all for sustaining production levels rather than opening new mines. In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy.",
"title": "Economics"
},
{
"paragraph_id": 95,
"text": "China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However two fifths of China's coal power stations are estimated to be loss-making.",
"title": "Economics"
},
{
"paragraph_id": 96,
"text": "Air pollution from coal storage and handling costs the US almost 200 dollars for every extra ton stored, due to PM2.5. Coal pollution costs the €43 billion each year. Measures to cut air pollution benefit individuals financially and the economies of countries such as China.",
"title": "Economics"
},
{
"paragraph_id": 97,
"text": "Subsidies for coal in 2021 have been estimated at US$19 billion, not including electricity subsidies, and are expected to rise in 2022. As of 2019 G20 countries provide at least US$63.9 billion of government support per year for the production of coal, including coal-fired power: many subsidies are impossible to quantify but they include US$27.6 billion in domestic and international public finance, US$15.4 billion in fiscal support, and US$20.9 billion in state-owned enterprise (SOE) investments per year. In the EU state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021.",
"title": "Economics"
},
{
"paragraph_id": 98,
"text": "Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts.",
"title": "Economics"
},
{
"paragraph_id": 99,
"text": "Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for failing to cut their emissions at a faster rate than they were, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation.",
"title": "Politics"
},
{
"paragraph_id": 100,
"text": "Opposition to coal pollution was one of the main reasons the modern environmental movement started in the 19th century.",
"title": "Politics"
},
{
"paragraph_id": 101,
"text": "In order to meet global climate goals and provide power to those that do not currently have it coal power must be reduced from nearly 10,000 TWh to less than 2,000 TWh by 2040. Phasing out coal has short-term health and environmental benefits which exceed the costs, but some countries still favor coal, and there is much disagreement about how quickly it should be phased out. However many countries, such as the Powering Past Coal Alliance, have already or are transitioned away from coal; the largest transition announced so far being Germany, which is due to shut down its last coal-fired power station between 2035 and 2038. Some countries use the ideas of a \"Just Transition\", for example to use some of the benefits of transition to provide early pensions for coal miners. However, low-lying Pacific Islands are concerned the transition is not fast enough and that they will be inundated by sea level rise, so they have called for OECD countries to completely phase out coal by 2030 and other countries by 2040. In 2020, although China built some plants, globally more coal power was retired than built: the UN Secretary General has also said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040. Phasing down coal was agreed at COP26 in the Glasgow Climate Pact. Vietnam is among few coal-dependent developing countries that pledged to phase out unabated coal power by the 2040s or as early as possible thereafter",
"title": "Transition away from coal"
},
{
"paragraph_id": 102,
"text": "Peak coal is the peak consumption or production of coal by a human community. The peak of coal's share in the global energy mix was in 2008, when coal accounted for 30% of global energy production. Coal consumption is declining in the United States and Europe, as well as developed economies in Asia. However, consumption is still increasing in India and Southeast Asia, which compensates for the falls in other regions. Global coal consumption reached an all time high in 2022 at 8.3 billion tonnes. Global coal demand is set to remain at record levels in 2023.",
"title": "Transition away from coal"
},
{
"paragraph_id": 103,
"text": "Coal-fired generation puts out about twice as much carbon dioxide—around a tonne for every megawatt hour generated—as electricity generated by burning natural gas at 500 kg of greenhouse gas per megawatt hour. In addition to generating electricity, natural gas is also popular in some countries for heating and as an automotive fuel.",
"title": "Transition away from coal"
},
{
"paragraph_id": 104,
"text": "The use of coal in the United Kingdom declined as a result of the development of North Sea oil and the subsequent dash for gas during the 1990s. In Canada some coal power plants, such as the Hearn Generating Station, switched from coal to natural gas. In 2017, coal power in the US provided 30% of the electricity, down from approximately 49% in 2008, due to plentiful supplies of low cost natural gas obtained by hydraulic fracturing of tight shale formations.",
"title": "Transition away from coal"
},
{
"paragraph_id": 105,
"text": "Some coal-mining regions are highly dependent on coal.",
"title": "Transition away from coal"
},
{
"paragraph_id": 106,
"text": "Some coal miners are concerned their jobs may be lost in the transition. A just transition from coal is supported by the European Bank for Reconstruction and Development.",
"title": "Transition away from coal"
},
{
"paragraph_id": 107,
"text": "The white rot fungus Trametes versicolor can grow on and metabolize naturally occurring coal. The bacteria Diplococcus has been found to degrade coal, raising its temperature.",
"title": "Transition away from coal"
},
{
"paragraph_id": 108,
"text": "Coal is the official state mineral of Kentucky, and the official state rock of Utah and West Virginia. These US states have a historic link to coal mining.",
"title": "Cultural usage"
},
{
"paragraph_id": 109,
"text": "Some cultures hold that children who misbehave will receive only a lump of coal from Santa Claus for Christmas in their stockings instead of presents.",
"title": "Cultural usage"
},
{
"paragraph_id": 110,
"text": "It is also customary and considered lucky in Scotland and the North of England to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.",
"title": "Cultural usage"
}
]
| Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen.
Coal is a type of fossil fuel, formed when dead plant matter decays into peat and is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times. Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal. The extraction and use of coal causes premature death and illness. The use of coal damages the environment, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide were emitted by burning coal in 2020, which is 40% of the total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020. Global coal use was 8.3 billion tonnes in 2022. Global coal demand is set to remain at record levels in 2023. To meet the Paris Agreement target of keeping global warming below 2 °C (3.6 °F) coal use needs to halve from 2020 to 2030, and "phasing down" coal was agreed upon in the Glasgow Climate Pact. The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia. | 2001-11-06T19:23:54Z | 2023-12-30T14:38:02Z | [
"Template:Sfn",
"Template:Blockquote",
"Template:Refn",
"Template:Multiple image",
"Template:OEtymD",
"Template:Short description",
"Template:Other uses",
"Template:See also",
"Template:€",
"Template:Cite magazine",
"Template:Pp-semi-indef",
"Template:As of",
"Template:Cite web",
"Template:USD",
"Template:Annotated link",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:Authority control",
"Template:Pp-move",
"Template:Main",
"Template:Wikibooks",
"Template:Electricity generation",
"Template:Rock type",
"Template:Div col",
"Template:Div col end",
"Template:ISBN",
"Template:Commons",
"Template:Coal",
"Template:Infobox rock",
"Template:Excerpt",
"Template:Cite news",
"Template:Cite Collier's",
"Template:Cite EB1911",
"Template:Convert",
"Template:Where",
"Template:Rp",
"Template:Chem2",
"Template:Cite report",
"Template:Wiktionary",
"Template:Anchor",
"Template:Further",
"Template:Val",
"Template:Cite NIE",
"Template:Use dmy dates",
"Template:Nowrap",
"Template:Portal",
"Template:Notelist",
"Template:Webarchive",
"Template:Cbignore",
"Template:US$",
"Template:Energy country lists",
"Template:Citation needed",
"Template:Citation",
"Template:By who"
]
| https://en.wikipedia.org/wiki/Coal |
5,992 | Traditional Chinese medicine | Traditional Chinese medicine (TCM) is an alternative medical practice drawn from traditional medicine in China. It has been described as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action.
Medicine in traditional China encompassed a range of sometimes competing health and healing practices, folk beliefs, literati theory and Confucian philosophy, herbal remedies, food, diet, exercise, medical specializations, and schools of thought. In the early twentieth century, Chinese cultural and political modernizers worked to eliminate traditional practices as backward and unscientific. Traditional practitioners then selected elements of philosophy and practice and organized them into what they called "Chinese medicine" (Chinese: 中医 Zhongyi). In the 1950s, the Chinese government sponsored the integration of Chinese and Western medicine, and in the Great Proletarian Cultural Revolution of the 1960s, promoted Chinese medicine as inexpensive and popular. After the opening of relations between the United States and China after 1972, there was great interest in the West for what is now called traditional Chinese medicine (TCM).
TCM is said to be based on such texts as Huangdi Neijing (The Inner Canon of the Yellow Emperor), and Compendium of Materia Medica, a sixteenth-century encyclopedic work, and includes various forms of herbal medicine, acupuncture, cupping therapy, gua sha, massage (tui na), bonesetter (die-da), exercise (qigong), and dietary therapy. TCM is widely used in the Sinosphere. One of the basic tenets is that the body's qi is circulating through channels called meridians having branches connected to bodily organs and functions. There is no evidence that meridians or vital energy exist. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to the humoral theory of ancient Greece and ancient Rome.
The demand for traditional medicines in China has been a major generator of illegal wildlife smuggling, linked to the killing and smuggling of endangered animals.
Scholars in the history of medicine in China distinguish its doctrines and practice from those of present-day TCM. As Ian Johnson notes, the term "Traditional Chinese Medicine" was coined by "party propagandists" and first appeared in English in 1955. Nathan Sivin criticizes attempts to treat medicine and medical practices in traditional China as if they were a single system. Instead, he says, there were 2,000 years of "medical system in turmoil" and speaks of a "myth of an unchanging medical tradition." He urges that "Traditional medicine translated purely into terms of modern medicine becomes partly nonsensical, partly irrelevant, and partly mistaken; that is also true the other way around, a point easily overlooked." TJ Hinrichs observes that people in modern Western societies divide healing practices into biomedicine for the body, psychology for the mind, and religion for the spirit, but these distinctions are inadequate to describe medical concepts among Chinese historically and to a considerable degree today.
The medical anthropologist Charles Leslie writes that Chinese, Greco-Arabic, and Indian traditional medicines were all grounded in systems of correspondence that aligned the organization of society, the universe, and the human body and other forms of life into an "all-embracing order of things." Each of these traditional systems was organized with such qualities as heat and cold, wet and dry, light and darkness, qualities that also align the seasons, compass directions, and the human cycle of birth, growth, and death. They provided, Leslie continued, a "comprehensive way of conceiving patterns that ran through all of nature," and they "served as a classificatory and mnemonic device to observe health problems and to reflect upon, store, and recover empirical knowledge," but they were also "subject to stultifying theoretical elaboration, self-deception, and dogmatism."
The doctrines of Chinese medicine are rooted in books such as the Yellow Emperor's Inner Canon and the Treatise on Cold Damage, as well as in cosmological notions such as yin–yang and the five phases. The documentation of Chinese materia medica (CMM) dates back to around 1,100 BCE, when only a few dozen drugs were described. By the end of the 16th century, the number of drugs documented had reached close to 1,900, and by the end of the last century published records of CMM had reached 12,800 drugs. Starting in the 1950s, these precepts were standardized in the People's Republic of China, including attempts to integrate them with modern notions of anatomy and pathology. In the 1950s, the Chinese government promoted a systematized form of TCM.
Traces of therapeutic activities in China date from the Shang dynasty (14th–11th centuries BCE). Though the Shang did not have a concept of "medicine" as distinct from other health practices, their oracular inscriptions on bones and tortoise shells refer to illnesses that affected the Shang royal family: eye disorders, toothaches, bloated abdomen, and such. Shang elites usually attributed them to curses sent by their ancestors. There is currently no evidence that the Shang nobility used herbal remedies.
Stone and bone needles found in ancient tombs led Joseph Needham to speculate that acupuncture might have been carried out in the Shang dynasty. This being said, most historians now make a distinction between medical lancing (or bloodletting) and acupuncture in the narrower sense of using metal needles to attempt to treat illnesses by stimulating points along circulation channels ("meridians") in accordance with beliefs related to the circulation of "Qi". The earliest evidence for acupuncture in this sense dates to the second or first century BCE.
The Yellow Emperor's Inner Canon (Huangdi Neijing), the oldest received work of Chinese medical theory, was compiled during the Han dynasty around the first century BCE on the basis of shorter texts from different medical lineages. Written in the form of dialogues between the legendary Yellow Emperor and his ministers, it offers explanations on the relation between humans, their environment, and the cosmos, on the contents of the body, on human vitality and pathology, on the symptoms of illness, and on how to make diagnostic and therapeutic decisions in light of all these factors. Unlike earlier texts like Recipes for Fifty-Two Ailments, which was excavated in the 1970s from the Mawangdui tomb that had been sealed in 168 BCE, the Inner Canon rejected the influence of spirits and the use of magic. It was also one of the first books in which the cosmological doctrines of Yinyang and the Five Phases were brought to a mature synthesis.
The Treatise on Cold Damage Disorders and Miscellaneous Illnesses (Shang Han Lun) was collated by Zhang Zhongjing sometime between 196 and 220 CE, at the end of the Han dynasty. Focusing on drug prescriptions rather than acupuncture, it was the first medical work to combine Yinyang and the Five Phases with drug therapy. This formulary was also the earliest public Chinese medical text to group symptoms into clinically useful "patterns" (zheng 證) that could serve as targets for therapy. Having gone through numerous changes over time, the formulary now circulates as two distinct books: the Treatise on Cold Damage Disorders and the Essential Prescriptions of the Golden Casket, which were edited separately in the eleventh century, under the Song dynasty.
Nanjing or "Classic of Difficult Issues," originally called "The Yellow Emperor Eighty-one Nan Jing", ascribed to Bian Que in the eastern Han dynasty. This book was compiled in the form of question-and-answer explanations. A total of 81 questions have been discussed. Therefore, it is also called "Eighty-One Nan". The book is based on basic theory and has also analyzed some disease certificates. Questions one to twenty-two is about pulse study, questions twenty-three to twenty-nine is about meridian study, questions thirty to forty-seven is related to urgent illnesses, questions forty-eight to sixty-one is related to serious diseases, questions sixty-two to sixty-eight is related to acupuncture points, and questions sixty-nine to eighty-one is related to the needlepoint methods.
The book is credited with developing its own path while also inheriting theories from the Huangdi Neijing. Its content includes physiology, pathology, diagnosis and treatment, together with a more essential and specific discussion of pulse diagnosis. It has become one of the four classics for Chinese medicine practitioners to learn from and has influenced medical development in China.
Shennong Ben Cao Jing is one of the earliest written medical books in China. Written during the Eastern Han Dynasty between 200 and 250 CE, it was the combined effort of practitioners in the Qin and Han Dynasties who summarized, collected and compiled the results of pharmacological experience during their time periods. It was the first systematic summary of Chinese herbal medicine. Most of the pharmacological theories and compatibility rules and the proposed "seven emotions and harmony" principle have played a role in the practice of medicine for thousands of years. Therefore, it has been a textbook for medical workers in modern China. The full text of Shennong Ben Cao Jing in English can be found online.
In the centuries that followed, several shorter books tried to summarize or systematize the contents of the Yellow Emperor's Inner Canon. The Canon of Problems (probably second century CE) tried to reconcile divergent doctrines from the Inner Canon and developed a complete medical system centered on needling therapy. The AB Canon of Acupuncture and Moxibustion (Zhenjiu jiayi jing 針灸甲乙經, compiled by Huangfu Mi sometime between 256 and 282 CE) assembled a consistent body of doctrines concerning acupuncture; whereas the Canon of the Pulse (Maijing 脈經; c. 280) presented itself as a "comprehensive handbook of diagnostics and therapy."
Around 900–1000 AD, the Chinese were the first to develop a form of vaccination, known as variolation or inoculation, to prevent smallpox. Chinese physicians had realised that when healthy people were exposed to smallpox scab tissue, they had a smaller chance of being infected by the disease later on. A common method of inoculation at the time was to crush smallpox scabs into powder and breathe it in through the nose.
Prominent medical scholars of the post-Han period included Tao Hongjing (456–536), Sun Simiao of the Sui and Tang dynasties, Zhang Jiegu (c. 1151–1234), and Li Shizhen (1518–1593).
In 1950, Chinese Communist Party (CCP) chairman Mao Zedong announced support of traditional Chinese medicine, but he did not personally believe in and did not use it. In 1952, the president of the Chinese Medical Association said that, "This One Medicine, will possess a basis in modern natural sciences, will have absorbed the ancient and the new, the Chinese and the foreign, all medical achievements—and will be China's New Medicine!"
During the Cultural Revolution (1966–1976) the CCP and the government emphasized modernity, cultural identity and China's social and economic reconstruction and contrasted them to the colonial and feudal past. The government established a grassroots health care system as a step in the search for a new national identity and tried to revitalize traditional medicine and made large investments in traditional medicine to try to develop affordable medical care and public health facilities. The Ministry of Health directed health care throughout China and established primary care units. Chinese physicians trained in Western medicine were required to learn traditional medicine, while traditional healers received training in modern methods. This strategy aimed to integrate modern medical concepts and methods and revitalize appropriate aspects of traditional medicine. Therefore, traditional Chinese medicine was re-created in response to Western medicine.
In 1968, the CCP supported a new system of health care delivery for rural areas. Villages were assigned a barefoot doctor (a medical staff with basic medical skills and knowledge to deal with minor illnesses) responsible for basic medical care. The medical staff combined the values of traditional China with modern methods to provide health and medical care to poor farmers in remote rural areas. The barefoot doctors became a symbol of the Cultural Revolution, for the introduction of modern medicine into villages where traditional Chinese medicine services were used.
The State Intellectual Property Office (now known as CNIPA) established a database of patents granted for traditional Chinese medicine.
In the second decade of the twenty-first century, Chinese Communist Party general secretary Xi Jinping strongly supported TCM, calling it a "gem". As of May 2011, in order to promote TCM worldwide, China had signed TCM partnership agreements with over 70 countries. His government pushed to increase its use and the number of TCM-trained doctors and announced that students of TCM would no longer be required to pass examinations in Western medicine. Chinese scientists and researchers, however, expressed concern that TCM training and therapies would receive equal support with Western medicine. They also criticized a reduction in government testing and regulation of the production of TCMs, some of which were toxic. Government censors have removed Internet posts that question TCM. In 2020 Beijing drafted a law project outlawing criticism of TCM. According to a response to a BMJ paper, TCM is declining in mainland China.
In the early years after Hong Kong was opened up, Western medicine was not yet popular, and Western medical doctors were mostly foreigners; local residents mostly relied on Chinese medicine practitioners. In 1841, the British government of Hong Kong issued an announcement pledging to govern Hong Kong residents in accordance with all the original rituals, customs and private legal property rights. As traditional Chinese medicine had always been used in China, its use was not regulated.
The establishment in 1870 of the Tung Wah Hospital marked the first use of Chinese medicine for treatment in Chinese hospitals providing free medical services. After the British government began promoting Western medicine in 1940, Western medicine became popular among the Hong Kong population. In 1959, Hong Kong researched the use of traditional Chinese medicine to replace Western medicine.
Historians have noted two key aspects of Chinese medical history: understanding conceptual differences when translating the term 身, and observing the history from the perspective of cosmology rather than biology.
In Chinese classical texts, the term 身 is the closest historical translation to the English word "body" because it sometimes refers to the physical human body in terms of being weighed or measured, but the term is to be understood as an "ensemble of functions" encompassing both the human psyche and emotions. This concept of the human body is opposed to the European duality of a separate mind and body. It is critical for scholars to understand the fundamental differences in concepts of the body in order to connect the medical theory of the classics to the "human organism" it is explaining.
Chinese scholars established a correlation between the cosmos and the "human organism." The basic components of cosmology, qi, yin yang and the Five Phase theory, were used to explain health and disease in texts such as Huangdi neijing. Yin and yang are the changing factors in cosmology, with qi as the vital force or energy of life. The Five Phase theory (Wuxing) of the Han dynasty contains the elements wood, fire, earth, metal, and water. By understanding medicine from a cosmology perspective, historians better understand Chinese medical and social classifications such as gender, which was defined by a domination or remission of yang in terms of yin.
These two distinctions are imperative when analyzing the history of traditional Chinese medical science.
A majority of Chinese medical history written after the classical canons comes in the form of primary source case studies, in which academic physicians recorded the illness of a particular person, the healing techniques used, and their effectiveness. Historians have noted that Chinese scholars wrote these studies instead of "books of prescriptions or advice manuals"; in their historical and environmental understanding, no two illnesses were alike, so the healing strategies of the practitioner were unique every time to the specific diagnosis of the patient. Medical case studies existed throughout Chinese history, but the "individually authored and published case history" was a prominent creation of the Ming dynasty. An example of such case studies is the literati physician Cheng Congzhou's collection of 93 cases, published in 1644.
Historians of science have developed the study of medicine in traditional China into a field with its own scholarly associations, journals, graduate programs, and debates with each other. Many distinguish "medicine in traditional China" from the recent Traditional Chinese Medicine (TCM), which took elements from traditional texts and practices to construct a systematic body. Paul Unschuld, for instance, sees a "departure of TCM from its historical origins." In his view, what is called "Traditional Chinese Medicine" and practiced today in China and the West is not thousands of years old, but was recently constructed using selected traditional terms, some of which have been taken out of context and some badly misunderstood. He has criticized Chinese and Western popular books for selective use of evidence, choosing only those works or parts of historical works that seem to lead to modern medicine and ignoring those elements that do not now seem to be effective.
A 2007 editorial in the journal Nature wrote that TCM "remains poorly researched and supported, and most of its treatments have no logical mechanism of action." Critics say that TCM theory and practice have no basis in modern science, and TCM practitioners do not agree on what diagnosis and treatments should be used for any given person. A Nature editorial described TCM as "fraught with pseudoscience". A review of the literature in 2008 found that scientists are "still unable to find a shred of evidence" according to standards of science-based medicine for traditional Chinese concepts such as qi, meridians, and acupuncture points, and that the traditional principles of acupuncture are deeply flawed. "Acupuncture points and meridians are not a reality", the review continued, but "merely the product of an ancient Chinese philosophy". In June 2019, the World Health Organization included traditional Chinese medicine in a global diagnostic compendium, but a spokesman said this was "not an endorsement of the scientific validity of any Traditional Medicine practice or the efficacy of any Traditional Medicine intervention."
A 2012 review of cost-effectiveness research for TCM found that studies had low levels of evidence, with no beneficial outcomes. Pharmaceutical research on the potential for creating new drugs from traditional remedies has produced few successful results. Proponents suggest that research has so far missed key features of the art of TCM, such as unknown interactions between various ingredients and complex interactive biological systems. One of the basic tenets of TCM is that the body's qi (sometimes translated as vital energy) circulates through channels called meridians, which have branches connected to bodily organs and functions. The concept of vital energy is pseudoscientific. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to classical humoral theory.
TCM has also been controversial within China. In 2006, the Chinese philosopher Zhang Gongyao triggered a national debate with an article entitled "Farewell to Traditional Chinese Medicine", arguing that TCM was a pseudoscience that should be abolished in public healthcare and academia. The Chinese government, however, took the stance that TCM is a science and continued to encourage its development.
There are concerns over a number of potentially toxic plants, animal parts, and minerals used in Chinese medicinal compounds, as well as over their role in facilitating disease. Trafficked and farm-raised animals used in TCM are a source of several fatal zoonotic diseases. There are additional concerns over the illegal trade and transport of endangered species, including rhinoceroses and tigers, and over the welfare of specially farmed animals, including bears.
Traditional Chinese medicine (TCM) is a broad range of medicine practices sharing common concepts which have been developed in China and are based on a tradition of more than 2,000 years, including various forms of herbal medicine, acupuncture, massage (tui na), exercise (qigong), and dietary therapy. It is primarily used as a complementary alternative medicine approach. TCM is widely used in China and it is also used in the West. Its philosophy is based on Yinyangism (i.e., the combination of Five Phases theory with Yin–Yang theory), which was later absorbed by Daoism. Philosophical texts influenced TCM, mostly by being grounded in the same theories of qi, yin-yang and wuxing and microcosm-macrocosm analogies.
Yin and yang are ancient Chinese deductive reasoning concepts used within Chinese medical diagnosis which can be traced back to the Shang dynasty (1600–1100 BCE). They represent two abstract and complementary aspects that every phenomenon in the universe can be divided into. Primordial analogies for these aspects are the sun-facing (yang) and the shady (yin) side of a hill. Two other commonly used representational allegories of yin and yang are water and fire. In the yin–yang theory, detailed attributions are made regarding the yin or yang character of things: for example, heaven, day, fire, heat, and activity are attributed to yang, while earth, night, water, cold, and rest are attributed to yin.
The concept of yin and yang is also applicable to the human body; for example, the upper part of the body and the back are assigned to yang, while the lower part of the body is believed to have the yin character. Yin and yang characterization also extends to the various body functions, and – more importantly – to disease symptoms (e.g., cold and heat sensations are assumed to be yin and yang symptoms, respectively). Thus, yin and yang of the body are seen as phenomena whose lack (or over-abundance) comes with characteristic symptom combinations: a lack of yang, for example, is associated with cold sensations and a pale complexion, while a lack of yin is associated with sensations of heat and night sweats.
TCM also identifies drugs believed to treat these specific symptom combinations, i.e., to reinforce yin and yang.
Strict rules are held to apply to the relationships between the Five Phases in terms of sequence, of acting on each other, of counteraction, and so on. All these aspects of Five Phases theory constitute the basis of the zàng-fǔ concept, and thus have great influence on the TCM model of the body. Five Phase theory is also applied in diagnosis and therapy.
Correspondences between the body and the universe have historically been seen not only in terms of the Five Elements, but also of the "Great Numbers" (大數; dà shù). For example, the number of acu-points has at times been seen to be 365, corresponding with the number of days in a year; and the number of main meridians (12) has been seen as corresponding with the number of rivers flowing through the ancient Chinese empire.
TCM "holds that the body's vital energy (chi or qi) circulates through channels, called meridians, that have branches connected to bodily organs and functions." Its view of the human body is only marginally concerned with anatomical structures, but focuses primarily on the body's functions (such as digestion, breathing, temperature maintenance, etc.):
These functions are aggregated and then associated with a primary functional entity – for instance, nourishment of the tissues and maintenance of their moisture are seen as connected functions, and the entity postulated to be responsible for these functions is xiě (blood). These functional entities thus constitute concepts rather than something with biochemical or anatomical properties.
The primary functional entities used by traditional Chinese medicine are qì, xuě, the five zàng organs, the six fǔ organs, and the meridians which extend through the organ systems. These are all theoretically interconnected: each zàng organ is paired with a fǔ organ, which are nourished by the blood and concentrate qi for a particular function, with meridians being extensions of those functional systems throughout the body.
Concepts of the body and of disease used in TCM are pseudoscientific, similar to Mediterranean humoral theory. TCM's model of the body is characterized as full of pseudoscience. Some practitioners no longer consider yin and yang and the idea of an energy flow to apply. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points. It is a generally held belief within the acupuncture community that acupuncture points and meridian structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. The scientific evidence for the anatomical existence of either meridians or acupuncture points is not compelling. Stephen Barrett of Quackwatch writes that, "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care."
Qi (气; 氣; qì) is a polysemous word; traditional Chinese medicine distinguishes many different qualities into which qi can transform. In a general sense, qi is something that is defined by five "cardinal functions": actuation, warming, defence, containment, and transformation.
A lack of qi will be characterized especially by pale complexion, lassitude of spirit, lack of strength, spontaneous sweating, laziness to speak, non-digestion of food, shortness of breath (especially on exertion), and a pale and enlarged tongue.
Qi is believed to be partially generated from food and drink, and partially from air (by breathing). Another considerable part of it is inherited from the parents and will be consumed in the course of life.
TCM uses special terms for qi running inside of the blood vessels and for qi that is distributed in the skin, muscles, and tissues between them. The former is called yingqi (营气; 營氣; yíngqì); its function is to complement xuè and its nature has a strong yin aspect (although qi in general is considered to be yang). The latter is called weiqi (卫气; 衛氣; wèiqì); its main function is defence and it has a pronounced yang nature.
Qi is said to circulate in the meridians. Like the qi held by each of the zang-fu organs, the qi circulating in the meridians is considered to be part of the "principal" qi of the body.
In contrast to the majority of other functional entities, xuè or xiě (血, "blood") is correlated with a physical form – the red liquid running in the blood vessels. Its concept is, nevertheless, defined by its functions: nourishing all parts and tissues of the body, safeguarding an adequate degree of moisture, and sustaining and soothing both consciousness and sleep.
Typical symptoms of a lack of xiě (usually termed "blood vacuity" [血虚; xiě xū]) are described as: Pale-white or withered-yellow complexion, dizziness, flowery vision, palpitations, insomnia, numbness of the extremities; pale tongue; "fine" pulse.
Closely related to xuě are the jinye (津液; jīnyè, usually translated as "body fluids"), and just like xuě they are considered to be yin in nature, and defined first and foremost by the functions of nurturing and moisturizing the different structures of the body. Their other functions are to harmonize yin and yang, and to help with the secretion of waste products.
Jinye are ultimately extracted from food and drink, and constitute the raw material for the production of xuě; conversely, xuě can also be transformed into jinye. Their palpable manifestations are all bodily fluids: tears, sputum, saliva, gastric acid, joint fluid, sweat, urine, etc.
The zangfu (脏腑; 臟腑; zàngfǔ) are the collective name of eleven entities (similar to organs) that constitute the centrepiece of TCM's systematization of bodily functions. The term zang refers to the five considered to be yin in nature—Heart, Liver, Spleen, Lung, Kidney—while fu refers to the six associated with yang—Small Intestine, Large Intestine, Gallbladder, Urinary Bladder, Stomach and San Jiao. Despite having the names of organs, they are only loosely tied to (rudimentary) anatomical assumptions. Instead, they are primarily understood to be certain "functions" of the body. To highlight the fact that they are not equivalent to anatomical organs, their names are usually capitalized.
The zang's essential functions consist in production and storage of qi and xuě; they are said to regulate digestion, breathing, water metabolism, the musculoskeletal system, the skin, the sense organs, aging, emotional processes, and mental activity, among other structures and processes. The fǔ organs' main purpose is merely to transmit and digest (傳化; chuán-huà) substances such as waste and food.
Since their concept was developed on the basis of Wǔ Xíng philosophy, each zàng is paired with a fǔ, and each zàng-fǔ pair is assigned to one of five elemental qualities (i.e., the Five Elements or Five Phases). These correspondences are stipulated as follows: Wood corresponds to the Liver and Gallbladder; Fire to the Heart and Small Intestine; Earth to the Spleen and Stomach; Metal to the Lung and Large Intestine; and Water to the Kidney and Urinary Bladder.
The zàng-fǔ are also connected to the twelve standard meridians – each yang meridian is attached to a fǔ organ, and five of the yin meridians are attached to a zàng. As there are only five zàng but six yin meridians, the sixth is assigned to the Pericardium, a peculiar entity closely related to the Heart zàng.
The meridians (经络, jīng-luò) are believed to be channels running from the zàng-fǔ in the interior (里, lǐ) of the body to the limbs and joints ("the surface" [表, biǎo]), transporting qi and xuĕ. TCM identifies 12 "regular" and 8 "extraordinary" meridians; the Chinese terms are 十二经脉 (shí-èr jīngmài, lit. "the Twelve Vessels") and 奇经八脉 (qí jīng bā mài), respectively. There are also a number of less customary channels branching from the "regular" meridians.
Fuke (妇科; 婦科; Fùkē) is the traditional Chinese term for women's medicine (in modern medicine, gynecology and obstetrics). There are few ancient works on it, a notable exception being Fu Qingzhu's Fu Qingzhu Nu Ke (Fu Qingzhu's Gynecology). In traditional China, as in many other cultures, the health and medicine of female bodies was less understood than that of male bodies. Women's bodies were often treated as secondary to male bodies, since women were thought of as the weaker, sicklier sex.
In clinical encounters, women and men were treated differently. Diagnosing women was not as simple as diagnosing men. First, when a woman fell ill, an appropriate adult man was to call the doctor and remain present during the examination, for the woman could not be left alone with the doctor. The physician would discuss the woman's problems and diagnosis only through the male intermediary. However, in certain cases, when a woman dealt with complications of pregnancy or birth, older women assumed the role of the formal authority; men in these situations would not have much power to interfere. Second, women were often silent about their issues with doctors due to the societal expectation of female modesty when a male figure was in the room. Third, patriarchal society also caused doctors to record women and child patients under "the anonymous category of family members (Jia Ren) or household (Ju Jia)" in their journals. This anonymity and lack of conversation between the doctor and the woman patient made inquiry, one of the Four Diagnostic Methods, the most challenging. Doctors used a medical doll known as a Doctor's lady, on which female patients could indicate the location of their symptoms.
Cheng Maoxian (b. 1581), who practiced medicine in Yangzhou, described the difficulties doctors had with the norm of female modesty. One of his case studies was that of Fan Jisuo's teenage daughter, who could not be diagnosed because she was unwilling to speak about her symptoms, since the illness involved discharge from her intimate areas. As Cheng describes, there were four standard methods of diagnosis: looking, asking, listening and smelling, and touching (for pulse-taking). To maintain some form of modesty, women would often stay hidden behind curtains and screens. The doctor was allowed to touch enough of her body to complete his examination, often just the pulse-taking. This would lead to situations where the symptoms and the doctor's diagnosis did not agree and the doctor would have to ask to view more of the patient.
These social and cultural beliefs were often barriers to learning more about female health, with women themselves often being the most formidable barrier. Women were often uncomfortable talking about their illnesses, especially in front of the male chaperones that attended medical examinations. Women would choose to omit certain symptoms as a means of upholding their chastity and honor. One such example is the case in which a teenage girl was unable to be diagnosed because she failed to mention her symptom of vaginal discharge. Silence was their way of maintaining control in these situations, but it often came at the expense of their health and the advancement of female health and medicine. This silence and control were most obviously seen when the health problem was related to the core of Ming fuke, or the sexual body. It was often in these diagnostic settings that women would choose silence. In addition, there would be a conflict between patient and doctor on the probability of her diagnosis. For example, a woman who thought herself to be past the point of child-bearing age, might not believe a doctor who diagnoses her as pregnant. This only resulted in more conflict.
Yin and yang were critical to the understanding of women's bodies, but understood only in conjunction with male bodies. Yin and yang ruled the body, the body being a microcosm of the universe and the earth. In addition, gender in the body was understood as homologous, the two genders operating in synchronization. Gender was presumed to influence the movement of energy and a well-trained physician would be expected to read the pulse and be able to identify two dozen or more energy flows. Yin and yang concepts were applied to the feminine and masculine aspects of all bodies, implying that the differences between men and women begin at the level of this energy flow. According to Bequeathed Writings of Master Chu the male's yang pulse movement follows an ascending path in "compliance [with cosmic direction] so that the cycle of circulation in the body and the Vital Gate are felt...The female's yin pulse movement follows a defending path against the direction of cosmic influences, so that the nadir and the Gate of Life are felt at the inch position of the left hand". In sum, classical medicine marked yin and yang as high and low on bodies which in turn would be labeled normal or abnormal and gendered either male or female.
Bodily functions could be categorized through systems, not organs. In many drawings and diagrams, the twelve channels and their visceral systems were organized by yin and yang, an organization that was identical in female and male bodies. Female and male bodies were no different on the plane of yin and yang. Their gendered differences were not acknowledged in diagrams of the human body. Medical texts such as the Yuzuan yizong jinjian were filled with illustrations of male bodies or androgynous bodies that did not display gendered characteristics.
As in other cultures, fertility and menstruation dominate female health concerns. Since male and female bodies were governed by the same forces, traditional Chinese medicine did not recognize the womb as the place of reproduction. The abdominal cavity presented pathologies that were similar in both men and women, which included tumors, growths, hernias, and swellings of the genitals. The "master system," as Charlotte Furth calls it, is the kidney visceral system, which governed reproductive functions. Therefore, it was not the anatomical structures that allowed for pregnancy, but the difference in processes that allowed for the condition of pregnancy to occur.
Traditional Chinese medicine's dealings with pregnancy are documented from at least the seventeenth century. According to Charlotte Furth, "a pregnancy (in the seventeenth century) as a known bodily experience emerged [...] out of the liminality of menstrual irregularity, as uneasy digestion, and a sense of fullness". These symptoms were common among other illnesses as well, so the diagnosis of pregnancy often came late in the term. The Canon of the Pulse, which described the use of pulse in diagnosis, stated that pregnancy was "a condition marked by symptoms of the disorder in one whose pulse is normal" or "where the pulse and symptoms do not agree". Women were often silent about suspected pregnancy, which meant that many men did not know that their wife or daughter was pregnant until complications arose. Such complications, arising through misdiagnosis and the woman's reluctance to speak, often led to medically induced abortions. Cheng, Furth wrote, "was unapologetic about endangering a fetus when pregnancy risked a mother's well being". The method of abortion was the ingestion of certain herbs and foods. Disappointment at the loss of the fetus often led to family discord.
If the baby and mother survived the term of the pregnancy, childbirth was then the next step. The tools provided for birth were: towels to catch the blood, a container for the placenta, a pregnancy sash to support the belly, and an infant swaddling wrap. With these tools, the baby was born, cleaned, and swaddled; the mother then immediately became the focus of the doctor, who worked to replenish her qi. In his writings, Cheng places a large amount of emphasis on the Four Diagnostic methods to deal with postpartum issues and instructs all physicians to "not neglect any [of the four methods]". The process of birthing was thought to deplete a woman's blood level and qi, so the most common postpartum treatments were food (commonly garlic and ginseng), medicine, and rest. This was followed up by a check-in with the physician after a month, a practice known as zuo yuezi.
Infertility, which was not well understood, had serious social and cultural repercussions. The seventh-century scholar Sun Simiao is often quoted: "those who have prescriptions for women's distinctiveness take their differences of pregnancy, childbirth and [internal] bursting injuries as their basis." Even in contemporary fuke, the emphasis on reproductive functions rather than on the woman's overall health suggests that the main function of fuke is to produce children.
Once again, the kidney visceral system governs the "source Qi", which governs the reproductive systems in both sexes. This source Qi was thought to "be slowly depleted through sexual activity, menstruation and childbirth." It was also understood that the depletion of source Qi could result from the movement of an external pathology that moved through the outer visceral systems before causing more permanent damage to the home of source Qi, the kidney system. In addition, the view that only very serious ailments ended in damage to this system meant that those who had trouble with their reproductive systems or fertility were considered seriously ill.
According to traditional Chinese medical texts, infertility can be summarized into different syndrome types: spleen and kidney depletion (yang depletion), liver and kidney depletion (yin depletion), blood depletion, phlegm damp, liver oppression, and damp heat. This is important because, while most other issues were complex in Chinese medical physiology, women's fertility issues were simple. Most syndrome types revolved around menstruation, or lack thereof. The patient was entrusted with recording not only the frequency, but also the "volume, color, consistency, and odor of menstrual flow." This placed the responsibility for symptom recording on the patient, and was compounded by the issue, discussed earlier, of female chastity and honor. This meant that diagnosing female infertility was difficult, because the only symptoms recorded and monitored by the physician were the pulse and the color of the tongue.
In general, disease is perceived as a disharmony (or imbalance) in the functions or interactions of yin, yang, qi, xuĕ, zàng-fǔ, meridians etc. and/or of the interaction between the human body and the environment. Therapy is based on which "pattern of disharmony" can be identified. Thus, "pattern discrimination" is the most important step in TCM diagnosis. It is also known to be the most difficult aspect of practicing TCM.
To determine which pattern is at hand, practitioners will examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. For example, depending on tongue and pulse conditions, a TCM practitioner might diagnose bleeding from the mouth and nose as: "Liver fire rushes upwards and scorches the Lung, injuring the blood vessels and giving rise to reckless pouring of blood from the mouth and nose." The practitioner might then go on to prescribe treatments designed to clear heat or supplement the Lung.
In TCM, a disease has two aspects: "bìng" and "zhèng". The former is often translated as "disease entity", "disease category", "illness", or simply "diagnosis". The latter, and more important one, is usually translated as "pattern" (or sometimes also as "syndrome"). For example, the disease entity of a common cold might present with a pattern of wind-cold in one person, and with the pattern of wind-heat in another.
From a scientific point of view, most of the disease entities (病; bìng) listed by TCM constitute symptoms. Examples include headache, cough, abdominal pain, constipation etc.
Since therapy will not be chosen according to the disease entity but according to the pattern, two people with the same disease entity but different patterns will receive different therapy. Vice versa, people with similar patterns might receive similar therapy even if their disease entities are different. This is called yì bìng tóng zhì, tóng bìng yì zhì (异病同治,同病异治; "different diseases, same treatment; same disease, different treatments").
In TCM, "pattern" (证; zhèng) refers to a "pattern of disharmony" or "functional disturbance" within the functional entities of which the TCM model of the body is composed. There are disharmony patterns of qi, xuě, the body fluids, the zàng-fǔ, and the meridians. They are ultimately defined by their symptoms and signs (i.e., for example, pulse and tongue findings).
In clinical practice, the identified pattern usually involves a combination of affected entities (compare with typical examples of patterns). The concrete pattern identified should account for all the symptoms a person has.
The Six Excesses (六淫; liù yín, sometimes also translated as "Pathogenic Factors", or "Six Pernicious Influences"; with the alternative term of 六邪; liù xié, – "Six Evils" or "Six Devils") are allegorical terms used to describe disharmony patterns displaying certain typical symptoms. These symptoms resemble the effects of six climatic factors. In the allegory, these symptoms can occur because one or more of those climatic factors (called 六气; liù qì, "the six qi") were able to invade the body surface and to proceed to the interior. This is sometimes used to draw causal relationships (i.e., prior exposure to wind/cold/etc. is identified as the cause of a disease), while other authors explicitly deny a direct cause-effect relationship between weather conditions and disease, pointing out that the Six Excesses are primarily descriptions of a certain combination of symptoms translated into a pattern of disharmony. It is undisputed, though, that the Six Excesses can manifest inside the body without an external cause. In this case, they might be denoted "internal", e.g., "internal wind" or "internal fire (or heat)".
The Six Excesses are Wind (风; fēng), Cold (寒; hán), Fire/Heat (火; huǒ), Dampness (湿; shī), Dryness (燥; zào), and Summerheat (暑; shǔ), each associated with characteristic clinical signs.
Six-Excesses-patterns can consist of only one or a combination of Excesses (e.g., wind-cold, wind-damp-heat). They can also transform from one into another.
For each of the functional entities (qi, xuĕ, zàng-fǔ, meridians etc.), typical disharmony patterns are recognized; for example: qi vacuity and qi stagnation in the case of qi; blood vacuity, blood stasis, and blood heat in the case of xuĕ; Spleen qi vacuity, Spleen yang vacuity, Spleen qi vacuity with down-bearing qi, Spleen qi vacuity with lack of blood containment, cold-damp invasion of the Spleen, damp-heat invasion of Spleen and Stomach in case of the Spleen zàng; wind/cold/damp invasion in the case of the meridians.
TCM gives detailed prescriptions of these patterns regarding their typical symptoms, mostly including characteristic tongue and/or pulse findings.
The process of determining which actual pattern is at hand is called 辩证 (biàn zhèng, usually translated as "pattern diagnosis", "pattern identification" or "pattern discrimination"). Generally, the first and most important step in pattern diagnosis is an evaluation of the present signs and symptoms on the basis of the "Eight Principles" (八纲; bā gāng). These eight principles refer to four pairs of fundamental qualities of a disease: exterior/interior, heat/cold, vacuity/repletion, and yin/yang. Of these, heat/cold and vacuity/repletion have the greatest clinical importance. The yin/yang quality, on the other hand, has the smallest importance and is viewed somewhat apart from the other three pairs, since it merely presents a general and vague conclusion regarding what other qualities are found.
After the fundamental nature of a disease in terms of the Eight Principles is determined, the investigation focuses on more specific aspects. By evaluating the present signs and symptoms against the background of typical disharmony patterns of the various entities, evidence is collected as to whether or how specific entities are affected; this evaluation can be done in various ways.
There are also three special pattern diagnosis systems used in case of febrile and infectious diseases only ("Six Channel system" or "six division pattern" [六经辩证; liù jīng biàn zhèng]; "Wei Qi Ying Xue system" or "four division pattern" [卫气营血辩证; weì qì yíng xuè biàn zhèng]; "San Jiao system" or "three burners pattern" [三焦辩证; sānjiaō biàn zhèng]).
Although TCM and its concept of disease do not strongly differentiate between cause and effect, pattern discrimination can include considerations regarding the disease cause; this is called 病因辩证 (bìngyīn biàn zhèng, "disease-cause pattern discrimination").
There are three fundamental categories of disease causes (三因; sān yīn) recognized: external causes (the Six Excesses), internal causes (the emotions), and causes that are neither external nor internal (e.g., irregular diet, overexertion, and trauma).
In TCM, there are five major diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. These are grouped into what is known as the "Four Pillars" of diagnosis, which are Inspection, Auscultation/Olfaction, Inquiry, and Palpation (望,聞,問,切).
Examination of the tongue and the pulse are among the principal diagnostic methods in TCM. Details of the tongue, including shape, size, color, texture, cracks, teeth marks, as well as tongue coating are all considered as part of tongue diagnosis. Various regions of the tongue's surface are believed to correspond to the zàng-fŭ organs. For example, redness on the tip of the tongue might indicate heat in the Heart, while redness on the sides of the tongue might indicate heat in the Liver.
Pulse palpation involves measuring the pulse both at a superficial and at a deep level at three different locations on the radial artery (Cun, Guan, Chi, located two fingerbreadths from the wrist crease, one fingerbreadth from the wrist crease, and right at the wrist crease, respectively, usually palpated with the index, middle and ring finger) of each arm, for a total of twelve pulses, all of which are thought to correspond with certain zàng-fŭ. The pulse is examined for several characteristics including rhythm, strength and volume, and described with qualities like "floating, slippery, bolstering-like, feeble, thready and quick"; each of these qualities indicates certain disease patterns. Learning TCM pulse diagnosis can take several years.
The term "herbal medicine" is somewhat misleading in that, while plant elements are by far the most commonly used substances in TCM, other, non-botanic substances are used as well: animal, human, fungi, and mineral products are also used. Thus, the term "medicinal" (instead of herb) may be used, although there is no scientific evidence that any of these compounds have medicinal effects.
There are roughly 13,000 compounds used in China and over 100,000 TCM recipes recorded in the ancient literature. Plant elements and extracts are by far the most common elements used. In the classic Handbook of Traditional Drugs from 1941, 517 drugs were listed – out of these, 45 were animal parts, and 30 were minerals.
Some animal parts used include cow gallstones, hornet nests, leeches, and scorpions. Other examples of animal parts include horn of the antelope or buffalo, deer antlers, testicles and penis bone of the dog, and snake bile. Some TCM textbooks still recommend preparations containing animal tissues, but there has been little research to justify the claimed clinical efficacy of many TCM animal products.
Some compounds can include the parts of endangered species, including tiger bones and rhinoceros horn, which is used for many ailments (though not as an aphrodisiac, as is commonly misunderstood in the West). The black market in rhinoceros horns (driven not just by TCM but also by unrelated status-seeking) has reduced the world's rhino population by more than 90 percent over the past 40 years. Concerns have also arisen over the use of pangolin scales, turtle plastron, seahorses, and the gill plates of mobula and manta rays.
Poachers hunt restricted or endangered species to supply the black market with TCM products. There is no scientific evidence of efficacy for tiger medicines. Concern that China was considering legalizing the trade in tiger parts prompted the 171-nation Convention on International Trade in Endangered Species (CITES) to endorse a decision opposing the resurgence of trade in tigers. Fewer than 30,000 saiga antelopes remain; their horns are exported to China for use in traditional fever therapies, and organized gangs illegally export them. The pressure on seahorses (Hippocampus spp.) used in traditional medicine is enormous; tens of millions of animals are unsustainably caught annually. Many species of syngnathid are currently on the IUCN Red List of Threatened Species or national equivalents.
Since TCM recognizes bear bile as a treatment compound, more than 12,000 Asiatic black bears are held in bear farms, where bile is collected from live bears via a surgical procedure, through a permanent hole in the abdomen leading to the gall bladder. This can cause severe pain and can lead to bears trying to kill themselves. As of 2012, approximately 10,000 bears were farmed in China for their bile, a practice that has spurred public outcry across the country. As of March 2020, bear bile as an ingredient of Tan Re Qing injection remained on the list of remedies recommended for treatment of "severe cases" of COVID-19 by the National Health Commission of China and the National Administration of Traditional Chinese Medicine.
The deer penis is believed to have therapeutic benefits according to traditional Chinese medicine. Tiger parts from poached animals include tiger penis, believed to improve virility, and tiger eyes. The illegal trade in tiger parts in China has driven the species to near-extinction because of its popularity in traditional medicine. Laws protecting even critically endangered species such as the Sumatran tiger fail to stop the display and sale of these items in open markets. Shark fins have been a part of traditional Chinese medicine for centuries and are traditionally regarded as beneficial for health in East Asia; the status of shark fin soup as an elite dish has led to huge demand with the increase of affluence in China, devastating shark populations. Shark finning is banned in many countries, but the trade is thriving in Hong Kong and China, where the fins are part of shark fin soup, a dish considered a delicacy, and used in some types of traditional Chinese medicine.
The tortoise (freshwater turtle, guiban) and turtle (Chinese softshell turtle, biejia) species used in traditional Chinese medicine are raised on farms, while restrictions are made on the accumulation and export of other endangered species. However, issues concerning the overexploitation of Asian turtles in China have not been completely solved. Australian scientists have developed methods to identify medicines containing DNA traces of endangered species. Finally, although donkeys are not an endangered species, sharp rises in exports of donkeys and donkey hide from Africa to China to make the traditional remedy ejiao have prompted export restrictions by some African countries.
Traditional Chinese medicine also includes some human parts: the classic Materia medica (Bencao Gangmu) describes (and criticizes) the use of 35 human body parts and excreta in medicines, including bones, fingernails, hair, dandruff, earwax, impurities on the teeth, feces, urine, sweat, and organs, but most are no longer in use.
Human placenta has been used as an ingredient in certain traditional Chinese medicines, including dried human placenta, known as "Ziheche", to treat infertility, impotence, and other conditions. The consumption of the human placenta is a potential source of infection.
The traditional categorizations and classifications that can still be found today include the Four Natures (cold, cool, warm, hot), the Five Flavors (pungent, sweet, sour, bitter, salty), and the meridians each substance is believed to enter.
As of 2007, there were not enough good-quality trials of herbal therapies to allow their effectiveness to be determined. A high percentage of relevant studies on traditional Chinese medicine are in Chinese databases. Fifty percent of systematic reviews on TCM did not search Chinese databases, which could lead to a bias in the results. Many systematic reviews of TCM interventions published in Chinese journals are incomplete, and some contain errors or are misleading. The herbs recommended by traditional Chinese practitioners in the US are unregulated.
With an eye to the enormous Chinese market, pharmaceutical companies have explored creating new drugs from traditional remedies. The journal Nature commented that "claims made on behalf of an uncharted body of knowledge should be treated with the customary skepticism that is the bedrock of both science and medicine."
There had been success in the 1970s, however, with the development of the antimalarial drug artemisinin, which is a processed extract of Artemisia annua, a herb traditionally used as a fever treatment. Artemisia annua has been used by Chinese herbalists in traditional Chinese medicines for 2,000 years. In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his Compendium of Materia Medica. Researcher Tu Youyou discovered that a low-temperature extraction process could isolate an effective antimalarial substance from the plant. Tu says she was influenced by a traditional Chinese herbal medicine source, The Handbook of Prescriptions for Emergency Treatments, written in 340 by Ge Hong, which states that this herb should be steeped in cold water. The extracted substance, once subject to detoxification and purification processes, is a usable antimalarial drug – a 2012 review found that artemisinin-based remedies were the most effective drugs for the treatment of malaria. For her work on malaria, Tu received the 2015 Nobel Prize in Physiology or Medicine. Despite global efforts to combat malaria, the disease remains a large burden on the world's population. Although the WHO recommends artemisinin-based remedies for treating uncomplicated malaria, growing resistance to the drug can no longer be ignored.
Also in the 1970s Chinese researcher Zhang TingDong and colleagues investigated the potential use of the traditionally used substance arsenic trioxide to treat acute promyelocytic leukemia (APL). Building on his work, research both in China and the West eventually led to the development of the drug Trisenox, which was approved for leukemia treatment by the FDA in 2000.
Huperzine A, an extract from the herb Huperzia serrata, is under preliminary research as a possible therapeutic for Alzheimer's disease, but poor methodological quality of the research restricts conclusions about its effectiveness.
Ephedrine in its natural form, known as má huáng (麻黄) in TCM, has been documented in China since the Han dynasty (206 BCE – 220 CE) as an antiasthmatic and stimulant. In 1885, the chemical synthesis of ephedrine was first accomplished by the Japanese organic chemist Nagai Nagayoshi, based on his research on Japanese and Chinese traditional herbal medicines.
Pien tze huang was first documented in the Ming dynasty.
A 2012 systematic review found there is a lack of available cost-effectiveness evidence in TCM.
From the earliest records regarding the use of compounds to today, the toxicity of certain substances has been described in all Chinese materiae medicae. Since TCM has become more popular in the Western world, there are increasing concerns about the potential toxicity of many traditional Chinese plants, animal parts and minerals. Traditional Chinese herbal remedies are conveniently available from grocery stores in most Chinese neighborhoods; some of these items may contain toxic ingredients, are imported into the U.S. illegally, and are associated with claims of therapeutic benefit without evidence. For most compounds, efficacy and toxicity testing are based on traditional knowledge rather than laboratory analysis. In some cases the toxicity could be confirmed by modern research (e.g., in scorpion); in some cases it could not (e.g., in Curculigo). Traditional herbal medicines can contain extremely toxic chemicals, heavy metals, and naturally occurring toxins, which can cause illness, exacerbate pre-existing poor health or result in death. Botanical misidentification of plants can cause toxic reactions in humans. The description of some plants used in TCM has changed over time, leading to unintended poisoning through the use of the wrong plants. A further concern is the contamination of herbal medicines with microorganisms and fungal toxins, including aflatoxin. Traditional herbal medicines are sometimes contaminated with toxic heavy metals, including lead, arsenic, mercury and cadmium, which pose serious health risks to consumers. Adulteration of some herbal medicine preparations with conventional drugs that may cause serious adverse effects, such as corticosteroids, phenylbutazone, phenytoin, and glibenclamide, has also been reported.
Substances known to be potentially dangerous include Aconitum, secretions from the Asiatic toad, powdered centipede, the Chinese beetle (Mylabris phalerata), certain fungi, Aristolochia, arsenic sulfide (realgar), mercury sulfide, and cinnabar. Asbestos ore (actinolite; Yang Qi Shi, 阳起石) is used to treat impotence in TCM. Galena (lead(II) sulfide) is known to be toxic due to its high lead content. Lead, mercury, arsenic, copper, cadmium, and thallium have been detected in TCM products sold in the U.S. and China.
To avoid its toxic adverse effects, Xanthium sibiricum must be processed. Hepatotoxicity has been reported with products containing Reynoutria multiflora (synonym Polygonum multiflorum), glycyrrhizin, Senecio and Symphytum. Herbs indicated as being hepatotoxic include Dictamnus dasycarpus, Astragalus membranaceus, and Paeonia lactiflora. Contrary to popular belief, Ganoderma lucidum mushroom extract, as an adjuvant for cancer immunotherapy, appears to have the potential for toxicity. A 2013 review suggested that although the antimalarial herb Artemisia annua may not cause hepatotoxicity, haematotoxicity, or hyperlipidemia, it should be used cautiously during pregnancy due to a potential risk of embryotoxicity at a high dose.
However, many adverse reactions are due to misuse or abuse of Chinese medicine. For example, the misuse of the dietary supplement Ephedra (containing ephedrine) can lead to adverse events including gastrointestinal problems as well as sudden death from cardiomyopathy. Products adulterated with pharmaceuticals for weight loss or erectile dysfunction are one of the main concerns. Chinese herbal medicine has been a major cause of acute liver failure in China.
The harvesting of guano from bat caves (yemingsha) brings workers into close contact with these animals, increasing the risk of zoonosis. The Chinese virologist Shi Zhengli has identified dozens of SARS-like coronaviruses in samples of bat droppings.
Acupuncture is the insertion of needles into superficial structures of the body (skin, subcutaneous tissue, muscles) – usually at acupuncture points (acupoints) – and their subsequent manipulation; this aims at influencing the flow of qi. According to TCM it relieves pain and treats (and prevents) various diseases. The US FDA classifies single-use acupuncture needles as Class II medical devices, under CFR 21.
Acupuncture is often accompanied by moxibustion – the Chinese characters for acupuncture (针灸; 針灸; zhēnjiǔ) literally meaning "acupuncture-moxibustion" – which involves burning mugwort on or near the skin at an acupuncture point. According to the American Cancer Society, "available scientific evidence does not support claims that moxibustion is effective in preventing or treating cancer or any other disease".
In electroacupuncture, an electric current is applied to the needles once they are inserted, to further stimulate the respective acupuncture points.
A recent historian of Chinese medicine remarked that it is "nicely ironic that the specialty of acupuncture – arguably the most questionable part of their medical heritage for most Chinese at the start of the twentieth century – has become the most marketable aspect of Chinese medicine." She found that acupuncture as we know it today has hardly been in existence for sixty years. Moreover, the fine, filiform needle we think of as the acupuncture needle today was not widely used a century ago. Present-day acupuncture was developed in the 1930s and put into wide practice only as late as the 1960s.
A 2013 editorial in the American journal Anesthesia and Analgesia stated that acupuncture studies produced inconsistent results (i.e., acupuncture relieved pain in some conditions but had no effect in other, very similar conditions), which suggests the presence of false positive results. These may be caused by factors like biased study design, poor blinding, and the classification of electrified needles (a type of TENS) as a form of acupuncture. The inability to find consistent results despite more than 3,000 studies, the editorial continued, suggests that the treatment seems to be a placebo effect and the existing equivocal positive results are the type of noise one expects to see after a large number of studies are performed on an inert therapy. The editorial concluded that the best controlled studies showed a clear pattern, in which the outcome does not rely upon needle location or even needle insertion, and since "these variables are those that define acupuncture, the only sensible conclusion is that acupuncture does not work."
According to the US NIH National Cancer Institute, a review of 17,922 patients reported that real acupuncture relieved muscle and joint pain, caused by aromatase inhibitors, much better than sham acupuncture. Regarding cancer patients, the review hypothesized that acupuncture may cause physical responses in nerve cells, the pituitary gland, and the brain – releasing proteins, hormones, and chemicals that are proposed to affect blood pressure, body temperature, immune activity, and endorphin release.
A 2012 meta-analysis concluded that the mechanisms of acupuncture "are clinically relevant, but that an important part of these total effects is not due to issues considered to be crucial by most acupuncturists, such as the correct location of points and depth of needling ... [but is] ... associated with more potent placebo or context effects". Commenting on this meta-analysis, both Edzard Ernst and David Colquhoun said the results were of negligible clinical significance.
A 2011 overview of Cochrane reviews found evidence that suggests acupuncture is effective for some but not all kinds of pain. A 2010 systematic review found that there is evidence "that acupuncture provides a short-term clinically relevant effect when compared with a waiting list control or when acupuncture is added to another intervention" in the treatment of chronic low back pain. Two review articles discussing the effectiveness of acupuncture, from 2008 and 2009, have concluded that there is not enough evidence to conclude that it is effective beyond the placebo effect.
Acupuncture is generally safe when administered using Clean Needle Technique (CNT). Although serious adverse effects are rare, acupuncture is not without risk. Severe adverse effects, including very rarely death (5 case reports), have been reported.
Tui na (推拿) is a form of massage, based on the assumptions of TCM, from which shiatsu is thought to have evolved. Techniques employed may include thumb presses, rubbing, percussion, and assisted stretching.
Qìgōng (气功; 氣功) is a TCM system of exercise and meditation that combines regulated breathing, slow movement, and focused awareness, purportedly to cultivate and balance qi. One branch of qigong is qigong massage, in which the practitioner combines massage techniques with awareness of the acupuncture channels and points.
Qi is air, breath, energy, or a primordial life source that is neither matter nor spirit, while gong is skillful movement, work, or exercise of the qi.
Cupping (拔罐; báguàn) is a type of Chinese massage, consisting of placing several glass "cups" (open spheres) on the body. A match is lit and placed inside the cup and then removed before the cup is placed against the skin. The heated air in the cup expands, and as it cools after the cup is placed on the skin, the resulting lower pressure inside the cup makes it stick to the skin via suction. When combined with massage oil, the cups can be slid around the back, offering "reverse-pressure massage".
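The suction can be illustrated with a rough worked estimate (an illustrative sketch only; the temperatures are assumed values, not measurements from any source). If the air in the cup is sealed at about 90 °C (363 K) and then cools at constant volume to near skin temperature, 35 °C (308 K), Gay-Lussac's law gives

$$P_2 = P_1 \frac{T_2}{T_1} \approx 101\ \mathrm{kPa} \times \frac{308}{363} \approx 86\ \mathrm{kPa},$$

a deficit of roughly 15 kPa relative to atmospheric pressure, which is what presses the cup against the skin.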
Gua sha (刮痧; guāshā) involves abrading the skin with pieces of smooth jade, bone, animal tusk or horn, or smooth stone until red spots and then bruising cover the treated area. It is believed to treat almost any ailment. The red spots and bruising take three to ten days to heal, and there is often some soreness in the treated area.
Diē-dǎ (跌打) or Dit Da, is a traditional Chinese bone-setting technique, usually practiced by martial artists who know aspects of Chinese medicine that apply to the treatment of trauma and injuries such as bone fractures, sprains, and bruises. Some of these specialists may also use or recommend other disciplines of Chinese medical therapies if serious injury is involved. Such practice of bone-setting (正骨; 整骨) is not common in the West.
The concepts yin and yang are associated with different classes of foods, and tradition considers it important to consume them in a balanced fashion.
Many governments have enacted laws to regulate TCM practice.
From 1 July 2012, Chinese medicine practitioners must be registered under the national registration and accreditation scheme with the Chinese Medicine Board of Australia and meet the Board's Registration Standards in order to practice in Australia.
TCM is regulated in five provinces in Canada: Alberta, British Columbia, Ontario, Quebec, and Newfoundland & Labrador.
The National Administration of Traditional Chinese Medicine was created in 1949; it absorbed existing TCM management in 1986, with major changes following in 1998.
China's National People's Congress Standing Committee passed the country's first law on TCM in 2016, which came into effect on 1 July 2017. The new law standardized TCM certifications by requiring TCM practitioners to (i) pass exams administered by provincial-level TCM authorities, and (ii) obtain recommendations from two certified practitioners. TCM products and services can be advertised only with approval from the local TCM authority.
During British rule, Chinese medicine practitioners in Hong Kong were not recognized as "medical doctors", which means they could not issue prescription drugs, give injections, etc. However, TCM practitioners could register and operate TCM as "herbalists". The Chinese Medicine Council of Hong Kong was established in 1999. It regulates the compounds and professional standards for TCM practitioners. All TCM practitioners in Hong Kong are required to register with the council. The eligibility for registration includes a recognised 5-year university degree of TCM, a 30-week minimum supervised clinical internship, and passing the licensing exam.
Currently, the approved Chinese medicine institutions are the University of Hong Kong (HKU), the Chinese University of Hong Kong (CUHK), and Hong Kong Baptist University (HKBU).
The Portuguese Macau government seldom interfered in the affairs of Chinese society, including with regard to regulations on the practice of TCM. There were a few TCM pharmacies in Macau during the colonial period. In 1994, the Portuguese Macau government published Decree-Law no. 53/94/M that officially started to regulate the TCM market. After the sovereign handover, the Macau S.A.R. government also published regulations on the practice of TCM. In 2000, Macau University of Science and Technology and Nanjing University of Traditional Chinese Medicine established the Macau College of Traditional Chinese Medicine to offer a degree course in Chinese medicine.
In Macau, the legitimacy of Chinese medicine is not built upon "miracle making". Instead, it is achieved through a celebration of cultural tradition rejuvenated with discourses of nationalism and modernity, and through the mutual constructions of medical references between doctors and patients.
In 2022, a new law regulating TCM, Law no. 11/2021, came into effect. The same law also repealed Decree-Law no. 53/94/M.
All traditional medicines, including TCM, are regulated by the Indonesian Minister of Health Regulation of 2013 on traditional medicine. A traditional medicine license (Surat Izin Pengobatan Tradisional – SIPT) is granted to practitioners whose methods are recognized as safe and may benefit health. TCM clinics are registered, but there is no explicit regulation for them. The only TCM method that is accepted by medical logic and empirically proven is acupuncture; acupuncturists can obtain the SIPT and participate in health care facilities.
Under the Medical Service Act (의료법/醫療法), an oriental medical doctor, whose obligation is to administer oriental medical treatment and provide guidance for health based on oriental medicine, shall be treated in the same manner as a medical doctor or dentist.
The Korea Institute of Oriental Medicine is the top research center of TCM in Korea.
The Traditional and Complementary Medicine Bill was passed by parliament in 2012 establishing the Traditional and Complementary Medicine Council to register and regulate traditional and complementary medicine practitioners, including TCM practitioners as well as other traditional and complementary medicine practitioners such as those in traditional Malay medicine and traditional Indian medicine.
There are no specific regulations in the Netherlands on TCM; TCM is neither prohibited nor recognised by the government of the Netherlands. Chinese herbs, as well as Chinese herbal products used in TCM, are classified as foods and food supplements, and can be imported into the Netherlands and marketed as such without any type of registration or notification to the government.
Despite this status, some private health insurance companies reimburse a certain amount of annual costs for acupuncture treatments; this depends on one's insurance policy, as not all policies cover acupuncture, and on whether the acupuncture practitioner is a member of one of the professional organisations recognised by private health insurance companies. The recognized professional organizations include the Nederlandse Vereniging voor Acupunctuur (NVA), Nederlandse Artsen Acupunctuur Vereniging (NAAV), ZHONG (Nederlandse Vereniging voor Traditionele Chinese Geneeskunde), Nederlandse Beroepsvereniging Chinese Geneeswijzen Yi (NBCG Yi), and Wetenschappelijke Artsen Vereniging voor Acupunctuur in Nederland (WAVAN).
Although there are no regulatory standards for the practice of TCM in New Zealand, acupuncture was included in the Governmental Accident Compensation Corporation (ACC) Act in 1990. This inclusion allowed qualified and professionally registered acupuncturists to provide subsidised care and treatment to citizens, residents, and temporary visitors for work- or sports-related injuries that occurred in New Zealand. The two bodies for the regulation of acupuncture and the attainment of ACC treatment-provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority.
The TCM Practitioners Act was passed by Parliament in 2000, and the TCM Practitioners Board was established in 2001 as a statutory board under the Ministry of Health to register and regulate TCM practitioners. The requirements for registration include: possession of a diploma or degree from a TCM educational institution/university on a gazetted list; either structured TCM clinical training at an approved local TCM educational institution, or foreign TCM registration together with supervised TCM clinical attachment/practice at an approved local TCM clinic; and, upon meeting these requirements, passing the Singapore TCM Physicians Registration Examination (STRE) conducted by the TCM Practitioners Board.
In 2024, Nanyang Technological University will offer the four-year Bachelor of Chinese Medicine programme, which is the first local programme accredited by the Ministry of Health.
In Taiwan, TCM practitioners are physicians and are regulated by the Physicians Act. They are able to diagnose, write prescriptions, and dispense Chinese medicine independently. Under current law, those who wish to qualify for the Chinese medicine exam must have obtained a seven-year university degree in TCM.
The National Research Institute of Chinese Medicine, established in 1963, is the largest Chinese herbal medicine research center in Taiwan.
As of July 2012, only six states lack legislation to regulate the professional practice of TCM: Alabama, Kansas, North Dakota, South Dakota, Oklahoma, and Wyoming. In 1976, California established an Acupuncture Board and became the first state to license professional acupuncturists. | [
{
"paragraph_id": 0,
"text": "Traditional Chinese medicine (TCM) is an alternative medical practice drawn from traditional medicine in China. It has been described as \"fraught with pseudoscience\", with the majority of its treatments having no logical mechanism of action.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Medicine in traditional China encompassed a range of sometimes competing health and healing practices, folk beliefs, literati theory and Confucian philosophy, herbal remedies, food, diet, exercise, medical specializations, and schools of thought. In the early twentieth century, Chinese cultural and political modernizers worked to eliminate traditional practices as backward and unscientific. Traditional practitioners then selected elements of philosophy and practice and organized them into what they called \"Chinese medicine\" (Chinese: 中医 Zhongyi). In the 1950s, the Chinese government sponsored the integration of Chinese and Western medicine, and in the Great Proletarian Cultural Revolution of the 1960s, promoted Chinese medicine as inexpensive and popular. After the opening of relations between the United States and China after 1972, there was great interest in the West for what is now called traditional Chinese medicine (TCM).",
"title": ""
},
{
"paragraph_id": 2,
"text": "TCM is said to be based on such texts as Huangdi Neijing (The Inner Canon of the Yellow Emperor), and Compendium of Materia Medica, a sixteenth-century encyclopedic work, and includes various forms of herbal medicine, acupuncture, cupping therapy, gua sha, massage (tui na), bonesetter (die-da), exercise (qigong), and dietary therapy. TCM is widely used in the Sinosphere. One of the basic tenets is that the body's qi is circulating through channels called meridians having branches connected to bodily organs and functions. There is no evidence that meridians or vital energy exist. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to the humoral theory of ancient Greece and ancient Rome.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The demand for traditional medicines in China has been a major generator of illegal wildlife smuggling, linked to the killing and smuggling of endangered animals.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Scholars in the history of medicine in China distinguish its doctrines and practice from those of present-day TCM. As Ian Johnson notes, the term \"Traditional Chinese Medicine\" was coined by \"party propagandists\" and first appeared in English in 1955. Nathan Sivin criticizes attempts to treat medicine and medical practices in traditional China as if they were a single system. Instead, he says, there were 2,000 years of \"medical system in turmoil\" and speaks of a \"myth of an unchanging medical tradition.\" He urges that \"Traditional medicine translated purely into terms of modern medicine becomes partly nonsensical, partly irrelevant, and partly mistaken; that is also true the other way around, a point easily overlooked.\" TJ Hinrichs observes that people in modern Western societies divide healing practices into biomedicine for the body, psychology for the mind, and religion for the spirit, but these distinctions are inadequate to describe medical concepts among Chinese historically and to a considerable degree today.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The medical anthropologist Charles Leslie writes that Chinese, Greco-Arabic, and Indian traditional medicines were all grounded in systems of correspondence that aligned the organization of society, the universe, and the human body and other forms of life into an \"all-embracing order of things.\" Each of these traditional systems was organized with such qualities as heat and cold, wet and dry, light and darkness, qualities that also align the seasons, compass directions, and the human cycle of birth, growth, and death. They provided, Leslie continued, a \"comprehensive way of conceiving patterns that ran through all of nature,\" and they \"served as a classificatory and mnemonic device to observe health problems and to reflect upon, store, and recover empirical knowledge,\" but they were also \"subject to stultifying theoretical elaboration, self-deception, and dogmatism.\"",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The doctrines of Chinese medicine are rooted in books such as the Yellow Emperor's Inner Canon and the Treatise on Cold Damage, as well as in cosmological notions such as yin–yang and the five phases. The \"Documentation of Chinese materia medica\" (CMM) dates back to around 1,100 BCE when only a few dozen drugs were described. By the end of the 16th century, the number of drugs documented had reached close to 1,900. And by the end of the last century, published records of CMM had reached 12,800 drugs.\" Starting in the 1950s, these precepts were standardized in the People's Republic of China, including attempts to integrate them with modern notions of anatomy and pathology. In the 1950s, the Chinese government promoted a systematized form of TCM.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Traces of therapeutic activities in China date from the Shang dynasty (14th–11th centuries BCE). Though the Shang did not have a concept of \"medicine\" as distinct from other health practices, their oracular inscriptions on bones and tortoise shells refer to illnesses that affected the Shang royal family: eye disorders, toothaches, bloated abdomen, and such. Shang elites usually attributed them to curses sent by their ancestors. There is currently no evidence that the Shang nobility used herbal remedies.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Stone and bone needles found in ancient tombs led Joseph Needham to speculate that acupuncture might have been carried out in the Shang dynasty. This being said, most historians now make a distinction between medical lancing (or bloodletting) and acupuncture in the narrower sense of using metal needles to attempt to treat illnesses by stimulating points along circulation channels (\"meridians\") in accordance with beliefs related to the circulation of \"Qi\". The earliest evidence for acupuncture in this sense dates to the second or first century BCE.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Yellow Emperor's Inner Canon (Huangdi Neijing), the oldest received work of Chinese medical theory, was compiled during the Han dynasty around the first century BCE on the basis of shorter texts from different medical lineages. Written in the form of dialogues between the legendary Yellow Emperor and his ministers, it offers explanations on the relation between humans, their environment, and the cosmos, on the contents of the body, on human vitality and pathology, on the symptoms of illness, and on how to make diagnostic and therapeutic decisions in light of all these factors. Unlike earlier texts like Recipes for Fifty-Two Ailments, which was excavated in the 1970s from the Mawangdui tomb that had been sealed in 168 BCE, the Inner Canon rejected the influence of spirits and the use of magic. It was also one of the first books in which the cosmological doctrines of Yinyang and the Five Phases were brought to a mature synthesis.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The Treatise on Cold Damage Disorders and Miscellaneous Illnesses (Shang Han Lun) was collated by Zhang Zhongjing sometime between 196 and 220 CE; at the end of the Han dynasty. Focusing on drug prescriptions rather than acupuncture, it was the first medical work to combine Yinyang and the Five Phases with drug therapy. This formulary was also the earliest public Chinese medical text to group symptoms into clinically useful \"patterns\" (zheng 證) that could serve as targets for therapy. Having gone through numerous changes over time, the formulary now circulates as two distinct books: the Treatise on Cold Damage Disorders and the Essential Prescriptions of the Golden Casket, which were edited separately in the eleventh century, under the Song dynasty.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Nanjing or \"Classic of Difficult Issues,\" originally called \"The Yellow Emperor Eighty-one Nan Jing\", ascribed to Bian Que in the eastern Han dynasty. This book was compiled in the form of question-and-answer explanations. A total of 81 questions have been discussed. Therefore, it is also called \"Eighty-One Nan\". The book is based on basic theory and has also analyzed some disease certificates. Questions one to twenty-two is about pulse study, questions twenty-three to twenty-nine is about meridian study, questions thirty to forty-seven is related to urgent illnesses, questions forty-eight to sixty-one is related to serious diseases, questions sixty-two to sixty-eight is related to acupuncture points, and questions sixty-nine to eighty-one is related to the needlepoint methods.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The book is credited as developing its own path, while also inheriting the theories from Huangdi Neijing. The content includes physiology, pathology, diagnosis, treatment contents, and a more essential and specific discussion of pulse diagnosis. It has become one of the four classics for Chinese medicine practitioners to learn from and has impacted the medical development in China.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Shennong Ben Cao Jing is one of the earliest written medical books in China. Written during the Eastern Han Dynasty between 200 and 250 CE, it was the combined effort of practitioners in the Qin and Han Dynasties who summarized, collected and compiled the results of pharmacological experience during their time periods. It was the first systematic summary of Chinese herbal medicine. Most of the pharmacological theories and compatibility rules and the proposed \"seven emotions and harmony\" principle have played a role in the practice of medicine for thousands of years. Therefore, it has been a textbook for medical workers in modern China. The full text of Shennong Ben Cao Jing in English can be found online.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In the centuries that followed, several shorter books tried to summarize or systematize the contents of the Yellow Emperor's Inner Canon. The Canon of Problems (probably second century CE) tried to reconcile divergent doctrines from the Inner Canon and developed a complete medical system centered on needling therapy. The AB Canon of Acupuncture and Moxibustion (Zhenjiu jiayi jing 針灸甲乙經, compiled by Huangfu Mi sometime between 256 and 282 CE) assembled a consistent body of doctrines concerning acupuncture; whereas the Canon of the Pulse (Maijing 脈經; c. 280) presented itself as a \"comprehensive handbook of diagnostics and therapy.\"",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Around 900–1000 AD, Chinese were the first to develop a form of vaccination, known as variolation or inoculation, to prevent smallpox. Chinese physicians had realised that when healthy people were exposed to smallpox scab tissue, they had a smaller chance of being infected by the disease later on. The common methods of inoculation at the time was through crushing smallpox scabs into powder and breathing it through the nose.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Prominent medical scholars of the post-Han period included Tao Hongjing (456–536), Sun Simiao of the Sui and Tang dynasties, Zhang Jiegu (c. 1151–1234), and Li Shizhen (1518–1593).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1950, Chinese Communist Party (CCP) chairman Mao Zedong announced support of traditional Chinese medicine, but he did not personally believe in and did not use it. In 1952, the president of the Chinese Medical Association said that, \"This One Medicine, will possess a basis in modern natural sciences, will have absorbed the ancient and the new, the Chinese and the foreign, all medical achievements—and will be China's New Medicine!\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "During the Cultural Revolution (1966–1976) the CCP and the government emphasized modernity, cultural identity and China's social and economic reconstruction and contrasted them to the colonial and feudal past. The government established a grassroots health care system as a step in the search for a new national identity and tried to revitalize traditional medicine and made large investments in traditional medicine to try to develop affordable medical care and public health facilities. The Ministry of Health directed health care throughout China and established primary care units. Chinese physicians trained in Western medicine were required to learn traditional medicine, while traditional healers received training in modern methods. This strategy aimed to integrate modern medical concepts and methods and revitalize appropriate aspects of traditional medicine. Therefore, traditional Chinese medicine was re-created in response to Western medicine.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 1968, the CCP supported a new system of health care delivery for rural areas. Villages were assigned a barefoot doctor (a medical staff with basic medical skills and knowledge to deal with minor illnesses) responsible for basic medical care. The medical staff combined the values of traditional China with modern methods to provide health and medical care to poor farmers in remote rural areas. The barefoot doctors became a symbol of the Cultural Revolution, for the introduction of modern medicine into villages where traditional Chinese medicine services were used.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The State Intellectual Property Office (now known as CNIPA) established a database of patents granted for traditional Chinese medicine.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In the second decade of the twenty-first century, Chinese Communist Party general secretary Xi Jinping strongly supported TCM, calling it a \"gem\". As of May 2011, in order to promote TCM worldwide, China had signed TCM partnership agreements with over 70 countries. His government pushed to increase its use and the number of TCM-trained doctors and announced that students of TCM would no longer be required to pass examinations in Western medicine. Chinese scientists and researchers, however, expressed concern that TCM training and therapies would receive equal support with Western medicine. They also criticized a reduction in government testing and regulation of the production of TCMs, some of which were toxic. Government censors have removed Internet posts that question TCM. In 2020 Beijing drafted a law project outlawing criticism of TCM. According to a response to a BMJ paper, TCM is declining in mainland China.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "At the beginning of Hong Kong's opening up, Western medicine was not yet popular, and Western medicine doctors were mostly foreigners; local residents mostly relied on Chinese medicine practitioners. In 1841, the British government of Hong Kong issued an announcement pledging to govern Hong Kong residents in accordance with all the original rituals, customs and private legal property rights. As traditional Chinese medicine had always been used in China, the use of traditional Chinese medicine was not regulated.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The establishment in 1870 of the Tung Wah Hospital was the first use of Chinese medicine for the treatment in Chinese hospitals providing free medical services. As the promotion of Western medicine by the British government started from 1940, Western medicine started being popular among Hong Kong population. In 1959, Hong Kong had researched the use of traditional Chinese medicine to replace Western medicine.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Historians have noted two key aspects of Chinese medical history: understanding conceptual differences when translating the term 身, and observing the history from the perspective of cosmology rather than biology.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "In Chinese classical texts, the term 身 is the closest historical translation to the English word \"body\" because it sometimes refers to the physical human body in terms of being weighed or measured, but the term is to be understood as an \"ensemble of functions\" encompassing both the human psyche and emotions. This concept of the human body is opposed to the European duality of a separate mind and body. It is critical for scholars to understand the fundamental differences in concepts of the body in order to connect the medical theory of the classics to the \"human organism\" it is explaining.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Chinese scholars established a correlation between the cosmos and the \"human organism.\" The basic components of cosmology, qi, yin yang and the Five Phase theory, were used to explain health and disease in texts such as Huangdi neijing. Yin and yang are the changing factors in cosmology, with qi as the vital force or energy of life. The Five Phase theory (Wuxing) of the Han dynasty contains the elements wood, fire, earth, metal, and water. By understanding medicine from a cosmology perspective, historians better understand Chinese medical and social classifications such as gender, which was defined by a domination or remission of yang in terms of yin.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "These two distinctions are imperative when analyzing the history of traditional Chinese medical science.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "A majority of Chinese medical history written after the classical canons comes in the form of primary source case studies where academic physicians record the illness of a particular person and the healing techniques used, as well as their effectiveness. Historians have noted that Chinese scholars wrote these studies instead of \"books of prescriptions or advice manuals;\" in their historical and environmental understanding, no two illnesses were alike so the healing strategies of the practitioner was unique every time to the specific diagnosis of the patient. Medical case studies existed throughout Chinese history, but \"individually authored and published case history\" was a prominent creation of the Ming dynasty. An example such case studies would be the literati physician, Cheng Congzhou, collection of 93 cases published in 1644.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Historians of science have developed the study of medicine in traditional China into a field with its own scholarly associations, journals, graduate programs, and debates with each other. Many distinguish \"medicine in traditional China\" from the recent Traditional Chinese Medicine (TCM), which took elements from traditional texts and practices to construct a systematic body. Paul Unschuld, for instance, sees a \"departure of TCM from its historical origins.\" What is called \"Traditional Chinese Medicine\" and practiced today in China and the West is not thousands of years old, but recently constructed using selected traditional terms, some of which have been taken out of context, some badly misunderstood. He has criticized Chinese and Western popular books for selective use of evidence, choosing only those works or parts of historical works that seem to lead to modern medicine, ignoring those elements that do not now seem to be effective.",
"title": "Critique"
},
{
"paragraph_id": 30,
"text": "A 2007 editorial the journal Nature wrote that TCM \"remains poorly researched and supported, and most of its treatments have no logical mechanism of action.\" Critics say that TCM theory and practice have no basis in modern science, and TCM practitioners do not agree on what diagnosis and treatments should be used for any given person. A Nature editorial described TCM as \"fraught with pseudoscience\". A review of the literature in 2008 found that scientists are \"still unable to find a shred of evidence\" according to standards of science-based medicine for traditional Chinese concepts such as qi, meridians, and acupuncture points, and that the traditional principles of acupuncture are deeply flawed. \"Acupuncture points and meridians are not a reality\", the review continued, but \"merely the product of an ancient Chinese philosophy\". In June 2019, the World Health Organization included traditional Chinese medicine in a global diagnostic compendium, but a spokesman said this was \"not an endorsement of the scientific validity of any Traditional Medicine practice or the efficacy of any Traditional Medicine intervention.\"",
"title": "Critique"
},
{
"paragraph_id": 31,
"text": "A 2012 review of cost-effectiveness research for TCM found that studies had low levels of evidence, with no beneficial outcomes. Pharmaceutical research on the potential for creating new drugs from traditional remedies has few successful results. Proponents suggest that research has so far missed key features of the art of TCM, such as unknown interactions between various ingredients and complex interactive biological systems. One of the basic tenets of TCM is that the body's qi (sometimes translated as vital energy) is circulating through channels called meridians having branches connected to bodily organs and functions. The concept of vital energy is pseudoscientific. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to Classical humoral theory.",
"title": "Critique"
},
{
"paragraph_id": 32,
"text": "TCM has also been controversial within China. In 2006, the Chinese philosopher Zhang Gongyao triggered a national debate with an article entitled \"Farewell to Traditional Chinese Medicine\", arguing that TCM was a pseudoscience that should be abolished in public healthcare and academia. The Chinese government however, took the stance that TCM is a science and continued to encourage its development.",
"title": "Critique"
},
{
"paragraph_id": 33,
"text": "There are concerns over a number of potentially toxic plants, animal parts, and mineral Chinese compounds, as well as the facilitation of disease. Trafficked and farm-raised animals used in TCM are a source of several fatal zoonotic diseases. There are additional concerns over the illegal trade and transport of endangered species including rhinoceroses and tigers, and the welfare of specially farmed animals, including bears.",
"title": "Critique"
},
{
"paragraph_id": 34,
"text": "Traditional Chinese medicine (TCM) is a broad range of medicine practices sharing common concepts which have been developed in China and are based on a tradition of more than 2,000 years, including various forms of herbal medicine, acupuncture, massage (tui na), exercise (qigong), and dietary therapy. It is primarily used as a complementary alternative medicine approach. TCM is widely used in China and it is also used in the West. Its philosophy is based on Yinyangism (i.e., the combination of Five Phases theory with Yin–Yang theory), which was later absorbed by Daoism. Philosophical texts influenced TCM, mostly by being grounded in the same theories of qi, yin-yang and wuxing and microcosm-macrocosm analogies.",
"title": "Philosophical background"
},
{
"paragraph_id": 35,
"text": "Yin and yang are ancient Chinese deductive reasoning concepts used within Chinese medical diagnosis which can be traced back to the Shang dynasty (1600–1100 BCE). They represent two abstract and complementary aspects that every phenomenon in the universe can be divided into. Primordial analogies for these aspects are the sun-facing (yang) and the shady (yin) side of a hill. Two other commonly used representational allegories of yin and yang are water and fire. In the yin–yang theory, detailed attributions are made regarding the yin or yang character of things:",
"title": "Philosophical background"
},
{
"paragraph_id": 36,
"text": "The concept of yin and yang is also applicable to the human body; for example, the upper part of the body and the back are assigned to yang, while the lower part of the body is believed to have the yin character. Yin and yang characterization also extends to the various body functions, and – more importantly – to disease symptoms (e.g., cold and heat sensations are assumed to be yin and yang symptoms, respectively). Thus, yin and yang of the body are seen as phenomena whose lack (or over-abundance) comes with characteristic symptom combinations:",
"title": "Philosophical background"
},
{
"paragraph_id": 37,
"text": "TCM also identifies drugs believed to treat these specific symptom combinations, i.e., to reinforce yin and yang.",
"title": "Philosophical background"
},
{
"paragraph_id": 38,
"text": "Strict rules are identified to apply to the relationships between the Five Phases in terms of sequence, of acting on each other, of counteraction, etc. All these aspects of Five Phases theory constitute the basis of the zàng-fǔ concept, and thus have great influence regarding the TCM model of the body. Five Phase theory is also applied in diagnosis and therapy.",
"title": "Philosophical background"
},
{
"paragraph_id": 39,
"text": "Correspondences between the body and the universe have historically not only been seen in terms of the Five Elements, but also of the \"Great Numbers\" (大數; dà shū) For example, the number of acu-points has at times been seen to be 365, corresponding with the number of days in a year; and the number of main meridians–12–has been seen as corresponding with the number of rivers flowing through the ancient Chinese empire.",
"title": "Philosophical background"
},
{
"paragraph_id": 40,
"text": "TCM \"holds that the body's vital energy (chi or qi) circulates through channels, called meridians, that have branches connected to bodily organs and functions.\" Its view of the human body is only marginally concerned with anatomical structures, but focuses primarily on the body's functions (such as digestion, breathing, temperature maintenance, etc.):",
"title": "Model of the body"
},
{
"paragraph_id": 41,
"text": "These functions are aggregated and then associated with a primary functional entity – for instance, nourishment of the tissues and maintenance of their moisture are seen as connected functions, and the entity postulated to be responsible for these functions is xiě (blood). These functional entities thus constitute concepts rather than something with biochemical or anatomical properties.",
"title": "Model of the body"
},
{
"paragraph_id": 42,
"text": "The primary functional entities used by traditional Chinese medicine are qì, xuě, the five zàng organs, the six fǔ organs, and the meridians which extend through the organ systems. These are all theoretically interconnected: each zàng organ is paired with a fǔ organ, which are nourished by the blood and concentrate qi for a particular function, with meridians being extensions of those functional systems throughout the body.",
"title": "Model of the body"
},
{
"paragraph_id": 43,
"text": "Concepts of the body and of disease used in TCM are pseudoscientific, similar to Mediterranean humoral theory. TCM's model of the body is characterized as full of pseudoscience. Some practitioners no longer consider yin and yang and the idea of an energy flow to apply. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points. It is a generally held belief within the acupuncture community that acupuncture points and meridians structures are special conduits for electrical signals but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. The scientific evidence for the anatomical existence of either meridians or acupuncture points is not compelling. Stephen Barrett of Quackwatch writes that, \"TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care.\"",
"title": "Model of the body"
},
{
"paragraph_id": 44,
"text": "Qi is a polysemous word that Traditional Chinese medicine distinguishes as being able to transform into many different qualities of qi (气; 氣; qì). In a general sense, qi is something that is defined by five \"cardinal functions\":",
"title": "Model of the body"
},
{
"paragraph_id": 45,
"text": "A lack of qi will be characterized especially by pale complexion, lassitude of spirit, lack of strength, spontaneous sweating, laziness to speak, non-digestion of food, shortness of breath (especially on exertion), and a pale and enlarged tongue.",
"title": "Model of the body"
},
{
"paragraph_id": 46,
"text": "Qi is believed to be partially generated from food and drink, and partially from air (by breathing). Another considerable part of it is inherited from the parents and will be consumed in the course of life.",
"title": "Model of the body"
},
{
"paragraph_id": 47,
"text": "TCM uses special terms for qi running inside of the blood vessels and for qi that is distributed in the skin, muscles, and tissues between them. The former is called yingqi (营气; 營氣; yíngqì); its function is to complement xuè and its nature has a strong yin aspect (although qi in general is considered to be yang). The latter is called weiqi (卫气; 衛氣; weìqì); its main function is defence and it has pronounced yang nature.",
"title": "Model of the body"
},
{
"paragraph_id": 48,
"text": "Qi is said to circulate in the meridians. Just as the qi held by each of the zang-fu organs, this is considered to be part of the 'principal' qi of the body.",
"title": "Model of the body"
},
{
"paragraph_id": 49,
"text": "In contrast to the majority of other functional entities, xuè or xiě (血, \"blood\") is correlated with a physical form – the red liquid running in the blood vessels. Its concept is, nevertheless, defined by its functions: nourishing all parts and tissues of the body, safeguarding an adequate degree of moisture, and sustaining and soothing both consciousness and sleep.",
"title": "Model of the body"
},
{
"paragraph_id": 50,
"text": "Typical symptoms of a lack of xiě (usually termed \"blood vacuity\" [血虚; xiě xū]) are described as: Pale-white or withered-yellow complexion, dizziness, flowery vision, palpitations, insomnia, numbness of the extremities; pale tongue; \"fine\" pulse.",
"title": "Model of the body"
},
{
"paragraph_id": 51,
"text": "Closely related to xuě are the jinye (津液; jīnyè, usually translated as \"body fluids\"), and just like xuě they are considered to be yin in nature, and defined first and foremost by the functions of nurturing and moisturizing the different structures of the body. Their other functions are to harmonize yin and yang, and to help with the secretion of waste products.",
"title": "Model of the body"
},
{
"paragraph_id": 52,
"text": "Jinye are ultimately extracted from food and drink, and constitute the raw material for the production of xuě; conversely, xuě can also be transformed into jinye. Their palpable manifestations are all bodily fluids: tears, sputum, saliva, gastric acid, joint fluid, sweat, urine, etc.",
"title": "Model of the body"
},
{
"paragraph_id": 53,
"text": "The zangfu (脏腑; 臟腑; zàngfǔ) are the collective name of eleven entities (similar to organs) that constitute the centre piece of TCM's systematization of bodily functions. The term zang refers to the five considered to be yin in nature—Heart, Liver, Spleen, Lung, Kidney—while fu refers to the six associated with yang—Small Intestine, Large Intestine, Gallbladder, Urinary Bladder, Stomach and San Jiao. Despite having the names of organs, they are only loosely tied to (rudimentary) anatomical assumptions. Instead, they are primarily understood to be certain \"functions\" of the body. To highlight the fact that they are not equivalent to anatomical organs, their names are usually capitalized.",
"title": "Model of the body"
},
{
"paragraph_id": 54,
"text": "The zang's essential functions consist in production and storage of qi and xuě; they are said to regulate digestion, breathing, water metabolism, the musculoskeletal system, the skin, the sense organs, aging, emotional processes, and mental activity, among other structures and processes. The fǔ organs' main purpose is merely to transmit and digest (傳化; chuán-huà) substances such as waste and food.",
"title": "Model of the body"
},
{
"paragraph_id": 55,
"text": "Since their concept was developed on the basis of Wǔ Xíng philosophy, each zàng is paired with a fǔ, and each zàng-fǔ pair is assigned to one of five elemental qualities (i.e., the Five Elements or Five Phases). These correspondences are stipulated as:",
"title": "Model of the body"
},
{
"paragraph_id": 56,
"text": "The zàng-fǔ are also connected to the twelve standard meridians – each yang meridian is attached to a fǔ organ, and five of the yin meridians are attached to a zàng. As there are only five zàng but six yin meridians, the sixth is assigned to the Pericardium, a peculiar entity almost similar to the Heart zàng.",
"title": "Model of the body"
},
{
"paragraph_id": 57,
"text": "The meridians (经络, jīng-luò) are believed to be channels running from the zàng-fǔ in the interior (里, lǐ) of the body to the limbs and joints (\"the surface\" [表, biaǒ]), transporting qi and xuĕ. TCM identifies 12 \"regular\" and 8 \"extraordinary\" meridians; the Chinese terms being 十二经脉 (shí-èr jīngmài, lit. \"the Twelve Vessels\") and 奇经八脉 (qí jīng bā mài) respectively. There's also a number of less customary channels branching from the \"regular\" meridians.",
"title": "Model of the body"
},
{
"paragraph_id": 58,
"text": "Fuke (妇科; 婦科; Fùkē) is the traditional Chinese term for women's medicine (it means gynecology and obstetrics in modern medicine). However, there are few or no ancient works on it except for Fu Qingzhu's Fu Qingzhu Nu Ke (Fu Qingzhu's Gynecology). In traditional China, as in many other cultures, the health and medicine of female bodies was less understood than that of male bodies. Women's bodies were often secondary to male bodies, since women were thought of as the weaker, sicklier sex.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 59,
"text": "In clinical encounters, women and men were treated differently. Diagnosing women was not as simple as diagnosing men. First, when a woman fell ill, an appropriate adult man was to call the doctor and remain present during the examination, for the woman could not be left alone with the doctor. The physician would discuss the female's problems and diagnosis only through the male. However, in certain cases, when a woman dealt with complications of pregnancy or birth, older women assumed the role of the formal authority. Men in these situations would not have much power to interfere. Second, women were often silent about their issues with doctors due to the societal expectation of female modesty when a male figure was in the room. Third, patriarchal society also caused doctors to call women and children patients \"the anonymous category of family members (Jia Ren) or household (Ju Jia)\" in their journals. This anonymity and lack of conversation between the doctor and woman patient led to the inquiry diagnosis of the Four Diagnostic Methods being the most challenging. Doctors used a medical doll known as a Doctor's lady, on which female patients could indicate the location of their symptoms.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 60,
"text": "Cheng Maoxian (b. 1581), who practiced medicine in Yangzhou, described the difficulties doctors had with the norm of female modesty. One of his case studies was that of Fan Jisuo's teenage daughter, who could not be diagnosed because she was unwilling to speak about her symptoms, since the illness involved discharge from her intimate areas. As Cheng describes, there were four standard methods of diagnosis – looking, asking, listening and smelling and touching (for pulse-taking). To maintain some form of modesty, women would often stay hidden behind curtains and screens. The doctor was allowed to touch enough of her body to complete his examination, often just the pulse taking. This would lead to situations where the symptoms and the doctor's diagnosis did not agree and the doctor would have to ask to view more of the patient.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 61,
"text": "These social and cultural beliefs were often barriers to learning more about female health, with women themselves often being the most formidable barrier. Women were often uncomfortable talking about their illnesses, especially in front of the male chaperones that attended medical examinations. Women would choose to omit certain symptoms as a means of upholding their chastity and honor. One such example is the case in which a teenage girl was unable to be diagnosed because she failed to mention her symptom of vaginal discharge. Silence was their way of maintaining control in these situations, but it often came at the expense of their health and the advancement of female health and medicine. This silence and control were most obviously seen when the health problem was related to the core of Ming fuke, or the sexual body. It was often in these diagnostic settings that women would choose silence. In addition, there would be a conflict between patient and doctor on the probability of her diagnosis. For example, a woman who thought herself to be past the point of child-bearing age, might not believe a doctor who diagnoses her as pregnant. This only resulted in more conflict.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 62,
"text": "Yin and yang were critical to the understanding of women's bodies, but understood only in conjunction with male bodies. Yin and yang ruled the body, the body being a microcosm of the universe and the earth. In addition, gender in the body was understood as homologous, the two genders operating in synchronization. Gender was presumed to influence the movement of energy and a well-trained physician would be expected to read the pulse and be able to identify two dozen or more energy flows. Yin and yang concepts were applied to the feminine and masculine aspects of all bodies, implying that the differences between men and women begin at the level of this energy flow. According to Bequeathed Writings of Master Chu the male's yang pulse movement follows an ascending path in \"compliance [with cosmic direction] so that the cycle of circulation in the body and the Vital Gate are felt...The female's yin pulse movement follows a defending path against the direction of cosmic influences, so that the nadir and the Gate of Life are felt at the inch position of the left hand\". In sum, classical medicine marked yin and yang as high and low on bodies which in turn would be labeled normal or abnormal and gendered either male or female.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 63,
"text": "Bodily functions could be categorized through systems, not organs. In many drawings and diagrams, the twelve channels and their visceral systems were organized by yin and yang, an organization that was identical in female and male bodies. Female and male bodies were no different on the plane of yin and yang. Their gendered differences were not acknowledged in diagrams of the human body. Medical texts such as the Yuzuan yizong jinjian were filled with illustrations of male bodies or androgynous bodies that did not display gendered characteristics.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 64,
"text": "As in other cultures, fertility and menstruation dominate female health concerns. Since male and female bodies were governed by the same forces, traditional Chinese medicine did not recognize the womb as the place of reproduction. The abdominal cavity presented pathologies that were similar in both men and women, which included tumors, growths, hernias, and swellings of the genitals. The \"master system,\" as Charlotte Furth calls it, is the kidney visceral system, which governed reproductive functions. Therefore, it was not the anatomical structures that allowed for pregnancy, but the difference in processes that allowed for the condition of pregnancy to occur.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 65,
"text": "Traditional Chinese medicine's dealings with pregnancy are documented from at least the seventeenth century. According to Charlotte Furth, \"a pregnancy (in the seventeenth century) as a known bodily experience emerged [...] out of the liminality of menstrual irregularity, as uneasy digestion, and a sense of fullness\". These symptoms were common among other illness as well, so the diagnosis of pregnancy often came late in the term. The Canon of the Pulse, which described the use of pulse in diagnosis, stated that pregnancy was \"a condition marked by symptoms of the disorder in one whose pulse is normal\" or \"where the pulse and symptoms do not agree\". Women were often silent about suspected pregnancy, which led to many men not knowing that their wife or daughter was pregnant until complications arrived. Complications through the misdiagnosis and the woman's reluctance to speak often led to medically induced abortions. Cheng, Furth wrote, \"was unapologetic about endangering a fetus when pregnancy risked a mother's well being\". The method of abortion was the ingestion of certain herbs and foods. Disappointment at the loss of the fetus often led to family discord.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 66,
"text": "If the baby and mother survived the term of the pregnancy, childbirth was then the next step. The tools provided for birth were: towels to catch the blood, a container for the placenta, a pregnancy sash to support the belly, and an infant swaddling wrap. With these tools, the baby was born, cleaned, and swaddled; however, the mother was then immediately the focus of the doctor to replenish her qi. In his writings, Cheng places a large amount of emphasis on the Four Diagnostic methods to deal with postpartum issues and instructs all physicians to \"not neglect any [of the four methods]\". The process of birthing was thought to deplete a woman's blood level and qi so the most common treatments for postpartum were food (commonly garlic and ginseng), medicine, and rest. This process was followed up by a month check-in with the physician, a practice known as zuo yuezi.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 67,
"text": "Infertility, not very well understood, posed serious social and cultural repercussions. The seventh-century scholar Sun Simiao is often quoted: \"those who have prescriptions for women's distinctiveness take their differences of pregnancy, childbirth and [internal] bursting injuries as their basis.\" Even in contemporary fuke placing emphasis on reproductive functions, rather than the entire health of the woman, suggests that the main function of fuke is to produce children.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 68,
"text": "Once again, the kidney visceral system governs the \"source Qi\", which governs the reproductive systems in both sexes. This source Qi was thought to \"be slowly depleted through sexual activity, menstruation and childbirth.\" It was also understood that the depletion of source Qi could result from the movement of an external pathology that moved through the outer visceral systems before causing more permanent damage to the home of source Qi, the kidney system. In addition, the view that only very serious ailments ended in the damage of this system means that those who had trouble with their reproductive systems or fertility were seriously ill.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 69,
"text": "According to traditional Chinese medical texts, infertility can be summarized into different syndrome types. These were spleen and kidney depletion (yang depletion), liver and kidney depletion (yin depletion), blood depletion, phlegm damp, liver oppression, and damp heat. This is important because, while most other issues were complex in Chinese medical physiology, women's fertility issues were simple. Most syndrome types revolved around menstruation, or lack thereof. The patient was entrusted with recording not only the frequency, but also the \"volume, color, consistency, and odor of menstrual flow.\" This placed responsibility of symptom recording on the patient, and was compounded by the earlier discussed issue of female chastity and honor. This meant that diagnosing female infertility was difficult, because the only symptoms that were recorded and monitored by the physician were the pulse and color of the tongue.",
"title": "Gender in traditional medicine"
},
{
"paragraph_id": 70,
"text": "In general, disease is perceived as a disharmony (or imbalance) in the functions or interactions of yin, yang, qi, xuĕ, zàng-fǔ, meridians etc. and/or of the interaction between the human body and the environment. Therapy is based on which \"pattern of disharmony\" can be identified. Thus, \"pattern discrimination\" is the most important step in TCM diagnosis. It is also known to be the most difficult aspect of practicing TCM.",
"title": "Concept of disease"
},
{
"paragraph_id": 71,
"text": "To determine which pattern is at hand, practitioners will examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing or the sound of the voice. For example, depending on tongue and pulse conditions, a TCM practitioner might diagnose bleeding from the mouth and nose as: \"Liver fire rushes upwards and scorches the Lung, injuring the blood vessels and giving rise to reckless pouring of blood from the mouth and nose.\" He might then go on to prescribe treatments designed to clear heat or supplement the Lung.",
"title": "Concept of disease"
},
{
"paragraph_id": 72,
"text": "In TCM, a disease has two aspects: \"bìng\" and \"zhèng\". The former is often translated as \"disease entity\", \"disease category\", \"illness\", or simply \"diagnosis\". The latter, and more important one, is usually translated as \"pattern\" (or sometimes also as \"syndrome\"). For example, the disease entity of a common cold might present with a pattern of wind-cold in one person, and with the pattern of wind-heat in another.",
"title": "Concept of disease"
},
{
"paragraph_id": 73,
"text": "From a scientific point of view, most of the disease entities (病; bìng) listed by TCM constitute symptoms. Examples include headache, cough, abdominal pain, constipation etc.",
"title": "Concept of disease"
},
{
"paragraph_id": 74,
"text": "Since therapy will not be chosen according to the disease entity but according to the pattern, two people with the same disease entity but different patterns will receive different therapy. Vice versa, people with similar patterns might receive similar therapy even if their disease entities are different. This is called yì bìng tóng zhì, tóng bìng yì zhì (异病同治,同病异治; 'different diseases', 'same treatment', 'same disease', 'different treatments').",
"title": "Concept of disease"
},
{
"paragraph_id": 75,
"text": "In TCM, \"pattern\" (证; zhèng) refers to a \"pattern of disharmony\" or \"functional disturbance\" within the functional entities of which the TCM model of the body is composed. There are disharmony patterns of qi, xuě, the body fluids, the zàng-fǔ, and the meridians. They are ultimately defined by their symptoms and signs (i.e., for example, pulse and tongue findings).",
"title": "Concept of disease"
},
{
"paragraph_id": 76,
"text": "In clinical practice, the identified pattern usually involves a combination of affected entities (compare with typical examples of patterns). The concrete pattern identified should account for all the symptoms a person has.",
"title": "Concept of disease"
},
{
"paragraph_id": 77,
"text": "The Six Excesses (六淫; liù yín, sometimes also translated as \"Pathogenic Factors\", or \"Six Pernicious Influences\"; with the alternative term of 六邪; liù xié, – \"Six Evils\" or \"Six Devils\") are allegorical terms used to describe disharmony patterns displaying certain typical symptoms. These symptoms resemble the effects of six climatic factors. In the allegory, these symptoms can occur because one or more of those climatic factors (called 六气; liù qì, \"the six qi\") were able to invade the body surface and to proceed to the interior. This is sometimes used to draw causal relationships (i.e., prior exposure to wind/cold/etc. is identified as the cause of a disease), while other authors explicitly deny a direct cause-effect relationship between weather conditions and disease, pointing out that the Six Excesses are primarily descriptions of a certain combination of symptoms translated into a pattern of disharmony. It is undisputed, though, that the Six Excesses can manifest inside the body without an external cause. In this case, they might be denoted \"internal\", e.g., \"internal wind\" or \"internal fire (or heat)\".",
"title": "Concept of disease"
},
{
"paragraph_id": 78,
"text": "The Six Excesses and their characteristic clinical signs are:",
"title": "Concept of disease"
},
{
"paragraph_id": 79,
"text": "Six-Excesses-patterns can consist of only one or a combination of Excesses (e.g., wind-cold, wind-damp-heat). They can also transform from one into another.",
"title": "Concept of disease"
},
{
"paragraph_id": 80,
"text": "For each of the functional entities (qi, xuĕ, zàng-fǔ, meridians etc.), typical disharmony patterns are recognized; for example: qi vacuity and qi stagnation in the case of qi; blood vacuity, blood stasis, and blood heat in the case of xuĕ; Spleen qi vacuity, Spleen yang vacuity, Spleen qi vacuity with down-bearing qi, Spleen qi vacuity with lack of blood containment, cold-damp invasion of the Spleen, damp-heat invasion of Spleen and Stomach in case of the Spleen zàng; wind/cold/damp invasion in the case of the meridians.",
"title": "Concept of disease"
},
{
"paragraph_id": 81,
"text": "TCM gives detailed prescriptions of these patterns regarding their typical symptoms, mostly including characteristic tongue and/or pulse findings. For example:",
"title": "Concept of disease"
},
{
"paragraph_id": 82,
"text": "The process of determining which actual pattern is on hand is called 辩证 (biàn zhèng, usually translated as \"pattern diagnosis\", \"pattern identification\" or \"pattern discrimination\"). Generally, the first and most important step in pattern diagnosis is an evaluation of the present signs and symptoms on the basis of the \"Eight Principles\" (八纲; bā gāng). These eight principles refer to four pairs of fundamental qualities of a disease: exterior/interior, heat/cold, vacuity/repletion, and yin/yang. Out of these, heat/cold and vacuity/repletion have the biggest clinical importance. The yin/yang quality, on the other side, has the smallest importance and is somewhat seen aside from the other three pairs, since it merely presents a general and vague conclusion regarding what other qualities are found. In detail, the Eight Principles refer to the following:",
"title": "Concept of disease"
},
{
"paragraph_id": 83,
"text": "After the fundamental nature of a disease in terms of the Eight Principles is determined, the investigation focuses on more specific aspects. By evaluating the present signs and symptoms against the background of typical disharmony patterns of the various entities, evidence is collected whether or how specific entities are affected. This evaluation can be done",
"title": "Concept of disease"
},
{
"paragraph_id": 84,
"text": "There are also three special pattern diagnosis systems used in case of febrile and infectious diseases only (\"Six Channel system\" or \"six division pattern\" [六经辩证; liù jīng biàn zhèng]; \"Wei Qi Ying Xue system\" or \"four division pattern\" [卫气营血辩证; weì qì yíng xuè biàn zhèng]; \"San Jiao system\" or \"three burners pattern\" [三焦辩证; sānjiaō biàn zhèng]).",
"title": "Concept of disease"
},
{
"paragraph_id": 85,
"text": "Although TCM and its concept of disease do not strongly differentiate between cause and effect, pattern discrimination can include considerations regarding the disease cause; this is called 病因辩证 (bìngyīn biàn zhèng, \"disease-cause pattern discrimination\").",
"title": "Concept of disease"
},
{
"paragraph_id": 86,
"text": "There are three fundamental categories of disease causes (三因; sān yīn) recognized:",
"title": "Concept of disease"
},
{
"paragraph_id": 87,
"text": "In TCM, there are five major diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. These are grouped into what is known as the \"Four pillars\" of diagnosis, which are Inspection, Auscultation/ Olfaction, Inquiry, and Palpation (望,聞,問,切).",
"title": "Diagnostics"
},
{
"paragraph_id": 88,
"text": "Examination of the tongue and the pulse are among the principal diagnostic methods in TCM. Details of the tongue, including shape, size, color, texture, cracks, teeth marks, as well as tongue coating are all considered as part of tongue diagnosis. Various regions of the tongue's surface are believed to correspond to the zàng-fŭ organs. For example, redness on the tip of the tongue might indicate heat in the Heart, while redness on the sides of the tongue might indicate heat in the Liver.",
"title": "Diagnostics"
},
{
"paragraph_id": 89,
"text": "Pulse palpation involves measuring the pulse both at a superficial and at a deep level at three different locations on the radial artery (Cun, Guan, Chi, located two fingerbreadths from the wrist crease, one fingerbreadth from the wrist crease, and right at the wrist crease, respectively, usually palpated with the index, middle and ring finger) of each arm, for a total of twelve pulses, all of which are thought to correspond with certain zàng-fŭ. The pulse is examined for several characteristics including rhythm, strength and volume, and described with qualities like \"floating, slippery, bolstering-like, feeble, thready and quick\"; each of these qualities indicates certain disease patterns. Learning TCM pulse diagnosis can take several years.",
"title": "Diagnostics"
},
{
"paragraph_id": 90,
"text": "The term \"herbal medicine\" is somewhat misleading in that, while plant elements are by far the most commonly used substances in TCM, other, non-botanic substances are used as well: animal, human, fungi, and mineral products are also used. Thus, the term \"medicinal\" (instead of herb) may be used, although there is no scientific evidence that any of these compounds have medicinal effects.",
"title": "Herbal medicine"
},
{
"paragraph_id": 91,
"text": "There are roughly 13,000 compounds used in China and over 100,000 TCM recipes recorded in the ancient literature. Plant elements and extracts are by far the most common elements used. In the classic Handbook of Traditional Drugs from 1941, 517 drugs were listed – out of these, 45 were animal parts, and 30 were minerals.",
"title": "Herbal medicine"
},
{
"paragraph_id": 92,
"text": "Some animal parts used include cow gallstones, hornet nests, leeches, and scorpion. Other examples of animal parts include horn of the antelope or buffalo, deer antlers, testicles and penis bone of the dog, and snake bile. Some TCM textbooks still recommend preparations containing animal tissues, but there has been little research to justify the claimed clinical efficacy of many TCM animal products.",
"title": "Herbal medicine"
},
{
"paragraph_id": 93,
"text": "Some compounds can include the parts of endangered species, including tiger bones and rhinoceros horn which is used for many ailments (though not as an aphrodisiac as is commonly misunderstood in the West). The black market in rhinoceros horns (driven not just by TCM but also unrelated status-seeking) has reduced the world's rhino population by more than 90 percent over the past 40 years. Concerns have also arisen over the use of pangolin scales, turtle plastron, seahorses, and the gill plates of mobula and manta rays.",
"title": "Herbal medicine"
},
{
"paragraph_id": 94,
"text": "Poachers hunt restricted or endangered species to supply the black market with TCM products. There is no scientific evidence of efficacy for tiger medicines. Concern over China considering to legalize the trade in tiger parts prompted the 171-nation Convention on International Trade in Endangered Species (CITES) to endorse a decision opposing the resurgence of trade in tigers. Fewer than 30,000 saiga antelopes remain, which are exported to China for use in traditional fever therapies. Organized gangs illegally export the horn of the antelopes to China. The pressures on seahorses (Hippocampus spp.) used in traditional medicine is enormous; tens of millions of animals are unsustainably caught annually. Many species of syngnathid are currently part of the IUCN Red List of Threatened Species or national equivalents.",
"title": "Herbal medicine"
},
{
"paragraph_id": 95,
"text": "Since TCM recognizes bear bile as a treatment compound, more than 12,000 asiatic black bears are held in bear farms. The bile is extracted through a permanent hole in the abdomen leading to the gall bladder, which can cause severe pain. This can lead to bears trying to kill themselves. As of 2012, approximately 10,000 bears are farmed in China for their bile. This practice has spurred public outcry across the country. The bile is collected from live bears via a surgical procedure. As of March 2020 bear bile as ingredient of Tan Re Qing injection remains on the list of remedies recommended for treatment of \"severe cases\" of COVID-19 by National Health Commission of China and the National Administration of Traditional Chinese Medicine.",
"title": "Herbal medicine"
},
{
"paragraph_id": 96,
"text": "The deer penis is believed to have therapeutic benefits according to traditional Chinese medicine. Tiger parts from poached animals include tiger penis, believed to improve virility, and tiger eyes. The illegal trade for tiger parts in China has driven the species to near-extinction because of its popularity in traditional medicine. Laws protecting even critically endangered species such as the Sumatran tiger fail to stop the display and sale of these items in open markets. Shark fin soup is traditionally regarded in Chinese medicine as beneficial for health in East Asia, and its status as an elite dish has led to huge demand with the increase of affluence in China, devastating shark populations. The shark fins have been a part of traditional Chinese medicine for centuries. Shark finning is banned in many countries, but the trade is thriving in Hong Kong and China, where the fins are part of shark fin soup, a dish considered a delicacy, and used in some types of traditional Chinese medicine.",
"title": "Herbal medicine"
},
{
"paragraph_id": 97,
"text": "The tortoise (freshwater turtle, guiban) and turtle (Chinese softshell turtle, biejia) species used in traditional Chinese medicine are raised on farms, while restrictions are made on the accumulation and export of other endangered species. However, issues concerning the overexploitation of Asian turtles in China have not been completely solved. Australian scientists have developed methods to identify medicines containing DNA traces of endangered species. Finally, although not an endangered species, sharp rises in exports of donkeys and donkey hide from Africa to China to make the traditional remedy ejiao have prompted export restrictions by some African countries.",
"title": "Herbal medicine"
},
{
"paragraph_id": 98,
"text": "Traditional Chinese medicine also includes some human parts: the classic Materia medica (Bencao Gangmu) describes (also criticizes) the use of 35 human body parts and excreta in medicines, including bones, fingernail, hairs, dandruff, earwax, impurities on the teeth, feces, urine, sweat, organs, but most are no longer in use.",
"title": "Herbal medicine"
},
{
"paragraph_id": 99,
"text": "Human placenta has been used an ingredient in certain traditional Chinese medicines, including using dried human placenta, known as \"Ziheche\", to treat infertility, impotence and other conditions. The consumption of the human placenta is a potential source of infection.",
"title": "Herbal medicine"
},
{
"paragraph_id": 100,
"text": "The traditional categorizations and classifications that can still be found today are:",
"title": "Herbal medicine"
},
{
"paragraph_id": 101,
"text": "As of 2007 there were not enough good-quality trials of herbal therapies to allow their effectiveness to be determined. A high percentage of relevant studies on traditional Chinese medicine are in Chinese databases. Fifty percent of systematic reviews on TCM did not search Chinese databases, which could lead to a bias in the results. Many systematic reviews of TCM interventions published in Chinese journals are incomplete, some contained errors or were misleading. The herbs recommended by traditional Chinese practitioners in the US are unregulated.",
"title": "Herbal medicine"
},
{
"paragraph_id": 102,
"text": "With an eye to the enormous Chinese market, pharmaceutical companies have explored creating new drugs from traditional remedies. The journal Nature commented that \"claims made on behalf of an uncharted body of knowledge should be treated with the customary skepticism that is the bedrock of both science and medicine.\"",
"title": "Herbal medicine"
},
{
"paragraph_id": 103,
"text": "There had been success in the 1970s, however, with the development of the antimalarial drug artemisinin, which is a processed extract of Artemisia annua, a herb traditionally used as a fever treatment. Artemisia annua has been used by Chinese herbalists in traditional Chinese medicines for 2,000 years. In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his Compendium of Materia Medica. Researcher Tu Youyou discovered that a low-temperature extraction process could isolate an effective antimalarial substance from the plant. Tu says she was influenced by a traditional Chinese herbal medicine source, The Handbook of Prescriptions for Emergency Treatments, written in 340 by Ge Hong, which states that this herb should be steeped in cold water. The extracted substance, once subject to detoxification and purification processes, is a usable antimalarial drug – a 2012 review found that artemisinin-based remedies were the most effective drugs for the treatment of malaria. For her work on malaria, Tu received the 2015 Nobel Prize in Physiology or Medicine. Despite global efforts in combating malaria, it remains a large burden for the population. Although WHO recommends artemisinin-based remedies for treating uncomplicated malaria, resistance to the drug can no longer be ignored.",
"title": "Herbal medicine"
},
{
"paragraph_id": 104,
"text": "Also in the 1970s Chinese researcher Zhang TingDong and colleagues investigated the potential use of the traditionally used substance arsenic trioxide to treat acute promyelocytic leukemia (APL). Building on his work, research both in China and the West eventually led to the development of the drug Trisenox, which was approved for leukemia treatment by the FDA in 2000.",
"title": "Herbal medicine"
},
{
"paragraph_id": 105,
"text": "Huperzine A, an extract from the herb, Huperzia serrata, is under preliminary research as a possible therapeutic for Alzheimer's disease, but poor methodological quality of the research restricts conclusions about its effectiveness.",
"title": "Herbal medicine"
},
{
"paragraph_id": 106,
"text": "Ephedrine in its natural form, known as má huáng (麻黄) in TCM, has been documented in China since the Han dynasty (206 BCE – 220 CE) as an antiasthmatic and stimulant. In 1885, the chemical synthesis of ephedrine was first accomplished by Japanese organic chemist Nagai Nagayoshi based on his research on Japanese and Chinese traditional herbal medicines",
"title": "Herbal medicine"
},
{
"paragraph_id": 107,
"text": "Pien tze huang was first documented in the Ming dynasty.",
"title": "Herbal medicine"
},
{
"paragraph_id": 108,
"text": "A 2012 systematic review found there is a lack of available cost-effectiveness evidence in TCM.",
"title": "Herbal medicine"
},
{
"paragraph_id": 109,
"text": "From the earliest records regarding the use of compounds to today, the toxicity of certain substances has been described in all Chinese materiae medicae. Since TCM has become more popular in the Western world, there are increasing concerns about the potential toxicity of many traditional Chinese plants, animal parts and minerals. Traditional Chinese herbal remedies are conveniently available from grocery stores in most Chinese neighborhoods; some of these items may contain toxic ingredients, are imported into the U.S. illegally, and are associated with claims of therapeutic benefit without evidence. For most compounds, efficacy and toxicity testing are based on traditional knowledge rather than laboratory analysis. The toxicity in some cases could be confirmed by modern research (i.e., in scorpion); in some cases it could not (i.e., in Curculigo). Traditional herbal medicines can contain extremely toxic chemicals and heavy metals, and naturally occurring toxins, which can cause illness, exacerbate pre-existing poor health or result in death. Botanical misidentification of plants can cause toxic reactions in humans. The description of some plants used in TCM has changed, leading to unintended poisoning by using the wrong plants. A concern is also contaminated herbal medicines with microorganisms and fungal toxins, including aflatoxin. Traditional herbal medicines are sometimes contaminated with toxic heavy metals, including lead, arsenic, mercury and cadmium, which inflict serious health risks to consumers. Also, adulteration of some herbal medicine preparations with conventional drugs which may cause serious adverse effects, such as corticosteroids, phenylbutazone, phenytoin, and glibenclamide, has been reported.",
"title": "Herbal medicine"
},
{
"paragraph_id": 110,
"text": "Substances known to be potentially dangerous include Aconitum, secretions from the Asiatic toad, powdered centipede, the Chinese beetle (Mylabris phalerata), certain fungi, Aristolochia, arsenic sulfide (realgar), mercury sulfide, and cinnabar. Asbestos ore (Actinolite, Yang Qi Shi, 阳起石) is used to treat impotence in TCM. Due to galena's (litharge, lead(II) oxide) high lead content, it is known to be toxic. Lead, mercury, arsenic, copper, cadmium, and thallium have been detected in TCM products sold in the U.S. and China.",
"title": "Herbal medicine"
},
{
"paragraph_id": 111,
"text": "To avoid its toxic adverse effects Xanthium sibiricum must be processed. Hepatotoxicity has been reported with products containing Reynoutria multiflora (synonym Polygonum multiflorum), glycyrrhizin, Senecio and Symphytum. The herbs indicated as being hepatotoxic included Dictamnus dasycarpus, Astragalus membranaceus, and Paeonia lactiflora. Contrary to popular belief, Ganoderma lucidum mushroom extract, as an adjuvant for cancer immunotherapy, appears to have the potential for toxicity. A 2013 review suggested that although the antimalarial herb Artemisia annua may not cause hepatotoxicity, haematotoxicity, or hyperlipidemia, it should be used cautiously during pregnancy due to a potential risk of embryotoxicity at a high dose.",
"title": "Herbal medicine"
},
{
"paragraph_id": 112,
"text": "However, many adverse reactions are due to misuse or abuse of Chinese medicine. For example, the misuse of the dietary supplement Ephedra (containing ephedrine) can lead to adverse events including gastrointestinal problems as well as sudden death from cardiomyopathy. Products adulterated with pharmaceuticals for weight loss or erectile dysfunction are one of the main concerns. Chinese herbal medicine has been a major cause of acute liver failure in China.",
"title": "Herbal medicine"
},
{
"paragraph_id": 113,
"text": "The harvesting of guano from bat caves (yemingsha) brings workers into close contact with these animals, increasing the risk of zoonosis. The Chinese virologist Shi Zhengli has identified dozens of SARS-like coronaviruses in samples of bat droppings.",
"title": "Herbal medicine"
},
{
"paragraph_id": 114,
"text": "Acupuncture is the insertion of needles into superficial structures of the body (skin, subcutaneous tissue, muscles) – usually at acupuncture points (acupoints) – and their subsequent manipulation; this aims at influencing the flow of qi. According to TCM it relieves pain and treats (and prevents) various diseases. The US FDA classifies single-use acupuncture needles as Class II medical devices, under CFR 21.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 115,
"text": "Acupuncture is often accompanied by moxibustion – the Chinese characters for acupuncture (针灸; 針灸; zhēnjiǔ) literally meaning \"acupuncture-moxibustion\" – which involves burning mugwort on or near the skin at an acupuncture point. According to the American Cancer Society, \"available scientific evidence does not support claims that moxibustion is effective in preventing or treating cancer or any other disease\".",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 116,
"text": "In electroacupuncture, an electric current is applied to the needles once they are inserted, to further stimulate the respective acupuncture points.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 117,
"text": "A recent historian of Chinese medicine remarked that it is \"nicely ironic that the specialty of acupuncture -- arguably the most questionable part of their medical heritage for most Chinese at the start of the twentieth century -- has become the most marketable aspect of Chinese medicine.\" She found that acupuncture as we know it today has hardly been in existence for sixty years. Moreover, the fine, filiform needle we think of as the acupuncture needle today was not widely used a century ago. Present day acupuncture was developed in the 1930s and put into wide practice only as late as the 1960s.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 118,
"text": "A 2013 editorial in the American journal Anesthesia and Analgesia stated that acupuncture studies produced inconsistent results, (i.e. acupuncture relieved pain in some conditions but had no effect in other very similar conditions) which suggests the presence of false positive results. These may be caused by factors like biased study design, poor blinding, and the classification of electrified needles (a type of TENS) as a form of acupuncture. The inability to find consistent results despite more than 3,000 studies, the editorial continued, suggests that the treatment seems to be a placebo effect and the existing equivocal positive results are the type of noise one expects to see after a large number of studies are performed on an inert therapy. The editorial concluded that the best controlled studies showed a clear pattern, in which the outcome does not rely upon needle location or even needle insertion, and since \"these variables are those that define acupuncture, the only sensible conclusion is that acupuncture does not work.\"",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 119,
"text": "According to the US NIH National Cancer Institute, a review of 17,922 patients reported that real acupuncture relieved muscle and joint pain, caused by aromatase inhibitors, much better than sham acupuncture. Regarding cancer patients, the review hypothesized that acupuncture may cause physical responses in nerve cells, the pituitary gland, and the brain – releasing proteins, hormones, and chemicals that are proposed to affect blood pressure, body temperature, immune activity, and endorphin release.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 120,
"text": "A 2012 meta-analysis concluded that the mechanisms of acupuncture \"are clinically relevant, but that an important part of these total effects is not due to issues considered to be crucial by most acupuncturists, such as the correct location of points and depth of needling ... [but is] ... associated with more potent placebo or context effects\". Commenting on this meta-analysis, both Edzard Ernst and David Colquhoun said the results were of negligible clinical significance.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 121,
"text": "A 2011 overview of Cochrane reviews found evidence that suggests acupuncture is effective for some but not all kinds of pain. A 2010 systematic review found that there is evidence \"that acupuncture provides a short-term clinically relevant effect when compared with a waiting list control or when acupuncture is added to another intervention\" in the treatment of chronic low back pain. Two review articles discussing the effectiveness of acupuncture, from 2008 and 2009, have concluded that there is not enough evidence to conclude that it is effective beyond the placebo effect.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 122,
"text": "Acupuncture is generally safe when administered using Clean Needle Technique (CNT). Although serious adverse effects are rare, acupuncture is not without risk. Severe adverse effects, including very rarely death (5 case reports), have been reported.",
"title": "Acupuncture and moxibustion"
},
{
"paragraph_id": 123,
"text": "Tui na (推拿) is a form of massage, based on the assumptions of TCM, from which shiatsu is thought to have evolved. Techniques employed may include thumb presses, rubbing, percussion, and assisted stretching.",
"title": "Tui na"
},
{
"paragraph_id": 124,
"text": "Qìgōng (气功; 氣功) is a TCM system of exercise and meditation that combines regulated breathing, slow movement, and focused awareness, purportedly to cultivate and balance qi. One branch of qigong is qigong massage, in which the practitioner combines massage techniques with awareness of the acupuncture channels and points.",
"title": "Qigong"
},
{
"paragraph_id": 125,
"text": "Qi is air, breath, energy, or primordial life source that is neither matter or spirit. While Gong is a skillful movement, work, or exercise of the qi.",
"title": "Qigong"
},
{
"paragraph_id": 126,
"text": "Cupping (拔罐; báguàn) is a type of Chinese massage, consisting of placing several glass \"cups\" (open spheres) on the body. A match is lit and placed inside the cup and then removed before placing the cup against the skin. As the air in the cup is heated, it expands, and after placing in the skin, cools, creating lower pressure inside the cup that allows the cup to stick to the skin via suction. When combined with massage oil, the cups can be slid around the back, offering \"reverse-pressure massage\".",
"title": "Other therapies"
},
{
"paragraph_id": 127,
"text": "Gua sha (刮痧; guāshā) is abrading the skin with pieces of smooth jade, bone, animal tusks or horns or smooth stones; until red spots then bruising cover the area to which it is done. It is believed that this treatment is for almost any ailment. The red spots and bruising take three to ten days to heal, there is often some soreness in the area that has been treated.",
"title": "Other therapies"
},
{
"paragraph_id": 128,
"text": "Diē-dǎ (跌打) or Dit Da, is a traditional Chinese bone-setting technique, usually practiced by martial artists who know aspects of Chinese medicine that apply to the treatment of trauma and injuries such as bone fractures, sprains, and bruises. Some of these specialists may also use or recommend other disciplines of Chinese medical therapies if serious injury is involved. Such practice of bone-setting (正骨; 整骨) is not common in the West.",
"title": "Other therapies"
},
{
"paragraph_id": 129,
"text": "The concepts yin and yang are associated with different classes of foods, and tradition considers it important to consume them in a balanced fashion.",
"title": "Other therapies"
},
{
"paragraph_id": 130,
"text": "Many governments have enacted laws to regulate TCM practice.",
"title": "Regulations"
},
{
"paragraph_id": 131,
"text": "From 1 July 2012 Chinese medicine practitioners must be registered under the national registration and accreditation scheme with the Chinese Medicine Board of Australia and meet the Board's Registration Standards, to practice in Australia.",
"title": "Regulations"
},
{
"paragraph_id": 132,
"text": "TCM is regulated in five provinces in Canada: Alberta, British Columbia, Ontario, Quebec, and Newfoundland & Labrador.",
"title": "Regulations"
},
{
"paragraph_id": 133,
"text": "The National Administration of Traditional Chinese Medicine was created in 1949, which then absorbed existing TCM management in 1986 with major changes in 1998.",
"title": "Regulations"
},
{
"paragraph_id": 134,
"text": "China's National People's Congress Standing Committee passed the country's first law on TCM in 2016, which came into effect on 1 July 2017. The new law standardized TCM certifications by requiring TCM practitioners to (i) pass exams administered by provincial-level TCM authorities, and (ii) obtain recommendations from two certified practitioners. TCM products and services can be advertised only with approval from the local TCM authority.",
"title": "Regulations"
},
{
"paragraph_id": 135,
"text": "During British rule, Chinese medicine practitioners in Hong Kong were not recognized as \"medical doctors\", which means they could not issue prescription drugs, give injections, etc. However, TCM practitioners could register and operate TCM as \"herbalists\". The Chinese Medicine Council of Hong Kong was established in 1999. It regulates the compounds and professional standards for TCM practitioners. All TCM practitioners in Hong Kong are required to register with the council. The eligibility for registration includes a recognised 5-year university degree of TCM, a 30-week minimum supervised clinical internship, and passing the licensing exam.",
"title": "Regulations"
},
{
"paragraph_id": 136,
"text": "Currently, the approved Chinese medicine institutions are HKU, CUHK and HKBU.",
"title": "Regulations"
},
{
"paragraph_id": 137,
"text": "The Portuguese Macau government seldom interfered in the affairs of Chinese society, including with regard to regulations on the practice of TCM. There were a few TCM pharmacies in Macau during the colonial period. In 1994, the Portuguese Macau government published Decree-Law no. 53/94/M that officially started to regulate the TCM market. After the sovereign handover, the Macau S.A.R. government also published regulations on the practice of TCM. In 2000, Macau University of Science and Technology and Nanjing University of Traditional Chinese Medicine established the Macau College of Traditional Chinese Medicine to offer a degree course in Chinese medicine.",
"title": "Regulations"
},
{
"paragraph_id": 138,
"text": "In Macau, the legitimacy of Chinese medicine is not built upon \"miracle making\". Instead, it is achieved through a celebration of cultural tradition rejuvenated with discourses of nationalism and modernity, and through the mutual constructions of medical references between doctors and patients.",
"title": "Regulations"
},
{
"paragraph_id": 139,
"text": "In 2022, a new law regulating TCM, Law no. 11/2021, came into effect. The same law also repealed Decree-Law no. 53/94/M.",
"title": "Regulations"
},
{
"paragraph_id": 140,
"text": "All traditional medicines, including TCM, are regulated by Indonesian Minister of Health Regulation of 2013 on traditional medicine. Traditional medicine license (Surat Izin Pengobatan Tradisional – SIPT) is granted to the practitioners whose methods are recognized as safe and may benefit health. The TCM clinics are registered but there is no explicit regulation for it. The only TCM method which is accepted by medical logic and is empirically proofed is acupuncture. The acupuncturists can get SIPT and participate in health care facilities.",
"title": "Regulations"
},
{
"paragraph_id": 141,
"text": "Under the Medical Service Act (의료법/醫療法), an oriental medical doctor, whose obligation is to administer oriental medical treatment and provide guidance for health based on oriental medicine, shall be treated in the same manner as a medical doctor or dentist.",
"title": "Regulations"
},
{
"paragraph_id": 142,
"text": "The Korea Institute of Oriental Medicine is the top research center of TCM in Korea.",
"title": "Regulations"
},
{
"paragraph_id": 143,
"text": "The Traditional and Complementary Medicine Bill was passed by parliament in 2012 establishing the Traditional and Complementary Medicine Council to register and regulate traditional and complementary medicine practitioners, including TCM practitioners as well as other traditional and complementary medicine practitioners such as those in traditional Malay medicine and traditional Indian medicine.",
"title": "Regulations"
},
{
"paragraph_id": 144,
"text": "There are no specific regulations in the Netherlands on TCM; TCM is neither prohibited nor recognised by the government of the Netherlands. Chinese herbs as well as Chinese herbal products that are used in TCM are classified as foods and food supplements, and these Chinese herbs can be imported into the Netherlands as well as marketed as such without any type registration or notification to the government.",
"title": "Regulations"
},
{
"paragraph_id": 145,
"text": "Despite its status, some private health insurance companies reimburse a certain amount of annual costs for acupuncture treatments, this depends on one's insurance policy, as not all insurance policies cover it, and if the acupuncture practitioner is or is not a member of one of the professional organisations that are recognised by private health insurance companies. The recognized professional organizations include the Nederlandse Vereniging voor Acupunctuur (NVA), Nederlandse Artsen Acupunctuur Vereniging (NAAV), ZHONG, (Nederlandse Vereniging voor Traditionele Chinese Geneeskunde), Nederlandse Beroepsvereniging Chinese Geneeswijzen Yi (NBCG Yi), and Wetenschappelijke Artsen Vereniging voor Acupunctuur in Nederland (WAVAN).",
"title": "Regulations"
},
{
"paragraph_id": 146,
"text": "Although there are no regulatory standards for the practice of TCM in New Zealand, in the year 1990, acupuncture was included in the Governmental Accident Compensation Corporation (ACC) Act. This inclusion granted qualified and professionally registered acupuncturists to provide subsidised care and treatment to citizens, residents, and temporary visitors for work or sports related injuries that occurred within and upon the land of New Zealand. The two bodies for the regulation of acupuncture and attainment of ACC treatment provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority.",
"title": "Regulations"
},
{
"paragraph_id": 147,
"text": "The TCM Practitioners Act was passed by Parliament in 2000 and the TCM Practitioners Board was established in 2001 as a statutory board under the Ministry of Health, to register and regulate TCM practitioners. The requirements for registration include possession of a diploma or degree from a TCM educational institution/university on a gazetted list, either structured TCM clinical training at an approved local TCM educational institution or foreign TCM registration together with supervised TCM clinical attachment/practice at an approved local TCM clinic, and upon meeting these requirements, passing the Singapore TCM Physicians Registration Examination (STRE) conducted by the TCM Practitioners Board.",
"title": "Regulations"
},
{
"paragraph_id": 148,
"text": "In 2024, Nanyang Technological University will offer the four-year Bachelor of Chinese Medicine programme, which is the first local programme accredited by the Ministry of Health.",
"title": "Regulations"
},
{
"paragraph_id": 149,
"text": "In Taiwan, TCM practitioners are physicians and are regulated by the Physicians Act.They are able to diagnose, write prescriptions, and dispense Chinese medicine independently. Under current law, those who wish to qualify for the Chinese medicine exam must have obtained a 7-year university degree in TCM.",
"title": "Regulations"
},
{
"paragraph_id": 150,
"text": "The National Research Institute of Chinese Medicine, established in 1963, is the largest Chinese herbal medicine research center in Taiwan.",
"title": "Regulations"
},
{
"paragraph_id": 151,
"text": "As of July 2012, only six states lack legislation to regulate the professional practice of TCM: Alabama, Kansas, North Dakota, South Dakota, Oklahoma, and Wyoming. In 1976, California established an Acupuncture Board and became the first state licensing professional acupuncturists.",
"title": "Regulations"
}
]
| Traditional Chinese medicine (TCM) is an alternative medical practice drawn from traditional medicine in China. It has been described as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Medicine in traditional China encompassed a range of sometimes competing health and healing practices, folk beliefs, literati theory and Confucian philosophy, herbal remedies, food, diet, exercise, medical specializations, and schools of thought. In the early twentieth century, Chinese cultural and political modernizers worked to eliminate traditional practices as backward and unscientific. Traditional practitioners then selected elements of philosophy and practice and organized them into what they called "Chinese medicine". In the 1950s, the Chinese government sponsored the integration of Chinese and Western medicine, and in the Great Proletarian Cultural Revolution of the 1960s, promoted Chinese medicine as inexpensive and popular. After the opening of relations between the United States and China after 1972, there was great interest in the West for what is now called traditional Chinese medicine (TCM). TCM is said to be based on such texts as Huangdi Neijing, and Compendium of Materia Medica, a sixteenth-century encyclopedic work, and includes various forms of herbal medicine, acupuncture, cupping therapy, gua sha, massage, bonesetter (die-da), exercise (qigong), and dietary therapy. TCM is widely used in the Sinosphere. One of the basic tenets is that the body's qi is circulating through channels called meridians having branches connected to bodily organs and functions. There is no evidence that meridians or vital energy exist. Concepts of the body and of disease used in TCM reflect its ancient origins and its emphasis on dynamic processes over material structure, similar to the humoral theory of ancient Greece and ancient Rome. The demand for traditional medicines in China has been a major generator of illegal wildlife smuggling, linked to the killing and smuggling of endangered animals. | 2001-07-31T00:13:53Z | 2023-12-29T09:50:54Z | [
"Template:Lang-zh",
"Template:Efn",
"Template:See also",
"Template:More medical citations needed",
"Template:Further",
"Template:Use dmy dates",
"Template:Circa",
"Template:Verify source",
"Template:Div col end",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Cite magazine",
"Template:Authority control",
"Template:JSTOR",
"Template:Sfnb",
"Template:More citations needed",
"Template:Page needed",
"Template:Notelist",
"Template:Cite web",
"Template:Harvnb",
"Template:Webarchive",
"Template:In lang",
"Template:Short description",
"Template:Alternative medical systems",
"Template:Div col",
"Template:Cite news",
"Template:Refbegin",
"Template:Traditional Chinese medicine",
"Template:Chinese folk religion",
"Template:Zh",
"Template:Asof",
"Template:Wikiquote",
"Template:Health in the People's Republic of China",
"Template:Traditional medicine",
"Template:Rp",
"Template:Cite book",
"Template:Harvp",
"Template:Open access",
"Template:Refend",
"Template:Pp-protected",
"Template:Main",
"Template:Clarify",
"Template:Transliteration",
"Template:Portal",
"Template:ISBN",
"Template:Citation",
"Template:Commons category",
"Template:Redirect",
"Template:Infobox Chinese",
"Template:Lang"
]
| https://en.wikipedia.org/wiki/Traditional_Chinese_medicine |
5,993 | Chemical bond | A chemical bond is a lasting attraction between atoms or ions that enables the formation of molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds, or through the sharing of electrons as in covalent bonds. The strength of chemical bonds varies considerably; there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding.
Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. "Constructive quantum mechanical wavefunction interference" stabilizes the paired nuclei (see Theories of chemical bonding). Bonded nuclei maintain an optimal distance (the bond distance) balancing attractive and repulsive effects explained quantitatively by quantum theory.
The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter.
All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances.
A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost or valence electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter.
In the simplest view of a covalent bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Energy is released by bond formation. This is not as a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models.
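This kinetic-energy argument can be illustrated schematically with the standard particle-in-a-box model (an idealization, not a description of any real bond): the ground-state energy of an electron confined to a one-dimensional box of length L is

E_1 = \frac{h^2}{8 m L^2},

so spreading the electron over a region twice as long lowers its kinetic energy by a factor of four. In this rough picture, an electron shared between two nuclei occupies a longer "box" than one confined near a single nucleus, which is the sense in which delocalization lowers kinetic energy.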
In a polar covalent bond, one or more electrons are unequally shared between two nuclei. Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other). When covalent bonds link long chains of atoms in large molecules, however (as in polymers such as nylon), or when covalent bonds extend in networks through solids that are not composed of discrete molecules (such as diamond or quartz or the silicate minerals in many types of rock) then the structures that result may be both strong and tough, at least in the direction oriented correctly with networks of covalent bonds. Also, the melting points of such covalent polymers and networks increase greatly.
In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the positive and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds. Often, such bonds have no particular orientation in space, since they result from equal electrostatic attraction of each ion to all ions around them. Ionic bonds are strong (and thus ionic substances require high temperatures to melt) but also brittle, since the forces between ions are short-range and do not easily bridge cracks and fractures. This type of bond gives rise to the physical characteristics of crystals of classic mineral salts, such as table salt.
A less often mentioned type of bonding is metallic bonding. In this type of bonding, each atom in a metal donates one or more electrons to a "sea" of electrons that reside between many metal atoms. In this sea, each electron is free (by virtue of its wave nature) to be associated with a great many atoms at once. The bond results because the metal atoms become somewhat positively charged due to loss of their electrons while the electrons remain attracted to many atoms, without being part of any given atom. Metallic bonding may be seen as an extreme example of delocalization of electrons over a large system of covalent bonds, in which every atom participates. This type of bonding is often very strong (resulting in the tensile strength of metals). However, metallic bonding is more collective in nature than other types, and so it allows metal crystals to deform more easily, because they are composed of atoms attracted to each other, but not in any particularly oriented way. This results in the malleability of metals. The cloud of electrons in metallic bonding causes the characteristically good electrical and thermal conductivity of metals, and also their shiny lustre that reflects most frequencies of white light.
Early speculations about the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. In 1704, Sir Isaac Newton famously outlined his atomic bonding theory, in "Query 31" of his Opticks, whereby atoms attach to each other by some "force". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. "hooked atoms", "glued together by rest", or "stuck together by conspiring motions", Newton states that he would rather infer from their cohesion, that "particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect."
In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive characters of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, Alexander Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called "combining power", in which compounds were joined owing to an attraction of positive and negative poles. In 1904, Richard Abegg proposed his rule that the difference between the maximum and minimum valencies of an element is often eight. At this point, valency was still an empirical number based only on chemical properties.
However, the nature of the atom became clearer with Ernest Rutherford's 1911 discovery of the atomic nucleus surrounded by electrons. Earlier, Hantaro Nagaoka had rejected Thomson's model of the atom on the grounds that opposite charges are impenetrable, and in 1904 he proposed an alternative planetary model of the atom in which a positively charged center is surrounded by a number of revolving electrons, in the manner of Saturn and its rings.
Nagaoka's model made two predictions: a very massive atomic center (in analogy to a very massive planet), and electrons revolving around the nucleus, bound by electrostatic forces (in analogy to the rings revolving around Saturn, bound by gravitational forces).
Rutherford mentioned Nagaoka's model in his 1911 paper, in which the atomic nucleus was proposed.
At the 1911 Solvay Conference, in the discussion of what could regulate energy differences between atoms, Max Planck stated: "The intermediaries could be the electrons." These nuclear models suggested that electrons determine chemical behavior.
Next came Niels Bohr's 1913 model of a nuclear atom with electron orbits. In 1916, chemist Gilbert N. Lewis developed the concept of electron-pair bonds, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, "An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively."
Also in 1916, Walther Kossel put forward a theory similar to Lewis's, but his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on Abegg's rule (1904).
Niels Bohr also proposed a model of the chemical bond in 1913. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2+, was derived by the Danish physicist Øyvind Burrau. This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler–London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive the electronic structures of the F2 (fluorine) and O2 (oxygen) molecules from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years.
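As a minimal sketch of the LCAO idea, the two lowest molecular orbitals of the hydrogen molecular ion H2+ can be written as normalized sums and differences of the 1s atomic orbitals φA and φB centered on the two nuclei (S here denotes the overlap integral of the two atomic orbitals):

\psi_{\pm} = \frac{\varphi_A \pm \varphi_B}{\sqrt{2(1 \pm S)}}, \qquad S = \int \varphi_A^{*} \varphi_B \, d\tau .

The symmetric combination ψ+ piles up electron density between the nuclei and is the bonding orbital; the antisymmetric combination ψ− has a node between the nuclei and is antibonding.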
In 1933, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculations, which used functions only of the distance of the electron from the atomic nucleus, used functions which also explicitly added the distance between the two electrons. With up to 13 adjustable parameters, they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and gave excellent agreement with experiments. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However, this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules.
Because atoms and molecules are three-dimensional, it is difficult to use a single method to indicate orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in different ways depending on the type of discussion. Sometimes, some details are neglected. For example, in organic chemistry one is sometimes concerned only with the functional group of the molecule. Thus, the molecular formula of ethanol may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from another part of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is discussed. Sometimes, even the non-bonding valence shell electrons (with the two-dimensional approximate directions) are marked, e.g. for elemental carbon .C. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene anion (\C=C/ ) indicating the possibility of bond formation.
Strong chemical bonds are the intramolecular forces that hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals.
The types of strong bond differ due to the difference in electronegativity of the constituent elements. Electronegativity is the tendency for an atom of a given chemical element to attract shared electrons when forming a chemical bond, where the higher the associated electronegativity then the more it attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, which characterizes a bond along the continuous scale from covalent to ionic bonding. A large difference in electronegativity leads to more polar (ionic) character in the bond.
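One common quantitative form of this idea, stated here as it is usually quoted rather than as part of the text above, is Pauling's relation, which defines the electronegativity difference of elements A and B through the excess of the A–B bond dissociation energy over the mean of the A–A and B–B values (energies in electronvolts):

(\chi_A - \chi_B)^2 \approx E_d(\mathrm{A{-}B}) - \tfrac{1}{2}\left[E_d(\mathrm{A{-}A}) + E_d(\mathrm{B{-}B})\right].

The larger this excess stabilization, the larger the electronegativity difference and the more polar (ionic) the bond.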
Ionic bonding is a type of electrostatic interaction between atoms that have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but an electronegativity difference of over 1.7 is likely to be ionic while a difference of less than 1.7 is likely to be covalent. Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e and +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured via such techniques as X-ray diffraction.
Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids such as sodium cyanide, NaCN. X-ray diffraction shows that in NaCN, for example, the bonds between sodium cations (Na+) and the cyanide anions (CN−) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the carbon (C) and nitrogen (N) atoms in cyanide are of the covalent type, so that each carbon is strongly bound to just one nitrogen, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal.
When such crystals are melted into liquids, the ionic bonds are broken first because they are non-directional and allow the charged species to move freely. Similarly, when such salts dissolve into water, the ionic bonds are typically broken by the interaction with water but the covalent bonds continue to hold. For example, in solution, the cyanide ions, still bound together as single CN− ions, move independently through the solution, as do sodium ions, as Na+. In water, charged ions move apart because each of them is more strongly attracted to a number of water molecules than to each other. The attraction between ions and water molecules in such solutions is due to a type of weak dipole–dipole chemical bond. In melted ionic compounds, the ions continue to be attracted to each other, but not in any ordered or crystalline way.
Covalent bonding is a common type of bonding in which two or more atoms share valence electrons more or less equally. The simplest and most common type is a single bond in which two atoms share two electrons. Other types include the double bond, the triple bond, one- and three-electron bonds, the three-center two-electron bond and three-center four-electron bond.
In non-polar covalent bonds, the electronegativity difference between the bonded atoms is small, typically 0 to 0.3. Bonds within most organic compounds are described as covalent. The figure shows methane (CH4), in which each hydrogen forms a covalent bond with the carbon. See sigma bonds and pi bonds for LCAO descriptions of such bonding.
Molecules that are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane.
A polar covalent bond is a covalent bond with a significant ionic character. This means that the two shared electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole–dipole interactions. The electronegativity difference between the two atoms in these bonds is 0.3 to 1.7.
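The rule-of-thumb cutoffs quoted above (roughly 0.3 and 1.7 on the Pauling scale) can be collected into a short illustrative sketch; the function below is hypothetical and the sharp categories are only a guide, since real bonds lie on a continuum:

def bond_character(chi_a: float, chi_b: float) -> str:
    # Classify a bond from the difference in Pauling electronegativities,
    # using the approximate cutoffs quoted above (0.3 and 1.7).
    delta = abs(chi_a - chi_b)
    if delta < 0.3:
        return "non-polar covalent"
    if delta < 1.7:
        return "polar covalent"
    return "ionic"

# Example with standard Pauling values: H = 2.20, Cl = 3.16
print(bond_character(2.20, 3.16))  # difference ~0.96 -> "polar covalent"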
A single bond between two atoms corresponds to the sharing of one pair of electrons. The hydrogen (H) atom has one valence electron. Two hydrogen atoms can then form a molecule, held together by the shared pair of electrons. Each H atom now has the noble gas electron configuration of helium (He). The pair of shared electrons forms a single covalent bond. The electron density of these two bonding electrons in the region between the two atoms increases from the density of two non-interacting H atoms.
A double bond has two shared pairs of electrons, one in a sigma bond and one in a pi bond with electron density concentrated on two opposite sides of the internuclear axis. A triple bond consists of three shared electron pairs, forming one sigma and two pi bonds. An example is nitrogen. Quadruple and higher bonds are very rare and occur only between certain transition metal atoms.
A coordinate covalent bond is a covalent bond in which the two shared bonding electrons are from the same one of the atoms involved in the bond. For example, boron trifluoride (BF3) and ammonia (NH3) form an adduct or coordination complex F3B←NH3 with a B–N bond in which a lone pair of electrons on N is shared with an empty atomic orbital on B. BF3 with an empty orbital is described as an electron pair acceptor or Lewis acid, while NH3 with a lone pair that can be shared is described as an electron-pair donor or Lewis base. The electrons are shared roughly equally between the atoms in contrast to ionic bonding. Such bonding is shown by an arrow pointing to the Lewis acid. (In the Figure, solid lines are bonds in the plane of the diagram, wedged bonds point towards the observer, and dashed bonds point away from the observer.)
Transition metal complexes are generally bound by coordinate covalent bonds. For example, the ion Ag+ reacts as a Lewis acid with two molecules of the Lewis base NH3 to form the complex ion Ag(NH3)2+, which has two Ag←N coordinate covalent bonds.
In metallic bonding, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement or delocalization of bonding electrons leads to classical metallic properties such as luster (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength.
There are several types of weak bonds that can be formed between two or more molecules which are not covalently bound. Intermolecular forces cause molecules to attract or repel each other. Often, these forces influence physical characteristics (such as the melting point) of a substance.
Van der Waals forces are interactions between closed-shell molecules. They include both Coulombic interactions between partial charges in polar molecules, and Pauli repulsions between closed electron shells.
Keesom forces are the forces between the permanent dipoles of two polar molecules. London dispersion forces are the forces between induced dipoles of different molecules. There can also be an interaction between a permanent dipole in one molecule and an induced dipole in another molecule.
Hydrogen bonds of the form A–H···B occur when A and B are two highly electronegative atoms (usually N, O or F) such that A forms a highly polar covalent bond with H so that H has a partial positive charge, and B has a lone pair of electrons which is attracted to this partial positive charge and forms a hydrogen bond. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues. In some cases a similar halogen bond can be formed by a halogen atom located between two electronegative atoms on different molecules.
At short distances, repulsive forces between atoms also become important.
In the (unrealistic) limit of "pure" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The force between the atoms depends on isotropic continuum electrostatic potentials. The magnitude of the force is in simple proportion to the product of the two ionic charges according to Coulomb's law.
Covalent bonds are better understood by valence bond (VB) theory or molecular orbital (MO) theory. The properties of the atoms involved can be understood using concepts such as oxidation number, formal charge, and electronegativity. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, bonding is conceptualized as being built up from electron pairs that are localized and shared by two atoms via the overlap of atomic orbitals. The concepts of orbital hybridization and resonance augment this basic notion of the electron pair bond. In molecular orbital theory, bonding is viewed as being delocalized and apportioned in orbitals that extend throughout the molecule and are adapted to its symmetry properties, typically by considering linear combinations of atomic orbitals (LCAO). Valence bond theory is more chemically intuitive by being spatially localized, allowing attention to be focused on the parts of the molecule undergoing chemical change. In contrast, molecular orbitals are more "natural" from a quantum mechanical point of view, with orbital energies being physically significant and directly linked to experimental ionization energies from photoelectron spectroscopy. Consequently, valence bond theory and molecular orbital theory are often viewed as competing but complementary frameworks that offer different insights into chemical systems. As approaches for electronic structure theory, both MO and VB methods can give approximations to any desired level of accuracy, at least in principle. However, at lower levels, the approximations differ, and one approach may be better suited for computations involving a particular system or property than the other.
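The LCAO idea can be illustrated with the smallest possible case: two identical atomic orbitals combining into one bonding and one antibonding molecular orbital. The following Python sketch diagonalizes a Hückel-style two-orbital Hamiltonian; the on-site energy and coupling values are assumed for illustration, and orbital overlap is neglected.

```python
# A minimal LCAO sketch (Hueckel-style, overlap neglected): two identical
# atomic orbitals with on-site energy alpha and coupling beta combine into a
# lower-energy bonding MO and a higher-energy antibonding MO. The numbers
# are illustrative, not fitted to any real molecule.
import numpy as np

alpha, beta = -13.6, -3.0   # eV; assumed example values
H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)   # eigh: H is real symmetric
for E, c in zip(energies, coeffs.T):
    kind = "bonding" if np.sign(c[0]) == np.sign(c[1]) else "antibonding"
    print(f"E = {E:6.2f} eV, coefficients = {np.round(c, 3)} ({kind})")
# Expected: E = alpha + beta for the in-phase (bonding) combination and
# E = alpha - beta for the out-of-phase (antibonding) combination.
```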
Unlike the spherically symmetrical Coulombic forces in pure ionic bonds, covalent bonds are generally directed and anisotropic. These are often classified based on their symmetry with respect to a molecular plane as sigma bonds and pi bonds. In the general case, atoms form bonds that are intermediate between ionic and covalent, depending on the relative electronegativity of the atoms involved. Bonds of this type are known as polar covalent bonds. | [
{
"paragraph_id": 0,
"text": "A chemical bond is a lasting attraction between atoms or ions that enables the formation of molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds, or through the sharing of electrons as in covalent bonds. The strength of chemical bonds varies considerably; there are \"strong bonds\" or \"primary bonds\" such as covalent, ionic and metallic bonds, and \"weak bonds\" or \"secondary bonds\" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. \"Constructive quantum mechanical wavefunction interference\" stabilizes the paired nuclei (see Theories of chemical bonding). Bonded nuclei maintain an optimal distance (the bond distance) balancing attractive and repulsive effects explained quantitatively by quantum theory.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter.",
"title": ""
},
{
"paragraph_id": 3,
"text": "All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances.",
"title": ""
},
{
"paragraph_id": 4,
"text": "A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost or valence electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter.",
"title": "Overview of main types of chemical bonds"
},
{
"paragraph_id": 5,
"text": "In the simplest view of a covalent bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Energy is released by bond formation. This is not as a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models.",
"title": "Overview of main types of chemical bonds"
},
{
"paragraph_id": 6,
"text": "In a polar covalent bond, one or more electrons are unequally shared between two nuclei. Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other). When covalent bonds link long chains of atoms in large molecules, however (as in polymers such as nylon), or when covalent bonds extend in networks through solids that are not composed of discrete molecules (such as diamond or quartz or the silicate minerals in many types of rock) then the structures that result may be both strong and tough, at least in the direction oriented correctly with networks of covalent bonds. Also, the melting points of such covalent polymers and networks increase greatly.",
"title": "Overview of main types of chemical bonds"
},
{
"paragraph_id": 7,
"text": "In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the positive and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds. Often, such bonds have no particular orientation in space, since they result from equal electrostatic attraction of each ion to all ions around them. Ionic bonds are strong (and thus ionic substances require high temperatures to melt) but also brittle, since the forces between ions are short-range and do not easily bridge cracks and fractures. This type of bond gives rise to the physical characteristics of crystals of classic mineral salts, such as table salt.",
"title": "Overview of main types of chemical bonds"
},
{
"paragraph_id": 8,
"text": "A less often mentioned type of bonding is metallic bonding. In this type of bonding, each atom in a metal donates one or more electrons to a \"sea\" of electrons that reside between many metal atoms. In this sea, each electron is free (by virtue of its wave nature) to be associated with a great many atoms at once. The bond results because the metal atoms become somewhat positively charged due to loss of their electrons while the electrons remain attracted to many atoms, without being part of any given atom. Metallic bonding may be seen as an extreme example of delocalization of electrons over a large system of covalent bonds, in which every atom participates. This type of bonding is often very strong (resulting in the tensile strength of metals). However, metallic bonding is more collective in nature than other types, and so they allow metal crystals to more easily deform, because they are composed of atoms attracted to each other, but not in any particularly-oriented ways. This results in the malleability of metals. The cloud of electrons in metallic bonding causes the characteristically good electrical and thermal conductivity of metals, and also their shiny lustre that reflects most frequencies of white light.",
"title": "Overview of main types of chemical bonds"
},
{
"paragraph_id": 9,
"text": "Early speculations about the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. In 1704, Sir Isaac Newton famously outlined his atomic bonding theory, in \"Query 31\" of his Opticks, whereby atoms attach to each other by some \"force\". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. \"hooked atoms\", \"glued together by rest\", or \"stuck together by conspiring motions\", Newton states that he would rather infer from their cohesion, that \"particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect.\"",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive characters of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, Alexander Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called \"combining power\", in which compounds were joined owing to an attraction of positive and negative poles. In 1904, Richard Abegg proposed his rule that the difference between the maximum and minimum valencies of an element is often eight. At this point, valency was still an empirical number based only on chemical properties.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "However the nature of the atom became clearer with Ernest Rutherford's 1911 discovery that of an atomic nucleus surrounded by electrons in which he quoted Nagaoka rejected Thomson's model on the grounds that opposite charges are impenetrable. In 1904, Nagaoka proposed an alternative planetary model of the atom in which a positively charged center is surrounded by a number of revolving electrons, in the manner of Saturn and its rings.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Nagaoka's model made two predictions:",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Rutherford mentions Nagaoka's model in his 1911 paper in which the atomic nucleus is proposed.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "At the 1911 Solvay Conference, in the discussion of what could regulate energy differences between atoms, Max Planck stated: \"The intermediaries could be the electrons.\" These nuclear models suggested that electrons determine chemical behavior.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Next came Niels Bohr's 1913 model of a nuclear atom with electron orbits. In 1916, chemist Gilbert N. Lewis developed the concept of electron-pair bonds, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, \"An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively.\"",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Also in 1916, Walther Kossel put forward a theory similar to Lewis' only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Niels Bohr also proposed a model of the chemical bond in 1913. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2, was derived by the Danish physicist Øyvind Burrau. This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler–London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive electronic structures of molecules of F2 (fluorine) and O2 (oxygen) molecules, from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 1933, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculation which used functions only of the distance of the electron from the atomic nucleus, used functions which also explicitly added the distance between the two electrons. With up to 13 adjustable parameters they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and gave excellent agreement with experiments. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Because atoms and molecules are three-dimensional, it is difficult to use a single method to indicate orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in different ways depending on the type of discussion. Sometimes, some details are neglected. For example, in organic chemistry one is sometimes concerned only with the functional group of the molecule. Thus, the molecular formula of ethanol may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from another part of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is discussed. Sometimes, even the non-bonding valence shell electrons (with the two-dimensional approximate directions) are marked, e.g. for elemental carbon .C. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene anion (\\C=C/ ) indicating the possibility of bond formation.",
"title": "Bonds in chemical formulas"
},
{
"paragraph_id": 21,
"text": "Strong chemical bonds are the intramolecular forces that hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 22,
"text": "The types of strong bond differ due to the difference in electronegativity of the constituent elements. Electronegativity is the tendency for an atom of a given chemical element to attract shared electrons when forming a chemical bond, where the higher the associated electronegativity then the more it attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, which characterizes a bond along the continuous scale from covalent to ionic bonding. A large difference in electronegativity leads to more polar (ionic) character in the bond.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 23,
"text": "Ionic bonding is a type of electrostatic interaction between atoms that have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but an electronegativity difference of over 1.7 is likely to be ionic while a difference of less than 1.7 is likely to be covalent. Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e to +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured via such techniques as X-ray diffraction.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 24,
"text": "Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids such as sodium cyanide, NaCN. X-ray diffraction shows that in NaCN, for example, the bonds between sodium cations (Na) and the cyanide anions (CN) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the carbon (C) and nitrogen (N) atoms in cyanide are of the covalent type, so that each carbon is strongly bound to just one nitrogen, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 25,
"text": "When such crystals are melted into liquids, the ionic bonds are broken first because they are non-directional and allow the charged species to move freely. Similarly, when such salts dissolve into water, the ionic bonds are typically broken by the interaction with water but the covalent bonds continue to hold. For example, in solution, the cyanide ions, still bound together as single CN ions, move independently through the solution, as do sodium ions, as Na. In water, charged ions move apart because each of them are more strongly attracted to a number of water molecules than to each other. The attraction between ions and water molecules in such solutions is due to a type of weak dipole-dipole type chemical bond. In melted ionic compounds, the ions continue to be attracted to each other, but not in any ordered or crystalline way.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 26,
"text": "Covalent bonding is a common type of bonding in which two or more atoms share valence electrons more or less equally. The simplest and most common type is a single bond in which two atoms share two electrons. Other types include the double bond, the triple bond, one- and three-electron bonds, the three-center two-electron bond and three-center four-electron bond.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 27,
"text": "In non-polar covalent bonds, the electronegativity difference between the bonded atoms is small, typically 0 to 0.3. Bonds within most organic compounds are described as covalent. The figure shows methane (CH4), in which each hydrogen forms a covalent bond with the carbon. See sigma bonds and pi bonds for LCAO descriptions of such bonding.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 28,
"text": "Molecules that are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 29,
"text": "A polar covalent bond is a covalent bond with a significant ionic character. This means that the two shared electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole–dipole interactions. The electronegativity difference between the two atoms in these bonds is 0.3 to 1.7.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 30,
"text": "A single bond between two atoms corresponds to the sharing of one pair of electrons. The Hydrogen (H) atom has one valence electron. Two Hydrogen atoms can then form a molecule, held together by the shared pair of electrons. Each H atom now has the noble gas electron configuration of helium (He). The pair of shared electrons forms a single covalent bond. The electron density of these two bonding electrons in the region between the two atoms increases from the density of two non-interacting H atoms.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 31,
"text": "A double bond has two shared pairs of electrons, one in a sigma bond and one in a pi bond with electron density concentrated on two opposite sides of the internuclear axis. A triple bond consists of three shared electron pairs, forming one sigma and two pi bonds. An example is nitrogen. Quadruple and higher bonds are very rare and occur only between certain transition metal atoms.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 32,
"text": "A coordinate covalent bond is a covalent bond in which the two shared bonding electrons are from the same one of the atoms involved in the bond. For example, boron trifluoride (BF3) and ammonia (NH3) form an adduct or coordination complex F3B←NH3 with a B–N bond in which a lone pair of electrons on N is shared with an empty atomic orbital on B. BF3 with an empty orbital is described as an electron pair acceptor or Lewis acid, while NH3 with a lone pair that can be shared is described as an electron-pair donor or Lewis base. The electrons are shared roughly equally between the atoms in contrast to ionic bonding. Such bonding is shown by an arrow pointing to the Lewis acid. (In the Figure, solid lines are bonds in the plane of the diagram, wedged bonds point towards the observer, and dashed bonds point away from the observer.)",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 33,
"text": "Transition metal complexes are generally bound by coordinate covalent bonds. For example, the ion Ag reacts as a Lewis acid with two molecules of the Lewis base NH3 to form the complex ion Ag(NH3)2, which has two Ag←N coordinate covalent bonds.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 34,
"text": "In metallic bonding, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement or delocalization of bonding electrons leads to classical metallic properties such as luster (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength.",
"title": "Strong chemical bonds"
},
{
"paragraph_id": 35,
"text": "There are several types of weak bonds that can be formed between two or more molecules which are not covalently bound. Intermolecular forces cause molecules to attract or repel each other. Often, these forces influence physical characteristics (such as the melting point) of a substance.",
"title": "Intermolecular bonding"
},
{
"paragraph_id": 36,
"text": "Van der Waals forces are interactions between closed-shell molecules. They include both Coulombic interactions between partial charges in polar molecules, and Pauli repulsions between closed electrons shells.",
"title": "Intermolecular bonding"
},
{
"paragraph_id": 37,
"text": "Keesom forces are the forces between the permanent dipoles of two polar molecules. London dispersion forces are the forces between induced dipoles of different molecules. There can also be an interaction between a permanent dipole in one molecule and an induced dipole in another molecule.",
"title": "Intermolecular bonding"
},
{
"paragraph_id": 38,
"text": "Hydrogen bonds of the form A--H•••B occur when A and B are two highly electronegative atoms (usually N, O or F) such that A forms a highly polar covalent bond with H so that H has a partial positive charge, and B has a lone pair of electrons which is attracted to this partial positive charge and forms a hydrogen bond. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues. In some cases a similar halogen bond can be formed by a halogen atom located between two electronegative atoms on different molecules.",
"title": "Intermolecular bonding"
},
{
"paragraph_id": 39,
"text": "At short distances, repulsive forces between atoms also become important.",
"title": "Intermolecular bonding"
},
{
"paragraph_id": 40,
"text": "In the (unrealistic) limit of \"pure\" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The force between the atoms depends on isotropic continuum electrostatic potentials. The magnitude of the force is in simple proportion to the product of the two ionic charges according to Coulomb's law.",
"title": "Theories of chemical bonding"
},
{
"paragraph_id": 41,
"text": "Covalent bonds are better understood by valence bond (VB) theory or molecular orbital (MO) theory. The properties of the atoms involved can be understood using concepts such as oxidation number, formal charge, and electronegativity. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, bonding is conceptualized as being built up from electron pairs that are localized and shared by two atoms via the overlap of atomic orbitals. The concepts of orbital hybridization and resonance augment this basic notion of the electron pair bond. In molecular orbital theory, bonding is viewed as being delocalized and apportioned in orbitals that extend throughout the molecule and are adapted to its symmetry properties, typically by considering linear combinations of atomic orbitals (LCAO). Valence bond theory is more chemically intuitive by being spatially localized, allowing attention to be focused on the parts of the molecule undergoing chemical change. In contrast, molecular orbitals are more \"natural\" from a quantum mechanical point of view, with orbital energies being physically significant and directly linked to experimental ionization energies from photoelectron spectroscopy. Consequently, valence bond theory and molecular orbital theory are often viewed as competing but complementary frameworks that offer different insights into chemical systems. As approaches for electronic structure theory, both MO and VB methods can give approximations to any desired level of accuracy, at least in principle. However, at lower levels, the approximations differ, and one approach may be better suited for computations involving a particular system or property than the other.",
"title": "Theories of chemical bonding"
},
{
"paragraph_id": 42,
"text": "Unlike the spherically symmetrical Coulombic forces in pure ionic bonds, covalent bonds are generally directed and anisotropic. These are often classified based on their symmetry with respect to a molecular plane as sigma bonds and pi bonds. In the general case, atoms form bonds that are intermediate between ionic and covalent, depending on the relative electronegativity of the atoms involved. Bonds of this type are known as polar covalent bonds.",
"title": "Theories of chemical bonding"
}
]
| A chemical bond is a lasting attraction between atoms or ions that enables the formation of molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds, or through the sharing of electrons as in covalent bonds. The strength of chemical bonds varies considerably; there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding. Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. "Constructive quantum mechanical wavefunction interference" stabilizes the paired nuclei. Bonded nuclei maintain an optimal distance balancing attractive and repulsive effects explained quantitatively by quantum theory. The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter. All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances. | 2001-09-02T14:10:37Z | 2023-12-21T15:48:53Z | [
"Template:Distinguish",
"Template:Pp-semi-indef",
"Template:Chem",
"Template:Main",
"Template:Commons category",
"Template:Branches of chemistry",
"Template:Webarchive",
"Template:Cite web",
"Template:Short description",
"Template:Pp-move-indef",
"Template:Color",
"Template:Cn",
"Template:Cite journal",
"Template:Citation",
"Template:Authority control",
"Template:Reflist",
"Template:Cite book",
"Template:Wikiquote",
"Template:Chemical bonding theory",
"Template:Rp",
"Template:R",
"Template:Chemical bonds"
]
| https://en.wikipedia.org/wiki/Chemical_bond |
5,995 | Cell | Cell most often refers to:
Cell may also refer to: | [
{
"paragraph_id": 0,
"text": "Cell most often refers to:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cell may also refer to:",
"title": ""
}
]
| Cell most often refers to: Cell (biology), the functional basic unit of life Cell may also refer to: | 2001-09-22T18:01:22Z | 2023-10-18T00:28:22Z | [
"Template:Pp-semi-indef",
"Template:Wiktionary",
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Cell |
5,999 | Climate | Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents.
Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme is the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. The major classifications in Thornthwaite's climate classification are microthermal, mesothermal, and megathermal.
Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region.
Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales from various factors. Recent warming is discussed in terms of global warming, which results in redistributions of biota. For example, as climate scientist Lesley Ann Hughes has written: "a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately 300–400 km [190–250 mi] in latitude (in the temperate zone) or 500 m [1,600 ft] in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones."
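Hughes's rule of thumb lends itself to simple scaling. The Python sketch below applies the cited ratios to other amounts of warming; the linear extrapolation is an illustrative assumption, not a claim made in the source.

```python
# A small sketch of the isotherm-shift arithmetic quoted above: scaling the
# cited ratios (3 degC per 300-400 km of latitude, or per 500 m of elevation)
# linearly to other amounts of warming. The linear scaling is an assumption
# made for illustration only.

def expected_shift(warming_c):
    km_low  = warming_c * (300 / 3)   # km of poleward shift, lower estimate
    km_high = warming_c * (400 / 3)   # km of poleward shift, upper estimate
    m_up    = warming_c * (500 / 3)   # metres of upslope shift
    return km_low, km_high, m_up

for dT in (1.5, 2.0, 3.0):
    lo, hi, up = expected_shift(dT)
    print(f"{dT:.1f} degC: ~{lo:.0f}-{hi:.0f} km poleward or ~{up:.0f} m upslope")
```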
Climate (from Ancient Greek κλίμα 'inclination') is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) 2001 glossary definition is as follows:
Climate in a narrow sense is usually defined as the "average weather", or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period ranging from months to thousands or millions of years. The classical period is 30 years, as defined by the World Meteorological Organization (WMO). These quantities are most often surface variables such as temperature, precipitation, and wind. Climate in a wider sense is the state, including a statistical description, of the climate system.
The World Meteorological Organization (WMO) describes "climate normals" as "reference points used by climatologists to compare current climatological trends to that of the past or what is considered typical. A climate normal is defined as the arithmetic average of a climate element (e.g. temperature) over a 30-year period. A 30-year period is used as it is long enough to filter out any interannual variation or anomalies such as El Niño–Southern Oscillation, but also short enough to be able to show longer climatic trends."
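Because a climate normal is just a 30-year arithmetic mean, it is straightforward to compute. The following Python sketch derives monthly normals from a synthetic 1961–1990 station record; the random data are placeholders standing in for real observations.

```python
# Minimal sketch of a WMO-style "climate normal": the arithmetic mean of a
# climate element over a 30-year reference period. The data here are random
# stand-ins for 30 years of monthly mean temperatures at one station.
import random

random.seed(0)
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

# records[year][month index] -> monthly mean temperature (degC), 1961-1990
records = {year: [10 + 8 * random.uniform(-1, 1) for _ in MONTHS]
           for year in range(1961, 1991)}

# The normal for each calendar month is the 30-year mean of that month.
normals = [sum(records[y][m] for y in records) / len(records)
           for m in range(12)]
for name, t in zip(MONTHS, normals):
    print(f"{name}: {t:5.1f} degC")
```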
The WMO originated from the International Meteorological Organization which set up a technical commission for climatology in 1929. At its 1934 Wiesbaden meeting, the technical commission designated the thirty-year period from 1901 to 1930 as the reference time frame for climatological standard normals. In 1982, the WMO agreed to update climate normals, and these were subsequently completed on the basis of climate data from 1 January 1961 to 31 December 1990. The 1961–1990 climate normals serve as the baseline reference period. The next set of climate normals to be published by WMO is from 1991 to 2010. Aside from collecting the most common atmospheric variables (air temperature, pressure, precipitation and wind), other variables such as humidity, visibility, cloud amount, solar radiation, soil temperature, pan evaporation rate, days with thunder and days with hail are also collected to measure change in climate conditions.
The difference between climate and weather is usefully summarized by the popular phrase "Climate is what you expect, weather is what you get." Over historical time spans, there are a number of nearly constant variables that determine climate, including latitude, altitude, proportion of land to water, and proximity to oceans and mountains. All of these variables change only over periods of millions of years due to processes such as plate tectonics. Other climate determinants are more dynamic: the thermohaline circulation of the ocean leads to a 5 °C (9 °F) warming of the northern Atlantic Ocean compared to other ocean basins. Other ocean currents redistribute heat between land and water on a more regional scale. The density and type of vegetation coverage affects solar heat absorption, water retention, and rainfall on a regional level. Alterations in the quantity of atmospheric greenhouse gases (particularly carbon dioxide and methane) determine the amount of solar energy retained by the planet, leading to global warming or global cooling. The variables which determine climate are numerous and the interactions complex, but there is general agreement that the broad outlines are understood, at least insofar as the determinants of historical climate change are concerned.
Climate classifications are systems that categorize the world's climates. A climate classification may correlate closely with a biome classification, as climate is a major influence on life in a region. One of the most used is the Köppen climate classification scheme first developed in 1899.
There are several ways to classify climates into similar regimes. Originally, climes were defined in Ancient Greece to describe the weather depending upon a location's latitude. Modern climate classification methods can be broadly divided into genetic methods, which focus on the causes of climate, and empiric methods, which focus on the effects of climate. Examples of genetic classification include methods based on the relative frequency of different air mass types or locations within synoptic weather disturbances. Examples of empiric classifications include climate zones defined by plant hardiness, evapotranspiration, or more generally the Köppen climate classification which was originally designed to identify the climates associated with certain biomes. A common shortcoming of these classification schemes is that they produce distinct boundaries between the zones they define, rather than the gradual transition of climate properties more common in nature.
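As an illustration of an empiric scheme, the sketch below assigns one of the broad Köppen temperature groups from monthly mean temperatures alone. It is deliberately simplified: the real classification also uses precipitation (for the arid B group and the wet/dry-season subtypes), which this toy version omits.

```python
# A deliberately simplified, Koeppen-style sketch: assigning one of the
# broad temperature-defined groups from monthly mean temperatures alone.
# Real Koeppen classification also uses precipitation rules, omitted here.

def koppen_group(monthly_means_c):
    coldest, warmest = min(monthly_means_c), max(monthly_means_c)
    if coldest >= 18:
        return "A (tropical)"
    if warmest < 10:
        return "E (polar)"
    if coldest > 0:
        return "C (temperate)"
    return "D (continental)"

# Illustrative monthly mean temperatures (degC), January to December:
singapore = [26, 27, 27, 28, 28, 28, 27, 27, 27, 27, 26, 26]
warsaw    = [-2, -1, 3, 9, 14, 17, 19, 18, 13, 8, 3, -1]
print(koppen_group(singapore))  # A (tropical)
print(koppen_group(warsaw))     # D (continental)
```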
Paleoclimatology is the study of past climate over a great period of the Earth's history. It uses evidence with different time scales (from decades to millennia) from ice sheets, tree rings, sediments, pollen, coral, and rocks to determine the past state of the climate. It demonstrates periods of stability and periods of change and can indicate whether changes follow patterns such as regular cycles.
Details of the modern climate record are known through the taking of measurements from such weather instruments as thermometers, barometers, and anemometers during the past few centuries. The instruments used to study weather over the modern time scale, their observation frequency, their known error, their immediate environment, and their exposure have changed over the years, which must be considered when studying the climate of centuries past. Long-term modern climate records skew towards population centres and affluent countries. Since the 1960s, the launch of satellites has allowed records to be gathered on a global scale, including areas with little to no human presence, such as the Arctic region and oceans.
Climate variability is the term used to describe variations in the mean state and other characteristics of climate (such as chances or possibility of extreme weather, etc.) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused systematically and occurs at random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.
There are close correlations between Earth's climate oscillations and astronomical factors (barycenter changes, solar variation, cosmic ray flux, cloud albedo feedback, Milankovic cycles), and modes of heat distribution between the ocean-atmosphere climate system. In some cases, current, historical and paleoclimatological natural oscillations may be masked by significant volcanic eruptions, impact events, irregularities in climate proxy data, positive feedback processes or anthropogenic emissions of substances such as greenhouse gases.
Over the years, the definitions of climate variability and the related term climate change have shifted. While the term climate change now implies change that is both long-term and of human causation, in the 1960s the term climate change was used for what we now describe as climate variability, that is, climatic inconsistencies and anomalies.
Climate change is the variation in global or regional climates over time. It reflects changes in the variability or average state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes internal to the Earth, external forces (e.g. variations in sunlight intensity) or, more recently, human activities. In recent usage, especially in the context of environmental policy, the term "climate change" often refers only to changes in modern climate, including the rise in average surface temperature known as global warming. In some cases, the term is also used with a presumption of human causation, as in the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC uses "climate variability" for non-human caused variations.
Earth has undergone periodic climate shifts in the past, including four major ice ages. These consist of glacial periods where conditions are colder than normal, separated by interglacial periods. The accumulation of snow and ice during a glacial period increases the surface albedo, reflecting more of the Sun's energy into space and maintaining a lower atmospheric temperature. Increases in greenhouse gases, such as by volcanic activity, can increase the global temperature and produce an interglacial period. Suggested causes of ice age periods include the positions of the continents, variations in the Earth's orbit, changes in the solar output, and volcanism. However, these naturally-caused changes in climate occur on a much slower time scale than the present rate of change which is caused by the emission of greenhouse gases by human activities.
Climate models use quantitative methods to simulate the interactions and transfer of radiative energy between the atmosphere, oceans, land surface and ice through a series of physics equations. They are used for a variety of purposes, from the study of the dynamics of the weather and climate system to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the earth. Any imbalance results in a change in the average temperature of the earth.
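The balance between incoming shortwave and outgoing longwave radiation can be captured in the simplest possible ("zero-dimensional") climate model. The Python sketch below solves (1 − albedo) × S/4 = ε σ T⁴ for the equilibrium temperature; the effective emissivity ε is a crude stand-in for the greenhouse effect, and its value here is chosen only for illustration.

```python
# The simplest expression of the balance described above: a zero-dimensional
# energy-balance model equating absorbed shortwave radiation with emitted
# longwave radiation, (1 - albedo) * S / 4 = epsilon * sigma * T**4.
# The emissivity values below are illustrative assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_temp(emissivity):
    absorbed = (1 - ALBEDO) * S0 / 4          # W m^-2, averaged over sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"No greenhouse (eps=1.0):   {equilibrium_temp(1.0):.0f} K")   # ~255 K
print(f"With greenhouse (eps=0.61): {equilibrium_temp(0.61):.0f} K")  # ~288 K
```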
Climate models are available at different resolutions, ranging from >100 km to 1 km. High resolutions in global climate models require significant computational resources, and so only a few global datasets exist. Global climate models can be dynamically or statistically downscaled to regional climate models to analyze impacts of climate change on a local scale. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the earth's land surface areas).
The most talked-about applications of these models in recent years have been their use to infer the consequences of increasing greenhouse gases in the atmosphere, primarily carbon dioxide (see greenhouse gas). These models predict an upward trend in the global mean surface temperature, with the most rapid increase in temperature being projected for the higher latitudes of the Northern Hemisphere.
Models can range from relatively simple to quite complex. Simple radiant heat transfer models treat the earth as a single point and average outgoing energy. This can be expanded vertically (as in radiative-convective models), or horizontally. Finally, more complex (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange. | [
{
"paragraph_id": 0,
"text": "Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme was the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. The major classifications in Thornthwaite’s climate classification are microthermal, mesothermal, and megathermal.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales from various factors. Recent warming is discussed in terms of global warming, which results in redistributions of biota. For example, as climate scientist Lesley Ann Hughes has written: \"a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately 300–400 km [190–250 mi] in latitude (in the temperate zone) or 500 m [1,600 ft] in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones.\"",
"title": ""
},
{
"paragraph_id": 4,
"text": "Climate (from Ancient Greek κλίμα 'inclination') is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) 2001 glossary definition is as follows:",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "Climate in a narrow sense is usually defined as the \"average weather\", or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period ranging from months to thousands or millions of years. The classical period is 30 years, as defined by the World Meteorological Organization (WMO). These quantities are most often surface variables such as temperature, precipitation, and wind. Climate in a wider sense is the state, including a statistical description, of the climate system.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "The World Meteorological Organization (WMO) describes \"climate normals\" as \"reference points used by climatologists to compare current climatological trends to that of the past or what is considered typical. A climate normal is defined as the arithmetic average of a climate element (e.g. temperature) over a 30-year period. A 30-year period is used as it is long enough to filter out any interannual variation or anomalies such as El Niño–Southern Oscillation, but also short enough to be able to show longer climatic trends.\"",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "The WMO originated from the International Meteorological Organization which set up a technical commission for climatology in 1929. At its 1934 Wiesbaden meeting, the technical commission designated the thirty-year period from 1901 to 1930 as the reference time frame for climatological standard normals. In 1982, the WMO agreed to update climate normals, and these were subsequently completed on the basis of climate data from 1 January 1961 to 31 December 1990. The 1961–1990 climate normals serve as the baseline reference period. The next set of climate normals to be published by WMO is from 1991 to 2010. Aside from collecting from the most common atmospheric variables (air temperature, pressure, precipitation and wind), other variables such as humidity, visibility, cloud amount, solar radiation, soil temperature, pan evaporation rate, days with thunder and days with hail are also collected to measure change in climate conditions.",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "The difference between climate and weather is usefully summarized by the popular phrase \"Climate is what you expect, weather is what you get.\" Over historical time spans, there are a number of nearly constant variables that determine climate, including latitude, altitude, proportion of land to water, and proximity to oceans and mountains. All of these variables change only over periods of millions of years due to processes such as plate tectonics. Other climate determinants are more dynamic: the thermohaline circulation of the ocean leads to a 5 °C (41 °F) warming of the northern Atlantic Ocean compared to other ocean basins. Other ocean currents redistribute heat between land and water on a more regional scale. The density and type of vegetation coverage affects solar heat absorption, water retention, and rainfall on a regional level. Alterations in the quantity of atmospheric greenhouse gases (particularly carbon dioxide and methane determines the amount of solar energy retained by the planet, leading to global warming or global cooling. The variables which determine climate are numerous and the interactions complex, but there is general agreement that the broad outlines are understood, at least insofar as the determinants of historical climate change are concerned.",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "Climate classifications are systems that categorize the world's climates. A climate classification may correlate closely with a biome classification, as climate is a major influence on life in a region. One of the most used is the Köppen climate classification scheme first developed in 1899.",
"title": "Climate classification"
},
{
"paragraph_id": 10,
"text": "There are several ways to classify climates into similar regimes. Originally, climes were defined in Ancient Greece to describe the weather depending upon a location's latitude. Modern climate classification methods can be broadly divided into genetic methods, which focus on the causes of climate, and empiric methods, which focus on the effects of climate. Examples of genetic classification include methods based on the relative frequency of different air mass types or locations within synoptic weather disturbances. Examples of empiric classifications include climate zones defined by plant hardiness, evapotranspiration, or more generally the Köppen climate classification which was originally designed to identify the climates associated with certain biomes. A common shortcoming of these classification schemes is that they produce distinct boundaries between the zones they define, rather than the gradual transition of climate properties more common in nature.",
"title": "Climate classification"
},
{
"paragraph_id": 11,
"text": "Paleoclimatology is the study of past climate over a great period of the Earth's history. It uses evidence with different time scales (from decades to millennia) from ice sheets, tree rings, sediments, pollen, coral, and rocks to determine the past state of the climate. It demonstrates periods of stability and periods of change and can indicate whether changes follow patterns such as regular cycles.",
"title": "Record"
},
{
"paragraph_id": 12,
"text": "Details of the modern climate record are known through the taking of measurements from such weather instruments as thermometers, barometers, and anemometers during the past few centuries. The instruments used to study weather over the modern time scale, their observation frequency, their known error, their immediate environment, and their exposure have changed over the years, which must be considered when studying the climate of centuries past. Long-term modern climate records skew towards population centres and affluent countries. Since the 1960s, the launch of satellites allow records to be gathered on a global scale, including areas with little to no human presence, such as the Arctic region and oceans.",
"title": "Record"
},
{
"paragraph_id": 13,
"text": "Climate variability is the term to describe variations in the mean state and other characteristics of climate (such as chances or possibility of extreme weather, etc.) \"on all spatial and temporal scales beyond that of individual weather events.\" Some of the variability does not appear to be caused systematically and occurs at random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.",
"title": "Climate variability"
},
{
"paragraph_id": 14,
"text": "There are close correlations between Earth's climate oscillations and astronomical factors (barycenter changes, solar variation, cosmic ray flux, cloud albedo feedback, Milankovic cycles), and modes of heat distribution between the ocean-atmosphere climate system. In some cases, current, historical and paleoclimatological natural oscillations may be masked by significant volcanic eruptions, impact events, irregularities in climate proxy data, positive feedback processes or anthropogenic emissions of substances such as greenhouse gases.",
"title": "Climate variability"
},
{
"paragraph_id": 15,
"text": "Over the years, the definitions of climate variability and the related term climate change have shifted. While the term climate change now implies change that is both long-term and of human causation, in the 1960s the word climate change was used for what we now describe as climate variability, that is, climatic inconsistencies and anomalies.",
"title": "Climate variability"
},
{
"paragraph_id": 16,
"text": "Climate change is the variation in global or regional climates over time. It reflects changes in the variability or average state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes internal to the Earth, external forces (e.g. variations in sunlight intensity) or, more recently, human activities. In recent usage, especially in the context of environmental policy, the term \"climate change\" often refers only to changes in modern climate, including the rise in average surface temperature known as global warming. In some cases, the term is also used with a presumption of human causation, as in the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC uses \"climate variability\" for non-human caused variations.",
"title": "Climate change"
},
{
"paragraph_id": 17,
"text": "Earth has undergone periodic climate shifts in the past, including four major ice ages. These consist of glacial periods where conditions are colder than normal, separated by interglacial periods. The accumulation of snow and ice during a glacial period increases the surface albedo, reflecting more of the Sun's energy into space and maintaining a lower atmospheric temperature. Increases in greenhouse gases, such as by volcanic activity, can increase the global temperature and produce an interglacial period. Suggested causes of ice age periods include the positions of the continents, variations in the Earth's orbit, changes in the solar output, and volcanism. However, these naturally-caused changes in climate occur on a much slower time scale than the present rate of change which is caused by the emission of greenhouse gases by human activities.",
"title": "Climate change"
},
{
"paragraph_id": 18,
"text": "Climate models use quantitative methods to simulate the interactions and transfer of radiative energy between the atmosphere, oceans, land surface and ice through a series of physics equations. They are used for a variety of purposes; from the study of the dynamics of the weather and climate system, to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the earth. Any imbalance results in a change in the average temperature of the earth.",
"title": "Climate models"
},
{
"paragraph_id": 19,
"text": "Climate models are available on different resolutions ranging from >100 km to 1 km. High resolutions in global climate models require significant computational resources, and so only a few global datasets exist. Global climate models can be dynamically or statistically downscaled to regional climate models to analyze impacts of climate change on a local scale. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the earth's land surface areas).",
"title": "Climate models"
},
{
"paragraph_id": 20,
"text": "The most talked-about applications of these models in recent years have been their use to infer the consequences of increasing greenhouse gases in the atmosphere, primarily carbon dioxide (see greenhouse gas). These models predict an upward trend in the global mean surface temperature, with the most rapid increase in temperature being projected for the higher latitudes of the Northern Hemisphere.",
"title": "Climate models"
},
{
"paragraph_id": 21,
"text": "Models can range from relatively simple to quite complex. Simple radiant heat transfer models treat the earth as a single point and average outgoing energy. This can be expanded vertically (as in radiative-convective models), or horizontally. Finally, more complex (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.",
"title": "Climate models"
},
{
"paragraph_id": 22,
"text": "",
"title": "External links"
}
]
| Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents. Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme is the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. The major classifications in Thornthwaite’s climate classification are microthermal, mesothermal, and megathermal. Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region. Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales due to various factors. Recent warming is discussed in terms of global warming, which results in redistributions of biota. For example, as climate scientist Lesley Ann Hughes has written: "a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately 300–400 km [190–250 mi] in latitude or 500 m [1,600 ft] in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones." | 2001-08-01T09:19:53Z | 2023-12-27T14:17:35Z | [
"Template:Harvnb",
"Template:Convert",
"Template:Anchor",
"Template:Sfn",
"Template:Cite EB9",
"Template:Div col",
"Template:Div col end",
"Template:Refbegin",
"Template:Short description",
"Template:Pp-pc1",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:Authority control",
"Template:Other uses",
"Template:Main",
"Template:Cite book",
"Template:Citation",
"Template:Cite news",
"Template:NIE Poster",
"Template:Etymology",
"Template:See also",
"Template:Reflist",
"Template:Navboxes",
"Template:Portal bar",
"Template:Cvt",
"Template:Webarchive",
"Template:Refend",
"Template:Good article",
"Template:Atmospheric sciences",
"Template:Blockquote",
"Template:Cite journal",
"Template:Spoken Wikipedia"
]
| https://en.wikipedia.org/wiki/Climate |
6,000 | History of the Comoros | The history of the Comoros extends to about 800–1000 AD when the archipelago was first inhabited. The Comoros have been inhabited by various groups throughout this time. France colonised the islands in the 19th century, and they became independent in 1975.
There is uncertainty about the early population of Comoros. According to one study of early crops, the islands may have been settled first by South East Asian sailors the same way Madagascar was.
This influx of Austronesian sailors, who had earlier settled nearby Madagascar, arrived in the 8th to 13th centuries CE. They are the source for the earliest archeological evidence of farming in the islands. Crops from archeological sites in Sima are predominantly rice strains of both indica and japonica varieties from Southeast Asia, as well as various other Asian crops like mung bean and cotton. Only a minority of the examined crops were African-derived, like finger millet, African sorghum, and cowpea. The Comoros are believed to be the first site of contact and subsequent admixture between African and Asian populations (earlier than Madagascar). Comorians today still display at most 20% Austronesian admixture.
From around the 15th century AD, Shirazi slave traders established trading ports and brought in slaves from the mainland. In the 16th century, social changes on the East African coast, probably linked to the arrival of the Portuguese, saw the arrival of a number of Hadrami Arabs, who established alliances with the Shirazis and founded several royal clans.
Over the centuries, the Comoros have been settled by a succession of diverse groups from the coast of Africa, the Persian Gulf, Southeast Asia and Madagascar.
Portuguese explorers first visited the archipelago in 1505.
Apart from a visit by the French Parmentier brothers in 1529, for much of the 16th century the only Europeans to visit the islands were Portuguese. British and Dutch ships began arriving around the start of the 17th century and the island of Ndzwani soon became a major supply point on the route to the East Indies. Ndzwani was generally ruled by a single sultan, who occasionally attempted to extend his authority to Mayotte and Mwali; Ngazidja was more fragmented, on occasion being divided into as many as 12 small kingdoms.
Sir James Lancaster's voyage to the Indian Ocean in 1591 was the first attempt by the English to break into the spice trade, which was dominated by the Portuguese. Only one of his four ships made it back from the Indies on that voyage, and that one with a decimated crew of 5 men and a boy. Lancaster himself was marooned by a cyclone on the Comoros. Many of his crew were speared to death by angry islanders, although Lancaster found his way home in 1594. (Dalrymple, W. 2019; Bloomsbury Publishing, ISBN 1635573955).
Both the British and the French turned their attention to the Comoros islands in the middle of the 19th century. The French finally acquired the islands through a cunning mixture of strategies, including the policy of "divide and conquer", chequebook politics and a serendipitous affair between a sultana and a French trader that was put to good use by the French, who kept control of the islands, quelling unrest and the occasional uprising.
William Sunley, a planter and British Consul from 1848 to 1866, was an influence on Anjouan.
France's presence in the western Indian Ocean dates to the early seventeenth century. The French established a settlement in southern Madagascar in 1634 and occupied the islands of Reunion and Rodrigues; in 1715 France claimed Mauritius (Ile de France), and in 1756 Seychelles. When France ceded Mauritius, Rodrigues, and Seychelles to Britain in 1814, it lost its Indian Ocean ports; Reunion, which remained French, did not offer a suitable natural harbor. In 1840 France acquired the island of Nosy-Be off the northwestern coast of Madagascar, but its potential as a port was limited. In 1841 the governor of Reunion, Admiral de Hell, negotiated with Andrian Souli, the Malagasy ruler of Mayotte, to cede Mayotte to France. Mahore offered a suitable site for port facilities, and its acquisition was justified by de Hell on the grounds that if France did not act, Britain would occupy the island.
Although France had established a foothold in Comoros, the acquisition of the other islands proceeded fitfully. At times the French were spurred on by the threat of British intervention, especially on Nzwani, and at other times, by the constant anarchy resulting from the sultans' wars upon each other. In the 1880s, Germany's growing influence on the East African coast added to the concerns of the French. Not until 1908, however, did the four Comoro Islands become part of France's colony of Madagascar and not until 1912 did the last sultan abdicate. Then, a colonial administration took over the islands and established a capital at Dzaoudzi on Mahore. Treaties of protectorate status marked a transition point between independence and annexation; such treaties were signed with the rulers of Njazidja, Nzwani, and Mwali in 1886.
The effects of French colonialism were mixed, at best. Colonial rule brought an end to the institution of slavery, but economic and social differences between former slaves and free persons and their descendants persisted. Health standards improved with the introduction of modern medicine, and the population increased about 50 percent between 1900 and 1960. France continued to dominate the economy. Food crop cultivation was neglected as French societes (companies) established cash crop plantations in the coastal regions. The result was an economy dependent on the exporting of vanilla, ylang-ylang, cloves, cocoa, copra, and other tropical crops. Most profits obtained from exports were diverted to France rather than invested in the infrastructure of the islands. Development was further limited by the colonial government's practice of concentrating public services on Madagascar. One consequence of this policy was the migration of large numbers of Comorans to Madagascar, where their presence would be a long-term source of tension between Comoros and its giant island neighbor. The Shirazi elite continued to play a prominent role as large landowners and civil servants. On the eve of independence, Comoros remained poor and undeveloped, having only one secondary school and practically nothing in the way of national media. Isolated from important trade routes by the opening of the Suez Canal in 1869, having few natural resources, and largely neglected by France, the islands were poorly equipped for independence.
On September 25, 1942, British forces landed in the Comoros, occupying them until October 13, 1946.
In 1946 the Comoro Islands became an overseas department of France with representation in the French National Assembly. The following year, the islands' administrative ties to Madagascar were severed; Comoros established its own customs regime in 1952. A Governing Council was elected in August 1957 on the four islands in conformity with the loi-cadre (enabling law) of June 23, 1956. A constitution providing for internal self-government was promulgated in 1961, following a 1958 referendum in which Comorans voted overwhelmingly to remain a part of France. This government consisted of a territorial assembly having, in 1975, thirty-nine members, and a Governing Council of six to nine ministers responsible to it.
Agreement was reached with France in 1973 for the Comoros to become independent in 1978. On July 6, 1975, however, the Comorian parliament passed a resolution declaring unilateral independence. The deputies of Mayotte abstained.
In 1961 the Comoros was granted autonomous rule and, in 1975, it broke all ties with France and established itself as an independent republic. From the very beginning Mayotte refused to join the new republic and aligned itself even more firmly to the French Republic, but the other islands remained committed to independence. The first president of the Comoros, Ahmed Abdallah Abderemane, did not last long before being ousted in a coup d'état by Ali Soilih, an atheist with an Islamic background.
Soilih began with a set of solid socialist ideals designed to modernize the country. However, the regime faced problems. A French mercenary by the name of Bob Denard arrived in the Comoros at dawn on 13 May 1978 and removed Soilih from power. Soilih was shot and killed during the coup. The mercenaries returned Abdallah to power and were given key positions in government.
In two referendums, in December 1974 and February 1976, the population of Mayotte voted against independence from France (by 63.8% and 99.4% respectively). Mayotte thus remains under French administration, and the Comorian Government has effective control over only Grande Comore, Anjouan, and Mohéli.
Later, French settlers, French-owned companies, and Arab merchants established a plantation-based economy that now uses about one-third of the land for export crops.
In 1978, President Ali Soilih, who had a firm anti-French line, was killed and Ahmed Abdallah came to power. Under the reign of Abdallah, Denard was commander of the Presidential Guard (PG) and de facto ruler of the country. He was trained, supported and funded by the white regimes in South Africa (SA) and Rhodesia (now Zimbabwe) in return for permission to set up a secret listening post on the islands. South African agents kept an ear on the important ANC bases in Lusaka and Dar es Salaam and watched the war in Mozambique, in which SA played an active role. The Comoros were also used for the evasion of arms sanctions.
When François Mitterrand was elected president in 1981, Denard lost the support of the French intelligence service, but he managed to strengthen the link between SA and the Comoros. Besides the military, Denard established his own company, SOGECOM, active in both security and construction, and seemed to profit from the arrangement. Between 1985 and 1987 the relationship of the PG with the local Comorians grew worse.
At the end of the 1980s the South Africans did not wish to continue supporting the mercenary regime, and France was in agreement. President Abdallah also wanted the mercenaries to leave. Their response was a (third) coup resulting in the death of President Abdallah, in which Denard and his men were probably involved. South Africa and the French government subsequently forced Denard and his mercenaries to leave the islands in 1989.
Said Mohamed Djohar became president. His time in office was turbulent, including an impeachment attempt in 1991 and a coup attempt in 1992.
On September 28, 1995, Bob Denard and a group of mercenaries took over the Comoros islands in a coup (named Operation Kaskari by the mercenaries) against President Djohar. France immediately and severely denounced the coup and, backed by the 1978 defense agreement with the Comoros, President Jacques Chirac ordered his special forces to retake the island. Bob Denard began to take measures to stop the coming invasion. A new presidential guard was created. Strong points armed with heavy machine guns were set up around the island, particularly around the island's two airports.
On October 3, 1995, at 11 p.m., the French deployed 600 men against a force of 33 mercenaries and a 300-man dissident force. Denard, however, ordered his mercenaries not to fight. Within 7 hours the airports at Iconi and Hahaya and the French Embassy in Moroni were secured. By 3:00 p.m. the next day Bob Denard and his mercenaries had surrendered. This response operation, codenamed Azalée, was remarkable because there were no casualties: in just seven days, plans were drawn up and soldiers were deployed. Denard was taken to France and jailed. Prime Minister Caambi El-Yachourtu became acting president until Djohar returned from exile in January 1996. In March 1996, following presidential elections, Mohamed Taki Abdoulkarim, a member of the civilian government that Denard had tried to set up in October 1995, became president. On 23 November 1996, Ethiopian Airlines Flight 961 crashed near a beach on the island after it was hijacked and ran out of fuel, killing 125 people and leaving 50 survivors.
In 1997, the islands of Anjouan and Mohéli declared their independence from the Comoros. A subsequent attempt by the government to re-establish control over the rebellious islands by force failed, and presently the African Union is brokering negotiations to effect a reconciliation. This process is largely complete, at least in theory. According to some sources, Mohéli did return to government control in 1998. In 1999, Anjouan had internal conflicts, and on August 1 of that year, the 80-year-old first president Foundi Abdallah Ibrahim resigned, transferring power to a national coordinator, Said Abeid. The government was overthrown in a coup by army and navy officers on August 9, 2001. Mohamed Bacar soon rose to leadership of the junta that took over and by the end of the month he was the leader of the country. Despite two coup attempts in the following three months, including one by Abeid, Bacar's government remained in power, and was apparently more willing to negotiate with the Comoros. Presidential elections were held for all of the Comoros in 2002, and presidents have been chosen for all three islands as well, which have become a confederation. Most notably, Mohamed Bacar was elected for a 5-year term as president of Anjouan. Grande Comore had experienced troubles of its own in the late 1990s, when President Taki died on November 6, 1998. Colonel Azali Assoumani became president following a military coup in 1999. There have been several coup attempts since, but he gained firm control of the country after stepping down temporarily and winning a presidential election in 2002.
In May 2006, Ahmed Abdallah Sambi was elected from the island of Anjouan to be the president of the Union of the Comoros. He is a Sunni cleric who studied in the Sudan, Iran and Saudi Arabia. He is nicknamed "Ayatollah" due to his time in Iran and his penchant for turbans.
Azali Assoumani, a former army officer, first came to power in a coup in 1999. He then won the presidency in the 2002 election, holding power until 2006. Ten years later, he was elected again in the 2016 election. In March 2019, he was re-elected in elections the opposition claimed were full of irregularities.
Before the 2019 presidential election, President Azali Assoumani had arranged a constitutional referendum in 2018 that approved extending the presidential mandate from one five-year term to two. The opposition had boycotted the referendum.
In January 2020, his party, the Convention for the Renewal of the Comoros (CRC), won 20 out of 24 parliamentary seats in the parliamentary election.
Attribution: | [
{
"paragraph_id": 0,
"text": "The history of the Comoros extends to about 800–1000 AD when the archipelago was first inhabited. The Comoros have been inhabited by various groups throughout this time. France colonised the islands in the 19th century, and they became independent in 1975.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There is uncertainty about the early population of Comoros. According to one study of early crops, the islands may have been settled first by South East Asian sailors the same way Madagascar was.",
"title": "Early inhabitants"
},
{
"paragraph_id": 2,
"text": "This influx of Austronesian sailors, who had earlier settled nearby Madagascar, arrived in the 8th to 13 centuries CE. They are the source for the earliest archeological evidence of farming in the islands. Crops from archeological sites in Sima are predominantly rice strains of both indica and japonica varieties from Southeast Asia, as well as various other Asian crops like mung bean and cotton. Only a minority of the examined crops were African-derived, like finger millet, African sorghum, and cowpea. The Comoros are believed to be the first site of contact and subsequent admixture between African and Asian populations (earlier than Madagascar). Comorians today still display at most 20% Austronesian admixture.",
"title": "Early inhabitants"
},
{
"paragraph_id": 3,
"text": "From around the 15th century AD, Shirazi slave traders established trading ports and brought in slaves from the mainland. In the 16th century, social changes on the East African coast probably linked to the arrival of the Portuguese saw the arrival of a number of Arabs of Hadrami who established alliances with the Shirazis and founded several royal clans.",
"title": "Early inhabitants"
},
{
"paragraph_id": 4,
"text": "Over the centuries, the Comoros have been settled by a succession of diverse groups from the coast of Africa, the Persian Gulf, Southeast Asia and Madagascar.",
"title": "Early inhabitants"
},
{
"paragraph_id": 5,
"text": "Portuguese explorers first visited the archipelago in 1505.",
"title": "Early inhabitants"
},
{
"paragraph_id": 6,
"text": "Apart from a visit by the French Parmentier brothers in 1529, for much of the 16th century the only Europeans to visit the islands were Portuguese. British and Dutch ships began arriving around the start of the 17th century and the island of Ndzwani soon became a major supply point on the route to the East Indies. Ndzwani was generally ruled by a single sultan, who occasionally attempted to extend his authority to Mayotte and Mwali; Ngazidja was more fragmented, on occasion being divided into as many as 12 small kingdoms.",
"title": "Early inhabitants"
},
{
"paragraph_id": 7,
"text": "Sir James Lancaster's voyage to the Indian Ocean in 1591 was the first attempt by the English to break into the spice trade, which was dominated by the Portuguese. Only one of his four ships made it back from the Indies on that voyage, and that one with a decimated crew of 5 men and a boy. Lancaster himself was marooned by a cyclone on the Comoros. Many of his crew were speared to death by angry islanders although Lancaster found his way home in 1594. (Dalrymple W. 2019; Bloomsbury Publishing ISBN 1635573955).",
"title": "Early inhabitants"
},
{
"paragraph_id": 8,
"text": "Both the British and the French turned their attention to the Comoros islands in the middle of the 19th century. The French finally acquired the islands through a cunning mixture of strategies, including the policy of \"divide and conquer\", chequebook politics and a serendipitous affair between a sultana and a French trader that was put to good use by the French, who kept control of the islands, quelling unrest and the occasional uprising.",
"title": "Early inhabitants"
},
{
"paragraph_id": 9,
"text": "William Sunley, a planter and British Consul from 1848 to 1866, was an influence on Anjouan.",
"title": "Early inhabitants"
},
{
"paragraph_id": 10,
"text": "France's presence in the western Indian Ocean dates to the early seventeenth century. The French established a settlement in southern Madagascar in 1634 and occupied the islands of Reunion and Rodrigues; in 1715 France claimed Mauritius (Ile de France), and in 1756 Seychelles. When France ceded Mauritius, Rodrigues, and Seychelles to Britain in 1814, it lost its Indian Ocean ports; Reunion, which remained French, did not offer a suitable natural harbor. In 1840 France acquired the island of Nosy-Be off the northwestern coast of Madagascar, but its potential as a port was limited. In 1841 the governor of Reunion, Admiral de Hell, negotiated with Andrian Souli, the Malagasy ruler of Mayotte, to cede Mayotte to France. Mahore offered a suitable site for port facilities, and its acquisition was justified by de Hell on the grounds that if France did not act, Britain would occupy the island.",
"title": "French Comoros"
},
{
"paragraph_id": 11,
"text": "Although France had established a foothold in Comoros, the acquisition of the other islands proceeded fitfully. At times the French were spurred on by the threat of British intervention, especially on Nzwani, and at other times, by the constant anarchy resulting from the sultans' wars upon each other. In the 1880s, Germany's growing influence on the East African coast added to the concerns of the French. Not until 1908, however, did the four Comoro Islands become part of France's colony of Madagascar and not until 1912 did the last sultan abdicate. Then, a colonial administration took over the islands and established a capital at Dzaoudzi on Mahore. Treaties of protectorate status marked a transition point between independence and annexation; such treaties were signed with the rulers of Njazidja, Nzwani, and Mwali in 1886.",
"title": "French Comoros"
},
{
"paragraph_id": 12,
"text": "The effects of French colonialism were mixed, at best. Colonial rule brought an end to the institution of slavery, but economic and social differences between former slaves and free persons and their descendants persisted. Health standards improved with the introduction of modern medicine, and the population increased about 50 percent between 1900 and 1960. France continued to dominate the economy. Food crop cultivation was neglected as French societes (companies) established cash crop plantations in the coastal regions. The result was an economy dependent on the exporting of vanilla, ylang-ylang, cloves, cocoa, copra, and other tropical crops. Most profits obtained from exports were diverted to France rather than invested in the infrastructure of the islands. Development was further limited by the colonial government's practice of concentrating public services on Madagascar. One consequence of this policy was the migration of large numbers of Comorans to Madagascar, where their presence would be a long-term source of tension between Comoros and its giant island neighbor. The Shirazi elite continued to play a prominent role as large landowners and civil servants. On the eve of independence, Comoros remained poor and undeveloped, having only one secondary school and practically nothing in the way of national media. Isolated from important trade routes by the opening of the Suez Canal in 1869, having few natural resources, and largely neglected by France, the islands were poorly equipped for independence.",
"title": "French Comoros"
},
{
"paragraph_id": 13,
"text": "On September 25 1942, British forces landed in the Comoros, occupying them until October 13 1946.",
"title": "French Comoros"
},
{
"paragraph_id": 14,
"text": "In 1946 the Comoro Islands became an overseas department of France with representation in the French National Assembly. The following year, the islands' administrative ties to Madagascar were severed; Comoros established its own customs regime in 1952. A Governing Council was elected in August 1957 on the four islands in conformity with the loi-cadre (enabling law) of June 23, 1956. A constitution providing for internal self-government was promulgated in 1961, following a 1958 referendum in which Comorans voted overwhelmingly to remain a part of France. This government consisted of a territorial assembly having, in 1975, thirty-nine members, and a Governing Council of six to nine ministers responsible to it.",
"title": "French Comoros"
},
{
"paragraph_id": 15,
"text": "Agreement was reached with France in 1973 for the Comoros to become independent in 1978. On July 6, 1975, however, the Comorian parliament passed a resolution declaring unilateral independence. The deputies of Mayotte abstained.",
"title": "French Comoros"
},
{
"paragraph_id": 16,
"text": "In 1961 the Comoros was granted autonomous rule and, in 1975, it broke all ties with France and established itself as an independent republic. From the very beginning Mayotte refused to join the new republic and aligned itself even more firmly to the French Republic, but the other islands remained committed to independence. The first president of the Comoros, Ahmed Abdallah Abderemane, did not last long before being ousted in a coup d'état by Ali Soilih, an atheist with an Islamic background.",
"title": "French Comoros"
},
{
"paragraph_id": 17,
"text": "Soilih began with a set of solid socialist ideals designed to modernize the country. However, the regime faced problems. A French mercenary by the name of Bob Denard, arrived in the Comoros at dawn on 13 May 1978, and removed Soilih from power. Solih was shot and killed during the coup. The mercenaries returned Abdallah to power and the mercenaries were given key positions in government.",
"title": "French Comoros"
},
{
"paragraph_id": 18,
"text": "In two referendums, in December 1974 and February 1976, the population of Mayotte voted against independence from France (by 63.8% and 99.4% respectively). Mayotte thus remains under French administration, and the Comorian Government has effective control over only Grande Comore, Anjouan, and Mohéli.",
"title": "French Comoros"
},
{
"paragraph_id": 19,
"text": "Later, French settlers, French-owned companies, and Arab merchants established a plantation-based economy that now uses about one-third of the land for export crops.",
"title": "French Comoros"
},
{
"paragraph_id": 20,
"text": "In 1978, president Ali Soilih, who had a firm anti-French line, was killed and Ahmed Abdallah came to power. Under the reign of Abdallah, Denard was commander of the Presidential Guard (PG) and de facto ruler of the country. He was trained, supported and funded by the white regimes in South Africa (SA) and Rhodesia (now Zimbabwe) in return for permission to set up a secret listening post on the islands. South-African agents kept an ear on the important ANC bases in Lusaka and Dar es Salaam and watched the war in Mozambique, in which SA played an active role. The Comoros were also used for the evasion of arms sanctions.",
"title": "Abdallah regime"
},
{
"paragraph_id": 21,
"text": "When in 1981 François Mitterrand was elected president Denard lost the support of the French intelligence service, but he managed to strengthen the link between SA and the Comoros. Besides the military, Denard established his own company SOGECOM, for both the security and construction, and seemed to profit by the arrangement. Between 1985 and 1987 the relationship of the PG with the local Comorians became worse.",
"title": "Abdallah regime"
},
{
"paragraph_id": 22,
"text": "At the end of the 1980s the South Africans did not wish to continue to support the mercenary regime and France was in agreement. Also President Abdallah wanted the mercenaries to leave. Their response was a (third) coup resulting in the death of President Abdallah, in which Denard and his men were probably involved. South Africa and the French government subsequently forced Denard and his mercenaries to leave the islands in 1989.",
"title": "Abdallah regime"
},
{
"paragraph_id": 23,
"text": "Said Mohamed Djohar became president. His time in office was turbulent, including an impeachment attempt in 1991 and a coup attempt in 1992.",
"title": "1989–1996"
},
{
"paragraph_id": 24,
"text": "On September 28, 1995 Bob Denard and a group of mercenaries took over the Comoros islands in a coup (named operation Kaskari by the mercenaries) against President Djohar. France immediately and severely denounced the coup, and backed by the 1978 defense agreement with the Comoros, President Jacques Chirac ordered his special forces to retake the island. Bob Denard began to take measures to stop the coming invasion. A new presidential guard was created. Strong points armed with heavy machine guns were set up around the island, particularly around the island's two airports.",
"title": "1989–1996"
},
{
"paragraph_id": 25,
"text": "On October 3, 1995, 11 p.m., the French deployed 600 men against a force of 33 mercenaries and a 300-man dissident force. Denard however ordered his mercenaries not to fight. Within 7 hours the airports at Iconi and Hahaya and the French Embassy in Moroni were secured. By 3:00 p.m. the next day Bob Denard and his mercenaries had surrendered. This (response) operation, codenamed Azalée, was remarkable, because there were no casualties, and just in seven days, plans were drawn up and soldiers were deployed. Denard was taken to France and jailed. Prime minister Caambi El-Yachourtu became acting president until Djohar returned from exile in January, 1996. In March 1996, following presidential elections, Mohamed Taki Abdoulkarim, a member of the civilian government that Denard had tried to set up in October 1995, became president. On 23 November 1996, Ethiopian Airlines Flight 961 crashed near a beach on the island after it was hijacked and ran out of fuel killing 125 people and leaving 50 survivors.",
"title": "1989–1996"
},
{
"paragraph_id": 26,
"text": "In 1997, the islands of Anjouan and Mohéli declared their independence from the Comoros. A subsequent attempt by the government to re-establish control over the rebellious islands by force failed, and presently the African Union is brokering negotiations to effect a reconciliation. This process is largely complete, at least in theory. According to some sources, Mohéli did return to government control in 1998. In 1999, Anjouan had internal conflicts and on August 1 of that year, the 80-year-old first president Foundi Abdallah Ibrahim resigned, transferring power to a national coordinator, Said Abeid. The government was overthrown in a coup by army and navy officers on August 9, 2001. Mohamed Bacar soon rose to leadership of the junta that took over and by the end of the month he was the leader of the country. Despite two coup attempts in the following three months, including one by Abeid, Bacar's government remained in power, and was apparently more willing to negotiate with the Comoros. Presidential elections were held for all of the Comoros in 2002, and presidents have been chosen for all three islands as well, which have become a confederation. Most notably, Mohammed Bacar was elected for a 5-year term as president of Anjouan. Grande Comore had experienced troubles of its own in the late 1990s, when President Taki died on November 6, 1998. Colonel Azali Assoumani became president following a military coup in 1999. There have been several coup attempts since, but he gained firm control of the country after stepping down temporarily and winning a presidential election in 2002.",
"title": "Secession of Anjouan and Mohéli"
},
{
"paragraph_id": 27,
"text": "In May 2006, Ahmed Abdallah Sambi was elected from the island of Anjouan to be the president of the Union of the Comoros. He is a Sunni cleric who studied in the Sudan, Iran and Saudi Arabia. He is nicknamed \"Ayatollah\" due to his time in Iran and his penchant for turbans.",
"title": "Secession of Anjouan and Mohéli"
},
{
"paragraph_id": 28,
"text": "Azali Assoumani is a former army officer, first came to power in a coup in 1999. Then he won presidency in 2002 election, having power until 2006. After ten years, he was elected again in 2016 election. In March 2019, he was re-elected in the elections opposition claimed to be full of irregularities.",
"title": "Azali Assoumani in power since 2016"
},
{
"paragraph_id": 29,
"text": "Before the 2019 presidential election president Azali Assoumani had arranged a constitutional referendum in 2018 that approved extending the presidential mandate from one five-year term to two. The opposition had boycotted the referendum.",
"title": "Azali Assoumani in power since 2016"
},
{
"paragraph_id": 30,
"text": "In January 2020, his party The Convention for the Renewal of the Comoros (CRC) won 20 out of 24 parliamentary seats in the parliamentary election.",
"title": "Azali Assoumani in power since 2016"
},
{
"paragraph_id": 31,
"text": "",
"title": "See also"
},
{
"paragraph_id": 32,
"text": "Attribution:",
"title": "References"
}
]
| The history of the Comoros extends to about 800–1000 AD when the archipelago was first inhabited. The Comoros have been inhabited by various groups throughout this time. France colonised the islands in the 19th century, and they became independent in 1975. | 2002-02-25T15:43:11Z | 2023-12-01T03:24:43Z | [
"Template:Source attribution",
"Template:Short description",
"Template:For",
"Template:Infobox country",
"Template:Main",
"Template:Cite web",
"Template:Cite book",
"Template:Cite journal",
"Template:Reflist",
"Template:Cite news",
"Template:Comoros topics",
"Template:French overseas empire",
"Template:History of Comoros",
"Template:Sfn",
"Template:Citation needed",
"Template:History of Africa",
"Template:ISBN",
"Template:Harvnb",
"Template:Commonscatinline"
]
| https://en.wikipedia.org/wiki/History_of_the_Comoros |
6,001 | Geography of the Comoros | 12°10′S 44°15′E / 12.167°S 44.250°E / -12.167; 44.250
The Comoros archipelago consists of four main islands aligned along a northwest–southeast axis at the north end of the Mozambique Channel, between Mozambique and the island of Madagascar. Still widely known by their French names, the islands officially have been called by their Swahili names by the Comorian government. They are Grande Comore (Njazidja), Mohéli (Mwali), Anjouan (Nzwani), and Mayotte (Mahoré). The islands' distance from each other—Grande Comore is some 200 kilometers from Mayotte, forty kilometers from Mohéli, and eighty kilometers from Anjouan—along with a lack of good harbor facilities, makes transportation and communication difficult. The Comoros are sunny islands.
The islands have a total land area of 2,236 square kilometers (including Mayotte), and claim territorial waters of 320 square kilometers. Mount Karthala (2316 m) on Grande Comore is an active volcano. From April 17 to 19, 2005, the volcano spewed ash and gas, forcing as many as 10,000 people to flee. The Comoros are located on the Somali Plate.
Grande Comore is the largest island, sixty-seven kilometers long and twenty-seven kilometers wide, with a total area of 1,146 square kilometers. The most recently formed of the four islands in the archipelago, it is also of volcanic origin. Two volcanoes form the island's most prominent topographic features: La Grille in the north, with an elevation of 1,000 meters, is extinct and largely eroded; Karthala in the south, rising to a height of 2,361 meters, last erupted in 1977. A plateau averaging 600 to 700 meters high connects the two mountains. Because Grande Comore is geologically a relatively new island, its soil is thin and rocky and cannot hold water. As a result, water from the island's heavy rainfall must be stored in catchment tanks. There are no coral reefs along the coast, and the island lacks a good harbor for ships. One of the largest remnants of the Comoros' once-extensive rain forests is on the slopes of Karthala. The national capital has been at Moroni since 1962.
Anjouan, triangular in shape and forty kilometers from apex to base, has an area of 424 square kilometers. Three mountain chains—Sima, Nioumakele, and Jimilime—emanate from a central peak, Mtingui (1,575 m), giving the island its distinctive shape. Older than Grande Comore, Anjouan has deeper soil cover, but overcultivation has caused serious erosion. A coral reef lies close to shore; the island's capital of Mutsamudu is also its main port.
Mohéli is thirty kilometers long and twelve kilometers wide, with an area of 290 square kilometers. It is the smallest of the four islands and has a central mountain chain reaching 860 meters at its highest. Like Grande Comore, it retains stands of rain forest. Mohéli's capital is Fomboni.
Mayotte, geologically the oldest of the four islands, is thirty-nine kilometers long and twenty-two kilometers wide, totaling 375 square kilometers, and its highest points are between 500 and 600 meters above sea level. Because of greater weathering of the volcanic rock, the soil is relatively rich in some areas. A well-developed coral reef that encircles much of the island ensures protection for ships and a habitat for fish. Dzaoudzi, capital of the Comoros until 1962 and now Mayotte's administrative center, is situated on a rocky outcropping off the east shore of the main island. Dzaoudzi is linked by a causeway to le Pamanzi, which at ten square kilometers in area is the largest of several islets adjacent to Mayotte. Islets are also scattered in the coastal waters of Mayotte just as in Grande Comore, Anjouan, and Mohéli.
Comorian waters are the habitat of the coelacanth, a rare fish with limblike fins and a cartilaginous skeleton, the fossil remains of which date as far back as 400 million years and which was once thought to have become extinct about 70 million years ago. A live specimen was caught in 1938 off southern Africa; other coelacanths have since been found in the vicinity of the Comoro Islands.
Several mammals are unique to the islands themselves. Livingstone's fruit bat, although plentiful when discovered by explorer David Livingstone in 1863, has been reduced to a population of about 120, entirely on Anjouan. The world's largest bat, the jet-black Livingstone fruit bat, has a wingspan of nearly two meters. A British preservation group sent an expedition to the Comoros in 1992 to bring some of the bats to Britain to establish a breeding population.
A hybrid of the common brown lemur (Eulemur fulvus), originally from Madagascar, was introduced by humans prior to European colonization and is found on Mayotte. The mongoose lemur (Eulemur mongoz), also introduced from Madagascar by humans, can be found on the islands of Mohéli and Anjouan.
Twenty-two species of bird are unique to the archipelago and 17 of these are restricted to the Union of the Comoros. These include the Karthala scops-owl, Anjouan scops-owl and Humblot's flycatcher.
Partly in response to international pressures, Comorians in the 1990s became more concerned about the environment. Steps were taken not only to preserve the rare fauna, but also to counteract degradation of the environment, especially on densely populated Anjouan. Specifically, to minimize the cutting down of trees for fuel, kerosene was subsidized, and efforts were made to replace the loss of the forest cover caused by ylang-ylang distillation for perfume. The Community Development Support Fund, sponsored by the International Development Association (IDA, a World Bank affiliate) and the Comorian government, worked to improve water supply on the islands as well.
The climate is marine tropical, with two seasons: hot and humid from November to April, the result of the northeastern monsoon, and a cooler, drier season the rest of the year. Average monthly temperatures range from 23 to 28 °C (73.4 to 82.4 °F) along the coasts. Although the average annual precipitation is 2,000 millimeters (78.7 in), water is a scarce commodity in many parts of the Comoros. Mohéli and Mayotte possess streams and other natural sources of water, but Grande Comore and Anjouan, whose mountainous landscapes retain water poorly, are almost devoid of naturally occurring running water. Cyclones, occurring during the hot and wet season, can cause extensive damage, especially in coastal areas. On the average, at least twice each decade houses, farms, and harbor facilities are devastated by these great storms.
This is a list of the extreme points of the Comoros, the points that are farther north, south, east or west than any other location. This list excludes the French-administered island of Mayotte which is claimed by the Comorian government.
Area: 2,235 km²
Coastline: 340 km
Climate: tropical marine; rainy season (November to May)
Terrain: volcanic islands, interiors vary from steep mountains to low hills
Elevation extremes: lowest point: Indian Ocean 0 m; highest point: Karthala 2,360 m
Natural resources: fish
Land use: arable land: 47.29%; permanent crops: 29.55%; other: 23.16% (2012 est.)
Irrigated land: 1.3 km² (2003)
Total renewable water resources: 1.2 km³ (2011)
Freshwater withdrawal (domestic/industrial/agricultural): total: 0.01 km³/yr (48%/5%/47%); per capita: 16.86 m³/yr (1999)
Natural hazards: cyclones possible during rainy season (December to April); volcanic activity on Grande Comore
Environmental - current issues: soil degradation and erosion result from crop cultivation on slopes without proper terracing; deforestation
{
"paragraph_id": 0,
"text": "12°10′S 44°15′E / 12.167°S 44.250°E / -12.167; 44.250",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Comoros archipelago consists of four main islands aligned along a northwest–southeast axis at the north end of the Mozambique Channel, between Mozambique and the island of Madagascar. Still widely known by their French names, the islands officially have been called by their Swahili names by the Comorian government. They are Grande Comore (Njazidja), Mohéli (Mwali), Anjouan (Nzwani), and Mayotte (Mahoré). The islands' distance from each other—Grande Comore is some 200 kilometers from Mayotte, forty kilometers from Mohéli, and eighty kilometers from Anjouan—along with a lack of good harbor facilities, make transportation and communication difficult. Comoros are sunny islands.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The islands have a total land area of 2,236 square kilometers (including Mayotte), and claim territorial waters of 320 square kilometers. Mount Karthala (2316 m) on Grande Comore is an active volcano. From April 17 to 19, 2005, the volcano began spewing ash and gas, forcing as many as 10,000 people to flee. Comoros is located within the Somali plate.",
"title": "Details"
},
{
"paragraph_id": 3,
"text": "Grande Comore is the largest island, sixty-seven kilometers long and twenty-seven kilometers wide, with a total area of 1,146 square kilometers. The most recently formed of the four islands in the archipelago, it is also of volcanic origin. Two volcanoes form the island's most prominent topographic features: La Grille in the north, with an elevation of 1,000 meters, is extinct and largely eroded; Kartala in the south, rising to a height of 2,361 meters, last erupted in 1977. A plateau averaging 600 to 700 meters high connects the two mountains. Because Grande Comore is geologically a relatively new island, its soil is thin and rocky and cannot hold water. As a result, water from the island's heavy rainfall must be stored in catchment tanks. There are no coral reefs along the coast, and the island lacks a good harbor for ships. One of the largest remnants of the Comoros' once-extensive rain forests is on the slopes of Kartala. The national capital has been at Moroni since 1962.",
"title": "Grande Comore"
},
{
"paragraph_id": 4,
"text": "Anjouan, triangular shaped and forty kilometers from apex to base, has an area of 424 square kilometers. Three mountain chains — Sima, Nioumakele, and Jimilime—emanate from a central peak, Mtingui (1,575 m), giving the island its distinctive shape. Older than Grande Comore, Anjouan has deeper soil cover, but overcultivation has caused serious erosion. A coral reef lies close to shore; the island's capital of Mutsamudu is also its main port.",
"title": "Anjouan"
},
{
"paragraph_id": 5,
"text": "Mohéli is thirty kilometers long and twelve kilometers wide, with an area of 290 square kilometers. It is the smallest of the four islands and has a central mountain chain reaching 860 meters at its highest. Like Grande Comore, it retains stands of rain forest. Mohéli's capital is Fomboni.",
"title": "Mohéli"
},
{
"paragraph_id": 6,
"text": "Mayotte, geologically the oldest of the four islands, is thirty-nine kilometers long and twenty-two kilometers wide, totaling 375 square kilometers, and its highest points are between 500 and 600 meters above sea level. Because of greater weathering of the volcanic rock, the soil is relatively rich in some areas. A well-developed coral reef that encircles much of the island ensures protection for ships and a habitat for fish. Dzaoudzi, capital of the Comoros until 1962 and now Mayotte's administrative center, is situated on a rocky outcropping off the east shore of the main island. Dzaoudzi is linked by a causeway to le Pamanzi, which at ten kilometers in area is the largest of several islets adjacent to Mayotte. Islets are also scattered in the coastal waters of Mayotte just as in Grande Comore, Anjouan, and Mohéli.",
"title": "Mayotte"
},
{
"paragraph_id": 7,
"text": "Comorian waters are the habitat of the coelacanth, a rare fish with limblike fins and a cartilaginous skeleton, the fossil remains of which date as far back as 400 million years and which was once thought to have become extinct about 70 million years ago. A live specimen was caught in 1938 off southern Africa; other coelacanths have since been found in the vicinity of the Comoro Islands.",
"title": "Flora and fauna"
},
{
"paragraph_id": 8,
"text": "Several mammals are unique to the islands themselves. Livingstone's fruit bat, although plentiful when discovered by explorer David Livingstone in 1863, has been reduced to a population of about 120, entirely on Anjouan. The world's largest bat, the jet-black Livingstone fruit bat has a wingspan of nearly two meters. A British preservation group sent an expedition to the Comoros in 1992 to bring some of the bats to Britain to establish a breeding population.",
"title": "Flora and fauna"
},
{
"paragraph_id": 9,
"text": "A hybrid of the common brown lemur (Eulemur fulvus) originally from Madagascar, was introduced by humans prior to European colonization and is found on Mayotte. The mongoose lemur (Eulemur mongoz), also introduced from Madagascar by humans, can be found on the islands of Mohéli and Anjouan.",
"title": "Flora and fauna"
},
{
"paragraph_id": 10,
"text": "22 species of bird are unique to the archipelago and 17 of these are restricted to the Union of the Comoros. These include the Karthala scops-owl, Anjouan scops-owl and Humblot's flycatcher.",
"title": "Flora and fauna"
},
{
"paragraph_id": 11,
"text": "Partly in response to international pressures, Comorians in the 1990s have become more concerned about the environment. Steps are being taken not only to preserve the rare fauna, but also to counteract degradation of the environment, especially on densely populated Anjouan. Specifically, to minimize the cutting down of trees for fuel, kerosene is being subsidized, and efforts are being made to replace the loss of the forest cover caused by ylang-ylang distillation for perfume. The Community Development Support Fund, sponsored by the International Development Association (IDA, a World Bank affiliate) and the Comorian government, is working to improve water supply on the islands as well.",
"title": "Flora and fauna"
},
{
"paragraph_id": 12,
"text": "The climate is marine tropical, with two seasons: hot and humid from November to April, the result of the northeastern monsoon, and a cooler, drier season the rest of the year. Average monthly temperatures range from 23 to 28 °C (73.4 to 82.4 °F) along the coasts. Although the average annual precipitation is 2,000 millimeters (78.7 in), water is a scarce commodity in many parts of the Comoros. Mohéli and Mayotte possess streams and other natural sources of water, but Grande Comore and Anjouan, whose mountainous landscapes retain water poorly, are almost devoid of naturally occurring running water. Cyclones, occurring during the hot and wet season, can cause extensive damage, especially in coastal areas. On the average, at least twice each decade houses, farms, and harbor facilities are devastated by these great storms.",
"title": "Climate"
},
{
"paragraph_id": 13,
"text": "This is a list of the extreme points of the Comoros, the points that are farther north, south, east or west than any other location. This list excludes the French-administered island of Mayotte which is claimed by the Comorian government.",
"title": "Extreme points"
},
{
"paragraph_id": 14,
"text": "Area: 2,235 km",
"title": "Statistics"
},
{
"paragraph_id": 15,
"text": "Coastline: 340 km",
"title": "Statistics"
},
{
"paragraph_id": 16,
"text": "Climate: tropical marine; rainy season (November to May)",
"title": "Statistics"
},
{
"paragraph_id": 17,
"text": "Terrain: volcanic islands, interiors vary from steep mountains to low hills",
"title": "Statistics"
},
{
"paragraph_id": 18,
"text": "Elevation extremes: lowest point: Indian Ocean 0 m highest point: Karthala 2,360 m",
"title": "Statistics"
},
{
"paragraph_id": 19,
"text": "Natural resources: fish",
"title": "Statistics"
},
{
"paragraph_id": 20,
"text": "Land use: arable land: 47.29% permanent crops: 29.55% other: 23.16% (2012 est.)",
"title": "Statistics"
},
{
"paragraph_id": 21,
"text": "Irrigated land: 1.3 km (2003)",
"title": "Statistics"
},
{
"paragraph_id": 22,
"text": "Total renewable water resources: 1.2 km (2011)",
"title": "Statistics"
},
{
"paragraph_id": 23,
"text": "Freshwater withdrawal (domestic/industrial/agricultural): total: 0.01 km/yr (48%/5%/47%) per capital: 16.86 m/yr (1999)",
"title": "Statistics"
},
{
"paragraph_id": 24,
"text": "Natural hazards: cyclones possible during rainy season (December to April); volcanic activity on Grand Comore",
"title": "Statistics"
},
{
"paragraph_id": 25,
"text": "Environmental - current issues: soil degradation and erosion results from crop cultivation on slopes without proper terracing; deforestation",
"title": "Statistics"
}
]
| The Comoros archipelago consists of four main islands aligned along a northwest–southeast axis at the north end of the Mozambique Channel, between Mozambique and the island of Madagascar. Although still widely known by their French names, the islands are officially called by their Swahili names by the Comorian government. They are Grande Comore (Njazidja), Mohéli (Mwali), Anjouan (Nzwani), and Mayotte (Mahoré). The islands' distance from each other—Grande Comore is some 200 kilometers from Mayotte, forty kilometers from Mohéli, and eighty kilometers from Anjouan—along with a lack of good harbor facilities, makes transportation and communication difficult. | 2023-03-15T06:58:33Z | [
"Template:Weather box",
"Template:Wikiatlas",
"Template:Reflist",
"Template:LoM3",
"Template:Country study",
"Template:Coord",
"Template:More citations needed",
"Template:LoM3 Sfn",
"Template:See also",
"Template:Convert",
"Template:Comoros topics",
"Template:Geography of Africa",
"Template:Africa topic"
]
| https://en.wikipedia.org/wiki/Geography_of_the_Comoros |
|
6,002 | Demographics of the Comoros | The Comorians (Arabic: القمري) inhabiting Grande Comore, Anjouan, and Mohéli (86% of the population) share African-Arab origins. Islam is the dominant religion, and Quranic schools for children reinforce its influence. Although Islamic culture is firmly established throughout, a small minority are Christian.
The most common language is Comorian, related to Swahili. French and Arabic also are spoken. About 89% of the population is literate.
The Comoros have had eight censuses since World War II:
The latest official estimate (for 1 July 2020) is 897,219.
Population density figures conceal a great disparity between the republic's most crowded island, Nzwani, which had a density of 772 persons per square kilometer in 2017; Njazidja, which had a density of 331 persons per square kilometer in 2017; and Mwali, where the 2017 population density figure was 178 persons per square kilometer. By comparison, estimates of the population density per square kilometer of the Indian Ocean's other island microstates ranged from 241 (Seychelles) to 690 (Maldives) in 1993. Given the rugged terrain of Njazidja and Nzwani, and the dedication of extensive tracts to agriculture on all three islands, population pressures on the Comoros are becoming increasingly critical.
The age structure of the population of the Comoros is similar to that of many developing countries, in that the republic has a very large proportion of young people. In 1989, 46.4 percent of the population was under fifteen years of age, an above-average proportion even for sub-Saharan Africa. The population's rate of growth was a relatively high 3.5 percent per annum in the mid 1980s, up substantially from 2.0 percent in the mid-1970s and 2.1 percent in the mid-1960s.
In 1983 the Abdallah regime borrowed US$2.85 million from the International Development Association to devise a national family planning program. However, Islamic reservations about contraception made forthright advocacy and implementation of birth control programs politically hazardous, and consequently little was done in the way of public policy.
The Comorian population has become increasingly urbanized in recent years. In 1991 the percentage of Comorians residing in cities and towns of more than 5,000 persons was about 30 percent, up from 25 percent in 1985 and 23 percent in 1980. The Comoros' largest cities were the capital, Moroni, with about 30,000 people, and the port city of Mutsamudu, on the island of Nzwani, with about 20,000 people.
Migration among the various islands is important. Natives of Nzwani have settled in significant numbers on less crowded Mwali, causing some social tensions, and many Nzwani also migrate to Maore. In 1977 Maore expelled peasants from Ngazidja and Nzwani who had recently settled in large numbers on the island. Some were allowed to reenter starting in 1981 but solely as migrant labor.
The number of Comorians living abroad has been estimated at between 80,000 and 100,000; during the colonial period, most of them lived in Tanzania, Madagascar, and other parts of Southeast Africa. The number of Comorians residing in Madagascar was drastically reduced after anti-Comorian rioting in December 1976 in Mahajanga, in which at least 1,400 Comorians were killed. As many as 17,000 Comorians left Madagascar to seek refuge in their native land in 1977 alone. About 100,000 Comorians live in France; many of them had gone there for a university education and never returned. Small numbers of Indians, Malagasy, South Africans, and Europeans (mostly French) live on the islands and play an important role in the economy. Most French left after independence in 1975.
Some Persian Gulf countries started buying Comorian citizenship for their stateless bidoon residents and deporting them to Comoros.
90% of the people living in the Comoros are black, and 10% are mixed race, mostly black and white.
Statistics as of 2010:
Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Structure of the population (DHS 2012) (Males 11 088, Females 12 284 = 23 373):
Fertility data as of 2012 (DHS Program):
Demographic statistics according to the World Population Review in 2022.
The following demographic statistics are from the CIA World Factbook.
Sunni Muslim 98%, other (including Shia Muslim, Roman Catholic, Jehovah's Witness, Protestant) 2% note: Sunni Islam is the state religion
total dependency ratio: 75.5 (2015 est.); youth dependency ratio: 70.5 (2015 est.); elderly dependency ratio: 5.1 (2015 est.); potential support ratio: 19.7 (2015 est.)
Attribution: This article incorporates public domain material from The World Factbook (2023 ed.). CIA. (Archived 2006 edition) This article incorporates text from this source, which is in the public domain. Indian Ocean : five island countries. Federal Research Division. | [
{
"paragraph_id": 0,
"text": "The Comorians (Arabic: القمري) inhabiting Grande Comore, Anjouan, and Mohéli (86% of the population) share African-Arab origins. Islam is the dominant religion, and Quranic schools for children reinforce its influence. Although Islamic culture is firmly established throughout, a small minority are Christian.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The most common language is Comorian, related to Swahili. French and Arabic also are spoken. About 89% of the population is literate.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Comoros have had eight censuses since World War II:",
"title": ""
},
{
"paragraph_id": 3,
"text": "The latest official estimate (for 1 July 2020) is 897,219.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Population density figures conceal a great disparity between the republic's most crowded island, Nzwani, which had a density of 772 persons per square kilometer in 2017; Njazidja, which had a density of 331 persons per square kilometer in 2017; and Mwali, where the 2017 population density figure was 178 persons per square kilometer. By comparison, estimates of the population density per square kilometer of the Indian Ocean's other island microstates ranged from 241 (Seychelles) to 690 (Maldives) in 1993. Given the rugged terrain of Njazidja and Nzwani, and the dedication of extensive tracts to agriculture on all three islands, population pressures on the Comoros are becoming increasingly critical.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The age structure of the population of the Comoros is similar to that of many developing countries, in that the republic has a very large proportion of young people. In 1989, 46.4 percent of the population was under fifteen years of age, an above-average proportion even for sub-Saharan Africa. The population's rate of growth was a relatively high 3.5 percent per annum in the mid 1980s, up substantially from 2.0 percent in the mid-1970s and 2.1 percent in the mid-1960s.",
"title": ""
},
{
"paragraph_id": 6,
"text": "In 1983 the Abdallah regime borrowed US$2.85 million from the International Development Association to devise a national family planning program. However, Islamic reservations about contraception made forthright advocacy and implementation of birth control programs politically hazardous, and consequently little was done in the way of public policy.",
"title": ""
},
{
"paragraph_id": 7,
"text": "The Comorian population has become increasingly urbanized in recent years. In 1991 the percentage of Comorians residing in cities and towns of more than 5,000 persons was about 30 percent, up from 25 percent in 1985 and 23 percent in 1980. The Comoros' largest cities were the capital, Moroni, with about 30,000 people, and the port city of Mutsamudu, on the island of Nzwani, with about 20,000 people.",
"title": ""
},
{
"paragraph_id": 8,
"text": "Migration among the various islands is important. Natives of Nzwani have settled in significant numbers on less crowded Mwali, causing some social tensions, and many Nzwani also migrate to Maore. In 1977 Maore expelled peasants from Ngazidja and Nzwani who had recently settled in large numbers on the island. Some were allowed to reenter starting in 1981 but solely as migrant labor.",
"title": ""
},
{
"paragraph_id": 9,
"text": "The number of Comorians living abroad has been estimated at between 80,000 and 100,000; during the colonial period, most of them lived in Tanzania, Madagascar, and other parts of Southeast Africa. The number of Comorians residing in Madagascar was drastically reduced after anti-Comorian rioting in December 1976 in Mahajanga, in which at least 1,400 Comorians were killed. As many as 17,000 Comorians left Madagascar to seek refuge in their native land in 1977 alone. About 100,000 Comorians live in France; many of them had gone there for a university education and never returned. Small numbers of Indians, Malagasy, South Africans, and Europeans (mostly French) live on the islands and play an important role in the economy. Most French left after independence in 1975.",
"title": ""
},
{
"paragraph_id": 10,
"text": "Some Persian Gulf countries started buying Comorian citizenship for their stateless bidoon residents and deporting them to Comoros.",
"title": ""
},
{
"paragraph_id": 11,
"text": "90% of the people living in the Comoros are black, and 10% are mixed race, mostly black and white.",
"title": ""
},
{
"paragraph_id": 12,
"text": "Statistics as of 2010:",
"title": "Vital statistics"
},
{
"paragraph_id": 13,
"text": "Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):",
"title": "Vital statistics"
},
{
"paragraph_id": 14,
"text": "Structure of the population (DHS 2012) (Males 11 088, Females 12 284 = 23 373) :",
"title": "Vital statistics"
},
{
"paragraph_id": 15,
"text": "Fertility data as of 2012 (DHS Program):",
"title": "Vital statistics"
},
{
"paragraph_id": 16,
"text": "Demographic statistics according to the World Population Review in 2022.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 17,
"text": "The following demographic statistics are from the CIA World Factbook.",
"title": "Other demographic statistics"
},
{
"paragraph_id": 18,
"text": "Sunni Muslim 98%, other (including Shia Muslim, Roman Catholic, Jehovah's Witness, Protestant) 2% note: Sunni Islam is the state religion",
"title": "Other demographic statistics"
},
{
"paragraph_id": 19,
"text": "total dependency ratio: 75.5 (2015 est.) youth dependency ratio: 70.5 (2015 est.) elderly dependency ratio: 5.1 (2015 est.) potential support ratio: 19.7 (2015 est.)",
"title": "Other demographic statistics"
},
{
"paragraph_id": 20,
"text": "Attribution: This article incorporates public domain material from The World Factbook (2023 ed.). CIA. (Archived 2006 edition) This article incorporates text from this source, which is in the public domain. Indian Ocean : five island countries. Federal Research Division.",
"title": "References"
}
]
| The Comorians inhabiting Grande Comore, Anjouan, and Mohéli share African-Arab origins. Islam is the dominant religion, and Quranic schools for children reinforce its influence. Although Islamic culture is firmly established throughout, a small minority are Christian. The most common language is Comorian, related to Swahili. French and Arabic also are spoken. About 89% of the population is literate. The Comoros have had eight censuses since World War II: 1951
1956
1958-09-07: 183,133
1966-07-06
Note: in 1974 Mayotte was removed from the Comoros
1980-09-15: 335,150
1991-09-15: 446,817
2003-09-15: 575,660
2017-12-15: 758,316 The latest official estimate is 897,219. Population density figures conceal a great disparity between the republic's most crowded island, Nzwani, which had a density of 772 persons per square kilometer in 2017; Njazidja, which had a density of 331 persons per square kilometer in 2017; and Mwali, where the 2017 population density figure was 178 persons per square kilometer. By comparison, estimates of the population density per square kilometer of the Indian Ocean's other island microstates ranged from 241 (Seychelles) to 690 (Maldives) in 1993. Given the rugged terrain of Njazidja and Nzwani, and the dedication of extensive tracts to agriculture on all three islands, population pressures on the Comoros are becoming increasingly critical. The age structure of the population of the Comoros is similar to that of many developing countries, in that the republic has a very large proportion of young people. In 1989, 46.4 percent of the population was under fifteen years of age, an above-average proportion even for sub-Saharan Africa. The population's rate of growth was a relatively high 3.5 percent per annum in the mid 1980s, up substantially from 2.0 percent in the mid-1970s and 2.1 percent in the mid-1960s. In 1983 the Abdallah regime borrowed US$2.85 million from the International Development Association to devise a national family planning program. However, Islamic reservations about contraception made forthright advocacy and implementation of birth control programs politically hazardous, and consequently little was done in the way of public policy. The Comorian population has become increasingly urbanized in recent years. In 1991 the percentage of Comorians residing in cities and towns of more than 5,000 persons was about 30 percent, up from 25 percent in 1985 and 23 percent in 1980. The Comoros' largest cities were the capital, Moroni, with about 30,000 people, and the port city of Mutsamudu, on the island of Nzwani, with about 20,000 people. Migration among the various islands is important. Natives of Nzwani have settled in significant numbers on less crowded Mwali, causing some social tensions, and many Nzwani also migrate to Maore. In 1977 Maore expelled peasants from Ngazidja and Nzwani who had recently settled in large numbers on the island. Some were allowed to reenter starting in 1981 but solely as migrant labor. The number of Comorians living abroad has been estimated at between 80,000 and 100,000; during the colonial period, most of them lived in Tanzania, Madagascar, and other parts of Southeast Africa. The number of Comorians residing in Madagascar was drastically reduced after anti-Comorian rioting in December 1976 in Mahajanga, in which at least 1,400 Comorians were killed. As many as 17,000 Comorians left Madagascar to seek refuge in their native land in 1977 alone. About 100,000 Comorians live in France; many of them had gone there for a university education and never returned. Small numbers of Indians, Malagasy, South Africans, and Europeans live on the islands and play an important role in the economy. Most French left after independence in 1975. Some Persian Gulf countries started buying Comorian citizenship for their stateless bidoon residents and deporting them to Comoros. 90% of the people living in the Comoros are black, and 10% are mixed race, mostly black and white. | 2002-02-25T15:43:11Z | 2023-12-14T17:04:36Z | [
"Template:Infobox place demographics",
"Template:Webarchive",
"Template:Source-attribution",
"Template:Country study",
"Template:Comoros topics",
"Template:Short description",
"Template:Lang-ar",
"Template:Main",
"Template:Cite web",
"Template:CIA World Factbook",
"Template:Bar chart",
"Template:As of",
"Template:Reflist",
"Template:Cite book",
"Template:Citation",
"Template:Africa in topic",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Demographics_of_the_Comoros |
6,003 | Politics of the Comoros | The Union of the Comoros consists of the three islands Njazidja (Grande Comore), Mwali (Moheli) and Nzwani (Anjouan) while the island of Mayotte remains under French administration. The Politics of the Union of the Comoros take place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The precolonial legacies of the sultanates linger while the political situation in Comoros has been extremely fluid since the country's independence in 1975, subject to the volatility of coups and political insurrection.
As of 2008, Comoros and Mauritania were considered by the US-based organization Freedom House as the only real “electoral democracies” of the Arab World.
Sultanates in the late nineteenth century used a cyclic age system and hierarchical lineage membership to provide the foundation for participation in the political process. In the capital, "the sultan was assisted by his ministers and by a madjelis, an advisory council composed of elders, whom he consulted regularly". Apart from local administration, the age system was used to include the population in decision making, depending on the scope of the decision being made. For example, the elders of the island of Njazidja held considerable influence on the authority of the sultan. Though the sultanates granted rights to their free inhabitants, were provided with warriors during war, and taxed the towns under their authority, their definition as states is open to debate. The islands' incorporation as a province of the colony of Madagascar into the French colonial empire marked the end of the sultanates.
Despite French colonization, Comorans identify first with kinship or regional ties and rarely ever with the central government. This is a lingering effect of the sovereign sultanates of pre-colonial times. French colonial administration was based on a misconception that the sultanates operated as absolute monarchs: district boundaries were the same as the sultanates', multiple new taxes forced men into wage labor on colonial plantations and were reinforced through a compulsory public labor system that had little effect on infrastructure. French policy was hampered by an absence of settlers, poor communication across the islands, rough geographical terrain and hostility towards the colonial government. Policies were made to apply to Madagascar as a whole and seldom to the nuances of each province: civil servants were typically Christian, unaware of local customs and unable to speak the local language. The French established the Ouatou Akouba in 1915, a local form of governance based on "customary structures" already in place that attempted to model itself after the age system in place under the sultanates. Their understanding of the elders' council as a corporate group bypassed the reality that there were men "who had accomplished the necessary customary rituals to be accorded the status of elder and thus be eligible to participate in the political process in the village", which rendered the French-created elders' council ineffective. Though the Ouatou Akouba was disbanded, it resulted in the consolidation and formalization of the age system as the means of access to power in the customary and local government spheres. The French failure to establish a functioning state in the Comoros has had repercussions in the post-independence era.
At independence there were five main political parties: OUDZIMA, UMMA, the Comoro People's Democratic Rally, the Comoro National Liberation Movement and the Socialist Objective Party. The political groups previously known simply as the 'green' and 'white' party became the Rassemblement Démocratique du Peuple Comorien (RDPC) and the Union Démocratique des Comores (UDC), headed by Sayyid Muhammad Cheikh and Sayyid Ibrahim. Members from both parties later merged to form OUDZIMA under the leadership of first president Ahmad Abdallah while dissidents from both created UMMA under the leadership of future president Ali Soilih.
Prince Said Ibrahim took power in 1970 but was democratically elected out of office in 1972 in favor of former French senator Ahmed Abdallah. President Abdallah declared independence for all islands, except Mayotte, which remained under French administration, in 1975. The threat of renewed socioeconomic marginalization following the transfer of the capital to Ngazidja in 1962, more than social or cultural differences, underlay the island's subsequent rejection of independence. France withdrew all economic and technical support for the now independent state, which would encourage a revolutionary regime under future president Ali Soilih. French military and financial aid to mercenaries brought Prince Said Mohammed Jaffar to power after the United National Front of the Comoros (FNU) party toppled Abdallah's government. This mercenary coup was unique in that, unlike other coups on the continent, it was "uninspired by any ideological convictions". The Jaffar regime's inefficient distribution of resources and general mismanagement were evident in the expulsion of French civil servants as well as in endemic unemployment and food shortages. The regime used famine as "an opportunity to switch food patronage from France to the World Food Programme's emergency aid".
President Jaffar's ousting by the Minister of Defense and Justice, Ali Soilih, brought about the country's "période noire" (dark period): the voting age was lowered to 14, most civil servants were dismissed, and some Islamic customs were banned. He implemented revolutionary social reforms such as replacing French with Shikomoro, burning down the national archives and nationalizing land. His government received support from Egypt, Iraq and Sudan. Soilih's attacks on religious and customary authority contributed to his eventual ousting through a French-backed coup consisting of mercenaries and ex-politicians who together formed the Politico-Military Directorate.
Abdallah was reinstated and constructed a mercantile state by resuscitating the structures of the colonial era. His establishment of a one-party state and intolerance for dissent further alienated civil society from the state. In May 1978 the Comoros were renamed the Islamic Republic of the Comoros and continued strengthening ties with the Arab world, which resulted in their joining the Arab League. Abdallah's government sought to reverse Soilih's 'de-sacralization' by re-introducing the grand marriage, declaring Arabic the second official language behind French, and creating the office of the Grand Mufti. The directorate and compromise government was dissolved, and constitutional changes removed one politician from the line of succession and neutralized another possible challenger by abolishing the position of Prime Minister, effectively cementing a client-patron network by making civil service positions dependent on Abdallah's political base. The Democratic Front's (DF) internal opposition to Abdallah was suppressed through the incarceration of over 600 people allegedly involved in a failed coup attempt. Abdallah then stocked the House of Assembly with loyal clientelist supporters through rigged parliamentary elections. All of these actions effectively consolidated Abdallah's position.
Muhammed Djohar succeeded President Abdallah after his assassination in 1989 but was evacuated by French troops after a failed coup attempt in 1995. The Comoros were led by Muhammed Taki Abd al-Karim beginning in 1996, and he was followed by interim president Said Massunde, who eventually gave way to Assoumani Azali. Taki's lack of Arab heritage contributed to his failure to understand Nzwani's cultural differences and economic problems, as seen in the establishment of an elders' council composed only of loyal Taki supporters. As a result, the council was ignored by the true elders of the island. After Taki's death, a military coup in 1999, the nation's eighteenth since independence in 1975, installed Azali in power. Colonel Azali Assoumani seized power in a bloodless coup in April 1999, overthrowing Interim President Tadjidine Ben Said Massounde, who himself had held the office since the death of democratically elected President Mohamed Taki Abdoulkarim in November 1998. In May 1999, Azali decreed a constitution that gave him both executive and legislative powers. Bowing somewhat to international criticism, Azali appointed a civilian Prime Minister, Bainrifi Tarmidi, in December 1999; however, Azali retained the mantle of Head of State and army Commander. In December 2000, Azali named a new civilian Prime Minister, Hamada Madi, and formed a new civilian Cabinet. When Azali took power he also pledged to step down in April 2000 and relinquish control to a democratically elected president—a pledge with mixed results. Under Mohammed Taki and Assoumani Azali, access to the state was used to support client networks, which led to crumbling infrastructure and culminated in the islands of Nzwani and Mwali declaring independence, only to be stopped by French troops. Azali had not fulfilled the customary obligations required to address the elders, and this, combined with his gross mismanagement and increasing economic and social dependence on foreign entities, left the state nearly absent from the management of daily life. As a result, local administrative structures funded by remittances from the expatriate community in France began to emerge, drifting away from reliance on the state.
The Comoros Islands have experienced five different constitutions.
In a separate nod to pressure to restore civilian rule, the government organized several committees to compose a new constitution, including the August 2000 National Congress and November 2000 Tripartite Commission. The opposition parties initially refused to participate in the Tripartite Commission, but on 17 February 2001, representatives of the government, the Anjouan separatists, the political opposition, and civil society organizations signed a "Framework Accord for Reconciliation in Comoros," brokered by the Organization of African Unity.
The accord called for the creation of a new Tripartite Commission for National Reconciliation to develop a "New Comorian Entity" with a new constitution. The new federal Constitution came into effect in 2002; it included elements of consociationalism, including a presidency that rotates every four years among the islands and extensive autonomy for each island. Presidential elections were held in 2002, at which Azali Assoumani was elected president. In April 2004 legislative elections were held, completing the implementation of the new constitution.
The new Union of the Comoros consists of three islands, Grande Comore, Anjouan and Mohéli. Each island has a president, who shares the presidency of the Union on a rotating basis. The president and his vice-presidents are elected for a term of four years. The constitution states that "the islands enjoy financial autonomy, freely draw up and manage their budgets".
President Assoumani Azali of Grande Comore is the first Union president. President Mohamed Bacar of Anjouan formed his 13-member government at the end of April, 2003.
On 15 May 2006, Ahmed Abdallah Sambi, a cleric and successful businessman educated in Iran, Saudi Arabia and Sudan, was declared the winner of elections for President of the Republic. He is considered a moderate Islamist and is called Ayatollah by his supporters. He beat out retired French air force officer Mohamed Djaanfari and long-time politician Ibrahim Halidi, whose candidacy was backed by Azali Assoumani, the outgoing president.
A referendum took place on May 16, 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. The referendum would cause each island's president to become a governor and the ministers to become councilors.
The constitution gives Grande Comore, Anjouan and Mohéli the right to govern most of their own affairs with their own presidents, except for the activities assigned to the Union of the Comoros, such as foreign policy, defense, nationality, and banking. Comoros considers Mayotte, an overseas collectivity of France, to be part of its territory with an autonomous status.
As of 2011, the three autonomous islands are subdivided into 16 prefectures, 54 communes, and 318 villes or villages.
The federal presidency is rotated between the islands' presidents. The Union of the Comoros abolished the position of Prime Minister in 2002. The position of Vice-President of the Comoros was used 2002–2019.
The Assembly of the Union has 33 seats, 24 elected in single seat constituencies and 9 representatives of the regional assemblies.
The Supreme Court or Cour Supreme, has two members appointed by the president, two members elected by the Federal Assembly, one by the Council of each island, and former presidents of the republic.
The Comoros is a member of the ACCT, ACP, AfDB, AMF, African Union, FAO, G-77, IBRD, ICAO, ICCt (signatory), ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, InOC, Interpol, IOC, ITU, LAS, NAM, OIC, OPCW (signatory), United Nations, UNCTAD, UNESCO, UNIDO, UPU, WCO, WHO, WMO. | [
{
"paragraph_id": 0,
"text": "The Union of the Comoros consists of the three islands Njazidja (Grande Comoros), Mwali (Moheli) and Nzwani (Anjouan) while the island of Mayotte remains under French administration. The Politics of the Union of the Comoros take place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The precolonial legacies of the sultanates linger while the political situation in Comoros has been extremely fluid since the country's independence in 1975, subject to the volatility of coups and political insurrection.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As of 2008, Comoros and Mauritania were considered by US-based organization Freedom House as the only real “electoral democracies” of the Arab World.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sultanates in the late nineteenth century used a cyclic age system and hierarchical lineage membership to provide the foundation for participation in the political process. In the capital, \"the sultan was assisted by his ministers and by a madjelis, an advisory council composed of elders, whom he consulted regularly\". Apart from local administration, the age system was used to include the population in decision making, depending on the scope of the decision being made. For example, the elders of the island of Njazidja held considerable influence on the authority of the sultan. Though sultanates granted rights to their free inhabitants, were provided with warriors during war and taxed the towns under their authority, their definition as a state is open to debate. The islands' incorporation as a province of the colony of Madagascar into the French colonial empire marked the end of the sultanates.",
"title": "Precolonial and Colonial Political Structures[2]"
},
{
"paragraph_id": 3,
"text": "Despite French colonization, Comorans identify first with kinship or regional ties and rarely ever with the central government. This is a lingering effect of the sovereign sultanates of pre-colonial times. French colonial administration was based on a misconception that the sultanates operated as absolute monarchs: district boundaries were the same as the sultanates', multiple new taxes forced men into wage labor on colonial plantations and was reinforced through a compulsory public labor system that had little effect on infrastructure. French policy was hampered by an absence of settlers, effective communication across islands, rough geographical terrain and hostility towards the colonial government. Policies were made to apply to Madagascar as a whole and seldom to the nuances of each province: civil servants were typically Christian, unaware of local customs and unable to speak the local language. The French established the Ouatou Akouba in 1915, a local form of governance based on \"customary structures\" already in place that attempted to model itself after the age system in place under the sultanates. Their understanding of the elders' council as a corporate group bypassed the reality that there were men \"who had accomplished the necessary customary rituals to be accorded the status of elder and thus be eligible to participate in the political process in the village\", which effectively rendered the French elders' council ineffective. Though the Ouatou Akouba was disbanded, it resulted in the consolidation and formalization of the age system as access to power in the customary and local government spheres. The French failure to establish a functioning state in the Comoros has had repercussions in the post-independence era.",
"title": "Precolonial and Colonial Political Structures[2]"
},
{
"paragraph_id": 4,
"text": "At independence there were five main political parties: OUDZIMA, UMMA, the Comoro People's Democratic Rally, the Comoro National Liberation Movement and the Socialist Objective Party. The political groups previously known simply as the 'green' and 'white' party became the Rassemblement Démocratique du Peuple Comorien (RDPC) and the Union Démocratique des Comores (UDC), headed by Sayyid Muhammad Cheikh and Sayyid Ibrahim. Members from both parties later merged to form OUDZIMA under the leadership of first president Ahmad Abdallah while dissidents from both created UMMA under the leadership of future president Ali Soilih.",
"title": "Post-independence"
},
{
"paragraph_id": 5,
"text": "Prince Said Ibrahim took power in 1970 but was democratically elected out of office in 1972 in favor of former French senator Ahmed Abdallah. President Abdallah declared independence for all islands, except Mayotte which remained under French administration, in 1975. The threat of renewed socioeconomic marginalization following the transfer of the capital to Ngazidja in 1962, more than social or cultural differences, underlay the island's subsequent rejection of independence. France withdrew all economic and technical support for the now independent state, which would encourage a revolutionary regime under future president Ali Soilih. French military and financial aid to mercenaries brought Prince Said Mohammed Jaffar to power after the United National Front of the Comoros (FNU) party toppled Abdallah's government. This mercenary coup was unique in that, unlike other coups on the continent, it was \"uninspired by any ideological convictions\". The Jaffar regime's inefficient distribution of resources and poor mismanagement was shown through the expulsion of French civil servants as well as endemic unemployment and food shortages. The regime used famine as \"an opportunity to switch food patronage from France to the World Food Programme's emergency aid\".",
"title": "Post-independence"
},
{
"paragraph_id": 6,
"text": "President Jaffar's ousting by Minister of Defense and Justice, Ali Soilih, brought about the \"periode noire\" (dark period) of the country; you could vote at 14, most civil servants were dismissed and there was a ban on some Islamic customs. He implemented revolutionary social reforms such as replacing French with Shikomoro, burning down the national archives and nationalizing land. His government received support from Egypt, Iraq and Sudan. Soilih's attacks on religious and customary authority contributed to his eventual ousting through a French-backed coup consisting of mercenaries and ex-politicians who together formed the Politico Military Doctorate.",
"title": "Post-independence"
},
{
"paragraph_id": 7,
"text": "Abdallah was reinstated and constructed a mercantile state by resuscitating the structures of the colonial era. His establishment of a one party state and intolerance for dissent further alienated civil society from the state. In May 1978 the Comoros were renamed the Islamic Republic of the Comoros and continued strengthening ties with the Arab world which resulted in their joining the Arab League. Abdallah's government sought to reverse Soilih's 'de-sacralization' by re-introducing the grand marriage, declaring Arabic the second official language behind French, and creating the office of the Grand Mufti. The doctorate & compromise government was dissolved, constitutional changes removed succession from a politician and neutralized the post of another possible challenger in abolishing the position of Prime Minister, which effectively cemented a client-patron network by making the civil service position dependent on Abdallah's political base. The Democratic Front's (DF) internal opposition to Abdallah was suppressed through the incarceration of over 600 people allegedly involved in a failed coup attempt. Abdallah then stocked the House of Assembly with loyal clientelist supporters through rigged parliamentary elections. All of these actions effectively consolidated Abdallah's position.",
"title": "Post-independence"
},
{
"paragraph_id": 8,
"text": "Muhammed Djohar succeeded president Abdallah after his assassination in 1989 but was evacuated by French troops after a failed coup attempt in 1996. The Comoros were led by Muhammed Taki Abd al-Karim beginning in 1996 and he was followed by interim president Said Massunde who eventually gave way to Assoumani Azali. Taki's lack of Arab heritage led to his lack of understanding Nzwani's cultural differences and economic problems, as seen by the establishment of the elders council with only loyal Taki supporters. As a result, the council was ignored by the true elders of the island. After Taki's death, a military coup in 1999, the nation's eighteenth since independence in 1975, installed Azali in to power. Colonel Azali Assoumani seized power in a bloodless coup in April 1999, overthrowing Interim President Tadjidine Ben Said Massounde, who himself had held the office since the death of democratically elected President Mohamed Taki Abdoulkarim in November, 1998. In May 1999, Azali decreed a constitution that gave him both executive and legislative powers. Bowing somewhat to international criticism, Azali appointed a civilian Prime Minister, Bainrifi Tarmidi, in December 1999; however, Azali retained the mantle of Head of State and army Commander. In December 2000, Azali named a new civilian Prime Minister, Hamada Madi, and formed a new civilian Cabinet. When Azali took power he also pledged to step down in April 2000 and relinquish control to a democratically elected president—a pledge with mixed results. Under Mohammed Taki and Assoumani Azali, access to the state was used to support client networks which led to crumbling infrastructure that cultivated in the islands of Nzwani and Mwali declaring independence only to be stopped by French troops. Azali lacked the social obligations required to address the elders and when combined with his gross mismanagement and increasing economic and social dependence on foreign entities, made managing daily life near nonexistent in the state. Therefore, local administrative structures began popping up and drifting away from reliance on the state, funded by remittances from the expatriate community in France.",
"title": "Post-independence"
},
{
"paragraph_id": 9,
"text": "The Comoros Islands have experienced five different constitutions.",
"title": "Post-independence"
},
{
"paragraph_id": 10,
"text": "In a separate nod to pressure to restore civilian rule, the government organized several committees to compose a new constitution, including the August 2000 National Congress and November 2000 Tripartite Commission. The opposition parties initially refused to participate in the Tripartite Commission, but on 17 February, representatives of the government, the Anjouan separatists, the political opposition, and civil society organizations signed a \"Framework Accord for Reconciliation in Comoros,\" brokered by the Organization for African Unity",
"title": "Fourth Constitution"
},
{
"paragraph_id": 11,
"text": "The accord called for the creation of a new Tripartite Commission for National Reconciliation to develop a \"New Comorian Entity\" with a new constitution. The new federal Constitution came into effect in 2002; it included elements of consociationalism, including a presidency that rotates every four years among the islands and extensive autonomy for each island. Presidential elections were held in 2002, at which Azali Assoumani was elected president. In April 2004 legislative elections were held, completing the implementation of the new constitution.",
"title": "Fourth Constitution"
},
{
"paragraph_id": 12,
"text": "The new Union of the Comoros consists of three islands, Grande Comore, Anjouan and Mohéli. Each island has a president, who shares the presidency of the Union on a rotating basis. The president and his vice-presidents are elected for a term of four years. The constitution states that, \"the islands enjoy financial autonomy, freely draw up and manage their budgets\".",
"title": "Fourth Constitution"
},
{
"paragraph_id": 13,
"text": "President Assoumani Azali of Grande Comore is the first Union president. President Mohamed Bacar of Anjouan formed his 13-member government at the end of April, 2003.",
"title": "Fourth Constitution"
},
{
"paragraph_id": 14,
"text": "On 15 May 2006, Ahmed Abdallah Sambi, a cleric and successful businessman educated in Iran, Saudi Arabia and Sudan, was declared the winner of elections for President of the Republic. He is considered a moderate Islamist and is called Ayatollah by his supporters. He beat out retired French air force officer Mohamed Djaanfari and long-time politician Ibrahim Halidi, whose candidacy was backed by Azali Assoumani, the outgoing president.",
"title": "Fourth Constitution"
},
{
"paragraph_id": 15,
"text": "A referendum took place on May 16, 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. The referendum would cause each island's president to become a governor and the ministers to become councilors.",
"title": "Fourth Constitution"
},
{
"paragraph_id": 16,
"text": "The constitution gives Grande Comore, Anjouan and Mohéli the right to govern most of their own affairs with their own presidents, except the activities assigned to the Union of the Comoros like Foreign Policy, Defense, Nationality, Banking and others. Comoros considers Mayotte, an overseas collectivity of France, to be part of its territory, with an autonomous status",
"title": "Autonomous islands"
},
{
"paragraph_id": 17,
"text": "As of 2011, the three autonomous islands are subdivided into 16 prefectures, 54 communes, and 318 villes or villages.",
"title": "Autonomous islands"
},
{
"paragraph_id": 18,
"text": "The federal presidency is rotated between the islands' presidents. The Union of the Comoros abolished the position of Prime Minister in 2002. The position of Vice-President of the Comoros was used 2002–2019.",
"title": "Executive branch"
},
{
"paragraph_id": 19,
"text": "The Assembly of the Union has 33 seats, 24 elected in single seat constituencies and 9 representatives of the regional assemblies.",
"title": "Legislative branch"
},
{
"paragraph_id": 20,
"text": "The Supreme Court or Cour Supreme, has two members appointed by the president, two members elected by the Federal Assembly, one by the Council of each island, and former presidents of the republic.",
"title": "Judicial branch"
},
{
"paragraph_id": 21,
"text": "The Comoros are member of the ACCT, ACP, AfDB, AMF, African Union, FAO, G-77, IBRD, ICAO, ICCt (signatory), ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, InOC, Interpol, IOC, ITU, LAS, NAM, OIC, OPCW (signatory), United Nations, UNCTAD, UNESCO, UNIDO, UPU, WCO, WHO, WMO.",
"title": "International organization participation"
}
]
| The Union of the Comoros consists of the three islands Njazidja, Mwali (Moheli) and Nzwani (Anjouan) while the island of Mayotte remains under French administration. The Politics of the Union of the Comoros take place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The precolonial legacies of the sultanates linger while the political situation in Comoros has been extremely fluid since the country's independence in 1975, subject to the volatility of coups and political insurrection. As of 2008, Comoros and Mauritania were considered by the US-based organization Freedom House as the only real “electoral democracies” of the Arab World. | 2002-02-25T15:43:11Z | 2023-11-18T15:17:12Z | [
"Template:Cite web",
"Template:Cite journal",
"Template:Cite news",
"Template:Comoros topics",
"Template:Authority control",
"Template:Politics of Comoros",
"Template:Office-table",
"Template:Elect",
"Template:Reflist",
"Template:Cite book",
"Template:Politics of Africa",
"Template:Update"
]
| https://en.wikipedia.org/wiki/Politics_of_the_Comoros |
6,005 | Telecommunications in the Comoros | In large part thanks to international aid programs, Moroni has international telecommunications service. Telephone service, however, is largely limited to the islands' few towns.
Telephones – main lines in use: 5,000 (1995)
Telephones – mobile cellular: 0 (1995)
Telephone system: sparse system of microwave radio relay and HF radiotelephone communication stations; domestic: HF radiotelephone communications, microwave radio relay, and a CDMA mobile network (Huri, operated by Comores Telecom); international: HF radiotelephone communications to Madagascar and Réunion
Radio broadcast stations: AM 1, FM 2, shortwave 1 (1998)
Radios: 90,000 (1997)
Television broadcast stations: 0 (1998)
Televisions: 1,000 (1997)
Internet Service Providers (ISPs): 1 (1999)
Country code (Top-level domain): KM
In October 2011 the State of Qatar launched a special program for the construction of a wireless network to interconnect the three islands of the archipelago by means of low-cost, replicable technology. The project was developed by Qatar University and Politecnico di Torino, under the supervision of Prof. Mazen Hasna and Prof. Daniele Trinchero, with major participation from local actors (Comorian Government, NRTIC, University of the Comoros). The project has been cited as an example of technology transfer and sustainable inclusion in developing countries.
This article incorporates text from this source, which is in the public domain. Indian Ocean : five island countries. Federal Research Division. | [
{
"paragraph_id": 0,
"text": "In large part thanks to international aid programs, Moroni has international telecommunications service. Telephone service, however, is largely limited to the islands' few towns.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Telephones – main lines in use: 5,000 (1995)",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "Telephones – mobile cellular: 0 (1995)",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Telephone system: sparse system of microwave radio relay and HF radiotelephone communication stations domestic: HF radiotelephone communications and microwave radio relay CMDA mobile network (Huri, operated by Comores Telecom) international: HF radiotelephone communications to Madagascar and Réunion",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "Radio broadcast stations: AM 1, FM 2, shortwave 1 (1998)",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "Radios: 90,000 (1997)",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "Television broadcast stations: 0 (1998)",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "Televisions: 1,000 (1997)",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "Internet Service Providers (ISPs): 1 (1999)",
"title": "Overview"
},
{
"paragraph_id": 9,
"text": "Country code (Top-level domain): KM",
"title": "Overview"
},
{
"paragraph_id": 10,
"text": "In October 2011 the State of Qatar launched a special program for the construction of a wireless network to interconnect the three islands of the archipelago, by means of low cost, repeatable technology. The project has been developed by Qatar University and Politecnico di Torino, under the supervision of prof. Mazen Hasna and prof. Daniele Trinchero, with a major participation of local actors (Comorian Government, NRTIC, University of the Comoros). The project has been referred as an example of technology transfer and Sustainable Inclusion in developing countries",
"title": "Special projects"
},
{
"paragraph_id": 11,
"text": "This article incorporates text from this source, which is in the public domain. Indian Ocean : five island countries. Federal Research Division.",
"title": "References"
},
{
"paragraph_id": 12,
"text": "",
"title": "References"
}
]
| In large part thanks to international aid programs, Moroni has international telecommunications service. Telephone service, however, is largely limited to the islands' few towns. | 2023-02-25T22:19:43Z | [
"Template:Use dmy dates",
"Template:Multiple issues",
"Template:Telecommunications",
"Template:Comoros topics",
"Template:Comoros-stub",
"Template:Telecommunications-stub",
"Template:Reflist",
"Template:Country study",
"Template:Economy of Comoros",
"Template:Africa topic"
]
| https://en.wikipedia.org/wiki/Telecommunications_in_the_Comoros |
|
6,006 | Transport in the Comoros | There are a number of systems of transport in the Comoros. The Comoros possesses 880 km (547 mi) of road, of which 673 km (418 mi) are paved. It has three seaports: Fomboni, Moroni and Moutsamoudou, but does not have a merchant marine, and no longer has any railway network. It has four airports, all with paved runways, one with runways over 2,438 m (7,999 ft) long, with the others having runways shorter than 1,523 m (4,997 ft).
The isolation of the Comoros had made air traffic a major means of transportation. One of President Abdallah's accomplishments was to make the Comoros more accessible by air. During his administration, he negotiated agreements to initiate or enhance commercial air links with Tanzania and Madagascar. The Djohar regime reached an agreement in 1990 to link Moroni and Brussels by air. By the early 1990s, commercial flights connected the Comoros with France, Mauritius, Kenya, South Africa, Tanzania, and Madagascar. The national airline was Air Comores. Daily flights linked the three main islands, and air service was also available to Mahoré; each island had airstrips. In 1986 the republic received a grant from the French government's CCCE to renovate and expand Hahaya airport, near Moroni. Because of the absence of scheduled sea transport between the islands, nearly all interisland passenger traffic is by air.
More than 99% of freight is transported by sea. Both Moroni on Njazidja and Mutsamudu on Nzwani have artificial harbors. There is also a harbor at Fomboni, on Mwali. Despite extensive internationally financed programs to upgrade the harbors at Moroni and Mutsamudu, by the early 1990s only Mutsamudu was operational as a deepwater facility. Its harbor could accommodate vessels of up to eleven meters' draught. At Moroni, ocean-going vessels typically lie offshore and are loaded or unloaded by smaller craft, a costly and sometimes dangerous procedure. Most freight continues to be sent to Tanzania, Kenya, Reunion, or Madagascar for transshipment to the Comoros. Use of Comoran ports is further restricted by the threat of cyclones from December through March. The privately operated Comoran Navigation Company (Société Comorienne de Navigation) is based in Moroni, and provides services to Madagascar.
Roads serve the coastal areas, rather than the interior, and the mountainous terrain makes surface travel difficult. | [
{
"paragraph_id": 0,
"text": "There are a number of systems of transport in the Comoros. The Comoros possesses 880 km (547 mi) of road, of which 673 km (418 mi) are paved. It has three seaports: Fomboni, Moroni and Moutsamoudou, but does not have a merchant marine, and no longer has any railway network. It has four airports, all with paved runways, one with runways over 2,438 m (7,999 ft) long, with the others having runways shorter than 1,523 m (4,997 ft).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The isolation of the Comoros had made air traffic a major means of transportation. One of President Abdallah's accomplishments was to make the Comoros more accessible by air. During his administration, he negotiated agreements to initiate or enhance commercial air links with Tanzania and Madagascar. The Djohar regime reached an agreement in 1990 to link Moroni and Brussels by air. By the early 1990s, commercial flights connected the Comoros with France, Mauritius, Kenya, South Africa, Tanzania, and Madagascar. The national airline was Air Comores. Daily flights linked the three main islands, and air service was also available to Mahoré; each island had airstrips. In 1986 the republic received a grant from the French government's CCCE to renovate and expand Hahaya airport, near Moroni. Because of the absence of scheduled sea transport between the islands, nearly all interisland passenger traffic is by air.",
"title": ""
},
{
"paragraph_id": 2,
"text": "More than 99% of freight is transported by sea. Both Moroni on Njazidja and Mutsamudu on Nzwani have artificial harbors. There is also a harbor at Fomboni, on Mwali. Despite extensive internationally financed programs to upgrade the harbors at Moroni and Mutsamudu, by the early 1990s only Mutsamudu was operational as a deepwater facility. Its harbor could accommodate vessels of up to eleven meters' draught. At Moroni, ocean-going vessels typically lie offshore and are loaded or unloaded by smaller craft, a costly and sometimes dangerous procedure. Most freight continues to be sent to Tanzania, Kenya, Reunion, or Madagascar for transshipment to the Comoros. Use of Comoran ports is further restricted by the threat of cyclones from December through March. The privately operated Comoran Navigation Company (Société Comorienne de Navigation) is based in Moroni, and provides services to Madagascar.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Roads serve the coastal areas, rather than the interior, and the mountainous terrain makes surface travel difficult.",
"title": ""
},
{
"paragraph_id": 4,
"text": "",
"title": "References"
}
]
| There are a number of systems of transport in the Comoros. The Comoros possesses 880 km (547 mi) of road, of which 673 km (418 mi) are paved. It has three seaports: Fomboni, Moroni and Moutsamoudou, but does not have a merchant marine, and no longer has any railway network. It has four airports, all with paved runways, one with runways over 2,438 m (7,999 ft) long, with the others having runways shorter than 1,523 m (4,997 ft). The isolation of the Comoros had made air traffic a major means of transportation. One of President Abdallah's accomplishments was to make the Comoros more accessible by air. During his administration, he negotiated agreements to initiate or enhance commercial air links with Tanzania and Madagascar. The Djohar regime reached an agreement in 1990 to link Moroni and Brussels by air. By the early 1990s, commercial flights connected the Comoros with France, Mauritius, Kenya, South Africa, Tanzania, and Madagascar. The national airline was Air Comores. Daily flights linked the three main islands, and air service was also available to Mahoré; each island had airstrips. In 1986 the republic received a grant from the French government's CCCE to renovate and expand Hahaya airport, near Moroni. Because of the absence of scheduled sea transport between the islands, nearly all interisland passenger traffic is by air. More than 99% of freight is transported by sea. Both Moroni on Njazidja and Mutsamudu on Nzwani have artificial harbors. There is also a harbor at Fomboni, on Mwali. Despite extensive internationally financed programs to upgrade the harbors at Moroni and Mutsamudu, by the early 1990s only Mutsamudu was operational as a deepwater facility. Its harbor could accommodate vessels of up to eleven meters' draught. At Moroni, ocean-going vessels typically lie offshore and are loaded or unloaded by smaller craft, a costly and sometimes dangerous procedure. Most freight continues to be sent to Tanzania, Kenya, Reunion, or Madagascar for transshipment to the Comoros. Use of Comoran ports is further restricted by the threat of cyclones from December through March. The privately operated Comoran Navigation Company is based in Moroni, and provides services to Madagascar. Roads serve the coastal areas, rather than the interior, and the mountainous terrain makes surface travel difficult. | 2022-09-21T12:00:06Z | [
"Template:Comoros-stub",
"Template:Use dmy dates",
"Template:No footnotes",
"Template:Convert",
"Template:Portal",
"Template:Country study",
"Template:Comoros topics",
"Template:Economy of Comoros",
"Template:Africa in topic"
]
| https://en.wikipedia.org/wiki/Transport_in_the_Comoros |
|
6,007 | Foreign relations of the Comoros | In November 1975, Comoros became the 143rd member of the United Nations. The new nation was defined as consisting of the entire archipelago, despite the fact that France maintains control over Mayotte.
Comoros also is a member of the African Union, the Arab League, the European Development Fund, the World Bank, the International Monetary Fund, the Indian Ocean Commission, and the African Development Bank.
The government fostered close relationships with the more conservative (and oil-rich) Arab states, such as Saudi Arabia and Kuwait. It frequently received aid from those countries and the regional financial institutions they influenced, such as the Arab Bank for Economic Development in Africa and the Arab Fund for Economic and Social Development. In October 1993, Comoros joined the League of Arab States, after having been rejected when it applied for membership initially in 1977.
Regional relations generally were good. In 1985 Madagascar, Mauritius, and Seychelles agreed to admit Comoros as the fourth member of the Indian Ocean Commission (IOC), an organization established in 1982 to encourage regional cooperation. In 1993 Mauritius and Seychelles had two of the five embassies in Moroni, and Mauritius and Madagascar were connected to the republic by regularly scheduled commercial flights.
In November 1975, Comoros became the 143rd member of the UN. In the 1990s, the republic continued to represent Mahoré in the UN. Comoros was also a member of the OAU, the EDF, the World Bank, the IMF, the IOC, and the African Development Bank.
Comoros thus cultivated relations with various nations, both East and West, seeking to increase trade and obtain financial assistance. In 1994, however, it was increasingly facing the need to control its expenditures and reorganize its economy so that it would be viewed as a sounder recipient of investment. Comoros also confronted domestically the problem of the degree of democracy the government was prepared to grant to its citizens, a consideration that related to its standing in the world community. | [
{
"paragraph_id": 0,
"text": "In November 1975, Comoros became the 143rd member of the United Nations. The new nation was defined as consisting of the entire archipelago, despite the fact that France maintains control over Mayotte.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Comoros also is a member of the African Union, the Arab League, the European Development Fund, the World Bank, the International Monetary Fund, the Indian Ocean Commission, and the African Development Bank.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "The government fostered close relationships with the more conservative (and oil-rich) Arab states, such as Saudi Arabia and Kuwait. It frequently received aid from those countries and the regional financial institutions they influenced, such as the Arab Bank for Economic Development in Africa and the Arab Fund for Economic and Social Development. In October 1993, Comoros joined the League of Arab States, after having been rejected when it applied for membership initially in 1977.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Regional relations generally were good. In 1985 Madagascar, Mauritius, and Seychelles agreed to admit Comoros as the fourth member of the Indian Ocean Commission (IOC), an organization established in 1982 to encourage regional cooperation. In 1993 Mauritius and Seychelles had two of the five embassies in Moroni, and Mauritius and Madagascar were connected to the republic by regularly scheduled commercial flights.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "In November 1975, Comoros became the 143d member of the UN. In the 1990s, the republic continued to represent Mahoré in the UN. Comoros was also a member of the OAU, the EDF, the World Bank, the IMF, the IOC, and the African Development Bank.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "Comoros thus cultivated relations with various nations, both East and West, seeking to increase trade and obtain financial assistance. In 1994, however, it was increasingly facing the need to control its expenditures and reorganize its economy so that it would be viewed as a sounder recipient of investment. Comoros also confronted domestically the problem of the degree of democracy the government was prepared to grant to its citizens, a consideration that related to its standing in the world community.",
"title": "Overview"
}
]
| In November 1975, Comoros became the 143rd member of the United Nations. The new nation was defined as consisting of the entire archipelago, despite the fact that France maintains control over Mayotte. | 2002-02-25T15:51:15Z | 2023-12-18T20:40:48Z | [
"Template:Flag",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Comoros topics",
"Template:Short description",
"Template:Politics of Comoros",
"Template:Cite book",
"Template:Cbignore",
"Template:Foreign relations of Comoros",
"Template:Africa in topic"
]
| https://en.wikipedia.org/wiki/Foreign_relations_of_the_Comoros |
6,008 | Army of National Development | The Comorian Armed Forces (French: Armée nationale de développement, AND; lit. 'Army of National Development') are the national military of the Comoros. The armed forces consist of a small standing army and a 500-member police force, as well as a 500-member defense force. A defense treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains a small troop presence in the Comoros at government request. France maintains a small Navy base and a Foreign Legion Detachment (DLEM) in Mayotte.
The AND consists of the following components:
Note: The last comprehensive aircraft inventory list was from Aviation Week & Space Technology in 2007. | [
{
"paragraph_id": 0,
"text": "The Comorian Armed Forces (French: Armée nationale de développement, AND; lit. 'Army of National Development') are the national military of the Comoros. The armed forces consist of a small standing army and a 500-member police force, as well as a 500-member defense force. A defense treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains a small troop presence in the Comoros at government request. France maintains a small Navy base and a Foreign Legion Detachment (DLEM) in Mayotte.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The AND consists of the following components:",
"title": "Structure"
},
{
"paragraph_id": 2,
"text": "Note: The last comprehensive aircraft inventory list was from Aviation Week & Space Technology in 2007.",
"title": "Aircraft"
},
{
"paragraph_id": 3,
"text": "",
"title": "References"
}
]
| The Comorian Armed Forces are the national military of the Comoros. The armed forces consist of a small standing army and a 500-member police force, as well as a 500-member defense force. A defense treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains a small troop presence in the Comoros at government request. France maintains a small Navy base and a Foreign Legion Detachment (DLEM) in Mayotte. | 2023-06-21T05:09:52Z | [
"Template:Military of Africa",
"Template:Comoros-stub",
"Template:Literally",
"Template:Cite web",
"Template:Reflist",
"Template:Comoros topics",
"Template:Military of the Arab world",
"Template:Africa-mil-stub",
"Template:Infobox national military",
"Template:Lang-fr"
]
| https://en.wikipedia.org/wiki/Army_of_National_Development |
|
6,010 | Computer worm | A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. Once it has infected a machine, the worm uses it as a host to scan for and infect other computers; each newly compromised computer is used as a host in turn, and the behaviour repeats. Computer worms copy themselves recursively, without needing a host program, and spread by exploiting exponential growth, controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
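To make the exponential-growth behaviour described above concrete, here is a minimal, self-contained simulation (it performs no networking and touches no real hosts): each infected machine compromises a fixed number of new vulnerable machines per time step. The fan-out and population figures are illustrative assumptions, not values from the article.

```python
# Illustrative simulation of idealized worm propagation (no networking,
# no real hosts): each infected machine compromises `fanout` new
# vulnerable machines per time step until the population is exhausted.
# `fanout` and `population` are hypothetical parameters for illustration.

def simulate_spread(population: int, fanout: int, steps: int) -> list[int]:
    infected = 1  # patient zero
    history = [infected]
    for _ in range(steps):
        newly_infected = min(infected * fanout, population - infected)
        infected += newly_infected
        history.append(infected)
    return history

if __name__ == "__main__":
    # With a fan-out of 2, infections grow roughly as 3**t until saturation.
    for t, n in enumerate(simulate_spread(population=100_000, fanout=2, steps=12)):
        print(f"step {t:2d}: {n} infected")
```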
Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects.
The term "worm" was first used in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!" "Then the answer dawned on him, and he almost laughed. Fluckner had resorted to one of the oldest tricks in the store and turned loose in the continental net a self-perpetuating tapeworm, probably headed by a denunciation group "borrowed" from a major corporation, which would shunt itself from one nexus to another every time his credit-code was punched into a keyboard. It could take days to kill a worm like that, and sometimes weeks."
The second-ever computer worm was devised as antivirus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program (the first computer worm, 1971).
On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.
Conficker, a computer worm discovered in 2008 that primarily targeted Microsoft Windows operating systems, employs three different spreading strategies: local probing, neighborhood probing, and global probing.
Independence
Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.
Exploit attacks
Because a worm is not limited by the host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the "Nimda" worm exploited vulnerabilities in Microsoft IIS web servers, among several other propagation vectors, to attack.
Complexity
Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as "Code Red".
Contagiousness
Worms are more infectious than traditional viruses. They not only infect the local computer, but also all servers and clients on the network reachable from it. Worms can easily spread through shared folders, e-mails, malicious web pages, and servers with a large number of vulnerabilities in the network.
Any code designed to do more than spread the worm is typically referred to as the "payload". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords.
Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a "zombie". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks.
Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb-drives, as its targets were never connected to untrusted networks, like the internet. This worm can destroy the core production control computer software used by chemical, power generation and power transmission companies in various countries around the world (in Stuxnet's case, Iran, Indonesia and India were hardest hit); it was used to "issue orders" to other equipment in the factory, and to hide those commands from being detected. Stuxnet used multiple vulnerabilities and four different zero-day exploits in Windows systems and Siemens SIMATIC WinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently from the network, if the operator inserts an infected drive into the system's USB interface, the worm is able to gain control of the system without any other operational requirements or prompts.
Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates (see "Patch Tuesday"), and if these are installed on a machine, then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the security patch is released by the vendor, a zero-day attack is possible.
Users need to be wary of opening unexpected email, and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.
Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.
Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails and running firewall and antivirus software.
Mitigation techniques include:
Infections can sometimes be detected by their behavior - typically scanning the Internet randomly, looking for vulnerable hosts to infect. In addition, machine learning techniques can be used to detect new worms, by analyzing the behavior of the suspected computer.
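As a sketch of the behavioural detection just described, the following toy example flags hosts that contact an unusually large number of distinct destinations, a crude stand-in for the random-scanning signature. The log format and the threshold of 100 distinct destinations are assumptions made for illustration, not any real tool's interface.

```python
# Toy behavioral worm detector: flag hosts contacting an unusually large
# number of distinct destinations, the random-scanning pattern described
# above. The (src, dst) log format and the threshold are illustrative
# assumptions, not a real product's interface.
from collections import defaultdict

def flag_scanners(connections: list[tuple[str, str]], threshold: int = 100) -> set[str]:
    destinations = defaultdict(set)  # source host -> set of distinct destinations
    for src, dst in connections:
        destinations[src].add(dst)
    return {src for src, dsts in destinations.items() if len(dsts) >= threshold}

# Example: a host probing 150 distinct addresses is flagged; normal hosts are not.
log = [("10.0.0.5", f"192.0.2.{i % 255}") for i in range(150)]
log += [("10.0.0.7", "10.0.0.1"), ("10.0.0.7", "10.0.0.2")]
print(flag_scanners(log, threshold=100))  # {'10.0.0.5'}
```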
A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test the Ethernet principles on their network of Xerox Alto computers. Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities. In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware. Another example of this approach is Roku patching a bug in Roku OS that allowed devices to be rooted, via an update to its screensaver channels: the screensaver would attempt to connect to the device's telnet service and patch it.
One study proposed the first computer worm that operates on the second layer of the OSI model (Data link Layer), utilizing topology information such as Content-addressable memory (CAM) tables and Spanning Tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered.
Anti-worms have been used to combat the effects of the Code Red, Blaster, and Santy worms. Welchia is an example of a helpful worm. Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit.
Other examples of helpful worms are "Den_Zuko", "Cheeze", "CodeGreen", and "Millenium".
Art worms support artists in the performance of massive-scale ephemeral artworks. They turn the infected computers into nodes that contribute to the artwork. | [
{
"paragraph_id": 0,
"text": "A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. It will use this machine as a host to scan and infect other computers. When these new worm-invaded computers are controlled, the worm will continue to scan and infect other computers using these computers as hosts, and this behaviour will continue. Computer worms use recursive methods to copy themselves without host programs and distribute themselves based on exploiting the advantages of exponential growth, thus controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these \"payload-free\" worms can cause major disruption by increasing network traffic and other unintended effects.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term \"worm\" was first used in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. \"You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!\" \"Then the answer dawned on him, and he almost laughed. Fluckner had resorted to one of the oldest tricks in the store and turned loose in the continental net a self-perpetuating tapeworm, probably headed by a denunciation group \"borrowed\" from a major corporation, which would shunt itself from one nexus to another every time his credit-code was punched into a keyboard. It could take days to kill a worm like that, and sometimes weeks.\"",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The second ever computer worm was devised to be an anti-virus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program (the first computer worm, 1971).",
"title": "History"
},
{
"paragraph_id": 4,
"text": "On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Conficker, a computer worm discovered in 2008 that primarily targeted Microsoft Windows operating systems, is a worm that employs 3 different spreading strategies: local probing, neighborhood probing, and global probing.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Independence",
"title": "Features"
},
{
"paragraph_id": 7,
"text": "Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.",
"title": "Features"
},
{
"paragraph_id": 8,
"text": "Exploit attacks",
"title": "Features"
},
{
"paragraph_id": 9,
"text": "Because a worm is not limited by the host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the \"Nimda\" virus exploits vulnerabilities to attack.",
"title": "Features"
},
{
"paragraph_id": 10,
"text": "Complexity",
"title": "Features"
},
{
"paragraph_id": 11,
"text": "Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as \"Code Red\".",
"title": "Features"
},
{
"paragraph_id": 12,
"text": "Contagiousness",
"title": "Features"
},
{
"paragraph_id": 13,
"text": "Worms are more infectious than traditional viruses. They not only infect local computers, but also all servers and clients on the network based on the local computer. Worms can easily spread through shared folders, e-mails, malicious web pages, and servers with a large number of vulnerabilities in the network.",
"title": "Features"
},
{
"paragraph_id": 14,
"text": "Any code designed to do more than spread the worm is typically referred to as the \"payload\". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords.",
"title": "Harm"
},
{
"paragraph_id": 15,
"text": "Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a \"zombie\". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks.",
"title": "Harm"
},
{
"paragraph_id": 16,
"text": "Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb-drives, as its targets were never connected to untrusted networks, like the internet. This virus can destroy the core production control computer software used by chemical, power generation and power transmission companies in various countries around the world - in Stuxnet's case, Iran, Indonesia and India were hardest hit - it was used to \"issue orders\" to other equipment in the factory, and to hide those commands from being detected. Stuxnet used multiple vulnerabilities and four different zero-day exploits (eg: ) in Windows systems and Siemens SIMATICWinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently from the network, if the operator inserts a virus-infected drive into the system's USB interface, the virus will be able to gain control of the system without any other operational requirements or prompts.",
"title": "Harm"
},
{
"paragraph_id": 17,
"text": "Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates (see \"Patch Tuesday\"), and if these are installed to a machine, then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the security patch released by the vendor, a zero-day attack is possible.",
"title": "Countermeasures"
},
{
"paragraph_id": 18,
"text": "Users need to be wary of opening unexpected email, and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.",
"title": "Countermeasures"
},
{
"paragraph_id": 19,
"text": "Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.",
"title": "Countermeasures"
},
{
"paragraph_id": 20,
"text": "Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails and running firewall and antivirus software.",
"title": "Countermeasures"
},
{
"paragraph_id": 21,
"text": "Mitigation techniques include:",
"title": "Countermeasures"
},
{
"paragraph_id": 22,
"text": "Infections can sometimes be detected by their behavior - typically scanning the Internet randomly, looking for vulnerable hosts to infect. In addition, machine learning techniques can be used to detect new worms, by analyzing the behavior of the suspected computer.",
"title": "Countermeasures"
},
{
"paragraph_id": 23,
"text": "A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test the Ethernet principles on their network of Xerox Alto computers. Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities. In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware. Another example of this approach is Roku OS patching a bug allowing for Roku OS to be rooted via an update to their screensaver channels, which the screensaver would attempt to connect to the telnet and patch the device.",
"title": "Worms with good intent"
},
{
"paragraph_id": 24,
"text": "One study proposed the first computer worm that operates on the second layer of the OSI model (Data link Layer), utilizing topology information such as Content-addressable memory (CAM) tables and Spanning Tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered.",
"title": "Worms with good intent"
},
{
"paragraph_id": 25,
"text": "Anti-worms have been used to combat the effects of the Code Red, Blaster, and Santy worms. Welchia is an example of a helpful worm. Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit.",
"title": "Worms with good intent"
},
{
"paragraph_id": 26,
"text": "Other examples of helpful worms are \"Den_Zuko\", \"Cheeze\", \"CodeGreen\", and \"Millenium\".",
"title": "Worms with good intent"
},
{
"paragraph_id": 27,
"text": "Art worms support artists in the performance of massive scale ephemeral artworks. It turns the infected computers into nodes that contribute to the artwork.",
"title": "Worms with good intent"
}
]
| A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. Once it has infected a machine, the worm uses it as a host to scan for and infect other computers; each newly compromised computer is used as a host in turn, and the behaviour repeats. Computer worms copy themselves recursively, without needing a host program, and spread by exploiting exponential growth, controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer. Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects. | 2001-08-01T19:15:08Z | 2023-12-08T18:53:33Z | [
"Template:About",
"Template:Citation-needed",
"Template:Cite news",
"Template:Authority control",
"Template:Short description",
"Template:Distinguish",
"Template:Pp-move-indef",
"Template:Cite journal",
"Template:Cite book",
"Template:Reflist",
"Template:Cite web",
"Template:Citation",
"Template:Malware",
"Template:Information security"
]
| https://en.wikipedia.org/wiki/Computer_worm |
6,011 | Chomsky hierarchy | The Chomsky hierarchy (infrequently referred to as the Chomsky–Schützenberger hierarchy) in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. Linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the language of all inferior classes.
The general idea of a hierarchy of grammars was first described by linguist Noam Chomsky in "Three models for the description of language". Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy including context-free grammars.
Independently and alongside linguists, mathematicians were developing computation models (automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models.
The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have.
Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1.
Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular.
Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal (right regular). Alternatively, the right-hand side of the grammar can consist of a single terminal, possibly preceded by a single nonterminal (left regular). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule S → ε is also allowed here if S does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages.
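To illustrate the last point, here is a small sketch using Python's re module: a handful of regular expressions acting as a lexer for a made-up mini-language. The token set is a hypothetical assumption chosen only for illustration.

```python
# Regular languages in practice: a tiny lexer built from regular
# expressions, mirroring how the lexical structure of programming
# languages is defined. The token set here is a made-up mini-language.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
LEXER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str):
    for match in LEXER.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("x1 = 42 + y")))
# [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```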
Type-2 grammars generate the context-free languages. These are defined by rules of the form A → α, with A being a nonterminal and α being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages (or rather their subset of deterministic context-free languages) are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser.
For example, the grammar G = ({S}, {a, b}, P, S) with the following productions is context-free but not regular (by the pumping lemma).
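The productions of G are not shown above; the canonical example at this point is S → aSb and S → ab, generating { aⁿbⁿ | n ≥ 1 }. Assuming those productions, the sketch below shows why a pushdown-style recognizer suffices while no finite automaton can: it uses a single unbounded counter (standing in for a stack), exactly the resource a finite automaton lacks.

```python
# Recognizer for a^n b^n (n >= 1), the language generated by the assumed
# productions S -> aSb | ab. A single unbounded counter plays the role of
# a pushdown automaton's stack; no finite automaton can do this, since by
# the pumping lemma it would need a distinct state for every count of a's.

def is_anbn(word: str) -> bool:
    count = 0
    i = 0
    while i < len(word) and word[i] == "a":  # "push" one counter per 'a'
        count += 1
        i += 1
    if count == 0:
        return False
    while i < len(word) and word[i] == "b":  # "pop" one counter per 'b'
        count -= 1
        i += 1
    return i == len(word) and count == 0

assert is_anbn("ab") and is_anbn("aabb")
assert not is_anbn("aab") and not is_anbn("abab") and not is_anbn("")
```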
Type-1 grammars generate context-sensitive languages. These grammars have rules of the form αAβ → αγβ, with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty. The rule S → ε is allowed if S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input).
Type-0 grammars include all formal grammars. They generate exactly all languages that can be recognized by a Turing machine. These languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine. | [
{
"paragraph_id": 0,
"text": "The Chomsky hierarchy (infrequently referred to as the Chomsky–Schützenberger hierarchy) in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. Linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the language of all inferior classes.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The general idea of a hierarchy of grammars was first described by linguist Noam Chomsky in \"Three models for the description of language\". Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper \"The algebraic theory of context free languages\" describes the modern hierarchy including context-free grammars.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Independently and alongside linguists, mathematicians were developing computation models (automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have.",
"title": "The hierarchy"
},
{
"paragraph_id": 4,
"text": "Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1.",
"title": "The hierarchy"
},
{
"paragraph_id": 5,
"text": "Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular.",
"title": "The hierarchy"
},
{
"paragraph_id": 6,
"text": "Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal (right regular). Alternatively, the right-hand side of the grammar can consist of a single terminal, possibly preceded by a single nonterminal (left regular). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule S → ε {\\displaystyle S\\rightarrow \\varepsilon } is also allowed here if S {\\displaystyle S} does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages.",
"title": "The hierarchy"
},
{
"paragraph_id": 7,
"text": "Type-2 grammars generate the context-free languages. These are defined by rules of the form A → α {\\displaystyle A\\rightarrow \\alpha } with A {\\displaystyle A} being a nonterminal and α {\\displaystyle \\alpha } being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages—or rather its subset of deterministic context-free languages—are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser.",
"title": "The hierarchy"
},
{
"paragraph_id": 8,
"text": "For example, the grammar G = ( { S } , { a , b } , P , S ) {\\displaystyle G=(\\{S\\},\\{a,b\\},P,S)} with the following productions is context-free but not regular (by the pumping lemma).",
"title": "The hierarchy"
},
{
"paragraph_id": 9,
"text": "Type-1 grammars generate context-sensitive languages. These grammars have rules of the form α A β → α γ β {\\displaystyle \\alpha A\\beta \\rightarrow \\alpha \\gamma \\beta } with A {\\displaystyle A} a nonterminal and α {\\displaystyle \\alpha } , β {\\displaystyle \\beta } and γ {\\displaystyle \\gamma } strings of terminals and/or nonterminals. The strings α {\\displaystyle \\alpha } and β {\\displaystyle \\beta } may be empty, but γ {\\displaystyle \\gamma } must be nonempty. The rule S → ϵ {\\displaystyle S\\rightarrow \\epsilon } is allowed if S {\\displaystyle S} does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input.)",
"title": "The hierarchy"
},
{
"paragraph_id": 10,
"text": "Type-0 grammars include all formal grammars. They generate exactly all languages that can be recognized by a Turing machine. These languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine.",
"title": "The hierarchy"
}
]
| The Chomsky hierarchy in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary that are valid according to the language's syntax. Linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the language of all inferior classes. | 2001-10-29T16:19:52Z | 2023-11-25T20:47:53Z | [
"Template:Short description",
"Template:Main",
"Template:Cite web",
"Template:Cleanup section",
"Template:Reflist",
"Template:Formal languages and grammars",
"Template:R",
"Template:Math",
"Template:Cite journal",
"Template:Noam Chomsky",
"Template:Authority control",
"Template:Sfn",
"Template:Cite book",
"Template:Refbegin",
"Template:Refend"
]
| https://en.wikipedia.org/wiki/Chomsky_hierarchy |
6,013 | CRT | CRT or Crt may refer to: | [
{
"paragraph_id": 0,
"text": "CRT or Crt may refer to:",
"title": ""
}
]
| CRT or Crt may refer to: | 2023-04-30T17:25:47Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Abbr",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/CRT |
|
6,014 | Cathode-ray tube | A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms (oscilloscope), pictures (television set, computer monitor), radar targets, or other phenomena. A CRT on a television set is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.
In CRT television sets and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and televisions the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.
A CRT is a glass envelope which is deep (i.e., long from front screen face to rear end), heavy, and fragile. The interior is evacuated to 0.01 pascals (1×10⁻⁷ atm) to 0.1 micropascals (1×10⁻¹² atm) or less, to facilitate the free flight of electrons from the gun(s) to the tube's face without scattering due to collisions with air molecules. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. CRTs make up most of the weight of CRT TVs and computer monitors.
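For reference, the atmosphere equivalents follow from the standard conversion 1 atm = 101325 Pa:

```latex
\frac{0.01\ \mathrm{Pa}}{101325\ \mathrm{Pa/atm}} \approx 1\times 10^{-7}\ \mathrm{atm},
\qquad
\frac{0.1\ \mathrm{\mu Pa}}{101325\ \mathrm{Pa/atm}}
  = \frac{10^{-7}\ \mathrm{Pa}}{101325\ \mathrm{Pa/atm}}
  \approx 1\times 10^{-12}\ \mathrm{atm}.
```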
Since the mid-to-late 2000s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and less bulky. Flat-panel displays can also be made in very large sizes, whereas 40 in (100 cm) to 45 in (110 cm) was about the largest size of a CRT.
A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.
Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the charge-to-mass ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891. The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century television.
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society.
The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.
In 1926, Kenjiro Takayanagi demonstrated a CRT television receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. In 1927, Philo Farnsworth created a television prototype. The CRT was named in 1929 by inventor Vladimir K. Zworykin. RCA was granted a trademark for the term (for its cathode-ray tube) in 1932; it voluntarily released the term to the public domain in 1950.
In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of television.
The first commercially made electronic television sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.
In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.
From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.
In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass-produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.
The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938.
In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.
1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on aperture grille technology. It was acclaimed for improving the output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, due to its unique triple-cathode, single-gun construction.
In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.
In 1990, the first CRTs with HD resolution were released to the market by Sony.
In the mid-1990s, some 160 million CRTs were made per year.
In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.
The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time.
In 2015, several CRT manufacturers were convicted in the US for price fixing. The same occurred in Canada in 2018.
Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.
Beginning in the late 1990s and early 2000s, CRTs began to be replaced with LCDs, starting with computer monitors smaller than 15 inches, largely because of LCDs' lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004, and LCD TV sales started exceeding those of CRTs in some markets in 2005.
Despite being a mainstay of display technology for decades, CRT-based computer monitors and televisions are now virtually a dead technology. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets once LCDs fell in price, their lower bulk and weight and their ability to be wall-mounted counting in their favor.
Some industries still use CRTs because it is either too much effort, downtime, and/or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates. They are also used in some military equipment for similar reasons.
As of 2022, at least one company manufactures new CRTs for these markets.
A popular consumer usage of CRTs is for retrogaming. Some games are impossible to play without CRT display hardware, and some games play better. Light guns only work on CRTs because they depend on the progressive timing properties of CRTs.
The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.
The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties.
The optical properties of the glass used on the screen affect color reproduction and purity in color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with 546 nm wavelength light and a 10.16 mm-thick screen. Transmittance goes down with increasing thickness. Standard transmittances for color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast, but they put more stress on the electron gun, which must drive a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside.
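The drop in transmittance with thickness can be made concrete with an exponential-attenuation (Beer–Lambert) model; the model choice and the sample numbers below are assumptions for illustration, since the text only states the measurement conditions (546 nm light, 10.16 mm reference thickness).

```python
# Illustrative Beer-Lambert model of screen-glass transmittance vs
# thickness: T(d) = exp(-k * d), calibrated from a transmittance measured
# at the 10.16 mm reference thickness stated above. The model choice and
# derived values are assumptions, not figures from the article.
import math

REF_THICKNESS_MM = 10.16

def attenuation_coefficient(t_ref: float) -> float:
    """Solve T(ref) = exp(-k * ref) for k, given a measured transmittance."""
    return -math.log(t_ref) / REF_THICKNESS_MM

def transmittance(thickness_mm: float, t_ref: float) -> float:
    k = attenuation_coefficient(t_ref)
    return math.exp(-k * thickness_mm)

# An 86%-transmittance glass at the 10.16 mm reference point drops to
# roughly 82% at 13 mm: thicker glass toward the screen edge passes less light.
print(round(transmittance(13.0, t_ref=0.86), 3))
```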
The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, which were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used, and had transmittances of 42% or 30%. Purity means ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen), while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.
The funnel and the neck are made of leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made out of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays. Another glass formulation uses 2-3% of lead on the screen.
Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, although usually the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color CRTs, maximum voltages are often 24 to 32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or approximately 1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.
The leaded glass in the funnels of CRTs may contain 21 to 25% lead oxide (PbO), the neck may contain 30 to 40% lead oxide, and the screen may contain 12% barium oxide and 12% strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12-inch CRTs contain 0.5 kg of lead in total, while 32-inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s.
Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.
The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT and significantly reducing the time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, which used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating is at a positive voltage (the anode voltage, which can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The capacitor formed by the funnel has a value of 0.005–0.01 µF, rated for the anode voltage it normally carries. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this, CRTs have to be discharged before handling to prevent injury.
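The stored energy explains why discharging matters. A minimal sketch of the standard capacitor-energy formula, using assumed values from the ranges above (0.005–0.01 µF) and a typical 25 kV anode voltage:

```python
def stored_energy_joules(capacitance_f, anode_voltage_v):
    """Energy stored in the capacitor formed by the funnel coatings,
    E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * anode_voltage_v ** 2

# Assumed values within the ranges given above: 0.005-0.01 uF, 25 kV anode
for c_uf in (0.005, 0.01):
    e = stored_energy_joules(c_uf * 1e-6, 25_000)
    print(f"C = {c_uf:.3f} uF at 25 kV -> {e:.2f} J stored")
```

Even a few joules at tens of kilovolts is enough to cause a painful and potentially dangerous shock, which is why dielectric absorption (the charge slowly returning after a discharge) also matters in practice.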
The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs, and 110°, which was the standard in larger TV CRTs, with 120° or 125° being used in slim CRTs made from 2001 to 2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959 and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.
The size of the screen of a CRT is measured in two ways: the size of the screen, or face diagonal, and the viewable image size/area, or viewable screen diagonal, which is the part of the screen coated with phosphor. The size of the screen is the viewable image size plus its black edges, which are not coated with phosphor. The viewable image may be perfectly square or rectangular with black, curved edges (as in black stripe CRTs) or with black, truly flat edges (as in Flatron CRTs); alternatively, the edges of the image may follow the curvature of the edges of the CRT, which may be the case in CRTs with or without black edges and with curved edges. Black stripe CRTs were first made by Toshiba in 1972.
Small CRTs below 3 inches were made for handheld televisions such as the MTV-1 and for viewfinders in camcorders. These may have no black edges, though their edges are truly flat.
Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel is thinner than on the screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.
The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.
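As a rough illustration of the voltage multiplier, here is a sketch of a half-wave Cockcroft–Walton cascade using the textbook ideal-output and droop formulas; the pulse amplitude, stage count, capacitance and load current are assumed values, not taken from any particular chassis:

```python
def cw_output_volts(v_peak, stages, i_load_a=0.0, f_hz=15_625, c_f=2e-9):
    """Half-wave Cockcroft-Walton multiplier: each stage ideally adds
    2 * v_peak; under load, the output droops by roughly
    (I / (f * C)) * (2n^3/3 + n^2/2 + n/6)."""
    n = stages
    ideal = 2 * n * v_peak
    droop = (i_load_a / (f_hz * c_f)) * (2 * n**3 / 3 + n**2 / 2 + n / 6)
    return ideal - droop

# Hypothetical figures: 8 kV pulses at the line frequency, 2 stages, 1 mA beam
print(f"{cw_output_volts(8_000, 2, i_load_a=1e-3):,.0f} V at the anode")
```

The droop term shows why the funnel capacitor matters: the multiplier's output sags under beam current, and the large capacitance formed by the funnel coatings helps smooth it.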
For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel, whereas historically aquadag was painted onto the interior of monochrome CRTs.
The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.
The anode cap connection in modern CRTs must be able to handle up to 55–60 kV, depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded in the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.
The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of three nested cups: the outermost cup made of a nickel-chromium-iron alloy containing 40 to 49% nickel and 3 to 6% chromium to make the button easy to fuse to the funnel glass, a first inner cup made of thick, inexpensive iron to shield against x-rays, and a second, innermost cup, also made of iron or another electrically conductive metal, to connect to the clip. The cups must be sufficiently heat resistant and have thermal expansion coefficients similar to that of the funnel glass to withstand being fused to it. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while it is being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.
The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the current through the primary winding is switched off, the flyback's magnetic field quickly collapses, inducing a high voltage in its windings; the faster the magnetic field collapses, the higher the induced voltage. A capacitor (the retrace timing capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.
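The induced voltage follows directly from the inductor equation. A minimal sketch, with a hypothetical primary inductance, current, and retrace time chosen only for illustration:

```python
def induced_volts(inductance_h, delta_current_a, collapse_time_s):
    """Voltage induced in a winding by a collapsing magnetic field,
    V = L * dI/dt; the retrace timing capacitor slows the collapse,
    holding the peak to a designed value."""
    return inductance_h * delta_current_a / collapse_time_s

# Hypothetical primary: 1 mH carrying 4 A, collapsing over a 12 us retrace
print(f"{induced_volts(1e-3, 4.0, 12e-6):,.0f} V peak on the primary")
```

The secondary windings then step this pulse up by their turns ratio before the multiplier produces the final anode voltage.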
The design of the high voltage power supply in a product using a CRT has an influence on the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product, such as a TV set, uses an unregulated high voltage power supply, meaning that the anode and focus voltages fall with increasing electron current when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT displays moderately bright images: with dark images, the lower electron beam current counteracts the higher anode voltage, and with bright images, the lower anode voltage counteracts the higher beam current. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.
The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.
The electron gun has a hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5 to 2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode, typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one each for red, green and blue. The cathode is coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it, and has its own separate electrical connection.
There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons, which may produce an image with a bright red, green or blue tint with retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.
The cathode is a layer of barium oxide coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation occurs during evacuation of the CRT (at the same time a vacuum is formed). After activation, the oxide can be damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800–1,000 °C, at which point it starts shedding electrons.
Since it is a hot cathode, it is prone to cathode poisoning, the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely, and causing focus and intensity to be affected by the frequency of the video signal, preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself, and react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode; during activation, these metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs, since there are three cathodes, one each for red, green and blue, one or more poisoned cathodes may cause the partial or complete loss of one or more colors, tinting the image. CRTs can wear or burn out due to cathode poisoning, which is accelerated by increased cathode current (overdriving). The ion layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant to delay cathode poisoning, extending the life of the cathode by up to 15%.
The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only part of the cathode emits electrons, unless the CRT displays images with parts at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all; in such a case, the CRT displays a poor gamma characteristic.
The second (screen) grid of the gun (G2) accelerates the electrons towards the screen using several hundred DC volts. A negative voltage is applied to the first (control) grid (G1) to converge the electron beam; in practice, G1 is a Wehnelt cylinder. Screen brightness is not controlled by varying the anode voltage or the electron beam current directly (they are never varied for this purpose), despite their influence on image brightness; rather, it is controlled by varying the voltage difference between the cathode and the G1 control grid. A third grid (G3) electrostatically focuses the electron beam before it is deflected and accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an Einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage of tens of kilovolts, so a high voltage (≈600 to 8,000 volt) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which offers higher performance than an Einzel lens. Alternatively, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of tens of kilovolts; however, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections in the neck of the CRT.
The cutoff voltage is the voltage, applied to G1, that creates black on the screen by making the image created by the electron beam disappear. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grids G1 and G2 across all three guns, increasing image brightness and simplifying adjustment, since on such CRTs there is a single cutoff voltage for all three guns (as G1 is shared across all guns), but this places additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs, video is fed to the gun by varying the voltage on the first control grid.
During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from leaving the cathode. This is known as blanking (see vertical blanking interval and horizontal blanking interval). Incorrect biasing can lead to visible retrace lines on one or more colors, tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (on-screen display) into the video stream fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide the CRT with a video signal having a DC component, restoring the original brightness of different parts of the image.
The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmatism controls. Astigmatism controls are both magnetic and electronic (dynamic); the magnetic controls do most of the work, while the electronic ones are used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, especially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.
After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.
The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color), and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and large voltage variations at the same time: higher resolutions and refresh rates need higher bandwidths (the speed at which voltage can be varied, and thus the rate of switching between black and white), while higher contrast ratios need larger voltage variations (amplitude) for lower black and higher white levels. A bandwidth of 30 MHz can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun, since the red phosphor emits the least amount of light.
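The bandwidth figures above can be roughly reproduced from first principles. A sketch, assuming a pixel clock of active pixels times refresh rate times a ~40% blanking overhead (the overhead factor is an assumption), and the usual rule that one full signal cycle spans a pair of pixels:

```python
def video_bandwidth_mhz(h_px, v_lines, refresh_hz, blanking_overhead=1.4):
    """Rough video-amplifier bandwidth for a display mode: the pixel
    clock is active pixels * refresh * blanking overhead, and one full
    cycle of the video signal spans two pixels (a black/white pair),
    so the required analog bandwidth is about half the pixel clock."""
    pixel_clock_hz = h_px * v_lines * refresh_hz * blanking_overhead
    return pixel_clock_hz / 2 / 1e6

# Results roughly match the figures above (20 MHz ~ 600 lines, ~30+ MHz for 720p):
print(f"800x600@60Hz:  ~{video_bandwidth_mhz(800, 600, 60):.0f} MHz")
print(f"1280x720@60Hz: ~{video_bandwidth_mhz(1280, 720, 60):.0f} MHz")
```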
CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).
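As an illustration of this gamma characteristic, a minimal power-law sketch; the exponent (≈2.5 is a commonly cited figure for CRTs) and the scaling constant are illustrative values, not taken from the text:

```python
def beam_current_a(v_drive, gamma=2.5, k=1e-8):
    """Sketch of the triode transfer characteristic: beam current rises
    roughly as the gamma-th power of the drive voltage above cutoff.
    k and gamma are illustrative values."""
    return k * max(0.0, v_drive) ** gamma

# Doubling the drive voltage far more than doubles the beam current:
for v in (20, 40, 80, 120):
    print(f"{v:3d} V above cutoff -> {beam_current_a(v) * 1e3:6.3f} mA")
```

This nonlinearity is why video signals are gamma-corrected at the source: the encoding roughly cancels the tube's power-law response.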
There are two types of deflection: magnetic and electrostatic. Magnetic deflection is usually used in TVs and monitors, as it allows for higher deflection angles (and hence shallower CRTs) and higher deflection power (which allows for higher electron beam current and hence brighter images), while avoiding the deflection voltages of up to 2,000 volts needed by electrostatic deflection. Oscilloscopes often use electrostatic deflection, since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.
CRTs that use magnetic deflection may use a yoke that has two pairs of deflection coils: one pair for vertical and another for horizontal deflection. The yoke can be bonded (integral) or removable. Bonded yokes used glue or a plastic to attach the yoke to the area between the neck and the funnel of the CRT, while removable yokes are clamped. The yoke generates heat, whose removal is essential since the conductivity of glass goes up with increasing temperature, and the glass needs to be insulating for the CRT to remain usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force, as well as the magnetized rings used to align or adjust the electron beams in color CRTs (the color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector; the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.
The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits, horizontal and vertical, which are similar except that the horizontal circuit runs at a much higher frequency (the horizontal scan rate) of 15 to 240 kHz, depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT, so a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently: the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO), while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run are in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.
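The horizontal scan rate can be estimated directly from the mode being displayed. A minimal sketch, assuming a typical ~5% vertical blanking fraction (an assumed figure):

```python
def horizontal_scan_rate_khz(v_lines, refresh_hz, v_blanking=0.05):
    """The horizontal circuit must sweep once per visible line, plus a
    few blanked lines per frame: f_h = lines * (1 + blanking) * refresh.
    The ~5% vertical blanking fraction is an assumed, typical figure."""
    return v_lines * (1 + v_blanking) * refresh_hz / 1e3

print(f"480 lines @ 60 Hz:  ~{horizontal_scan_rate_khz(480, 60):.1f} kHz")
print(f"1024 lines @ 85 Hz: ~{horizontal_scan_rate_khz(1024, 85):.1f} kHz")
```

The second example shows why high-resolution, high-refresh monitor CRTs needed horizontal scan rates approaching 100 kHz and beyond.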
Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require approximately 24 volts while the horizontal deflection coils require approx. 120 volts to operate.
The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.
Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair. For example, applying a voltage to the upper plate of the vertical deflection pair while keeping the voltage on the bottom plate at 0 volts will deflect the electron beam towards the upper part of the screen; increasing the voltage on the upper plate while keeping the bottom plate at 0 will deflect the beam to a higher point on the screen (a higher deflection angle). The same applies to the horizontal deflection plates. Increasing the length of the plates in a pair and their proximity to each other also increases the deflection angle.
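This behavior is captured by the classic small-angle deflection formula from electron optics. A sketch with illustrative plate geometry and voltages:

```python
def deflection_mm(v_defl, v_accel, plate_len_m, plate_sep_m, screen_dist_m):
    """Classic electrostatic deflection formula
    y = (L * l * Vd) / (2 * d * Va), where l is the plate length, d the
    plate separation, L the distance from the plate center to the screen,
    Vd the deflection voltage and Va the accelerating voltage."""
    return (screen_dist_m * plate_len_m * v_defl) / (2 * plate_sep_m * v_accel) * 1e3

# Illustrative values; doubling the plate length doubles the deflection:
print(f"{deflection_mm(100, 2_000, 0.02, 0.005, 0.25):.1f} mm")
print(f"{deflection_mm(100, 2_000, 0.04, 0.005, 0.25):.1f} mm (plates 2x longer)")
```

The formula also shows why electrostatic CRTs use comparatively low accelerating voltages: sensitivity falls as the anode voltage rises.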
Burn-in is when images are physically "burned" into the screen of the CRT; this occurs due to degradation of the phosphors due to prolonged electron bombardment of the phosphors, and happens when a fixed image or logo is left for too long on the screen, causing it to appear as a "ghost" image or, in severe cases, also when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays.
CRTs are evacuated or exhausted (a vacuum is formed) inside an oven at approximately 375–475 °C, in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT while decomposing others, such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them being drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C; alternatively, the CRT was kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT was heated during or after evacuation, and the heat may have been used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump; formerly, mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters, and condenses on the inside of the CRT, forming a layer that contains the trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1,000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems. The vacuum inside of a CRT causes atmospheric pressure to exert (in a 27-inch CRT) a total force of about 5,800 pounds-force (2,600 kgf).
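The quoted force is straightforward pressure-times-area arithmetic. A sketch approximating the faceplate as a flat 4:3 rectangle (the tube's curved surfaces account for the remainder of the total quoted above):

```python
def faceplate_force_lbf(diag_in, aspect=(4, 3), p_psi=14.7):
    """Atmospheric force on a flat faceplate: pressure times area,
    approximating the screen as a 4:3 rectangle of the given diagonal."""
    hyp = (aspect[0] ** 2 + aspect[1] ** 2) ** 0.5
    width = diag_in * aspect[0] / hyp
    height = diag_in * aspect[1] / hyp
    return p_psi * width * height

# The faceplate of a 27-inch CRT alone bears over 5,000 lbf:
print(f"{faceplate_force_lbf(27):,.0f} lbf")
```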
CRTs used to be rebuilt, that is, repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, and so on. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worthwhile. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, located in France, closed in 2013.
Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors in CRTs emit secondary electrons when struck by the electron beam, as they are inside the vacuum of the CRT. The secondary electrons are collected by the anode of the CRT; they need to be collected to prevent charge from developing on the screen, which would repel the electron beam and reduce image brightness.
The phosphors used in CRTs often contain rare earth metals, replacing earlier, dimmer phosphors. Early red and green phosphors contained cadmium, and some black and white CRT phosphors also contained beryllium in the form of zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces; phosphors composed of smaller particles adhere more strongly. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam index tubes, while an example of an earlier phosphor is copper cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL, since its lower framerate leaves more time for the phosphors to decay. SMPTE-C phosphors were used in professional video monitors.
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side, used to reflect light forward, provide protection against ions (preventing ion burn of the phosphor by negative ions), manage heat generated by electrons colliding with the phosphor, prevent static build-up that could repel electrons from the screen, and form part of the anode, collecting the secondary electrons generated by the phosphors after being hit by the electron beam and providing those electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the phosphor layer, allowing the aluminum coating to have a uniform surface and preventing it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to create an aluminum coating with holes that allow the solvents to escape.
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming is a phenomenon found on some CRT televisions in which parts of the shadow mask become heated. In televisions that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron gun to hit the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns.
During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating of the shadow mask and warping (blooming) due to thermal expansion caused by the increased electron beam current. The shadow mask is usually made of steel, but it can be made of Invar (a low-thermal-expansion nickel-iron alloy), as Invar withstands two to three times more current than conventional masks without noticeable warping, while making higher resolution CRTs easier to achieve. Coatings that dissipate heat may be applied to the shadow mask to limit blooming, in a process called blackening.
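The benefit of Invar can be seen from the linear-expansion formula. A sketch with textbook expansion coefficients; the mask width and local temperature rise are illustrative values chosen to roughly reproduce the ~100 micron figure above:

```python
def expansion_um(length_mm, delta_t_c, alpha_per_k):
    """Linear thermal expansion: dL = alpha * L * dT (result in microns)."""
    return alpha_per_k * length_mm * delta_t_c * 1e3

# Textbook coefficients; 400 mm mask width and ~20 C local rise are assumed:
for name, alpha in (("steel", 12e-6), ("Invar", 1.2e-6)):
    print(f"{name}: {expansion_um(400, 20, alpha):.0f} um")
```

Invar's order-of-magnitude lower coefficient is what keeps the mask holes aligned with the phosphor triads under beam heating.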
Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is attached to the screen using metal pieces, or using a rail or frame that is fused to the funnel or to the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, as used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness, since the shadow mask blocks most of the electron beam. Slot masks, and especially aperture grilles, do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam, while aperture grilles allow more electrons to pass through.
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation, since the electrons have higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.
Size is limited by anode voltage, as a larger CRT would require a higher anode voltage, and hence higher dielectric strength, to prevent arcing (corona discharge) and the electrical losses and ozone generation it causes, without sacrificing image brightness. The weight of the CRT, which originates from the thick glass needed to safely sustain a vacuum, imposes a practical limit on the size of a CRT. The 43-inch Sony PVM-4300 CRT monitor weighs 440 pounds (200 kg). Smaller CRTs weigh significantly less: for example, 32-inch CRTs weigh up to 163 pounds (74 kg) and 19-inch CRTs up to 60 pounds (27 kg). For comparison, a 32-inch flat panel TV weighs only approximately 18 pounds (8.2 kg) and a 19-inch flat panel TV 6.5 pounds (2.9 kg).
Shadow masks become more difficult to make with increasing resolution and size.
At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam at a higher angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, power consumption of the yoke must go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 watts to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and/or possibly cause the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, requiring additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer, but they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and on the funnel. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
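The quoted wattages amount to a doubling of yoke power for every additional 30° of deflection. A small sketch of that fit (it is only a fit to the figures above, not a physical model):

```python
def yoke_power_w(angle_deg, p0_w=40.0, a0_deg=90.0, doubling_deg=30.0):
    """Fit to the figures above: yoke power doubles for every extra 30
    degrees of deflection (40 W at 90, 80 W at 120, 160 W at 150)."""
    return p0_w * 2 ** ((angle_deg - a0_deg) / doubling_deg)

for angle in (90, 110, 120, 150):
    print(f"{angle:3d} deg -> {yoke_power_w(angle):5.1f} W")
```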
On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. Due to these reasons, CRTs are sometimes preferred by PC gamers in spite of their bulk, weight and heat generation.
CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes were of higher resolution and when used in computer monitors sometimes had adjustable overscan, or sometimes underscan. Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck, and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, previously necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat, and absorbing electrons, providing a return path for them. Previously, funnels were coated on the inside with aquadag, used because it can be applied like paint, and the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness, since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen.
The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals; a hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously, monochrome CRTs used ion traps, which required magnets; the magnet was used to deflect the electrons away from the harder-to-deflect ions, letting the electrons through while letting the ions collide into a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen.
The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.
Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs).
Color CRTs have three electron guns, one for each primary color, (red, green and blue) arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). (The triangular configuration is often called "delta-gun", based on its relation to the shape of the Greek letter delta Δ.) The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad, and is usually about 1/2 inch behind the screen.
Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).
The funnel is coated with aquadag on both sides, while the screen has a separate aluminum coating, applied in a vacuum after the phosphor coating, facing the electron gun. The aluminum coating protects the phosphor from ions; absorbs secondary electrons, providing them with a return path and preventing them from electrostatically charging the screen, which would repel electrons and reduce image brightness; reflects the light from the phosphors forwards; and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy, due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking of the frit seal between the faceplate and the funnel of the CRT at approximately 435 °C.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969, and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another since the black matrix isolates the phosphor dots from one another, so part of the electron beam touches the black matrix. This is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness.
Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. This technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65 to 88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate, or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen glass by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process; the edges of the screen and funnel of a color CRT are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435 to 475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed in the CRT.
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.
The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one ring has two poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.
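The vector arithmetic described above is easy to illustrate. A sketch treating each ring of a pair as an equal-magnitude field vector; rotating the rings apart trades strength for direction:

```python
import math

def pair_field(magnitude, theta1_deg, theta2_deg):
    """Resultant of two equal-strength ring-magnet fields at angles
    theta1 and theta2: the vector sum of the two fields, as described
    in the text."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    bx = magnitude * (math.cos(t1) + math.cos(t2))
    by = magnitude * (math.sin(t1) + math.sin(t2))
    return math.hypot(bx, by), math.degrees(math.atan2(by, bx))

# Aligned rings reinforce (maximum strength); opposed rings cancel (zero):
for offset in (0, 60, 120, 180):
    strength, direction = pair_field(1.0, 0, offset)
    print(f"offset {offset:3d} deg -> strength {strength:.2f}, "
          f"direction {direction:5.1f} deg")
```

Rotating both rings together then steers the resultant vector without changing its magnitude, which is exactly the two degrees of freedom the adjustment needs.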
On some CRTs, additional adjustable magnets, fixed in place once set, are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively and requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. The deflection yoke contains convergence coils, a set of two per color wound on the same core, to which the convergence signals are applied: six coils in three groups of two, with one coil per group for horizontal convergence correction and the other for vertical convergence correction, and with the groups spaced 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical; the correction circuits also compensate for electron beam defocusing and astigmatism, and the non-spherical screen leads to geometry problems which may likewise be corrected by circuitry. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signal, create the signal necessary for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may also be accomplished using electrostatic quadrupole fields in the electron gun. With dynamic convergence, the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils bend its path to conform to the screen.
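As a rough illustration of the waveform relationships described above, the sketch below derives a parabolic correction waveform from a sawtooth over one scan period. This is a schematic model only, not the actual circuit; the tilt factor k and the normalization are assumptions:

```python
import numpy as np

# One vertical scan period, time normalized to [0, 1).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

# Sawtooth centered on mid-scan: -1 at the top of the screen, +1 at the bottom.
sawtooth = 2.0 * t - 1.0

# Squaring the sawtooth yields a parabola: zero correction at screen
# center, maximum correction at the top and bottom edges.
parabola = sawtooth ** 2

# Mixing some sawtooth back in tilts the parabola, so the top and bottom
# edges can receive different amounts of correction (k is a tilt factor).
k = 0.3
convergence_drive = parabola + k * sawtooth
```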
The convergence signal may instead be a sawtooth signal with a slight sine-wave appearance; the sine-wave component is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine-wave component of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-sync monitors must have several sets of s-capacitors, one for each refresh rate.
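A minimal numeric sketch of this "S-correction", using an assumed sine-shaping depth parameter in place of a real s-capacitor network, shows how the shaped waveform slows the beam near the screen edges:

```python
import numpy as np

# One horizontal scan line, time normalized to [0, 1).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
ramp = 2.0 * t - 1.0   # ideal linear sawtooth deflection drive, -1..+1

# The series s-capacitor superimposes a sine-like component, flattening
# the waveform near its extremes (the "S" shape); in a real monitor the
# amount of shaping is set by the capacitor value, matched to the scan rate.
depth = 1.0
corrected = (1 - depth) * ramp + depth * np.sin(ramp * np.pi / 2)

# Beam speed is the derivative: with correction it is lowest at the
# screen edges, compensating for the flat faceplate.
speed = np.gradient(corrected, t)
print(speed[0], speed[len(speed) // 2], speed[-1])  # slow, fast, slow
```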
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing cost, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke, driven at a certain frequency, may also be used for dynamic convergence.
Dynamic color convergence and purity are among the main reasons why, until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
The guns are aligned with one another (converged) using convergence rings placed around the neck, one ring per gun; each ring has north and south poles. There are four sets of rings: one to adjust RGB convergence, a second to adjust red and blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out rather than rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity", as the electrons no longer follow only their intended paths and some will hit phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint on parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. Earth's magnetic field may also affect the color purity of the CRT; because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by Earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, on the inside of the funnel of the CRT.
Color CRT displays in television sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
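The decaying alternating field can be modeled as a mains-frequency sine wave under an exponentially decaying envelope. The constants below (mains frequency, decay time constant) are illustrative assumptions, not values from any particular set:

```python
import math

# Decaying alternating degauss field: a mains-frequency sine wave under
# an exponentially decaying envelope (both constants are illustrative).
MAINS_HZ = 50.0   # 60.0 in 60 Hz regions
TAU_S = 0.5       # envelope time constant, on the order of a second

def degauss_field(b0, t):
    """Field, in units of the initial amplitude b0, t seconds after power-up."""
    return b0 * math.exp(-t / TAU_S) * math.sin(2 * math.pi * MAINS_HZ * t)

# Each polarity reversal drives the shadow mask around a smaller hysteresis
# loop, leaving it demagnetized once the envelope has decayed away.
for t in (0.005, 0.503, 2.001):
    print(f"t = {t:5.3f} s  field = {degauss_field(1.0, t):+.4f}")
```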
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller aperture-grille CRTs, the stripes maintain position by themselves, but larger ones require one or two crosswise (horizontal) support strips. The support wires block electrons, making the wires visible on screen. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with corresponding oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
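The resolution bound implied by dot pitch amounts to simple division. The sketch below uses a hypothetical 17-inch monitor with an assumed 320 mm viewable width and 0.25 mm dot pitch; it is only an approximation, since beam spot size and triad geometry also matter:

```python
# Approximate upper bound on addressable horizontal resolution from
# dot pitch; beam spot size and triad geometry also matter in practice.
def max_horizontal_pixels(viewable_width_mm, dot_pitch_mm):
    return viewable_width_mm / dot_pitch_mm

# Hypothetical 17-inch (4:3) monitor, ~320 mm viewable width, 0.25 mm pitch:
print(max_horizontal_pixels(320, 0.25))  # 1280.0 phosphor triads across
# Scanning at or beyond this figure produces moiré on a shadow-mask CRT.
```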
Projection CRTs were used in CRT projectors and CRT rear-projection televisions, and are usually small (7 to 9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 kV or 25 kV for a 5-inch or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten, or of barium and calcium aluminates, or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3 mA of normal cathodes, which makes projection CRTs bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings, just like color CRTs, to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings: one with two poles, one with four poles, and another with six poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
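The cooling requirement follows directly from beam power, which is anode voltage times beam current. Using the figures from the paragraph above (and, as an assumed comparison point, a conventional 0.3 mA cathode at a similar voltage):

```python
# Beam power delivered to the screen = anode voltage * beam current.
def beam_power_w(anode_kv, beam_ma):
    return (anode_kv * 1e3) * (beam_ma * 1e-3)

print(beam_power_w(27, 2.0))  # 5-inch projection CRT: 54 W into the faceplate
print(beam_power_w(25, 0.3))  # normal 0.3 mA cathode at similar voltage: 7.5 W
```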
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
The beam-index tube, also known as Uniray, Apple CRT or Indextron, was an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems and allowing for shallower CRTs with higher deflection angles. It also required a lower-voltage power supply for the final anode, since it did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made it immune to the earth's magnetic field, made degaussing unnecessary, and increased image brightness. It was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun, but with a screen carrying an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron), and with a photomultiplier tube or photodiode mounted on the side of the funnel, pointed towards the rear of the screen, to track the electron beam so that the phosphors could be activated separately from one another by the same beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor not covered by an aluminum layer. The design was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since some light emission by the phosphors was required at all times so the photodiodes could track the electron beam. The design allowed for single-tube color CRT projectors. Normally, CRT projectors use three CRTs, one for each color, because the high anode voltage and beam current generate so much heat that a shadow mask would be impractical and inefficient, warping under the heat produced (shadow masks absorb most of the electron beam, and hence most of the energy it carries). Using three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector required recalibration. A single CRT eliminated the need for calibration, but brightness was decreased since one CRT screen had to serve all three colors instead of each color having its own screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, since each tube has a single, uniform phosphor coating.
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased radius of curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays), which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. Flat CRTs pose a number of design challenges, particularly deflection: vertical deflection boosters are required to increase the current sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80 and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were placed to one side of the screen. The TV80 used electrostatic deflection, while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video doorbells.
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to rotate in a circular fashion. The screen often used two phosphor colors: a bright, short-persistence color that appeared only as the beam scanned the display, and a dim, long-persistence afterglow. After the beam moved on, the dimmer afterglow remained lit where the beam had struck, along with the radar targets "written" by the beam, until the beam struck the phosphor again.
In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Televisions use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15,000 volts. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) as well as improved sensitivity and spot size.
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
These are found in analog phosphor storage oscilloscopes, as distinct from digital storage oscilloscopes, which rely on solid-state digital memory to store the image.
Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.
When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a "potential relief" on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun, which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be "erased" by resupplying the external voltage to the mesh, restoring its constant potential. The time for which the image can be displayed is limited because, in practice, the flood gun slowly neutralizes the charge on the storage mesh. One way of allowing the image to be retained for longer is to temporarily turn off the flood gun; it is then possible for the image to be retained for several days. The majority of storage tubes allow a lower voltage to be applied to the storage mesh, which slowly restores the initial charge state; by varying this voltage, a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors were used in early computer-aided design systems and in some late-1970s to mid-1980s arcade games such as Asteroids. They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver; tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter; such meters later came to be used on higher-end tuners, once transistor sets lacked the high voltage required to drive the magic eye tube. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo was the trademark of a family of small specialized CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits, in a manner similar to that of the Charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexity of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their need for several supply voltages, including a high voltage, made them uncommon.
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology, called Diamond Vision, was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs have been used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires passed the electron beam current through the glass onto a sheet of paper, where the desired content was deposited as an electric charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge; the charged areas of the paper attracted the ink, forming the image.
In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The devices were demonstrated but never marketed.
Some CRT manufacturers, notably LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating slimmer tubes. Slimmer CRTs were marketed under the trade names Superslim, Ultraslim and Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG.Philips Displays). A 21-inch (53 cm) flat CRT has a 447.2-millimetre (17.61 in) depth; the depth of the Superslim was 352 millimetres (13.86 in) and that of the Ultraslim 295.7 millimetres (11.64 in).
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered not to be harmful. The Food and Drug Administration regulations in 21 CFR 1020.10 strictly limit, for instance, television receivers to 0.5 milliroentgens per hour at a distance of 5 cm (2 in) from any external surface; since 2007, most CRTs have emissions that fall well below this limit. Note that the roentgen is an outdated unit and does not account for dose absorption; the conversion rate is about 0.877 rem per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely) and watched TV for 2 hours a day, a 0.5 milliroentgen hourly rate would increase the viewer's yearly dose by about 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses exceed about 20,000 millirem.
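The yearly-dose estimate can be reproduced with a few lines of arithmetic; the worst-case assumption (the viewer absorbs the entire emission at the regulatory limit) is carried over from the text:

```python
# Worst-case yearly dose at the 21 CFR 1020.10 emission limit.
LIMIT_MR_PER_H = 0.5        # milliroentgen per hour at 5 cm
HOURS_PER_DAY = 2
REM_PER_ROENTGEN = 0.877    # approximate exposure-to-dose conversion

exposure_mr = LIMIT_MR_PER_H * HOURS_PER_DAY * 365    # ~365 mR/year
dose_mrem = exposure_mr * REM_PER_ROENTGEN            # ~320 mrem/year
print(round(exposure_mr), round(dose_mrem))
# Compare: average US background dose is about 310 mrem/year.
```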
The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.
Concerns about x-rays emitted by CRTs began in 1967, when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused television industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill that became the 1968 Radiation Control for Health and Safety Act. TV set owners were advised to always keep a distance of at least 6 feet from the screen, and to avoid "prolonged exposure" at the sides, rear or underneath a set, since most of the radiation was found to be directed downwards. Owners were also told not to modify their set's internals, to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. Two New York congressmen once proposed forcing TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". The FDA eventually began regulating radiation emissions from all electronic products in the US.
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) has used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs, as most televisions run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL televisions that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. 100 Hz PAL operation was often achieved using interleaved scanning, dividing the circuit and scan into two beams of 50 Hz. CRTs for sonar or radar, and other non-computer CRTs, may have long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.
50 Hz/60 Hz CRTs used for television operate with horizontal scanning frequencies of 15,750 Hz and 15,734.27 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT television. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer, but it can also be created by movement of the deflection coils, yoke or ferrite beads.
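These line frequencies follow from the number of lines per frame multiplied by the frame rate, as this quick check shows:

```python
# Horizontal scan frequency = lines per frame * frame rate.
print(625 * 25)            # PAL: 15625 Hz
print(525 * 30)            # monochrome NTSC: 15750 Hz
print(525 * 30000 / 1001)  # color NTSC: ~15734.27 Hz
```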
This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).
High vacuum inside glass-walled cathode-ray tubes permits electron beams to fly freely—without colliding into molecules of air or other gas. If the glass is damaged, atmospheric pressure can collapse the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid personal injury.
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm².
Older CRTs were mounted to the TV set using a frame; newer CRTs are mounted using the rim band itself. The band is tensioned by heating it, then mounting it on the CRT; as the band cools, it shrinks, putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode.
The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.
Alternatively, the compression caused by the rim band may be used to make any cracks in the screen propagate laterally at high speed, so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel walls are thinner than the screen. Fully penetrating the funnel first lets air enter the CRT from a short distance behind the screen, preventing an implosion by ensuring that, by the time the cracks fully penetrate the screen and it breaks, the CRT already contains air.
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger-screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time, which can dissipate suddenly through any path to ground, such as an inattentive person handling a capacitor discharge lead. An average monochrome CRT may use 1 to 1.5 kV of anode voltage per inch.
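The hazard can be quantified roughly. The sketch below applies the 1 to 1.5 kV-per-inch rule of thumb from this paragraph and, as an assumed figure, a funnel capacitance of about 0.01 µF to estimate the energy retained at a 25 kV anode voltage:

```python
# Anode voltage estimate from the ~1 to 1.5 kV-per-inch rule of thumb.
def anode_kv_range(screen_inches):
    return screen_inches * 1.0, screen_inches * 1.5

print(anode_kv_range(12))   # (12.0, 18.0) kV for a 12-inch monochrome CRT

# Energy retained by the funnel capacitance after switch-off,
# assuming ~0.01 uF charged to 25 kV (E = 1/2 * C * V^2):
c_farads, v_volts = 0.01e-6, 25e3
print(0.5 * c_farads * v_volts**2)  # ~3.1 J, a hazardous stored charge
```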
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.
Due to the toxins contained in CRT monitors, the United States Environmental Protection Agency created rules in October 2001 stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.
As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have a relatively high concentration of lead and of phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste", but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information, such as address and phone number, to verify that the CRTs come from a California source in order to participate in the CRT Recycling Payment System.
In Europe, disposal of CRT televisions and monitors is covered by the WEEE Directive.
Multiple methods have been proposed for the recycling of CRT glass, involving thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare-earth metals; a CRT contains about 7 g of phosphor.
The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires, or a resistively heated nichrome wire.
Leaded CRT glass was sold to be remelted into other CRTs, or broken down and used in road construction, tiles, concrete, cement bricks and fiberglass insulation, or as flux in metals smelting.
A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than recycled.
Applying CRT in different display-purpose:
Historical aspects:
Safety and precautions: | [
{
"paragraph_id": 0,
"text": "A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms (oscilloscope), pictures (television set, computer monitor), radar targets, or other phenomena. A CRT on a television set is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In CRT television sets and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and televisions the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A CRT is a glass envelope which is deep (i.e., long from front screen face to rear end), heavy, and fragile. The interior is evacuated to 0.01 pascals (1×10 atm) to 0.1 micropascals (1×10 atm) or less, to facilitate the free flight of electrons from the gun(s) to the tube's face without scattering due to collisions with air molecules. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. CRTs make up most of the weight of CRT TVs and computer monitors.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since the mid–late 2000's, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays which are cheaper to manufacture and run, as well as significantly lighter and less bulky. Flat-panel displays can also be made in very large sizes whereas 40 in (100 cm) to 45 in (110 cm) was about the largest size of a CRT.",
"title": ""
},
{
"paragraph_id": 4,
"text": "A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the charge-mass-ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first \"subatomic particles\", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891. The earliest version of the CRT was known as the \"Braun tube\", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century television.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how \"distant electric vision\" could be achieved by using a cathode-ray tube (or \"Braun\" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1926, Kenjiro Takayanagi demonstrated a CRT television receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. In 1927, Philo Farnsworth created a television prototype. The CRT was named in 1929 by inventor Vladimir K. Zworykin. RCA was granted a trademark for the term (for its cathode-ray tube) in 1932; it voluntarily released the term to the public domain in 1950.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of television.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first commercially made electronic television sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass-produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "1968 marked the release of Sony Trinitron brand with the model KV-1310, which was based on Aperture Grille technology. It was acclaimed to have improved the output brightness. The Trinitron screen was identical with its upright cylindrical shape due to its unique triple cathode single gun construction.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1990, the first CRTs with HD resolution were released to the market by Sony.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In the mid-1990s, some 160 million CRTs were made per year.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased in 2015. CRT TVs stopped being made around the same time.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 2015, several CRT manufacturers were convicted in the US for price fixing. The same occurred in Canada in 2018.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Beginning in the late 90s to the early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004, Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004 and LCD TV sales started exceeding those of CRTs in some markets in 2005.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Despite being a mainstay of display technology for decades, CRT-based computer monitors and televisions are now virtually a dead technology. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as pluses.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Some industries still use CRTs because it is either too much effort, downtime, and/or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates. They are also used in some military equipment for similar reasons.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "As of 2022, at least one company manufactures new CRTs for these markets.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "A popular consumer usage of CRTs is for retrogaming. Some games are impossible to play without CRT display hardware, and some games play better. Light guns only work on CRTs because they depend on the progressive timing properties of CRTs.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.",
"title": "Construction"
},
{
"paragraph_id": 30,
"text": "The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties.",
"title": "Construction"
},
{
"paragraph_id": 31,
"text": "The optical properties of the glass used on the screen affects color reproduction and purity in Color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546 nm wavelength light, and a 10.16mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for Color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside.",
"title": "Construction"
},
{
"paragraph_id": 32,
"text": "The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for Color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with Neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used and had transmittances of 42% or 30%. Purity is ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen) while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.",
"title": "Construction"
},
{
"paragraph_id": 33,
"text": "CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.",
"title": "Construction"
},
{
"paragraph_id": 34,
"text": "The funnel and the neck are made of leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made out of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays. Another glass formulation uses 2-3% of lead on the screen.",
"title": "Construction"
},
{
"paragraph_id": 35,
"text": "Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, usually the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24 to 32 kV, while for monochrome it is usually 21 or 24.5kV, limiting the size of monochrome CRTs to 21 inches, or approx. 1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.",
"title": "Construction"
},
{
"paragraph_id": 36,
"text": "The leaded glass in the funnels of CRTs may contain 21 to 25% of lead oxide (PbO), The neck may contain 30 to 40% of lead oxide, and the screen may contain 12% of barium oxide, and 12% of strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s.",
"title": "Construction"
},
{
"paragraph_id": 37,
"text": "Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.",
"title": "Construction"
},
{
"paragraph_id": 38,
"text": "The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage that can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The value of the capacitor formed by the funnel is .005-.01uF, although at the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this CRTs have to be discharged before handling to prevent injury.",
"title": "Construction"
},
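{
"paragraph_id": 38.1,
"text": "To illustrate why discharging matters, a minimal Python sketch of the stored energy E = 1/2 CV^2, using the capacitance range quoted above and an assumed 25 kV anode voltage (the function name is hypothetical):\n\n# Energy stored in the funnel capacitance (E = 1/2 * C * V^2).\ndef stored_energy_joules(capacitance_farads, voltage_volts):\n    return 0.5 * capacitance_farads * voltage_volts ** 2\n\n# 0.01 uF charged to an assumed 25 kV anode voltage:\nprint(stored_energy_joules(0.01e-6, 25e3))  # ~3.1 J, enough to cause a painful shock",
"title": "Construction"
},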
{
"paragraph_id": 39,
"text": "The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.",
"title": "Construction"
},
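{
"paragraph_id": 39.1,
"text": "The relation between deflection angle and depth can be sketched with simple trigonometry: the distance from the deflection point to the screen is roughly half the screen diagonal divided by the tangent of half the full deflection angle. A minimal Python sketch, ignoring the neck and electron gun length and screen curvature (a rough illustration, not a design formula):\n\nimport math\n\ndef bulb_depth_inches(diagonal_inches, deflection_angle_deg):\n    # Depth from the deflection center to the screen for a given full angle.\n    half_angle = math.radians(deflection_angle_deg / 2)\n    return (diagonal_inches / 2) / math.tan(half_angle)\n\nfor angle in (90, 110, 125):\n    print(angle, round(bulb_depth_inches(27, angle), 1))\n# 90 -> 13.5 in, 110 -> 9.5 in, 125 -> 7.0 in: wider angles give shallower CRTs",
"title": "Construction"
},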
{
"paragraph_id": 40,
"text": "The size of the screen of a CRT is measured in two ways: the size of the screen or the face diagonal, and the viewable image size/area or viewable screen diagonal, which is the part of the screen with phosphor. The size of the screen is the viewable image size plus its black edges which are not coated with phosphor. The viewable image may be perfectly square or rectangular while the edges of the CRT are black and have a curvature (such as in black stripe CRTs) or the edges may be black and truly flat (such as in Flatron CRTs), or the edges of the image may follow the curvature of the edges of the CRT, which may be the case in CRTs without and with black edges and curved edges. Black stripe CRTs were first made by Toshiba in 1972.",
"title": "Construction"
},
{
"paragraph_id": 41,
"text": "Small CRTs below 3 inches were made for handheld televisions such as the MTV-1 and viewfinders in camcorders. In these, there may be no black edges, that are however truly flat.",
"title": "Construction"
},
{
"paragraph_id": 42,
"text": "Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel is thinner than on the screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.",
"title": "Construction"
},
{
"paragraph_id": 43,
"text": "The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.",
"title": "Construction"
},
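{
"paragraph_id": 43.1,
"text": "As a sketch of the voltage multiplier idea: an idealized, unloaded Cockcroft–Walton multiplier adds roughly twice the peak input voltage per stage. The stage count and flyback pulse amplitude below are assumptions for illustration, not figures from a real chassis:\n\ndef cw_output_volts(stages, peak_input_volts):\n    # Ideal unloaded Cockcroft-Walton output: about 2 * N * V_peak.\n    return 2 * stages * peak_input_volts\n\nprint(cw_output_volts(3, 5000))  # three stages from an assumed 5 kV pulse -> ~30 kV",
"title": "Construction"
},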
{
"paragraph_id": 44,
"text": "For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; Some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs.",
"title": "Construction"
},
{
"paragraph_id": 45,
"text": "The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.",
"title": "Construction"
},
{
"paragraph_id": 46,
"text": "The anode cap connection in modern CRTs must be able to handle up to 55–60kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.",
"title": "Construction"
},
{
"paragraph_id": 47,
"text": "The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup being made of a Nickel–Chromium–Iron alloy containing 40 to 49% of Nickel and 3 to 6% of Chromium to make the button easy to fuse to the funnel glass, with a first inner cup made of thick inexpensive iron to shield against x-rays, and with the second innermost cup also being made of iron or any other electrically conductive metal to connect to the clip. The cups must be heat resistant enough and have similar thermal expansion coefficients similar to that of the funnel glass to withstand being fused to the funnel glass. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while its being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.",
"title": "Construction"
},
{
"paragraph_id": 48,
"text": "The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.",
"title": "Construction"
},
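{
"paragraph_id": 48.1,
"text": "The induced voltage follows V = L * dI/dt, which is why slowing the field collapse with the retrace timing capacitor limits the peak. A minimal Python sketch with assumed (illustrative) inductance, current and collapse time:\n\ndef induced_volts(inductance_h, delta_current_a, collapse_time_s):\n    # V = L * dI/dt: faster collapse -> higher induced voltage.\n    return inductance_h * delta_current_a / collapse_time_s\n\nprint(induced_volts(5e-3, 4.0, 2e-6))  # 5 mH, 4 A collapsing in 2 us -> 10,000 V",
"title": "Construction"
},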
{
"paragraph_id": 49,
"text": "The design of the high voltage power supply in a product using a CRT has an influence in the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product such as a TV set uses an unregulated high voltage power supply, meaning that anode and focus voltage go down with increasing electron current when displaying a bright image, the amount of emitted x-rays is as its highest when the CRT is displaying a moderately bright images, since when displaying dark or bright images, the higher anode voltage counteracts the lower electron beam current and vice versa respectively. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.",
"title": "Construction"
},
{
"paragraph_id": 50,
"text": "The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called \"winding\", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.",
"title": "Construction"
},
{
"paragraph_id": 51,
"text": "The electron gun has a hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5 to 2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one for red, green and blue. The heater sits inside the cathode but does not touch it; the cathode has its own separate electrical connection. The cathode is coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it.",
"title": "Construction"
},
{
"paragraph_id": 52,
"text": "There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, that causes the cathode to permanently emit electrons which may cause an image with a bright red, green or blue tint with retrace lines, depending on the cathode (s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or, the control grid and screen grid (G2) can short causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.",
"title": "Construction"
},
{
"paragraph_id": 53,
"text": "The cathode is a layer of barium oxide which is coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation occurs during evacuation of (at the same time a vacuum is formed in) the CRT. After activation the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800-1000°C, at which point it starts shedding electrons.",
"title": "Construction"
},
{
"paragraph_id": 54,
"text": "Since it is a hot cathode, it is prone to cathode poisoning, which is the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself that react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode, as during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs with red, green and blue cathodes, one or more cathodes may be affected independently of the others, causing total or partial loss of one or more colors. CRTs can wear or burn out due to cathode poisoning. Cathode poisoning is accelerated by increased cathode current (overdriving). In color CRTs, since there are three cathodes, one for red, green and blue, a single or more poisoned cathode may cause the partial or complete loss of one or more colors, tinting the image. The layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%.",
"title": "Construction"
},
{
"paragraph_id": 55,
"text": "The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all, in such a case the CRT displays a poor gamma characteristic.",
"title": "Construction"
},
{
"paragraph_id": 56,
"text": "The second (screen) grid of the gun (G2) accelerates the electrons towards the screen using several hundred DC volts. A negative current is applied to the first (control) grid (G1) to converge the electron beam. G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage nor the electron beam current (they are never varied) despite them having an influence on image brightness, rather image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. A third grid (G3) electrostatically focuses the electron beam before it is deflected and accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an Einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.",
"title": "Construction"
},
{
"paragraph_id": 57,
"text": "However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of Kilovolts, so a high voltage (≈600 to 8000 volt) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an Einzel lens, or, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT.",
"title": "Construction"
},
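{
"paragraph_id": 57.1,
"text": "A resistive divider tapping the high voltage winding behaves as V_focus = V_in * R2 / (R1 + R2). A minimal Python sketch; the resistor values and input voltage are assumptions chosen to land in the ≈600 to 8000 volt focus range mentioned above:\n\ndef divider_volts(v_in, r1_ohms, r2_ohms):\n    # Unloaded resistive divider: output is the fraction across R2.\n    return v_in * r2_ohms / (r1_ohms + r2_ohms)\n\nprint(divider_volts(25e3, 900e6, 300e6))  # 25 kV divided down to ~6.3 kV for focus",
"title": "Construction"
},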
{
"paragraph_id": 58,
"text": "There is a voltage called cutoff voltage which is the voltage that creates black on the screen since it causes the image on the screen created by the electron beam to disappear, the voltage is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grid G1 and G2 across all three guns, increasing image brightness and simplifying adjustment since on such CRTs there is a single cutoff voltage for all three guns (since G1 is shared across all guns). but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs video is fed to the gun by varying the voltage on the first control grid.",
"title": "Construction"
},
{
"paragraph_id": 59,
"text": "During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image.",
"title": "Construction"
},
{
"paragraph_id": 60,
"text": "The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.",
"title": "Construction"
},
{
"paragraph_id": 61,
"text": "Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, specially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.",
"title": "Construction"
},
{
"paragraph_id": 62,
"text": "After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.",
"title": "Construction"
},
{
"paragraph_id": 63,
"text": "The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40-170v per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (speed at which voltage can be varied and thus switching between black and white) and higher contrast ratios need higher voltage variations or amplitude for lower black and higher white levels. 30Mhz of bandwidth can usually provide 720p or 1080i resolution, while 20Mhz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun since the red phosphor emits the least amount of light.",
"title": "Construction"
},
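{
"paragraph_id": 63.1,
"text": "The bandwidth figures can be sanity-checked with a common rule of thumb: video bandwidth is roughly half the pixel rate, since one full cycle can encode one black-and-white pixel pair. A minimal Python sketch that ignores blanking overhead (an approximation, not a standard's definition):\n\ndef video_bandwidth_mhz(h_pixels, v_lines, frames_per_s):\n    # Pixel rate divided by two, expressed in MHz.\n    return h_pixels * v_lines * frames_per_s / 2 / 1e6\n\nprint(video_bandwidth_mhz(1280, 720, 60))   # ~27.6 MHz for 720p60\nprint(video_bandwidth_mhz(1920, 1080, 30))  # ~31.1 MHz for 1080i60 (30 full frames/s)",
"title": "Construction"
},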
{
"paragraph_id": 64,
"text": "CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).",
"title": "Construction"
},
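{
"paragraph_id": 64.1,
"text": "A minimal Python sketch of this nonlinearity and the inverse 'gamma correction' that sources apply; the exponent 2.5 is an assumed typical CRT value:\n\nGAMMA = 2.5  # assumed typical CRT gamma\n\ndef crt_light_output(drive, gamma=GAMMA):\n    # Normalized drive voltage (0..1) -> normalized light output (0..1).\n    return drive ** gamma\n\ndef gamma_correct(signal, gamma=GAMMA):\n    # Pre-distortion applied at the source to linearize the overall chain.\n    return signal ** (1 / gamma)\n\nprint(crt_light_output(gamma_correct(0.5)))  # ~0.5: correction cancels the CRT's gamma",
"title": "Construction"
},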
{
"paragraph_id": 65,
"text": "There are two types of deflection: magnetic and electrostatic. Magnetic is usually used in TVs and monitors as it allows for higher deflection angles (and hence shallower CRTs) and deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for high voltages for deflection of up to 2000 volts, while oscilloscopes often use electrostatic deflection since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.",
"title": "Construction"
},
{
"paragraph_id": 66,
"text": "Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical, and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Those that were bonded used glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT while those with removable yokes are clamped. The yoke generates heat whose removal is essential since the conductivity of glass goes up with increasing temperature, the glass needs to be insulating for the CRT to remain usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force as well as the magnetized rings used to align or adjust the electron beams in color CRTs (The color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector, the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.",
"title": "Construction"
},
{
"paragraph_id": 67,
"text": "The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (a Horizontal scan rate) of 15 to 240 kHz depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT. So a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run is in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.",
"title": "Construction"
},
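{
"paragraph_id": 67.1,
"text": "The horizontal scan rate is simply the total line count (including lines spent in vertical blanking) multiplied by the refresh rate. A minimal Python sketch using line totals from common video standards:\n\ndef h_scan_rate_khz(total_lines, refresh_hz):\n    # Lines drawn per second, in kHz.\n    return total_lines * refresh_hz / 1000\n\nprint(h_scan_rate_khz(525, 29.97))  # ~15.73 kHz: NTSC television\nprint(h_scan_rate_khz(806, 60))     # ~48.4 kHz: a 1024x768 at 60 Hz monitor mode",
"title": "Construction"
},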
{
"paragraph_id": 68,
"text": "Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require approximately 24 volts while the horizontal deflection coils require approx. 120 volts to operate.",
"title": "Construction"
},
{
"paragraph_id": 69,
"text": "The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.",
"title": "Construction"
},
{
"paragraph_id": 70,
"text": "Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.",
"title": "Construction"
},
{
"paragraph_id": 71,
"text": "Mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal, and the other for vertical deflection. The electron beam is steered by varying the voltage difference across plates in a pair; For example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage in the bottom plate at 0 volts, will cause the electron beam to be deflected towards the upper part of the screen; increasing the voltage in the upper plate while keeping the bottom plate at 0 will cause the electron beam to be deflected to a higher point in the screen (will cause the beam to be deflected at a higher deflection angle). The same applies with the horizontal deflection plates. Increasing the length and proximity between plates in a pair can also increase the deflection angle.",
"title": "Construction"
},
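{
"paragraph_id": 71.1,
"text": "The textbook small-angle formula for parallel-plate deflection makes the stated relationships explicit: deflection grows with plate length and deflection voltage, and shrinks with plate separation and accelerating voltage. A minimal Python sketch with assumed dimensions:\n\ndef deflection_m(v_d, plate_len_m, plate_to_screen_m, plate_gap_m, v_accel):\n    # y = (V_d * l * L) / (2 * d * V_a), valid for small deflection angles.\n    return (v_d * plate_len_m * plate_to_screen_m) / (2 * plate_gap_m * v_accel)\n\nprint(deflection_m(100, 0.02, 0.25, 0.005, 2000))  # ~0.025 m of deflection for 100 V",
"title": "Construction"
},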
{
"paragraph_id": 72,
"text": "Burn-in is when images are physically \"burned\" into the screen of the CRT; this occurs due to degradation of the phosphors due to prolonged electron bombardment of the phosphors, and happens when a fixed image or logo is left for too long on the screen, causing it to appear as a \"ghost\" image or, in severe cases, also when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays.",
"title": "Construction"
},
{
"paragraph_id": 73,
"text": "CRTs are evacuated or exhausted (a vacuum is formed) inside an oven at approx. 375–475 °C, in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules which increases the chances of them getting drawn out by the vacuum pump. The temperature of the CRT is kept to below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C, or, the CRT was kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT was heated during or after evacuation, and the heat may have been used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump. Formerly mercury vacuum pumps were also used. After baking, the CRT is disconnected (\"sealed or tipped off\") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (that may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters and condenses on the inside of the CRT forming a layer that contains trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems. The vacuum inside of a CRT causes atmospheric pressure to exert (in a 27-inch CRT) a pressure of 5,800 pounds (2,600 kg) in total.",
"title": "Construction"
},
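{
"paragraph_id": 73.1,
"text": "The quoted force figure can be checked to order of magnitude: atmospheric force is pressure times area. A minimal Python sketch treating a 27-inch 4:3 face as flat, which lands a little below the quoted total because the real curved envelope has more area:\n\nimport math\n\nPSI = 14.7  # standard atmospheric pressure, pounds per square inch\n\ndef face_force_lb(diagonal_in, aspect_w=4, aspect_h=3):\n    # Width and height from the diagonal, then force = pressure * area.\n    diag_units = math.hypot(aspect_w, aspect_h)\n    width = diagonal_in * aspect_w / diag_units\n    height = diagonal_in * aspect_h / diag_units\n    return PSI * width * height\n\nprint(round(face_force_lb(27)))  # ~5,140 lb on the faceplate alone",
"title": "Construction"
},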
{
"paragraph_id": 74,
"text": "CRTs used to be rebuilt; repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, etc. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worth it. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, which was located in France, closed in 2013.",
"title": "Construction"
},
{
"paragraph_id": 75,
"text": "Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.",
"title": "Construction"
},
{
"paragraph_id": 76,
"text": "Phosphors in CRTs emit secondary electrons due to them being inside the vacuum of the CRT. The secondary electrons are collected by the anode of the CRT. Secondary electrons generated by phosphors need to be collected to prevent charges from developing in the screen, which would lead to reduced image brightness since the charge would repel the electron beam.",
"title": "Construction"
},
{
"paragraph_id": 77,
"text": "The phosphors used in CRTs often contain rare earth metals, replacing earlier dimmer phosphors. Early red and green phosphors contained Cadmium, and some black and white CRT phosphors also contained beryllium in the form of Zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors together with the carbon used to prevent light bleeding (in color CRTs) can be easily removed by scratching.",
"title": "Construction"
},
{
"paragraph_id": 78,
"text": "Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), Intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam index tubes, while examples of earlier phosphors are copper cadmium sulfide for red,",
"title": "Construction"
},
{
"paragraph_id": 79,
"text": "SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower framerate. SMPTE-C phosphors were used in professional video monitors.",
"title": "Construction"
},
{
"paragraph_id": 80,
"text": "The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding against the phosphor, prevent static build up that could repel electrons from the screen, form part of the anode and collect the secondary electrons generated by the phosphors in the screen after being hit by the electron beam, providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kv. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the surface formed by the phosphors to allow the aluminum coating to have a uniform surface and prevent it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to cause an aluminum coating with holes to be created to allow the solvents to escape.",
"title": "Construction"
},
{
"paragraph_id": 81,
"text": "Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.",
"title": "Construction"
},
{
"paragraph_id": 82,
"text": "Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.",
"title": "Construction"
},
{
"paragraph_id": 83,
"text": "Doming is a phenomenon found on some CRT televisions in which parts of the shadow mask become heated. In televisions that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron gun to hit the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns.",
"title": "Construction"
},
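{
"paragraph_id": 83.1,
"text": "The ~100 micron figure is consistent with simple linear thermal expansion, dL = alpha * L * dT. A minimal Python sketch; the mask span and local temperature rise are assumptions chosen for illustration:\n\nALPHA_STEEL = 12e-6  # per kelvin, typical for steel\nALPHA_INVAR = 1.2e-6  # per kelvin, roughly a tenth of steel's\n\ndef expansion_um(alpha_per_k, length_m, delta_t_k):\n    # Linear expansion dL = alpha * L * dT, returned in micrometers.\n    return alpha_per_k * length_m * delta_t_k * 1e6\n\nprint(expansion_um(ALPHA_STEEL, 0.4, 20))  # ~96 um for an assumed 0.4 m steel mask\nprint(expansion_um(ALPHA_INVAR, 0.4, 20))  # ~10 um for Invar, hence less doming",
"title": "Construction"
},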
{
"paragraph_id": 84,
"text": "During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating of the shadow mask and warping (blooming) due to thermal expansion caused by heating by increased electron beam current. The shadow mask is usually made of steel but it can be made of Invar (a low-thermal expansion Nickel-Iron alloy) as it withstands two to three times more current than conventional masks without noticeable warping, while making higher resolution CRTs easier to achieve. Coatings that dissipate heat may be applied on the shadow mask to limit blooming in a process called blackening.",
"title": "Construction"
},
{
"paragraph_id": 85,
"text": "Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is installed to the screen using metal pieces or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.",
"title": "Construction"
},
{
"paragraph_id": 86,
"text": "Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness since the shadow mask blocks most of the electron beam. Slot masks and specially Aperture grilles do not block as many electrons resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam while Aperture grilles allow more electrons to pass through.",
"title": "Construction"
},
{
"paragraph_id": 87,
"text": "Image brightness is related to the anode voltage and to the CRTs size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation since the electrons have a higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.",
"title": "Construction"
},
{
"paragraph_id": 88,
"text": "Size is limited by anode voltage, as it would require a higher dielectric strength to prevent arcing (corona discharge) and the electrical losses and ozone generation it causes, without sacrificing image brightness. The weight of the CRT, which originates from the thick glass needed to safely sustain a vacuum, imposes a practical limit on the size of a CRT. The 43-inch Sony PVM-4300 CRT monitor weighs 440 pounds (200 kg). Smaller CRTs weigh significantly less, as an example, 32-inch CRTs weigh up to 163 pounds (74 kg) and 19-inch CRTs weigh up to 60 pounds (27 kg). For comparison, a 32-inch flat panel TV only weighs approx. 18 pounds (8.2 kg) and a 19-inch flat panel TV weighs 6.5 pounds (2.9 kg).",
"title": "Construction"
},
{
"paragraph_id": 89,
"text": "Shadow masks become more difficult to make with increasing resolution and size.",
"title": "Construction"
},
{
"paragraph_id": 90,
"text": "At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam at a higher angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, power consumption of the yoke must also go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 watts to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and/or possibly causing the glass underneath the coil to become conductive (as the electrical conductivity of glass decreases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen which requires additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer, however they also impose more stress on the CRT envelope, specially on the panel, the seal between the panel and funnel and on the funnel. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.",
"title": "Construction"
},
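{
"paragraph_id": 90.1,
"text": "The quoted figures (40 W at 90°, 80 W at 120°, 160 W at 150°) double every 30 degrees, so a simple curve fitted through them is P = 40 * 2^((angle - 90) / 30). A minimal Python sketch; this is an interpolation of the quoted numbers, not a physical derivation:\n\ndef yoke_power_w(deflection_angle_deg):\n    # Doubles every 30 degrees beyond 90, matching the quoted data points.\n    return 40 * 2 ** ((deflection_angle_deg - 90) / 30)\n\nfor angle in (90, 110, 120, 150):\n    print(angle, round(yoke_power_w(angle)))  # 40, 63, 80, 160 W",
"title": "Construction"
},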
{
"paragraph_id": 91,
"text": "On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.",
"title": "Comparison with other technologies"
},
{
"paragraph_id": 92,
"text": "CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. Due to these reasons, CRTs are sometimes preferred by PC gamers in spite of their bulk, weight and heat generation.",
"title": "Comparison with other technologies"
},
{
"paragraph_id": 93,
"text": "CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.",
"title": "Comparison with other technologies"
},
{
"paragraph_id": 94,
"text": "CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes were of higher resolution and when used in computer monitors sometimes had adjustable overscan, or sometimes underscan. Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.",
"title": "Types"
},
{
"paragraph_id": 95,
"text": "If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons providing a return path for them; previously funnels were coated on the inside with aquadag, used because it can be applied like paint; the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, Aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen.",
"title": "Types"
},
{
"paragraph_id": 96,
"text": "The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals, a hole is made in the funnel onto which the anode cap is installed and the phosphor, aquadag and aluminum are applied afterwards. Previously monochrome CRTs used ion traps that required magnets; the magnet was used to deflect the electrons away from the more difficult to deflect ions, letting the electrons through while letting the ions collide into a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen.",
"title": "Types"
},
{
"paragraph_id": 97,
"text": "The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.",
"title": "Types"
},
{
"paragraph_id": 98,
"text": "Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.",
"title": "Types"
},
{
"paragraph_id": 99,
"text": "Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called \"triads\" (as in shadow mask CRTs).",
"title": "Types"
},
{
"paragraph_id": 100,
"text": "Color CRTs have three electron guns, one for each primary color, (red, green and blue) arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). (The triangular configuration is often called \"delta-gun\", based on its relation to the shape of the Greek letter delta Δ.) The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.",
"title": "Types"
},
{
"paragraph_id": 101,
"text": "A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube; blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad. The shadow mask is usually 1/2 inch behind the screen.",
"title": "Types"
},
{
"paragraph_id": 102,
"text": "Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.",
"title": "Types"
},
{
"paragraph_id": 103,
"text": "The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).",
"title": "Types"
},
{
"paragraph_id": 104,
"text": "The funnel is coated with aquadag on both sides while the screen has a separate aluminum coating applied in a vacuum, deposited after the phosphor coating is applied, facing the electron gun. The aluminum coating protects the phosphor from ions, absorbs secondary electrons, providing them with a return path, preventing them from electrostatically charging the screen which would then repel electrons and reduce image brightness, reflects the light from the phosphors forwards and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.",
"title": "Types"
},
{
"paragraph_id": 105,
"text": "The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking at approx. 435 °C of the frit seal between the faceplate and the funnel of the CRT.",
"title": "Types"
},
{
"paragraph_id": 106,
"text": "Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.",
"title": "Types"
},
{
"paragraph_id": 107,
"text": "Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969, and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another since the black matrix isolates the phosphor dots from one another, so part of the electron beam touches the black matrix. This is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness.",
"title": "Types"
},
{
"paragraph_id": 108,
"text": "Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.",
"title": "Types"
},
{
"paragraph_id": 109,
"text": "The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a \"lighthouse\" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. This technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.",
"title": "Types"
},
{
"paragraph_id": 110,
"text": "After the screen is coated with phosphor and aluminum and the shadow mask installed onto it the screen is bonded to the funnel using a glass frit that may contain 65 to 88% of lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron oxide (III) may also present to stabilize the frit, with alumina powder as filler powder to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs on the other hand do not require frit; the funnel can be fused directly to the glass by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process. The edges of the screen and funnel of the CRT are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435 to 475 °C (other sources may state different temperatures, such as 440 °C) After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed on the CRT.",
"title": "Types"
},
{
"paragraph_id": 111,
"text": "Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.",
"title": "Types"
},
{
"paragraph_id": 112,
"text": "Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (specially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of \"static\" and \"dynamic\" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color \"shadows\" or \"ghosts\" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.",
"title": "Types"
},
{
"paragraph_id": 113,
"text": "The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one ring has two poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.",
"title": "Types"
},
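The vector arithmetic behind a magnet pair can be sketched numerically: each ring contributes a field vector in the plane perpendicular to the gun axes, rotating the rings relative to each other sets the magnitude of the sum, and rotating them together sets its direction. The snippet below is an illustrative sketch only, using unit field strengths and arbitrary angles rather than values from any real tube.

```python
import math

def pair_field(theta1, theta2):
    """Net field of a two-ring magnet pair, with each ring modeled as a
    unit field vector at angle theta1/theta2 (radians) in the plane
    perpendicular to the electron gun axes."""
    bx = math.cos(theta1) + math.cos(theta2)
    by = math.sin(theta1) + math.sin(theta2)
    return math.hypot(bx, by), math.degrees(math.atan2(by, bx))

# Rotating the rings relative to each other trades reinforcing components
# for cancelling ones, varying the magnitude from 2 down to 0:
for delta in (0, 60, 120, 180):
    mag, _ = pair_field(0.0, math.radians(delta))
    print(f"relative angle {delta:3d} deg -> |B| = {mag:.2f}")

# Rotating both rings together (fixed 60 deg between them) keeps the
# magnitude constant and steers only the direction of the net field:
for common in (0, 45, 90):
    mag, direction = pair_field(math.radians(common), math.radians(common + 60))
    print(f"common rotation {common:2d} deg -> |B| = {mag:.2f} at {direction:.0f} deg")
```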
{
"paragraph_id": 114,
"text": "On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. The deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen is not spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.",
"title": "Types"
},
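The shape relationship between the sawtooth scan signals and the parabolic convergence waveform can be illustrated in a few lines of numerical code: integrating a zero-centered sawtooth over one scan period yields a parabola whose correction is zero at the screen center and largest at the edges, and mixing some sawtooth back in tilts the correction so the two edges can receive different amounts. This is a schematic sketch with arbitrary scaling, not a model of an actual convergence circuit.

```python
import numpy as np

n = 1000
t = np.linspace(-1.0, 1.0, n)           # one scan period, centered on the screen
sawtooth = t                             # zero-centered sawtooth ramp
# Integrating the sawtooth yields a parabola (analytically (t**2 - 1) / 2):
parabola = np.cumsum(sawtooth) * (2.0 / n)
correction = parabola - parabola.min()   # zero at the center, maximal at the edges
# Mixing in a sawtooth component tilts the parabola, so the left and right
# (or top and bottom) edges can be corrected by different amounts:
mix = correction + 0.3 * sawtooth        # 0.3 is an arbitrary illustrative weight
print(f"center {mix[n // 2]:+.2f}, left edge {mix[0]:+.2f}, right edge {mix[-1]:+.2f}")
```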
{
"paragraph_id": 115,
"text": "The convergence signal may instead be a sawtooth signal with a slight sine wave appearance, the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have different sets of s-capacitors, one for each refresh rate.",
"title": "Types"
},
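A rough numerical picture of this correction: replacing the straight ramp with a sine segment lowers the beam's velocity (the derivative of its position) near the screen edges relative to the center. The sketch below is purely illustrative; the amount of bend is an arbitrary parameter, not a value derived from any real s-capacitor network.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 1001)   # one horizontal scan period, normalized
linear = t                          # uncorrected sawtooth deflection
bend = 1.2                          # arbitrary amount of "S" shaping (radians)
s_corrected = np.sin(bend * t) / np.sin(bend)   # sine-segment drive

def edge_to_center_speed(x):
    """Ratio of beam speed at the screen edge to its speed at the center."""
    v = np.gradient(x, t)
    return v[0] / v[len(v) // 2]

print(f"linear ramp: {edge_to_center_speed(linear):.2f}")        # 1.00
print(f"S-corrected: {edge_to_center_speed(s_corrected):.2f}")   # ~0.36, slower at edges
```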
{
"paragraph_id": 116,
"text": "Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use \"self-convergence\" without dynamic convergence, which together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs. complexity and CRT depth by 10 millimeters. Self-convergence works by means of \"nonuniform\" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.",
"title": "Types"
},
{
"paragraph_id": 117,
"text": "Dynamic color convergence and purity are one of the main reasons why until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.",
"title": "Types"
},
{
"paragraph_id": 118,
"text": "The guns are aligned with one another (converged) using convergence rings placed right outside the neck; there is one ring per gun. The rings have north and south poles. There are 4 sets of rings, one to adjust RGB convergence, a second to adjust Red and Blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.",
"title": "Types"
},
{
"paragraph_id": 119,
"text": "If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of \"color purity\" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.",
"title": "Types"
},
{
"paragraph_id": 120,
"text": "Color CRT displays in television sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.",
"title": "Types"
},
{
"paragraph_id": 121,
"text": "Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller CRTs, these strips maintain position by themselves, but larger aperture-grille CRTs require one or two crosswise (horizontal) support strips; one for smaller CRTs, and two for larger ones. The support wires block electrons, causing the wires to be visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with respective oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.",
"title": "Types"
},
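The resolution bound implied by dot pitch is simple arithmetic: the visible width divided by the pitch bounds the number of distinct horizontal details the mask can render, and scanned resolutions near that bound invite moiré. The figures below (0.25 mm pitch and a 365 mm visible width, loosely typical of a 19-inch monitor) are illustrative assumptions, not measurements of a specific model.

```python
def pitch_limit(visible_width_mm: float, dot_pitch_mm: float) -> int:
    """Rough upper bound on horizontal detail a shadow-mask CRT can render."""
    return int(visible_width_mm / dot_pitch_mm)

limit = pitch_limit(365.0, 0.25)        # assumed 19-inch monitor figures
print(f"about {limit} phosphor triads across the screen")   # about 1460

for h_res in (1024, 1280, 1600):
    ratio = h_res / limit
    note = "moire likely" if ratio > 0.9 else "comfortable"  # rough threshold
    print(f"{h_res} px wide -> {ratio:.2f} of the pitch limit ({note})")
```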
{
"paragraph_id": 122,
"text": "Projection CRTs were used in CRT projectors and CRT rear-projection televisions, and are usually small (being 7 to 9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer, and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten or barium and calcium aluminates or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2mA of current instead of the 0.3mA of normal cathodes, which makes them bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings just like color CRTs to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings; one with two poles, one with four poles, and another with 6 poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.",
"title": "Types"
},
{
"paragraph_id": 123,
"text": "Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.",
"title": "Types"
},
{
"paragraph_id": 124,
"text": "Beam-index tubes, also known as Uniray, Apple CRT or Indextron, was an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems, and allowing for shallower CRTs with higher deflection angles. It also required a lower voltage power supply for the final anode since it did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made it immune to the earth's magnetic field while also making degaussing unnecessary and increasing image brightness. It was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun but with a screen with an alternating pattern of red, green, blue and UV (index) phosphor stripes (similarly to a Trinitron) with a side mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of CRT, to track the electron beam to activate the phosphors separately from one another using the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor that was not covered by an aluminum layer. It was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1 since some light emission by the phosphors was required at all times by the photodiodes to track the electron beam. It allowed for single CRT color CRT projectors due to a lack of shadow mask; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient since it would warp under the heat produced (shadow masks absorb most of the electron beam, and, hence, most of the energy carried by the relativistic electrons); the three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector would require it to be recalibrated. A single CRT meant the need for calibration was eliminated, but brightness was decreased since the CRT screen had to be used for three colors instead of each color having its own CRT screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, due to them having single, uniform phosphor coatings.",
"title": "Types"
},
{
"paragraph_id": 125,
"text": "Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays) which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. Flat CRTs have a number of challenges, like deflection. Vertical deflection boosters are required to increase the amount of current that is sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80, and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were put to a side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video door bells.",
"title": "Types"
},
{
"paragraph_id": 126,
"text": "Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to rotate in a circular fashion. The screen often had two colors, often a bright short persistence color that only appeared as the beam scanned the display and a long persistence phosphor afterglow. When the beam strikes the phosphor, the phosphor brightly illuminates, and when the beam leaves, the dimmer long persistence afterglow would remain lit where the beam struck the phosphor, alongside the radar targets that were \"written\" by the beam, until the beam re-struck the phosphor.",
"title": "Types"
},
{
"paragraph_id": 127,
"text": "In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Televisions use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some Oscilloscope CRTs incorporate post deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15,000 volts. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, specially when analyzing voltage pulses with short duty cycles.",
"title": "Types"
},
{
"paragraph_id": 128,
"text": "When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.",
"title": "Types"
},
{
"paragraph_id": 129,
"text": "Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.",
"title": "Types"
},
{
"paragraph_id": 130,
"text": "These are found in analog phosphor storage oscilloscopes. These are distinct from digital storage oscilloscopes which rely on solid state digital memory to store the image.",
"title": "Types"
},
{
"paragraph_id": 131,
"text": "Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.",
"title": "Types"
},
{
"paragraph_id": 132,
"text": "When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.",
"title": "Types"
},
{
"paragraph_id": 133,
"text": "Vector monitors were used in early computer aided design systems and are in some late-1970s to mid-1980s arcade games such as Asteroids. They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.",
"title": "Types"
},
{
"paragraph_id": 134,
"text": "The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.",
"title": "Types"
},
{
"paragraph_id": 135,
"text": "In some vacuum tube radio sets, a \"Magic Eye\" or \"Tuning Eye\" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.",
"title": "Types"
},
{
"paragraph_id": 136,
"text": "Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.",
"title": "Types"
},
{
"paragraph_id": 137,
"text": "Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6- digit displays produced by means of a suitable magnetic deflection system. Having little of the complexities of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their requirement for several voltages and their high voltage made them uncommon.",
"title": "Types"
},
{
"paragraph_id": 138,
"text": "Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.",
"title": "Types"
},
{
"paragraph_id": 139,
"text": "CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.",
"title": "Types"
},
{
"paragraph_id": 140,
"text": "In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The devices were demonstrated but never marketed.",
"title": "Types"
},
{
"paragraph_id": 141,
"text": "Some CRT manufacturers, both LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRT had the trade names Superslim, Ultraslim, Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG Philips displays). A 21-inch (53 cm) flat CRT has a 447.2-millimetre (17.61 in) depth. The depth of Superslim was 352 millimetres (13.86 in) and Ultraslim was 295.7 millimetres (11.64 in).",
"title": "Types"
},
{
"paragraph_id": 142,
"text": "CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered to be not harmful. The Food and Drug Administration regulations in 21 CFR 1020.10 are used to strictly limit, for instance, television receivers to 0.5 milliroentgens per hour at a distance of 5 cm (2 in) from any external surface; since 2007, most CRTs have emissions that fall well below this limit. Note that the roentgen is an outdated unit and does not account for dose absorption. The conversion rate is about .877 roentgen per rem. Assuming that the viewer absorbed the entire dose (which is unlikely), and that they watched TV for 2 hours a day, a .5 milliroentgen hourly dose would increase the viewers yearly dose by 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses over 20,000 millirem.",
"title": "Health concerns"
},
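The 320 millirem figure can be reproduced from the stated assumptions (full absorption of the 0.5 mR/h limit, two hours of viewing per day, and roughly 0.877 rem per roentgen for X-rays); a quick check:

```python
hourly_exposure_mR = 0.5     # regulatory limit at 5 cm from the surface
hours_per_day = 2            # assumed daily viewing time
rem_per_roentgen = 0.877     # approximate exposure-to-dose conversion for X-rays

yearly_exposure_mR = hourly_exposure_mR * hours_per_day * 365
yearly_dose_mrem = yearly_exposure_mR * rem_per_roentgen

# 365 mR/year -> about 320 mrem/year, comparable to the ~310 mrem/year
# average background dose in the United States:
print(f"{yearly_exposure_mR:.0f} mR/year -> {yearly_dose_mrem:.0f} mrem/year")
```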
{
"paragraph_id": 143,
"text": "The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate \"soft\" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.",
"title": "Health concerns"
},
{
"paragraph_id": 144,
"text": "Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting \"X-radiation in excess of desirable levels\". It was later found that TV sets from all manufacturers were also emitting radiation. This caused television industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. It was recommended to TV set owners to always be at a distance of at least 6 feet from the screen of the TV set, and to avoid \"prolonged exposure\" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told to not modify their set's internals to avoid exposure to radiation. Headlines about \"radioactive\" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to \"go into homes to test all of the nation's 15 million color sets and to install radiation devices in them\". The FDA eventually began regulating radiation emissions from all electronic products in the US.",
"title": "Health concerns"
},
{
"paragraph_id": 145,
"text": "Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represent an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (The lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).",
"title": "Health concerns"
},
{
"paragraph_id": 146,
"text": "At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRT as most televisions run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL televisions that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. Though the 100 Hz PAL was often achieved using interleaved scanning, dividing the circuit and scan into two beams of 50 Hz. Non-computer CRTs or CRT for sonar or radar may have long persistence phosphor and are thus flicker free. If the persistence is too long on a video display, moving images will be blurred.",
"title": "Health concerns"
},
{
"paragraph_id": 147,
"text": "50 Hz/60 Hz CRTs used for television operate with horizontal scanning frequencies of 15,750 and 15,734.27 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT television. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer but the sound can also be created by movement of the deflection coils, yoke or ferrite beads.",
"title": "Health concerns"
},
{
"paragraph_id": 148,
"text": "This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).",
"title": "Health concerns"
},
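The horizontal frequencies quoted above follow directly from each standard's line count and frame rate (total lines per frame multiplied by frames per second), as this quick check shows:

```python
# Horizontal scan frequency = total lines per frame x frames per second
standards = {
    "NTSC (monochrome)": (525, 30.0),
    "NTSC (color)":      (525, 30.0 * 1000 / 1001),  # ~29.97 frames/s
    "PAL":               (625, 25.0),
}
for name, (lines, frame_rate) in standards.items():
    print(f"{name:17s}: {lines} x {frame_rate:.3f} = {lines * frame_rate:,.2f} Hz")
# -> 15,750.00 Hz, 15,734.27 Hz and 15,625.00 Hz respectively
```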
{
"paragraph_id": 149,
"text": "High vacuum inside glass-walled cathode-ray tubes permits electron beams to fly freely—without colliding into molecules of air or other gas. If the glass is damaged, atmospheric pressure can collapse the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid personal injury.",
"title": "Health concerns"
},
{
"paragraph_id": 150,
"text": "Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a \"cataract\", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm².",
"title": "Health concerns"
},
{
"paragraph_id": 151,
"text": "Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode.",
"title": "Health concerns"
},
{
"paragraph_id": 152,
"text": "The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.",
"title": "Health concerns"
},
{
"paragraph_id": 153,
"text": "Alternatively the compression caused by the rim band may be used to cause any cracks in the screen to propagate laterally at a high speed so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel has walls that are thinner than the screen. Fully penetrating the funnel first allows air to enter the CRT from a short distance behind the screen, and prevent an implosion by ensuring the screen is fully penetrated by the cracks and breaks only when the CRT already has air.",
"title": "Health concerns"
},
{
"paragraph_id": 154,
"text": "To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time and therefore dissipate that charge suddenly through a ground such as an inattentive human grounding a capacitor discharge lead. An average monochrome CRT may use 1 to 1.5 kV of anode voltage per inch.",
"title": "Health concerns"
},
{
"paragraph_id": 155,
"text": "Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.",
"title": "Security concerns"
},
{
"paragraph_id": 156,
"text": "Due to the toxins contained in CRT monitors the United States Environmental Protection Agency created rules (in October 2001) stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.",
"title": "Recycling"
},
{
"paragraph_id": 157,
"text": "As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have relatively high concentration of lead and phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of \"hazardous household waste\" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.",
"title": "Recycling"
},
{
"paragraph_id": 158,
"text": "Various states participate in the recycling of CRTs, each with their reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CALRecycle, the California Department of Resources Recycling and Recovery through their Payment System. Recycling facilities that accept CRT devices from business and residential sector must obtain contact information such as address and phone number to ensure the CRTs come from a California source in order to participate in the CRT Recycling Payment System.",
"title": "Recycling"
},
{
"paragraph_id": 159,
"text": "In Europe, disposal of CRT televisions and monitors is covered by the WEEE Directive.",
"title": "Recycling"
},
{
"paragraph_id": 160,
"text": "Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7g of phosphor.",
"title": "Recycling"
},
{
"paragraph_id": 161,
"text": "The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires or using a resistively heated nichrome wire.",
"title": "Recycling"
},
{
"paragraph_id": 162,
"text": "Leaded CRT glass was sold to be remelted into other CRTs, or even broken down and used in road construction or used in tiles, concrete, concrete and cement bricks, fiberglass insulation or used as flux in metals smelting.",
"title": "Recycling"
},
{
"paragraph_id": 163,
"text": "A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than being recycled.",
"title": "Recycling"
},
{
"paragraph_id": 164,
"text": "",
"title": "See also"
},
{
"paragraph_id": 165,
"text": "Applying CRT in different display-purpose:",
"title": "See also"
},
{
"paragraph_id": 166,
"text": "Historical aspects:",
"title": "See also"
},
{
"paragraph_id": 167,
"text": "Safety and precautions:",
"title": "See also"
}
]
| A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms (oscilloscope), pictures, radar targets, or other phenomena. A CRT on a television set is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons. In CRT television sets and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color, with a video signal as a reference. In modern CRT monitors and televisions the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes. A CRT is a glass envelope which is deep, heavy, and fragile. The interior is evacuated to 0.01 pascals (1×10⁻⁷ atm) to 0.1 micropascals (1×10⁻¹² atm) or less, to facilitate the free flight of electrons from the gun(s) to the tube's face without scattering due to collisions with air molecules. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. CRTs make up most of the weight of CRT TVs and computer monitors. Since the mid–late 2000s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and less bulky. Flat-panel displays can also be made in very large sizes, whereas 40 in (100 cm) to 45 in (110 cm) was about the largest size of a CRT. A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons. | 2001-10-30T10:54:10Z | 2023-12-28T05:30:46Z | [
"Template:Cite book",
"Template:US patent",
"Template:Tooltip",
"Template:Portal",
"Template:Reflist",
"Template:Cite journal",
"Template:Cn",
"Template:Div col",
"Template:Cite thesis",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Cite report",
"Template:Div col end",
"Template:Cite patent",
"Template:Commons category",
"Template:Further",
"Template:Citation needed",
"Template:Rp",
"Template:Cite web",
"Template:Cite conference",
"Template:Very long",
"Template:Numbered list",
"Template:Main",
"Template:CodeFedReg",
"Template:Cite interview",
"Template:Display technology",
"Template:Electronic components",
"Template:Redirect",
"Template:Asof",
"Template:Page needed",
"Template:Cite news",
"Template:Short description",
"Template:Cvt",
"Template:Clarify",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Thermionic valves",
"Template:Pb",
"Template:Convert"
]
| https://en.wikipedia.org/wiki/Cathode-ray_tube |
6,015 | Crystal | A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification.
The word crystal derives from the Ancient Greek word κρύσταλλος (krustallos), meaning both "ice" and "rock crystal", from κρύος (kruos), "icy cold, frost".
Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Polycrystals include most metals, rocks, ceramics, and ice. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics.
Despite the name, lead crystal, crystal glass, and related products are not crystals, but rather types of glass, i.e. amorphous solids.
Crystals, or crystalline solids, are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements.
The scientific definition of a "crystal" is based on the microscopic arrangement of atoms inside it, called the crystal structure. A crystal is a solid where the atoms form a periodic arrangement. (Quasicrystals are an exception, see below).
Not all solids are crystals. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a polycrystalline structure. In the final block of ice, each of the small crystals (called "crystallites" or "grains") is a true crystal with a periodic arrangement of atoms, but the whole polycrystal does not have a periodic arrangement of atoms, because the periodic pattern is broken at the grain boundaries. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice, rocks, etc. Solids that are neither crystalline nor polycrystalline, such as glass, are called amorphous solids, also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does.
A crystal structure (an arrangement of atoms in a crystal) is characterized by its unit cell, a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal.
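The stacking idea can be written down directly: every lattice point of the crystal is an integer combination of the three unit cell vectors, with the cell's atoms (its basis) copied to each point. A minimal sketch, using arbitrary illustrative cell dimensions rather than any particular mineral:

```python
import numpy as np

# Unit cell vectors (rows), in angstroms -- arbitrary illustrative values
# for an orthorhombic cell:
cell = np.array([[4.0, 0.0, 0.0],
                 [0.0, 5.0, 0.0],
                 [0.0, 0.0, 6.0]])

# Stack the cell n times along each axis; each lattice point is
# r = i*a1 + j*a2 + k*a3 for integers i, j, k:
n = 3
lattice = np.array([i * cell[0] + j * cell[1] + k * cell[2]
                    for i in range(n) for j in range(n) for k in range(n)])
print(lattice.shape)            # (27, 3): a 3x3x3 block of the crystal
print(lattice[0], lattice[-1])  # [0. 0. 0.] and [8. 10. 12.]
```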
The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries (230 is commonly cited, but this treats chiral equivalents as separate entities), called crystallographic space groups. These are grouped into 7 crystal systems, such as cubic crystal system (where the crystals may form cubes or rectangular boxes, such as halite shown at right) or hexagonal crystal system (where the crystals may form hexagons, such as ordinary water ice).
Crystals are commonly recognized, macroscopically, by their shape, consisting of flat faces with sharp angles. These shape characteristics are not necessary for a crystal—a crystal is scientifically defined by its microscopic atomic arrangement, not its macroscopic shape—but the characteristic macroscopic shape is often present and easy to see.
Euhedral crystals are those that have obvious, well-formed flat faces. Anhedral crystals do not, usually because the crystal is one grain in a polycrystalline solid.
The flat faces (also called facets) of a euhedral crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal: they are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram on right.)
One of the oldest techniques in the science of crystallography consists of measuring the three-dimensional orientations of the faces of a crystal, and using them to infer the underlying crystal symmetry.
A crystal's crystallographic forms are sets of possible faces of the crystal that are related by one of the symmetries of the crystal. For example, crystals of galena often take the shape of cubes, and the six faces of the cube belong to a crystallographic form that displays one of the symmetries of the isometric crystal system. Galena also sometimes crystallizes as octahedrons, and the eight faces of the octahedron belong to another crystallographic form reflecting a different symmetry of the isometric system. A crystallographic form is described by placing the Miller indices of one of its faces within brackets. For example, the octahedral form is written as {111}, and the other faces in the form are implied by the symmetry of the crystal.
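For the isometric system, the faces belonging to a form can be generated mechanically by applying the crystal's symmetry operations to one face's Miller indices; under full (holohedral) cubic symmetry this amounts to taking every permutation of the indices with every choice of signs. A small sketch, with full cubic symmetry assumed:

```python
from itertools import permutations, product

def cubic_form(h, k, l):
    """All faces of the form {hkl} under full cubic symmetry:
    every permutation of (h, k, l) combined with every sign choice."""
    faces = set()
    for perm in permutations((h, k, l)):
        for signs in product((1, -1), repeat=3):
            faces.add(tuple(i * s for i, s in zip(perm, signs)))
    return sorted(faces)

print(len(cubic_form(1, 0, 0)), "faces in {100}: the cube")                  # 6
print(len(cubic_form(1, 1, 1)), "faces in {111}: the octahedron")            # 8
print(len(cubic_form(1, 1, 0)), "faces in {110}: the rhombic dodecahedron")  # 12
```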
Forms may be closed, meaning that the form can completely enclose a volume of space, or open, meaning that it cannot. The cubic and octahedral forms are examples of closed forms. All the forms of the isometric system are closed, while all the forms of the monoclinic and triclinic crystal systems are open. A crystal's faces may all belong to the same closed form, or they may be a combination of multiple open or closed forms.
A crystal's habit is its visible external shape. This is determined by the crystal structure (which restricts the possible facet orientations), the specific crystal chemistry and bonding (which may favor some facet types over others), and the conditions under which the crystal formed.
By volume and weight, the largest concentrations of crystals in the Earth are part of its solid bedrock. Crystals found in rocks typically range in size from a fraction of a millimetre to several centimetres across, although exceptionally large crystals are occasionally found. As of 1999, the world's largest known naturally occurring crystal is a crystal of beryl from Malakialina, Madagascar, 18 m (59 ft) long and 3.5 m (11 ft) in diameter, and weighing 380,000 kg (840,000 lb).
Some crystals have formed by magmatic and metamorphic processes, giving origin to large masses of crystalline rock. The vast majority of igneous rocks are formed from molten magma and the degree of crystallization depends primarily on the conditions under which they solidified. Such rocks as granite, which have cooled very slowly and under great pressures, have completely crystallized; but many kinds of lava were poured out at the surface and cooled very rapidly, and in this latter group a small amount of amorphous or glassy matter is common. Other crystalline rocks, the metamorphic rocks such as marbles, mica-schists and quartzites, are recrystallized. This means that they were at first fragmental rocks like limestone, shale and sandstone and have never been in a molten condition nor entirely in solution, but the high temperature and pressure conditions of metamorphism have acted on them by erasing their original structures and inducing recrystallization in the solid state.
Other rock crystals have formed out of precipitation from fluids, commonly water, to form druses or quartz veins. Evaporites such as halite, gypsum and some limestones have been deposited from aqueous solution, mostly owing to evaporation in arid climates.
Water-based ice in the form of snow, sea ice, and glaciers are common crystalline/polycrystalline structures on Earth and other planets. A single snowflake is a single crystal or a collection of crystals, while an ice cube is a polycrystal. Ice crystals may form from cooling liquid water below its freezing point, such as ice cubes or a frozen lake. Frost, snowflakes, or small ice crystals suspended in the air (ice fog) more often grow from a supersaturated gaseous-solution of water vapor and air, when the temperature of the air drops below its dew point, without passing through a liquid state. Another unusual property of water is that it expands rather than contracts when it crystallizes.
Many living organisms are able to produce crystals grown from an aqueous solution, for example calcite and aragonite in the case of most molluscs or hydroxylapatite in the case of bones and teeth in vertebrates.
The same group of atoms can often solidify in many different ways. Polymorphism is the ability of a solid to exist in more than one crystal form. For example, water ice is ordinarily found in the hexagonal form Ice Ih, but can also exist as the cubic Ice Ic, the rhombohedral ice II, and many other forms. The different polymorphs are usually called different phases.
In addition, the same atoms may be able to form noncrystalline phases. For example, water can also form amorphous ice, while SiO2 can form both fused silica (an amorphous glass) and quartz (a crystal). Likewise, if a substance can form crystals, it can also form polycrystals.
For pure chemical elements, polymorphism is known as allotropy. For example, diamond and graphite are two crystalline forms of carbon, while amorphous carbon is a noncrystalline form. Polymorphs, despite having the same atoms, may have very different properties. For example, diamond is the hardest substance known, while graphite is so soft that it is used as a lubricant. Chocolate can form six different types of crystals, but only one has the suitable hardness and melting point for candy bars and confections. Polymorphism in steel is responsible for its ability to be heat treated, giving it a wide range of properties.
Polyamorphism is a similar phenomenon where the same atoms can exist in more than one amorphous solid form.
Crystallization is the process of forming a crystalline structure from a fluid or from materials dissolved in a fluid. (More rarely, crystals may be deposited directly from gas; see: epitaxy and frost.)
Crystallization is a complex and extensively-studied field, because depending on the conditions, a single fluid can solidify into many different possible forms. It can form a single crystal, perhaps with various possible phases, stoichiometries, impurities, defects, and habits. Or, it can form a polycrystal, with various possibilities for the size, arrangement, orientation, and phase of its grains. The final form of the solid is determined by the conditions under which the fluid is being solidified, such as the chemistry of the fluid, the ambient pressure, the temperature, and the speed with which all these parameters are changing.
Specific industrial techniques to produce large single crystals (called boules) include the Czochralski process and the Bridgman technique. Other less exotic methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization.
Large single crystals can be created by geological processes. For example, selenite crystals in excess of 10 m in length are found in the Cave of the Crystals in Naica, Mexico. For more details on geological crystal formation, see above.
Crystals can also be formed by biological processes; see above. Conversely, some organisms have special techniques to prevent crystallization from occurring, such as antifreeze proteins.
An ideal crystal has every atom in a perfect, exactly repeating pattern. However, in reality, most crystalline materials have a variety of crystallographic defects, places where the crystal's pattern is interrupted. The types and structures of these defects may have a profound effect on the properties of the materials.
A few examples of crystallographic defects include vacancy defects (an empty space where an atom should fit), interstitial defects (an extra atom squeezed in where it does not fit), and dislocations (see figure at right). Dislocations are especially important in materials science, because they help determine the mechanical strength of materials.
Another common type of crystallographic defect is an impurity, meaning that the "wrong" type of atom is present in a crystal. For example, a perfect crystal of diamond would only contain carbon atoms, but a real crystal might perhaps contain a few boron atoms as well. These boron impurities change the diamond's color to slightly blue. Likewise, the only difference between ruby and sapphire is the type of impurities present in a corundum crystal.
In semiconductors, a special type of impurity, called a dopant, drastically changes the crystal's electrical properties. Semiconductor devices, such as transistors, are made possible largely by putting different semiconductor dopants into different places, in specific patterns.
Twinning is a phenomenon somewhere between a crystallographic defect and a grain boundary. Like a grain boundary, a twin boundary has different crystal orientations on its two sides. But unlike a grain boundary, the orientations are not random, but related in a specific, mirror-image way.
Mosaicity is a spread of crystal plane orientations. A mosaic crystal consists of smaller crystalline units that are somewhat misaligned with respect to each other.
In general, solids can be held together by various types of chemical bonds, such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. None of these are necessarily crystalline or non-crystalline. However, there are some general trends as follows:
Metals crystallize rapidly and are almost always polycrystalline, though there are exceptions like amorphous metal and single-crystal metals. The latter are grown synthetically; for example, fighter-jet turbines are typically made by first growing a single crystal of titanium alloy, increasing its strength and melting point over polycrystalline titanium. A small piece of metal may naturally form into a single crystal, such as Type 2 telluric iron, but larger pieces generally do not unless extremely slow cooling occurs. For example, iron meteorites are often composed of a single crystal, or of many large crystals that may be several meters in size, due to very slow cooling in the vacuum of space. The slow cooling may allow the precipitation of a separate phase within the crystal lattice, which forms at specific angles determined by the lattice, called Widmanstätten patterns.
Ionic compounds typically form when a metal reacts with a non-metal, such as sodium with chlorine. These often form substances called salts, such as sodium chloride (table salt) or potassium nitrate (saltpeter), with crystals that are often brittle and cleave relatively easily. Ionic materials are usually crystalline or polycrystalline. In practice, large salt crystals can be created by solidification of a molten fluid, or by crystallization out of a solution. Some ionic compounds can be very hard, such as oxides like aluminium oxide found in many gemstones such as ruby and synthetic sapphire.
Covalently bonded solids (sometimes called covalent network solids) are typically formed from one or more non-metals, such as carbon or silicon and oxygen, and are often very hard, rigid, and brittle. These are also very common, notable examples being diamond and quartz respectively.
Weak van der Waals forces also help hold together certain crystals, such as crystalline molecular solids, as well as the interlayer bonding in graphite. Substances such as fats, lipids and waxes form molecular solids because their large molecules do not pack as tightly as atoms in atomically bonded solids. This leads to crystals that are much softer and more easily pulled apart or broken. Common examples include chocolates, candles, and viruses. Water ice and dry ice are examples of other materials with molecular bonding. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization, and sometimes polymers are completely amorphous.
A quasicrystal consists of arrays of atoms that are ordered but not strictly periodic. They have many attributes in common with ordinary crystals, such as displaying a discrete pattern in X-ray diffraction, and the ability to form shapes with smooth, flat faces.
Quasicrystals are most famous for their ability to show five-fold symmetry, which is impossible for an ordinary periodic crystal (see crystallographic restriction theorem).
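The impossibility of five-fold symmetry in a periodic lattice follows from a short, standard trace argument, sketched here for a rotation acting in its own plane; this is textbook reasoning, not taken from any one source.

```latex
% A rotation R through angle \theta that maps a lattice onto itself can be
% written as an integer matrix in a lattice basis, so its trace is an integer.
\operatorname{tr} R = 2\cos\theta \in \mathbb{Z}
\;\Longrightarrow\;
2\cos\theta \in \{-2,-1,0,1,2\}
\;\Longrightarrow\;
\theta \in \{180^\circ,\,120^\circ,\,90^\circ,\,60^\circ,\,360^\circ\}.
% A five-fold rotation would require 2\cos 72^\circ \approx 0.618, which is
% not an integer, so no periodic lattice admits five-fold symmetry.
```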
The International Union of Crystallography has redefined the term "crystal" to include both ordinary periodic crystals and quasicrystals ("any solid having an essentially discrete diffraction diagram").
Quasicrystals, first discovered in 1982, are quite rare in practice. Only about 100 solids are known to form quasicrystals, compared to about 400,000 periodic crystals known in 2004. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for the discovery of quasicrystals.
Crystals can have certain special electrical, optical, and mechanical properties that glass and polycrystals normally cannot. These properties are related to the anisotropy of the crystal, i.e. the lack of rotational symmetry in its atomic arrangement. One such property is the piezoelectric effect, where a voltage across the crystal can shrink or stretch it. Another is birefringence, where a double image appears when looking through a crystal. Moreover, various properties of a crystal, including electrical conductivity, electrical permittivity, and Young's modulus, may be different in different directions in a crystal. For example, graphite crystals consist of a stack of sheets, and although each individual sheet is mechanically very strong, the sheets are rather loosely bound to each other. Therefore, the mechanical strength of the material is quite different depending on the direction of stress.
Not all crystals have all of these properties. Conversely, these properties are not quite exclusive to crystals. They can appear in glasses or polycrystals that have been made anisotropic by working or stress—for example, stress-induced birefringence.
Crystallography is the science of measuring the crystal structure (in other words, the atomic arrangement) of a crystal. One widely used crystallography technique is X-ray diffraction. Large numbers of known crystal structures are stored in crystallographic databases. | [
6,016 | Cytosine | Cytosine (/ˈsaɪtəˌsiːn, -ˌziːn, -ˌsɪn/) (symbol C or Cyt) is one of the four nucleobases found in DNA and RNA, along with adenine, guanine, and thymine (uracil in RNA). It is a pyrimidine derivative, with a heterocyclic aromatic ring and two substituents attached (an amine group at position 4 and a keto group at position 2). The nucleoside of cytosine is cytidine. In Watson-Crick base pairing, it forms three hydrogen bonds with guanine.
Cytosine was discovered and named by Albrecht Kossel and Albert Neumann in 1894 when it was hydrolyzed from calf thymus tissues. A structure was proposed in 1903, and was synthesized (and thus confirmed) in the laboratory in the same year.
In 1998, cytosine was used in an early demonstration of quantum information processing when Oxford University researchers implemented the Deutsch-Jozsa algorithm on a two-qubit nuclear magnetic resonance quantum computer (NMRQC).
In March 2015, NASA scientists reported the formation of cytosine, along with uracil and thymine, from pyrimidine under space-like laboratory conditions, which is of interest because pyrimidine has been found in meteorites although its origin is unknown.
Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).
In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable, and can change into uracil (spontaneous deamination). This can lead to a point mutation if not repaired by the DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA.
Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. The difference in rates of deamination of cytosine and 5-methylcytosine (to uracil and thymine) forms the basis of bisulfite sequencing.
When found third in a codon of RNA, cytosine is synonymous with uracil, as they are interchangeable as the third base. When cytosine is the second base in a codon, the third base never changes the encoded amino acid. For example, UCU, UCC, UCA and UCG are all serine, regardless of the third base.
Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications for various cellular processes as well as for organismal evolution. The implications of deamination of 5-hydroxymethylcytosine, on the other hand, remain less understood.
Until October 2021, cytosine had not been found in meteorites, which suggested the first strands of RNA and DNA had to look elsewhere to obtain this building block. Cytosine likely formed within some meteorite parent bodies but did not persist within these bodies due to an effective deamination reaction into uracil.
In October 2021, cytosine was announced as having been found in meteorites by researchers in a joint Japan/NASA project that used novel methods of detection which avoided damaging nucleotides as they were extracted from meteorites.
6,019 | Computational chemistry | Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict unobserved chemical phenomena.
Computational chemistry is different from theoretical chemistry. The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.
Historically, computational chemistry has had two different aspects.
These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms.
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with Clemens C. J. Roothaan's 1951 paper in Reviews of Modern Physics, largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.
In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.
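As a concrete illustration of the LCAO idea behind such Hückel calculations, the sketch below builds and diagonalizes the Hückel matrix for 1,3-butadiene with NumPy. The α = 0, β = −1 convention and the code itself are illustrative, not taken from any of the historical programs mentioned above.

```python
# Minimal Hückel (LCAO) sketch for 1,3-butadiene, using the common
# convention alpha = 0, beta = -1 (energies in units of |beta|).
import numpy as np

n = 4  # four conjugated p orbitals in butadiene

# Hückel matrix: alpha on the diagonal, beta between bonded neighbours.
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0  # beta = -1

# Orbital energies and LCAO coefficients from diagonalization.
energies, coeffs = np.linalg.eigh(H)

# Fill the two lowest pi molecular orbitals with two electrons each.
total_pi_energy = 2 * energies[0] + 2 * energies[1]
print("MO energies (units of |beta|):", np.round(energies, 3))
print("Total pi energy:", round(total_pi_energy, 3))
```

For butadiene this reproduces the well-known ±1.618 and ±0.618 orbital energies in units of |β|.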
In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYAYTOM, began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2 force field, were developed, primarily by Norman Allinger.
One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".
There are several fields within computational chemistry.
These fields can give rise to several applications as shown below.
Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally like pKa's of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.
Aside from drug synthesis, drug carriers are also researched by computational chemists for nanomaterials. Computational modeling allows researchers to simulate environments to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Databases are useful for both computational and non-computational chemists in research and for verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data, helping researchers choose methods and basis sets with greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also use purely calculated data, storing calculated values in place of experimental ones. Purely calculated data avoids the need to adjust for differing experimental conditions, such as zero-point energy corrections, and can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Several such chemistry databases are publicly available.
The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms and computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both quantum chemistry and molecular dynamics.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It's important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.
Solves Newton's equations of motion for atoms and molecules.
The standard pairwise interaction calculation in MD leads to an $\mathcal{O}(N^2)$ complexity for $N$ particles. This is because each particle interacts with every other particle, resulting in $N(N-1)/2$ interactions. Advanced algorithms, such as the Ewald summation or the Fast Multipole Method, reduce this to $\mathcal{O}(N \log N)$ or even $\mathcal{O}(N)$ by grouping distant particles and treating them as a single entity or using clever mathematical approximations.
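To make the quadratic cost concrete, here is a minimal, self-contained sketch of the naive pairwise force loop together with one velocity-Verlet step (one of the integrators mentioned above). The Lennard-Jones potential in reduced units, the particle count, and the time step are illustrative choices, not taken from any particular MD package.

```python
# Toy molecular dynamics sketch in reduced Lennard-Jones units, illustrating
# the O(N^2) pairwise force evaluation and one velocity-Verlet step.
import numpy as np

def lj_forces(pos):
    """Naive O(N^2) Lennard-Jones forces; every pair is visited once."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            r2 = np.dot(r, r)
            inv6 = 1.0 / r2**3
            # Force from U(r) = 4*(r**-12 - r**-6), written in terms of r2.
            f = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2 * r
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=0.001):
    """One velocity-Verlet integration step (unit masses assumed)."""
    f_old = lj_forces(pos)
    pos = pos + vel * dt + 0.5 * f_old * dt**2
    f_new = lj_forces(pos)
    vel = vel + 0.5 * (f_old + f_new) * dt
    return pos, vel

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 5.0, size=(10, 3))  # 10 particles in a 5x5x5 box
velocities = np.zeros((10, 3))
positions, velocities = velocity_verlet(positions, velocities)
```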
Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.
The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as $\mathcal{O}(M^2)$, where $M$ is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
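The partitioning idea can be sketched schematically. In a subtractive (ONIOM-style) formulation, one common variant of QM/MM, the total energy combines a cheap calculation on the whole system with an expensive correction on the small region of interest. The energy functions below are placeholders, not a real QM or MM engine.

```python
# Schematic sketch of a subtractive QM/MM (ONIOM-style) energy expression.
# Both energy functions are stand-ins returning illustrative values only.

def e_mm(atoms):
    """Placeholder molecular-mechanics energy (would be a force field)."""
    return 0.1 * len(atoms)  # illustrative value only

def e_qm(atoms):
    """Placeholder quantum-mechanical energy (would be e.g. a Hartree-Fock call)."""
    return -1.0 * len(atoms)  # illustrative value only

def qmmm_energy(all_atoms, qm_region):
    # Cheap method on the whole system, expensive correction on the QM region.
    return e_mm(all_atoms) + e_qm(qm_region) - e_mm(qm_region)

system = ["O", "H", "H"] + ["solvent"] * 100
print(qmmm_energy(system, qm_region=["O", "H", "H"]))
```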
Finds a single Fock state that minimizes the energy.
The Hartree-Fock problem is NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The method involves solving the Roothaan-Hall equations, which scale as $\mathcal{O}(N^3)$ to $\mathcal{O}(N)$ depending on the implementation, with $N$ being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals.
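The iterative self-consistency loop described above can be sketched as follows. The integrals here are randomly generated placeholders with the right symmetries, so the numbers are not physical; the sketch only shows where the $\mathcal{O}(N^4)$ Fock build and the $\mathcal{O}(N^3)$ diagonalization enter each iteration, assuming an orthonormal basis for simplicity.

```python
# Skeleton of a restricted Hartree-Fock SCF loop over an orthonormal basis.
# Random placeholder integrals stand in for real ones: the point is the
# algorithmic structure and its scaling, not physical results.
import numpy as np

rng = np.random.default_rng(1)
n_basis, n_occ = 8, 2  # illustrative sizes

# Placeholder core Hamiltonian (symmetric) and two-electron integrals
# (pq|rs) with the usual 8-fold permutational symmetry imposed.
h = rng.normal(size=(n_basis, n_basis))
h = 0.5 * (h + h.T)
eri = rng.normal(size=(n_basis,) * 4)
eri = sum(eri.transpose(p) for p in
          [(0, 1, 2, 3), (1, 0, 2, 3), (0, 1, 3, 2), (1, 0, 3, 2),
           (2, 3, 0, 1), (3, 2, 0, 1), (2, 3, 1, 0), (3, 2, 1, 0)]) / 8.0

# Initial guess: occupy the lowest orbitals of the core Hamiltonian.
_, c = np.linalg.eigh(h)
density = c[:, :n_occ] @ c[:, :n_occ].T

for iteration in range(20):  # fixed iteration count for this sketch
    coulomb = np.einsum('pqrs,rs->pq', eri, density)   # O(N^4) Fock build
    exchange = np.einsum('prqs,rs->pq', eri, density)  # O(N^4) Fock build
    fock = h + 2.0 * coulomb - exchange
    energies, c = np.linalg.eigh(fock)                 # O(N^3) diagonalization
    density = c[:, :n_occ] @ c[:, :n_occ].T
    electronic_energy = np.sum(density * (h + fock))
print("final (model) electronic energy:", electronic_energy)
```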
Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.
Traditional implementations of DFT typically scale as $\mathcal{O}(N^3)$, mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
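The cubic cost of dense diagonalization is easy to observe empirically. The snippet below times NumPy's symmetric eigensolver on random symmetric matrices standing in for Kohn-Sham matrices; the sizes are arbitrary, and exact timings will vary with hardware and BLAS threading.

```python
# Quick empirical check of the roughly cubic cost of dense diagonalization,
# the step that dominates traditional Kohn-Sham DFT implementations.
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (200, 400, 800):
    m = rng.normal(size=(n, n))
    m = 0.5 * (m + m.T)  # symmetric stand-in for a Kohn-Sham matrix
    start = time.perf_counter()
    np.linalg.eigh(m)
    print(n, f"{time.perf_counter() - start:.3f} s")
# Doubling n should increase the time by roughly a factor of 8 (2^3).
```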
CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Scales as $\mathcal{O}(M^6)$, where $M$ is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.
With the addition of perturbative triples, the complexity increases to $\mathcal{O}(M^7)$. This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.
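The practical consequence of these exponents can be illustrated with formal operation counts alone, ignoring prefactors and the reduced-scaling techniques discussed below; the basis sizes here are arbitrary.

```python
# Illustrative comparison of formal scaling: relative work for HF (~N^4 from
# two-electron integrals), CCSD (~M^6) and CCSD(T) (~M^7) as the number of
# basis functions doubles. Operation counts only, not real timings.
for m in (50, 100, 200):
    print(f"M={m:4d}  HF~{m**4:.1e}  CCSD~{m**6:.1e}  CCSD(T)~{m**7:.1e}")
# Doubling the basis multiplies the CCSD(T) work by 2**7 = 128, which is why
# conventional implementations are limited to small molecules.
```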
An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behavior across various systems and implementations.
One molecular formula can represent more than one molecular isomer: a set of isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy, plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.
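Geometry optimization on a model potential energy surface can be sketched in a few lines. Here the "geometry" is a single interatomic distance and the surface is the Lennard-Jones pair potential in reduced units, whose minimum is known analytically at r = 2^(1/6); this illustrates the idea of locating a stationary point, not any production optimizer.

```python
# Sketch of geometry optimization on a one-dimensional model potential
# energy surface: minimize the Lennard-Jones pair energy over the distance.
import numpy as np
from scipy.optimize import minimize

def lj_energy(r):
    return 4.0 * (r**-12 - r**-6)  # reduced units

result = minimize(lambda x: lj_energy(x[0]), x0=[1.5])
r_opt = result.x[0]
print(f"optimized distance: {r_opt:.4f}  (analytic: {2**(1/6):.4f})")
print("gradient at the stationary point (should be ~0):", result.jac)
```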
The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is estimated. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.
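The classification of stationary points by Hessian eigenvalues can be demonstrated on a model two-dimensional surface with a known minimum and a known saddle; the finite-difference Hessian below is a standard textbook construction, and the model function is purely illustrative.

```python
# Classifying stationary points of a model 2-D surface by the eigenvalues of
# a finite-difference Hessian: all positive -> local minimum; exactly one
# negative (one imaginary frequency) -> transition structure.
import numpy as np

def energy(p):
    x, y = p
    return (x**2 - 1.0)**2 + y**2  # minima at (+/-1, 0), saddle at (0, 0)

def hessian(f, p, h=1e-4):
    """Second derivatives by central finite differences."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            hess[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4.0 * h * h)
    return hess

for point in [(1.0, 0.0), (0.0, 0.0)]:
    eigenvalues = np.linalg.eigvalsh(hessian(energy, point))
    negatives = int(np.sum(eigenvalues < 0))
    kind = ("local minimum" if negatives == 0 else
            "transition structure" if negatives == 1 else "higher-order saddle")
    print(point, np.round(eigenvalues, 3), "->", kind)
```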
The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei. A notable exception is certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants of the major theme. For very large systems, the relative total energies can be compared using molecular mechanics.
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).
Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.
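A minimal sketch of these ideas in Python: a contracted s-type Gaussian basis function built from three primitives, using the commonly tabulated STO-3G parameters for hydrogen (quoted here from memory, so treat them as illustrative), and a simple LCAO bonding combination of two such functions.

    import numpy as np

    # STO-3G-style contracted Gaussian for a hydrogen 1s orbital: three
    # primitive Gaussians with exponents (alphas) and contraction
    # coefficients (coeffs). Values are the commonly quoted STO-3G set.
    alphas = np.array([3.42525091, 0.62391373, 0.16885540])
    coeffs = np.array([0.15432897, 0.53532814, 0.44463454])

    def basis_1s(r):
        """Contracted s-type Gaussian evaluated at distance r (bohr)."""
        norms = (2.0 * alphas / np.pi) ** 0.75   # primitive normalization
        return np.sum(coeffs * norms * np.exp(-alphas * r ** 2))

    # LCAO ansatz: an (unnormalized) bonding combination of two such
    # orbitals placed a bond length apart, as in H2.
    def mo_bonding(x, bond_length=1.4):
        return basis_1s(abs(x)) + basis_1s(abs(x - bond_length))

    print(f"phi(0) = {basis_1s(0.0):.4f}, "
          f"bonding MO at bond midpoint = {mo_bonding(0.7):.4f}")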
A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.
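In a finite basis, the Hartree–Fock equations reduce to the Roothaan equations, a generalized eigenvalue problem FC = SCε. The sketch below solves one such diagonalization step with SciPy; the Fock and overlap matrices are random symmetric stand-ins, since building real ones requires integral evaluation beyond the scope of this example.

    import numpy as np
    from scipy.linalg import eigh

    # One diagonalization step of the Roothaan equations F C = S C eps.
    # F (Fock) and S (overlap) are mock matrices here; in a real program
    # they come from one- and two-electron integrals over the basis set.
    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    F = (A + A.T) / 2                    # symmetric stand-in Fock matrix
    B = rng.standard_normal((n, n))
    S = B @ B.T + n * np.eye(n)          # symmetric positive-definite overlap

    orbital_energies, C = eigh(F, S)     # generalized eigenvalue problem
    print("orbital energies:", np.round(orbital_energies, 3))
    # In an SCF cycle, C would rebuild the density and hence a new F,
    # and the step repeats until self-consistency is reached.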
Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.
In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.
The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.
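One common ingredient of composite schemes is an additivity ansatz: approximate a high-level, large-basis energy from cheaper pieces. The snippet below shows the arithmetic with hypothetical placeholder energies (in hartree), together with the chemical-accuracy unit conversions.

    # Additivity ansatz often used in composite schemes. All energies are
    # hypothetical placeholders, not results of real calculations.
    E_ccsdt_small = -76.2400   # CCSD(T) in a small basis
    E_mp2_small   = -76.2200   # MP2 in the same small basis
    E_mp2_large   = -76.3100   # MP2 in a large basis

    # Assume the basis-set correction at MP2 level transfers to CCSD(T).
    E_composite = E_ccsdt_small + (E_mp2_large - E_mp2_small)
    print(f"composite estimate: {E_composite:.4f} hartree")

    # Chemical accuracy target: 1 kcal/mol = 4.184 kJ/mol, and since
    # 1 hartree is about 627.5 kcal/mol, that is roughly 0.0016 hartree.
    print(f"1 kcal/mol = 4.184 kJ/mol = {1.0 / 627.5:.4f} hartree")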
Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the exchange functional from density functional theory with the Hartree–Fock exchange term and are termed hybrid functional methods.
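The hybrid idea can be written as a simple mixing formula. The Python sketch below shows a B3LYP-style three-parameter combination; the component energies passed in are placeholders, since a real code would evaluate them from the electron density and the occupied orbitals.

    # Schematic hybrid-functional mixing (B3LYP-like three-parameter form):
    # a fraction a0 of Hartree-Fock exchange replaces part of the local
    # exchange, plus scaled gradient corrections for exchange (ax) and
    # correlation (ac). Component energies below are placeholders.
    def hybrid_xc(E_x_HF, E_x_LDA, dE_x_GGA, E_c_LDA, dE_c_GGA,
                  a0=0.20, ax=0.72, ac=0.81):
        exchange = E_x_LDA + a0 * (E_x_HF - E_x_LDA) + ax * dE_x_GGA
        correlation = E_c_LDA + ac * dE_c_GGA
        return exchange + correlation

    print(hybrid_xc(E_x_HF=-8.90, E_x_LDA=-8.50, dE_x_GGA=-0.30,
                    E_c_LDA=-0.60, dE_c_GGA=-0.05))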
Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules for which the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.
Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics.
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
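A minimal molecular-mechanics-style energy function in Python, keeping only harmonic bond-stretch terms; the force constant and equilibrium length are illustrative placeholders rather than values from any published force field.

    import numpy as np

    # Minimal molecular-mechanics energy: harmonic bond stretches only.
    # Real force fields add angle, torsion and non-bonded terms.
    def bond_energy(positions, bonds):
        """positions: (N, 3) array; bonds: list of (i, j, k_bond, r0)."""
        E = 0.0
        for i, j, k, r0 in bonds:
            r = np.linalg.norm(positions[i] - positions[j])
            E += 0.5 * k * (r - r0) ** 2      # harmonic-oscillator form
        return E

    xyz = np.array([[0.0, 0.0, 0.0], [1.05, 0.0, 0.0]])
    # One bond stretched 0.05 beyond its equilibrium length of 1.00.
    print(bond_energy(xyz, bonds=[(0, 1, 500.0, 1.00)]))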
The database of compounds used for parameterization, together with the resulting set of parameters and functions (called the force field), is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, can be expected to be relevant only when describing other molecules of the same class.
These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
Computational chemical methods can be applied to solid-state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a single molecule, it is even more time-consuming to calculate energies for the entire list of points in the Brillouin zone.
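The simplest band-structure example is a one-dimensional tight-binding chain, where the band energy has the closed form E(k) = eps - 2t cos(ka) and is sampled on a list of k-points; the on-site energy eps, hopping t and lattice constant a below are arbitrary illustrative values.

    import numpy as np

    # One-dimensional tight-binding band sampled over the Brillouin zone.
    # eps (on-site energy, eV), t (hopping, eV) and a (lattice constant)
    # are illustrative choices.
    eps, t, a = 0.0, 1.0, 1.0
    k_points = np.linspace(-np.pi / a, np.pi / a, 9)  # a coarse k-point list
    for k in k_points:
        print(f"k = {k:+.3f}  E(k) = {eps - 2 * t * np.cos(k * a):+.3f} eV")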
Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.
The most popular methods for propagating the wave packet associated with the molecular geometry include the split operator technique, which is explained below.
How a computational method solves quantum equations affects both its accuracy and its efficiency. The split operator technique is one such method for solving differential equations. In computational chemistry, it reduces the computational cost of simulating chemical systems, that is, the time computers need to solve them, which can run to days for more complex systems. Split operator methods speed up these calculations by solving the subproblems of a quantum differential equation separately: the differential equation is separated into two simpler equations, or more when there are more than two operators. Once solved, the split equations are combined back into one equation to give an easily calculable solution.
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider the exact solution operator for a differential equation,

e^{h(A+B)}

The operator can be split, but the result will not be exact, only approximate:

e^{h(A+B)} ≈ e^{hA} e^{hB}

This is an example of first-order splitting.
There are ways to reduce this error, which include taking an average of two split equations.
Another way to increase accuracy is to use higher-order splitting. In practice, second-order splitting is usually the highest order used, because higher-order schemes require many more operator evaluations per step and are difficult to implement, so the extra accuracy is generally not worth the added cost.
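A compact numerical illustration: the Python sketch below propagates a one-dimensional Gaussian wave packet in a harmonic potential with a second-order (Strang) split-operator step, applying the kinetic factor in momentum space via FFT. Atomic units are used, and the grid, time step, and potential are arbitrary demonstration choices.

    import numpy as np

    # Strang (second-order) split-operator step for the 1D Schroedinger
    # equation: exp(-iH dt) ~ exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2),
    # with the kinetic factor applied in momentum space via FFT.
    n, L, dt, mass = 512, 20.0, 0.01, 1.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    V = 0.5 * x ** 2                                    # harmonic potential
    psi = np.exp(-(x - 1.0) ** 2) * (2 / np.pi) ** 0.25  # displaced Gaussian

    half_V = np.exp(-0.5j * V * dt)
    full_T = np.exp(-0.5j * k ** 2 * dt / mass)
    for _ in range(1000):                               # propagate to t = 10
        psi = half_V * psi
        psi = np.fft.ifft(full_T * np.fft.fft(psi))
        psi = half_V * psi

    # Each split factor is unitary, so the norm should stay near one.
    norm = np.sum(np.abs(psi) ** 2) * (L / n)
    print(f"norm after propagation: {norm:.6f}")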
Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Finding the middle ground between accuracy and affordable computation is a major challenge for chemists trying to simulate molecules or chemical environments.
Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of the particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at one point in time, determines the next phase point by integrating Newton's laws of motion.
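A minimal MD sketch along these lines: velocity Verlet integration of Newton's equations for a one-dimensional harmonic oscillator, with arbitrary illustrative parameters; conservation of the total energy is a standard check of the integrator.

    import numpy as np

    # Velocity Verlet integration of Newton's equations for a 1D harmonic
    # oscillator (spring constant k_spring, mass m). Parameters arbitrary.
    k_spring, m, dt = 1.0, 1.0, 0.01
    x, v = 1.0, 0.0                      # initial position and velocity

    def force(x):
        return -k_spring * x

    f = force(x)
    for step in range(1000):
        x += v * dt + 0.5 * (f / m) * dt ** 2   # update position
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt         # update velocity
        f = f_new

    # Total energy should stay near the initial value of 0.5.
    energy = 0.5 * m * v ** 2 + 0.5 * k_spring * x ** 2
    print(f"energy after 1000 steps: {energy:.6f}  (initial: 0.5)")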
Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method that makes use of so-called importance sampling. Importance sampling preferentially generates low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
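A minimal Metropolis Monte Carlo sketch for a single particle in a one-dimensional harmonic potential: trial moves are accepted with probability min(1, exp(-dE/kT)), which implements the importance sampling described above; all parameters are illustrative.

    import numpy as np

    # Metropolis Monte Carlo for one particle in a 1D harmonic potential.
    rng = np.random.default_rng(1)
    kT, step_size, x = 1.0, 0.5, 0.0

    def potential(x):
        return 0.5 * x ** 2

    samples = []
    for _ in range(100_000):
        x_trial = x + rng.uniform(-step_size, step_size)
        dE = potential(x_trial) - potential(x)
        # Accept downhill moves always, uphill moves with Boltzmann weight.
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            x = x_trial
        samples.append(x)

    # Equipartition gives <V> = kT/2 = 0.5 for this potential.
    mean_V = np.mean([potential(s) for s in samples])
    print(f"<V> = {mean_V:.3f}  (expected: 0.5)")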
QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.
Quantum computational chemistry integrates quantum mechanics and computational methods to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum mechanics to represent and process information, such as Hamiltonian operators.
Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.
Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these quantum techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments represent significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.
Computational chemistry is not an exact description of real-life chemistry, as our mathematical models of the physical laws of nature can only provide us with an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules containing heavy atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).
There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).
Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in: | [
{
"paragraph_id": 0,
"text": "Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in many-body problem exacerbates the challenge of providing detailed descriptions in quantum mechanical systems. While computational results normally complement the information obtained by chemical experiments, it can in some cases predict unobserved chemical phenomena.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Computational chemistry is different from theoretical chemistry. The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions.Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "Historically, computational chemistry has had two different aspects:",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with the 1951 paper in Reviews of Modern Physics by Clemens C. J. Roothaan in 1951, largely on the \"LCAO MO\" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYAYTOM, began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2 force field, were developed, primarily by Norman Allinger.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state \"It seems, therefore, that 'computational chemistry' can finally be more and more of a reality.\" During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, \"for his development of the density-functional theory\", and John Pople, \"for his development of computational methods in quantum chemistry\", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for \"the development of multiscale models for complex chemical systems\".",
"title": "History"
},
{
"paragraph_id": 10,
"text": "There are several fields within computational chemistry.",
"title": "Applications"
},
{
"paragraph_id": 11,
"text": "These fields can give rise to several applications as shown below.",
"title": "Applications"
},
{
"paragraph_id": 12,
"text": "Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.",
"title": "Applications"
},
{
"paragraph_id": 13,
"text": "Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.",
"title": "Applications"
},
{
"paragraph_id": 14,
"text": "Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally like pKa's of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.",
"title": "Applications"
},
{
"paragraph_id": 15,
"text": "Aside from drug synthesis, drug carriers are also researched by computational chemists for nanomaterials. It allows researchers to simulate environments to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material find the best way to structure these nanomaterials before making them.",
"title": "Applications"
},
{
"paragraph_id": 16,
"text": "Databases are useful for both computational and non computational chemists in research and verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers with their methods and basis sets to have greater confidence in the researchers results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.",
"title": "Applications"
},
{
"paragraph_id": 17,
"text": "Databases can also use purely calculated data. Purely calculated data uses calculated values over experimental values for databases. Purely calculated data avoids dealing with these adjusting for different experimental conditions like zero-point energy. These calculations can also avoid experimental errors for difficult to test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than experimental.",
"title": "Applications"
},
{
"paragraph_id": 18,
"text": "Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.",
"title": "Applications"
},
{
"paragraph_id": 19,
"text": "The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems.This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 20,
"text": "In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 21,
"text": "Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 22,
"text": "The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It's important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 23,
"text": "Solves Newton's equations of motion for atoms and molecules.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 24,
"text": "The standard pairwise interaction calculation in MD leads to an O ( N 2 ) {\\displaystyle {\\mathcal {O}}(N^{2})} complexity for N {\\displaystyle N} particles. This is because each particle interacts with every other particle, resulting in N ( N − 1 ) 2 {\\displaystyle {\\frac {N(N-1)}{2}}} interactions. Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to O ( N log N ) {\\displaystyle {\\mathcal {O}}(N\\log N)} or even O ( N ) {\\displaystyle {\\mathcal {O}}(N)} by grouping distant particles and treating them as a single entity or using clever mathematical approximations.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 25,
"text": "Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 26,
"text": "The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as O ( M 2 ) {\\displaystyle {\\mathcal {O}}(M^{2})} , where M {\\displaystyle M} is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 27,
"text": "Finds a single Fock state that minimizes the energy.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 28,
"text": "NP-hard or NP-complete as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The Hartree-Fock method involves solving the Roothaan-Hall equations, which scales as O ( N 3 ) {\\displaystyle {\\mathcal {O}}(N^{3})} to O ( N ) {\\displaystyle {\\mathcal {O}}(N)} depending on implementation, with N {\\displaystyle N} being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals. This proof of NP-hardness or NP-completeness comes from embedding problems like the Ising model into the Hartree-Fock formalism.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 29,
"text": "Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 30,
"text": "Traditional implementations of DFT typically scale as O ( N 3 ) {\\displaystyle {\\mathcal {O}}(N^{3})} , mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 31,
"text": "CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 32,
"text": "Scales as O ( M 6 ) {\\displaystyle {\\mathcal {O}}(M^{6})} where M {\\displaystyle M} is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 33,
"text": "With the addition of perturbative triples, the complexity increases to O ( M 7 ) {\\displaystyle {\\mathcal {O}}(M^{7})} . This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 34,
"text": "An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 35,
"text": "Achieves linear scaling with the system size, a major improvement over the traditional fifth-power scaling of CCSD. This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 36,
"text": "Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 37,
"text": "For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations.",
"title": "Computational costs in chemistry algorithms"
},
{
"paragraph_id": 38,
"text": "One molecular formula can represent more than one molecular isomer: a set of isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy, plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.",
"title": "Methods"
},
{
"paragraph_id": 39,
"text": "The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is estimated. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.",
"title": "Methods"
},
{
"paragraph_id": 40,
"text": "The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei. A notable exception is certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants of the major theme. For very large systems, the relative total energies can be compared using molecular mechanics.",
"title": "Methods"
},
{
"paragraph_id": 41,
"text": "The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).",
"title": "Methods"
},
{
"paragraph_id": 42,
"text": "Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.",
"title": "Methods"
},
{
"paragraph_id": 43,
"text": "A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory,where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.",
"title": "Methods"
},
{
"paragraph_id": 44,
"text": "Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.",
"title": "Methods"
},
{
"paragraph_id": 45,
"text": "In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.",
"title": "Methods"
},
{
"paragraph_id": 46,
"text": "The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.",
"title": "Methods"
},
{
"paragraph_id": 47,
"text": "A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.",
"title": "Methods"
},
{
"paragraph_id": 48,
"text": "Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.",
"title": "Methods"
},
{
"paragraph_id": 49,
"text": "Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 60s to the 90s, especially for treating large molecules where the full Hartree–Fock method without the approximations were too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.",
"title": "Methods"
},
{
"paragraph_id": 50,
"text": "Primitive semi-empirical methods were designed even before, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as \"completely empirical\" because they do not derive from a Hamiltonian. Yet, the term \"empirical methods\", or \"empirical force fields\" is usually used to describe Molecular Mechanics.",
"title": "Methods"
},
{
"paragraph_id": 51,
"text": "In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.",
"title": "Methods"
},
{
"paragraph_id": 52,
"text": "The database of compounds used for parameterization, i.e., the resulting set of parameters and functions is called the force field, is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance, proteins, would be expected to only have any relevance when describing other molecules of the same class.",
"title": "Methods"
},
{
"paragraph_id": 53,
"text": "These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.",
"title": "Methods"
},
{
"paragraph_id": 54,
"text": "Computational chemical methods can be applied to solid-state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate them for the entire list of points in the Brillouin zone.",
"title": "Methods"
},
{
"paragraph_id": 55,
"text": "Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator (physics) associated to the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.",
"title": "Methods"
},
{
"paragraph_id": 56,
"text": "The most popular methods for propagating the wave packet associated to the molecular geometry are:",
"title": "Methods"
},
{
"paragraph_id": 57,
"text": "To better understand the split operator technique, an explanation is provided below.",
"title": "Methods"
},
{
"paragraph_id": 58,
"text": "How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one of these methods for solving differential equations. In computational chemistry, split operator technique reduces computational costs of simulating chemical systems. Computational costs are about how much time it takes for computers to calculate these chemical systems, as it can take days for more complex systems. Quantum systems are difficult and time-consuming to solve for humans. Split operator methods help computers calculate these systems quickly by solving the sub problems in a quantum differential equation. The method does this by separating the differential equation into 2 different equations, like when there are more than two operators. Once solved, the split equations are combined into one equation again to give an easily calculable solution.",
"title": "Methods"
},
{
"paragraph_id": 59,
"text": "This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, with the following solution for a differential equation.",
"title": "Methods"
},
{
"paragraph_id": 60,
"text": "e h ( A + B ) {\\textstyle e^{h(A+B)}}",
"title": "Methods"
},
{
"paragraph_id": 61,
"text": "The equation can be split, but the solutions will not be exact, only similar. This is an example of first order splitting.",
"title": "Methods"
},
{
"paragraph_id": 62,
"text": "e h ( A + B ) ≈ e h A e h B {\\textstyle e^{h(A+B)}\\approx e^{hA}e^{hB}}",
"title": "Methods"
},
{
"paragraph_id": 63,
"text": "There are ways to reduce this error, which include taking an average of two split equations.",
"title": "Methods"
},
{
"paragraph_id": 64,
"text": "Another way to increase accuracy is to use higher order splitting. Usually, second order splitting is the most that is done because higher order splitting requires much more time to calculate and is not worth the cost. Higher order methods become too difficult to implement, and are not useful for solving differential equations despite the higher accuracy.",
"title": "Methods"
},
{
"paragraph_id": 65,
"text": "Computational chemists spend much time trying to find ways to make systems calculated with split operator technique more accurate while minimizing the computational cost. Finding that middle ground of accurate and plausible to calculate is a massive challenge for many chemists trying to simulate molecules or chemical environments.",
"title": "Methods"
},
{
"paragraph_id": 66,
"text": "Molecular dynamics (MD) use either quantum mechanics, molecular mechanics or a mixture of both to calculate forces which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the position and velocity of particles varies with time. The phase point of a system described by the positions and momenta of all its particles on a previous time point will determine the next phase point in time by integrating over Newton's laws of motion.",
"title": "Methods"
},
{
"paragraph_id": 67,
"text": "Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method, which makes use of the so-called importance sampling. Importance sampling methods are able to generate low energy states, as this enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.",
"title": "Methods"
},
{
"paragraph_id": 68,
"text": "QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.",
"title": "Methods"
},
{
"paragraph_id": 69,
"text": "Quantum computational chemistry integrates quantum mechanics and computational methods to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum mechanics to represent and process information, such as Hamiltonian operators.",
"title": "Methods"
},
{
"paragraph_id": 70,
"text": "Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.",
"title": "Methods"
},
{
"paragraph_id": 71,
"text": "Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.",
"title": "Methods"
},
{
"paragraph_id": 72,
"text": "While these quantum techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments represent significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.",
"title": "Methods"
},
{
"paragraph_id": 73,
"text": "Computational chemistry is not an exact description of real-life chemistry, as our mathematical models of the physical laws of nature can only provide us with an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.",
"title": "Accuracy"
},
{
"paragraph_id": 74,
"text": "Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.",
"title": "Accuracy"
},
{
"paragraph_id": 75,
"text": "Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of full relativistic-inclusive methods. This complicates the study of molecules interacting with high atomic mass unit atoms, such as transitional metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).",
"title": "Accuracy"
},
{
"paragraph_id": 76,
"text": "There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM).In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).",
"title": "Accuracy"
},
{
"paragraph_id": 77,
"text": "Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:",
"title": "Software packages"
}
]
| Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion, achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in many-body problem exacerbates the challenge of providing detailed descriptions in quantum mechanical systems. While computational results normally complement the information obtained by chemical experiments, it can in some cases predict unobserved chemical phenomena. | 2001-11-26T18:01:20Z | 2023-12-25T05:02:17Z | [
"Template:Cite journal",
"Template:Copy edit",
"Template:Main",
"Template:See also",
"Template:Commons category",
"Template:Webarchive",
"Template:Reflist",
"Template:Citation",
"Template:Cite press release",
"Template:Computer science",
"Template:Short description",
"Template:Main article",
"Template:Cite web",
"Template:BranchesofChemistry",
"Template:Portal",
"Template:Columns-list",
"Template:Cite book",
"Template:Authority control",
"Template:Computational science"
]
| https://en.wikipedia.org/wiki/Computational_chemistry |
6,020 | Crash (Ballard novel) | Crash is a novel by English author J. G. Ballard, first published in 1973 with a cover designed by Bill Botten. It follows a group of car-crash fetishists who become sexually aroused by staging and participating in car accidents, inspired by the famous crashes of celebrities.
The novel was released to divided critical reception, with many reviewers horrified by its provocative content. It was adapted into a controversial 1996 film of the same name by David Cronenberg.
The story is told through the eyes of narrator James Ballard, named after the author himself, but it centers on the sinister figure of Dr. Robert Vaughan, a former TV scientist turned "nightmare angel of the highways". James meets Vaughan after being injured in a car crash near London Airport. Gathering around Vaughan is a group of alienated people, all of them former crash victims, who follow him in his pursuit to re-enact the crashes of Hollywood celebrities such as Jayne Mansfield and James Dean, in order to experience what the narrator calls "a new sexuality, born from a perverse technology". Vaughan's ultimate fantasy is to die in a head-on collision with movie star Elizabeth Taylor.
The novel received divided reviews when originally published. One publisher's reader returned the verdict "This author is beyond psychiatric help. Do Not Publish!" A 1973 review in The New York Times was equally horrified: "Crash is, hands-down, the most repulsive book I've yet to come across."
However, retrospective opinion now considers Crash to be one of Ballard's best and most challenging works. Reassessing Crash in The Guardian, Zadie Smith wrote, "Crash is an existential book about how everybody uses everything. How everything uses everybody. And yet it is not a hopeless vision." On Ballard's legacy, she writes: "In Ballard's work there is always this mix of futuristic dread and excitement, a sweet spot where dystopia and utopia converge. For we cannot say we haven't got precisely what we dreamed of, what we always wanted, so badly."
The Papers of J.G. Ballard at the British Library include two revised drafts of Crash (Add MS 88938/3/8). Scanned extracts from Ballard's drafts are included in Crash: The Collector's Edition, ed. Chris Beckett.
Throughout Crash I have used the car not only as a sexual image, but as a total metaphor for man's life in today's society. As such the novel has a political role quite apart from its sexual content, but I would still like to think that Crash is the first pornographic novel based on technology. In a sense, pornography is the most political form of fiction, dealing with how we use and exploit each other in the most urgent and ruthless way. Needless to say, the ultimate role of Crash is cautionary, a warning against that brutal, erotic and overlit realm that beckons more and more persuasively to us from the margins of the technological landscape.
Crash has been difficult to characterize as a novel. At some points in his career, Ballard claimed that Crash was a "cautionary tale", a view that he would later regret, asserting that it is in fact "a psychopathic hymn. But it is a psychopathic hymn which has a point". Likewise, Ballard previously characterized it as a science fiction novel, a position he would later retract.
Jean Baudrillard wrote an analysis of Crash in Simulacra and Simulation in which he declared it "the first great novel of the universe of simulation". He noted how the fetish in the story conflates the functionality of the automobiles with that of the human body, and how the characters' injuries and the damage to the vehicles are treated as equivalent signs. To him, this hyperfunctionality leads to the dysfunction in the story. He quoted the novel extensively to illustrate that its language uses plain, mechanical terms for the parts of the automobile and proper, medical language for human sex organs and acts. He interpreted the story as showing a merger of technology, sexuality, and death, and supported this reading by pointing out that Vaughan takes and keeps photographs of the car crashes and the mutilated bodies involved. Baudrillard held that the novel passes no moral judgment on its events, although Ballard himself intended it as a warning against a cultural trend.
The story can be classed as dystopian fiction.
The Normal's 1978 song "Warm Leatherette" was inspired by the novel as was "Miss the Girl," a 1983 single by The Creatures. The Manic Street Preachers' song "Mausoleum" from 1994's The Holy Bible contains the famous Ballard quote about his reasons for writing the book, "I wanted to rub the human face in its own vomit. I wanted to force it to look in the mirror." John Foxx's album Metamatic contains songs that have Ballardian themes, such as "No-one Driving".
An apparently unauthorized adaptation of Crash called Nightmare Angel was filmed in 1986 by Susan Emerling and Zoe Beloff. This short film bears the credit "Inspired by J.G. Ballard". | https://en.wikipedia.org/wiki/Crash_(Ballard_novel)
6,021 | C (programming language) | C (pronounced /ˈsiː/ – like the letter c) is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems, device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO).
C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
Since 2000, C has consistently ranked among the top two languages in the TIOBE index, a measure of the popularity of programming languages.
C is an imperative, procedural language in the ALGOL tradition. It has a static type system. In C, all executable code is contained within subroutines (also called "functions", though not in the sense of functional programming). Function parameters are passed by value, although arrays are passed as pointers, i.e. the address of the first item in the array. Pass-by-reference is simulated in C by explicitly passing pointers to the thing being referenced.
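For example, a function that swaps the values of two variables in the caller must receive their addresses; a minimal sketch of the idiom:

    #include <stdio.h>

    /* Pass-by-reference simulated with pointers: the function receives
       the addresses of x and y and modifies the caller's variables. */
    void swap(int *a, int *b)
    {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }

    int main(void)
    {
        int x = 1, y = 2;
        swap(&x, &y);             /* pass pointers to x and y */
        printf("%d %d\n", x, y);  /* prints "2 1" */
        return 0;
    }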
C program source text is free-form code. Semicolons terminate statements, while curly braces are used to group statements into blocks.
The C language also exhibits the following characteristics:
While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog (hardware description languages). These languages have drawn many of their control structures and other basic features from C. Most of them (Python being a dramatic exception) also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.
Thompson wanted a programming language for developing utilities for the new platform. At first, he tried to write a Fortran compiler, but soon gave up the idea. Instead, he created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time and Thompson modified the syntax to be less wordy, and similar to a simplified ALGOL known as SMALGOL. Thompson called the result B. He described B as "BCPL semantics with a lot of SMALGOL syntax". Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines. However, few utilities were ultimately written in B because it was too slow, and could not take advantage of PDP-11 features such as byte addressability.
In 1971, Ritchie started to improve B, to utilise the features of the more-powerful PDP-11. A significant addition was a character data type. He called this New B (NB). Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development. Through to 1972, richer types were added to the NB language: NB had arrays of int and char. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.
The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix.
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C. By this time, the C language had acquired some powerful features such as struct types.
The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only included files and simple string replacements: #include and #define of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.
In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. Known as K&R from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is also referred to as C78. The second edition of the book covers the later ANSI C standard, described below.
K&R introduced several language features:
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
In early versions of C, only functions that returned a type other than int needed to be declared if used before the function definition; functions used without prior declaration were presumed to return type int.
For example:
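    /* Illustrative K&R-style code: only some_function, which returns
       long rather than int, must be declared before use. */
    long some_function();
    /* int */ other_function();

    /* int */ calling_function()
    {
        long test1;
        register /* int */ test2;

        test1 = some_function();
        if (test1 > 1)
            test2 = 0;
        else
            test2 = other_function();
        return test2;
    }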
The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.
Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if multiple calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC) and some other vendors. These included:
The large number of extensions and lack of agreement on a standard library, together with the language's popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
In 1983, the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
In 1990, the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
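A sketch of the technique, for a hypothetical function func:

    #if __STDC__
    extern int func(long data);   /* Standard C: full prototype */
    #else
    extern int func();            /* K&R C: no parameter information */
    #endif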
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.
In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (e.g. \u0040 or \U0001f431) and suggests support for raw Unicode names.
In 2007, work began on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on 2011-12-08. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.
Published in June 2018 as ISO/IEC 9899:2018, C17 is the current standard for the C programming language. It introduces no new language features, only technical corrections and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L.
C23 is the informal name for the next major revision of the C language standard after C17. It is expected to be published in 2024.
Historically, embedded C programming has required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
In 2008, the C Standards Committee published a technical report extending the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
C has a formal grammar specified by the C standard. Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if ... [else] conditional execution and by do ... while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used within the loop: break leaves the innermost enclosing loop statement, and continue skips to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression. Unlike in many other languages, control flow will fall through to the next case unless terminated by a break.
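For illustration, a fragment exercising several of these statements (the values are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++) {  /* initialization; test; reinitialization */
            if (i == 2)
                continue;              /* skip to the next iteration */
            if (i == 4)
                break;                 /* leave the innermost loop */
            printf("i = %d\n", i);     /* prints 0, 1 and 3 */
        }

        int grade = 1;
        switch (grade) {
        case 1:                        /* no break: falls through to case 2 */
        case 2:
            puts("low");
            break;                     /* break prevents falling into default */
        default:
            puts("high");
        }
        return 0;
    }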
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better." The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
The basic C source character set includes the following characters:
Newline indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as one.
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. The latest C standard (C11) allows multi-national Unicode characters to be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where each X denotes a hexadecimal digit), although this feature is not yet widely implemented.
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
C89 has 32 reserved words, also known as keywords, which are the words that cannot be used for any purposes other than those for which they are predefined: auto, break, case, char, const, continue, default, do, double, else, enum, extern, float, for, goto, if, int, long, register, return, short, signed, sizeof, static, struct, switch, typedef, union, unsigned, void, volatile, while.
C99 reserved five more words: _Bool, _Complex, _Imaginary, inline, restrict.
C11 reserved seven more words: _Alignas, _Alignof, _Atomic, _Generic, _Noreturn, _Static_assert, _Thread_local.
C23 will reserve 14 more words:
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed. The language previously included a reserved word called entry, but this was seldom implemented, and has now been removed as a reserved word.
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between these two operators (assignment and equality) may result in the accidental use of one in place of the other, and in many cases, the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true if a is not zero after the assignment.
The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.
The "hello, world" example, which appeared in the first edition of K&R, has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output, which is usually a terminal or screen display.
The original version was:
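    main()
    {
        printf("hello, world\n");
    }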
A standard-conforming "hello, world" program is:
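    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
    }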
The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line with the entire text of the stdio.h standard header, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that stdio.h can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name, as opposed to double quotes which typically include local or project-specific header files.
The next line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value that is returned to the invoker (in this case the run-time environment) as a result of evaluating the main function, is an integer. The keyword void as a parameter list indicates that this function takes no arguments.
The opening curly brace indicates the beginning of the definition of the main function.
The next line calls (diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (provided with) a single argument, the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array with elements of type char, set up automatically by the compiler with a final null character (ASCII value 0) to mark the end of the array (so that printf can find the end of the string). The null character can also be written as the escape sequence \0. The \n is an escape sequence that C translates to a newline character, which on output signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to determine whether or not the printf function succeeded.) The semicolon ; terminates the statement.
The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function, unlike any other function, will implicitly return a value of 0 upon reaching the } that terminates the function. (Formerly an explicit return 0; statement was required.) This is interpreted by the run-time system as an exit code indicating successful execution.
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a Boolean datatype. There are also derived types including arrays, pointers, records (struct), and unions (union).
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)
C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.
Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of malloc is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of struct objects. Pointers to functions (function pointers) are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers.
A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, as the NULL macro defined by several standard headers or, since C23, with the constant nullptr. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.
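For example, a null next pointer conventionally marks the final node of a singly linked list; a minimal sketch:

    #include <stdio.h>

    struct node {
        int value;
        struct node *next;            /* NULL marks the end of the list */
    };

    int main(void)
    {
        struct node c = { 3, NULL };  /* final node */
        struct node b = { 2, &c };
        struct node a = { 1, &b };

        /* In the test "p", a null pointer evaluates to false, ending the walk. */
        for (struct node *p = &a; p; p = p->next)
            printf("%d\n", p->value);
        return 0;
    }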
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array.
Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option. Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions.
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue.
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):
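    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int rows = 4, cols = 3;       /* illustrative sizes */

        /* One contiguous heap block, typed as a pointer to a row of
           cols ints, so ordinary array[i][j] indexing applies. */
        int (*array)[cols] = malloc(rows * sizeof *array);
        if (array == NULL)
            return EXIT_FAILURE;

        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                array[i][j] = i * cols + j;

        printf("%d\n", array[3][2]);  /* prints 11 */

        free(array);
        return 0;
    }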
And here is a similar implementation using C99's automatic VLA feature:
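    #include <stdio.h>

    int main(void)
    {
        int rows = 4, cols = 3;

        int array[rows][cols];        /* automatic VLA: sized at run time,
                                         allocated on the stack */

        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                array[i][j] = i * cols + j;

        printf("%d\n", array[3][2]);  /* prints 11 */
        return 0;
    }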
The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i). Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the (i + 1)th element of the array.
Furthermore, in most expression contexts (a notable exception is as operand of sizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
The total size of an array x can be determined by applying sizeof to an expression of array type. The size of an element can be determined by applying the operator sizeof to any dereferenced element of an array A, as in n = sizeof A[0]. Thus, the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. Note that if only a pointer to the first element is available, as is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length is lost.
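For example:

    #include <stdio.h>

    int main(void)
    {
        int A[] = { 2, 3, 5, 7, 11 };

        size_t n = sizeof A / sizeof A[0];  /* total size / element size = 5 */
        printf("%zu elements\n", n);

        int *p = A;  /* after the array-to-pointer conversion, sizeof p is the
                        size of a pointer, and the element count is unrecoverable */
        (void)p;
        return 0;
    }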
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects: static allocation, in which space for the object is reserved in the binary at compile time and persists for the entire run of the program; automatic allocation, in which temporary objects are created on the stack when a block is entered and released when it is exited; and dynamic allocation, in which blocks of arbitrary size can be requested at run time using library functions such as malloc and released with free.
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
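A sketch contrasting the three approaches (the names and values are illustrative):

    #include <stdlib.h>

    static int calls;                 /* static allocation: one object for the
                                         whole program run, initially zero */

    int demo(int n)
    {
        int local = n;                /* automatic allocation: created on entry
                                         to the block, released on exit */

        int *buf = malloc(n * sizeof *buf);   /* dynamic allocation */
        if (buf == NULL)
            return -1;                /* malloc reports failure with a null pointer */

        buf[0] = local + calls++;
        int result = buf[0];
        free(buf);                    /* dynamic storage must be released explicitly */
        return result;
    }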
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary. Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on malloc for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection.
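For example, overwriting the only pointer to an allocated block leaks it; a minimal sketch:

    #include <stdlib.h>

    void leak(void)
    {
        int *p = malloc(100 * sizeof *p);
        if (p == NULL)
            return;
        p = NULL;   /* the only pointer to the block is overwritten; the
                       allocation can never be freed and is leaked */
    }               /* correct code would call free(p) before reassigning it */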
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. In order for a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library").
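For example, a program using sqrt from the math library includes the corresponding header and is linked against the library; on many Unix-like systems the build command is something like cc prog.c -lm:

    #include <math.h>    /* header file: declares sqrt, among other functions */
    #include <stdio.h>

    int main(void)
    {
        printf("%f\n", sqrt(2.0));   /* defined in the math library */
        return 0;
    }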
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system, such as most embedded programming). With few exceptions, implementations include low-level I/O.
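A minimal sketch of stream-based output to a file (the file name is illustrative):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "w");  /* associate a stream with a file */
        if (f == NULL) {
            perror("fopen");                  /* report why the open failed */
            return 1;
        }
        fprintf(f, "a line of text\n");       /* output passes through a buffer */
        fclose(f);                            /* flushes the buffer and ends the
                                                 association */
        return 0;
    }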
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. The tool lint was the first such tool, leading to many others.
Automated source code checking and auditing are beneficial in any language, and for C many such tools exist, such as lint. A common practice is to use lint to detect questionable code when a program is first written. Once a program passes lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
Tools such as Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.
C is widely used for systems programming in implementing operating systems and embedded system applications. This is for several reasons:
Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a "gateway" for information between the web application, the server, and the browser. C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability. It is no longer common practice for web development to be done in C, and many other web development tools exist.
A consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. For example, the reference implementations of Python, Perl, Ruby, and PHP are written in C.
C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling library functions in C, for example, the Python-based framework NumPy uses C for the high-performance and hardware-interacting aspects.
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C.
C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.
C has been jokingly described as offering "the power of assembly language and the convenience of ... assembly language".
While C has been popular, influential and hugely successful, it has drawbacks, including:
For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for bugs. Databases such as CWE attempt to catalogue the ways in which C and other languages are vulnerable, along with recommendations for mitigation.
There are tools that can mitigate some of these drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.
Some of these drawbacks have prompted the construction of other languages.
C has both directly and indirectly influenced many later languages such as C++ and Java. The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models or large-scale program structures that differ from those of C, sometimes radically.
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.
The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax. C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
Objective-C was originally a very "thin" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C. | [
{
"paragraph_id": 0,
"text": "C (pronounced /ˈsiː/ – like the letter c) is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems, device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO).",
"title": ""
},
{
"paragraph_id": 2,
"text": "C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since 2000, C has consistently ranked among the top two languages in the TIOBE index, a measure of the popularity of programming languages.",
"title": ""
},
{
"paragraph_id": 4,
"text": "C is an imperative, procedural language in the ALGOL tradition. It has a static type system. In C, all executable code is contained within subroutines (also called \"functions\", though not in the sense of functional programming). Function parameters are passed by value, although arrays are passed as pointers, i.e. the address of the first item in the array. Pass-by-reference is simulated in C by explicitly passing pointers to the thing being referenced.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "C program source text is free-form code. Semicolons terminate statements, while curly braces are used to group statements into blocks.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "The C language also exhibits the following characteristics:",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog (hardware description languages). These languages have drawn many of their control structures and other basic features from C. Most of them (Python being a dramatic exception) also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.",
"title": "Overview"
},
{
"paragraph_id": 9,
"text": "The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Thompson wanted a programming language for developing utilities for the new platform. At first, he tried to write a Fortran compiler, but soon gave up the idea. Instead, he created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time and Thompson modified the syntax to be less wordy, and similar to a simplified ALGOL known as SMALGOL. Thompson called the result B. He described B as \"BCPL semantics with a lot of SMALGOL syntax\". Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines. However, few utilities were ultimately written in B because it was too slow, and could not take advantage of PDP-11 features such as byte addressability.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1971, Ritchie started to improve B, to utilise the features of the more-powerful PDP-11. A significant addition was a character data type. He called this New B (NB). Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development. Through to 1972, richer types were added to the NB language: NB had arrays of int and char. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C. By this time, the C language had acquired some powerful features such as struct types.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only included files and simple string replacements: #include and #define of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. Known as K&R from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as \"K&R C\". As this was released in 1978, it is also referred to as C78. The second edition of the book covers the later ANSI C standard, described below.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "K&R introduced several language features:",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the \"lowest common denominator\" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In early versions of C, only functions that return types other than int must be declared if used before the function definition; functions used without prior declaration were presumed to return type int.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "For example:",
"title": "History"
},
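The example itself did not survive extraction. The following reconstruction is in the spirit of the article's listing and shows the commented-out int specifiers that the next paragraph refers to (it is valid K&R/C89 code; C99 and later reject the implicit int):

```c
long some_function();
/* int */ other_function();

/* int */ calling_function()
{
    long test1;
    register /* int */ test2;

    test1 = some_function();
    if (test1 > 1)
        test2 = 0;
    else
        test2 = other_function();
    return test2;
}
```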
{
"paragraph_id": 21,
"text": "The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if multiple calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC) and some other vendors. These included:",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The large number of extensions and lack of agreement on a standard library, together with the language popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In 1983, the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 \"Programming Language C\". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In 1990, the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms \"C89\" and \"C90\" refer to the same programming language.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.",
"title": "History"
},
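As an illustration (a sketch, not from the original; the function names are invented), here is the retained K&R parameter syntax next to the prototype style borrowed from C++. K&R-style definitions remained valid through C17:

```c
/* K&R-style definition: parameter types are declared between the
   parameter list and the function body; calls are not type-checked. */
int add_kr(a, b)
int a, b;
{
    return a + b;
}

/* Prototype style adopted by ANSI C: types appear in the parameter
   list itself and arguments are checked at call sites. */
int add_ansi(int a, int b)
{
    return a + b;
}
```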
{
"paragraph_id": 30,
"text": "C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.",
"title": "History"
},
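A minimal sketch of this pattern, assuming a hypothetical function max:

```c
/* Give Standard C compilers a full prototype, and K&R compilers
   an old-style declaration without parameter information. */
#ifdef __STDC__
extern int max(int a, int b);
#else
extern int max();
#endif
```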
{
"paragraph_id": 32,
"text": "After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as \"C99\". It has since been amended three times by Technical Corrigenda.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.",
"title": "History"
},
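A short sketch (not from the original) exercising several of the listed C99 additions:

```c
#include <stdio.h>
#include <complex.h>

// C99 allows one-line comments like this, as in BCPL or C++.

// Inline functions are another C99 addition.
static inline long long square(long long x) { return x * x; }

int main(void)
{
    long long big = 1LL << 40;         // long long: at least 64 bits
    double complex z = 1.0 + 2.0 * I;  // complex type from <complex.h>

    int n = 4;
    int vla[n];                        // variable-length array
    for (int i = 0; i < n; i++)        // C99: declaration inside for
        vla[i] = i * i;

    printf("%lld %lld %.1f %d\n", big, square(3), creal(z), vla[3]);
    return 0;
}
```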
{
"paragraph_id": 35,
"text": "C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (e.g. \\u0040 or \\U0001f431) and suggests support for raw Unicode names.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In 2007, work began on another revision of the C standard, informally called \"C1X\" until its official publication of ISO/IEC 9899:2011 on 2011-12-08. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.",
"title": "History"
},
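A small sketch (not from the original; the macro name is invented) of C11's type-generic selection with _Generic, the mechanism behind type-generic macros:

```c
#include <stdio.h>
#include <math.h>

/* Select a cube-root function based on the static type of the
   argument, much as <tgmath.h> does internally. */
#define cube_root(x) _Generic((x), \
    float: cbrtf,                  \
    long double: cbrtl,            \
    default: cbrt)(x)

int main(void)
{
    printf("%f\n", cube_root(27.0));   /* dispatches to cbrt  */
    printf("%f\n", cube_root(27.0f));  /* dispatches to cbrtf */
    return 0;
}
```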
{
"paragraph_id": 39,
"text": "Published in June 2018 as ISO/IEC 9899:2018, C17 is the current standard for the C programming language. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "C23 is the informal name for the next (after C17) major C language standard revision. It is expected to be published in 2024.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "Historically, embedded C programming requires nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "In 2008, the C Standards Committee published a technical report extending the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "C has a formal grammar specified by the C standard. Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.",
"title": "Syntax"
},
{
"paragraph_id": 44,
"text": "C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called \"curly brackets\") to limit the scope of declarations and to act as a single statement for control structures.",
"title": "Syntax"
},
{
"paragraph_id": 45,
"text": "As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if ... [else] conditional execution and by do ... while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialisation. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression. Different from many other languages, control-flow will fall through to the next case unless terminated by a break.",
"title": "Syntax"
},
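A compact sketch (not from the original) of these control-flow statements, including the fallthrough behavior of switch:

```c
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 10; i++) {
        if (i == 3)
            continue;       /* skip to the reinitialization (i++) */
        if (i == 6)
            break;          /* leave the innermost enclosing loop */
        printf("%d ", i);   /* prints 0 1 2 4 5 */
    }

    int grade = 1;
    switch (grade) {
    case 1:                  /* no break: falls through to case 2 */
    case 2:
        puts("\nlow");
        break;
    case 3:
        puts("\nhigh");
        break;
    default:
        puts("\nunknown");
    }
    return 0;
}
```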
{
"paragraph_id": 46,
"text": "Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next \"sequence point\"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.",
"title": "Syntax"
},
{
"paragraph_id": 47,
"text": "Kernighan and Ritchie say in the Introduction of The C Programming Language: \"C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better.\" The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.",
"title": "Syntax"
},
{
"paragraph_id": 48,
"text": "The basic C source character set includes the following characters:",
"title": "Syntax"
},
{
"paragraph_id": 49,
"text": "Newline indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as one.",
"title": "Syntax"
},
{
"paragraph_id": 50,
"text": "Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. The latest C standard (C11) allows multi-national Unicode characters to be embedded portably within C source text by using \\uXXXX or \\UXXXXXXXX encoding (where the X denotes a hexadecimal character), although this feature is not yet widely implemented.",
"title": "Syntax"
},
{
"paragraph_id": 51,
"text": "The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.",
"title": "Syntax"
},
{
"paragraph_id": 52,
"text": "C89 has 32 reserved words, also known as keywords, which are the words that cannot be used for any purposes other than those for which they are predefined:",
"title": "Syntax"
},
{
"paragraph_id": 53,
"text": "C99 reserved five more words:",
"title": "Syntax"
},
{
"paragraph_id": 54,
"text": "C11 reserved seven more words:",
"title": "Syntax"
},
{
"paragraph_id": 55,
"text": "C23 will reserve 14 more words:",
"title": "Syntax"
},
{
"paragraph_id": 56,
"text": "Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed. The language previously included a reserved word called entry, but this was seldom implemented, and has now been removed as a reserved word.",
"title": "Syntax"
},
{
"paragraph_id": 57,
"text": "C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:",
"title": "Syntax"
},
{
"paragraph_id": 58,
"text": "C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between these two operators (assignment and equality) may result in the accidental use of one in place of the other, and in many cases, the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true if a is not zero after the assignment.",
"title": "Syntax"
},
{
"paragraph_id": 59,
"text": "The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.",
"title": "Syntax"
},
{
"paragraph_id": 60,
"text": "The \"hello, world\" example, which appeared in the first edition of K&R, has become the model for an introductory program in most programming textbooks. The program prints \"hello, world\" to the standard output, which is usually a terminal or screen display.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 61,
"text": "The original version was:",
"title": "\"Hello, world\" example"
},
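The listing itself is missing from this extract; the version printed in the first edition of K&R is:

```c
main()
{
    printf("hello, world\n");
}
```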
{
"paragraph_id": 62,
"text": "A standard-conforming \"hello, world\" program is:",
"title": "\"Hello, world\" example"
},
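That listing is likewise missing; a standard-conforming version consistent with the line-by-line description that follows is:

```c
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
}
```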
{
"paragraph_id": 63,
"text": "The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line with the entire text of the stdio.h standard header, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that stdio.h can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name, as opposed to double quotes which typically include local or project-specific header files.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 64,
"text": "The next line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value that is returned to the invoker (in this case the run-time environment) as a result of evaluating the main function, is an integer. The keyword void as a parameter list indicates that this function takes no arguments.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 65,
"text": "The opening curly brace indicates the beginning of the definition of the main function.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 66,
"text": "The next line calls (diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (provided with) a single argument, the address of the first character in the string literal \"hello, world\\n\". The string literal is an unnamed array with elements of type char, set up automatically by the compiler with a final NULL(ASCII value 0) character to mark the end of the array (for printf to know the length of the string).The NULL character can be also written as an escape sequence, written as \\0. The \\n is an escape sequence that C translates to a newline character, which on output signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to determine whether or not the printf function succeeded.) The semicolon ; terminates the statement.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 67,
"text": "The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function, unlike any other function, will implicitly return a value of 0 upon reaching the } that terminates the function. (Formerly an explicit return 0; statement was required.) This is interpreted by the run-time system as an exit code indicating successful execution.",
"title": "\"Hello, world\" example"
},
{
"paragraph_id": 68,
"text": "The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a Boolean datatype. There are also derived types including arrays, pointers, records (struct), and unions (union).",
"title": "Data types"
},
{
"paragraph_id": 69,
"text": "C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.",
"title": "Data types"
},
{
"paragraph_id": 70,
"text": "Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: \"declaration reflects use\".)",
"title": "Data types"
},
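A sketch (not from the original) of "declaration reflects use", including the function-pointer declarations that many find unintuitive:

```c
int *p;             /* *p is an int, so p is a pointer to int          */
int a[10];          /* a[i] is an int, so a is an array of int         */
int (*fp)(int);     /* (*fp)(n) is an int, so fp is a pointer to a
                       function taking an int and returning int        */
int (*fa[3])(int);  /* an array of three such function pointers        */
```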
{
"paragraph_id": 71,
"text": "C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.",
"title": "Data types"
},
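A sketch of the pitfall just described (not from the original):

```c
#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;

    /* The usual arithmetic conversions convert s to unsigned,
       yielding a huge value, so the comparison is false even
       though -1 < 1 mathematically. */
    if (s < u)
        puts("-1 < 1u");
    else
        puts("surprise: -1 >= 1u");  /* this branch is taken */
    return 0;
}
```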
{
"paragraph_id": 72,
"text": "C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.",
"title": "Data types"
},
{
"paragraph_id": 73,
"text": "Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of a malloc is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of struct objects. Pointers to functions (function pointers) are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers .",
"title": "Data types"
},
{
"paragraph_id": 74,
"text": "A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no \"next\" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, as the NULL macro defined by several standard headers or, since C23 with the constant nullptr. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.",
"title": "Data types"
},
{
"paragraph_id": 75,
"text": "Void pointers (void *) point to objects of unspecified type, and can therefore be used as \"generic\" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.",
"title": "Data types"
},
{
"paragraph_id": 76,
"text": "Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.",
"title": "Data types"
},
{
"paragraph_id": 77,
"text": "Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array.",
"title": "Data types"
},
{
"paragraph_id": 78,
"text": "Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option. Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions.",
"title": "Data types"
},
{
"paragraph_id": 79,
"text": "C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting \"multi-dimensional array\" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional \"row vector\" of pointers to the columns.) C99 introduced \"variable-length arrays\" which address this issue.",
"title": "Data types"
},
{
"paragraph_id": 80,
"text": "The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):",
"title": "Data types"
},
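The example code is missing from this extract; the following reconstruction matches the description (heap allocation through a pointer to a row type, accessed with two-dimensional indexing). The function name is invented:

```c
#include <stdlib.h>

/* Allocate an M-by-N table of float on the heap and fill it.
   p has type "pointer to array of N float", so p[i][j] works and
   the allocation size is computed from the row type itself. */
int make_table(int N, int M)
{
    float (*p)[N] = malloc(M * sizeof *p);
    if (p == NULL)
        return -1;               /* allocation failed */

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            p[i][j] = (float)(i + j);

    /* ... use the table ... */

    free(p);
    return 0;
}
```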
{
"paragraph_id": 81,
"text": "And here is a similar implementation using C99's Auto VLA feature:",
"title": "Data types"
},
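That code is also missing; a reconstruction using an automatic variable-length array:

```c
/* The same table as an automatic VLA: the array lives on the stack,
   so no free() is needed, but large M and N risk exhausting the
   comparatively small stack space. */
void make_table_vla(int N, int M)
{
    float p[M][N];

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            p[i][j] = (float)(i + j);

    /* ... use the table; it is released automatically on return ... */
}
```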
{
"paragraph_id": 82,
"text": "The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i). Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the i+1th element of the array.",
"title": "Data types"
},
{
"paragraph_id": 83,
"text": "Furthermore, in most expression contexts (a notable exception is as operand of sizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.",
"title": "Data types"
},
{
"paragraph_id": 84,
"text": "The total size of an array x can be determined by applying sizeof to an expression of array type. The size of an element can be determined by applying the operator sizeof to any dereferenced element of an array A, as in n = sizeof A[0]. Thus, the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. Note, that if only a pointer to the first element is available as it is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length are lost.",
"title": "Data types"
},
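A sketch of the idiom (not from the original):

```c
#include <stdio.h>

int main(void)
{
    int A[7];
    size_t total    = sizeof A;                /* 7 * sizeof(int) */
    size_t per_elem = sizeof A[0];             /* sizeof(int)     */
    size_t count    = sizeof A / sizeof A[0];  /* 7               */
    printf("%zu %zu %zu\n", total, per_elem, count);
    return 0;
}
```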
{
"paragraph_id": 85,
"text": "One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects:",
"title": "Memory management"
},
{
"paragraph_id": 86,
"text": "These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.",
"title": "Memory management"
},
{
"paragraph_id": 87,
"text": "Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary. Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on malloc for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)",
"title": "Memory management"
},
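A sketch (not from the original) of testing the null-pointer indication returned by the dynamic allocation functions:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1000;
    double *buf = malloc(n * sizeof *buf);
    if (buf == NULL) {                      /* allocation failed */
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    /* ... use buf[0] .. buf[n - 1] ... */
    free(buf);                              /* avoid a memory leak */
    return EXIT_SUCCESS;
}
```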
{
"paragraph_id": 88,
"text": "Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.",
"title": "Memory management"
},
{
"paragraph_id": 89,
"text": "Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection.",
"title": "Memory management"
},
{
"paragraph_id": 90,
"text": "The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single \"archive\" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. In order for a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for \"link the math library\").",
"title": "Libraries"
},
{
"paragraph_id": 91,
"text": "The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.",
"title": "Libraries"
},
{
"paragraph_id": 92,
"text": "Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.",
"title": "Libraries"
},
{
"paragraph_id": 93,
"text": "Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.",
"title": "Libraries"
},
{
"paragraph_id": 94,
"text": "File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid state drive. Low-level I/O functions are not part of the standard C library but are generally part of \"bare metal\" programming (programming that's independent of any operating system such as most embedded programming). With few exceptions, implementations include low-level I/O.",
"title": "Libraries"
},
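A sketch of the stream model described above (not from the original; the file name is invented):

```c
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("data.txt", "r");  /* associate a stream with a file */
    if (f == NULL) {
        perror("data.txt");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f) != NULL)  /* buffered reads */
        fputs(line, stdout);

    fclose(f);  /* flush and release the stream's buffer */
    return 0;
}
```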
{
"paragraph_id": 95,
"text": "A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. The tool lint was the first such, leading to many others.",
"title": "Language tools"
},
{
"paragraph_id": 96,
"text": "Automated source code checking and auditing are beneficial in any language, and for C many such tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.",
"title": "Language tools"
},
{
"paragraph_id": 97,
"text": "There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.",
"title": "Language tools"
},
{
"paragraph_id": 98,
"text": "Tools such as Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.",
"title": "Language tools"
},
{
"paragraph_id": 99,
"text": "C is widely used for systems programming in implementing operating systems and embedded system applications. This is for several reasons:",
"title": "Uses"
},
{
"paragraph_id": 100,
"text": "Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a \"gateway\" for information between the web application, the server, and the browser. C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability. It is no longer common practice for web development to be done in C, and many other web development tools exist.",
"title": "Uses"
},
{
"paragraph_id": 101,
"text": "A consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. For example, the reference implementations of Python, Perl, Ruby, and PHP are written in C.",
"title": "Uses"
},
{
"paragraph_id": 102,
"text": "C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling library functions in C, for example, the Python-based framework NumPy uses C for the high-performance and hardware-interacting aspects.",
"title": "Uses"
},
{
"paragraph_id": 103,
"text": "C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C.",
"title": "Uses"
},
{
"paragraph_id": 104,
"text": "C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.",
"title": "Uses"
},
{
"paragraph_id": 105,
"text": "the power of assembly language and the convenience of ... assembly language",
"title": "Limitations"
},
{
"paragraph_id": 106,
"text": "While C has been popular, influential and hugely successful, it has drawbacks, including:",
"title": "Limitations"
},
{
"paragraph_id": 107,
"text": "For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for bugs. Databases such as CWE attempt to count the ways C etc. has vulnerabilities, along with recommendations for mitigation.",
"title": "Limitations"
},
{
"paragraph_id": 108,
"text": "There are tools that can mitigate against some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.",
"title": "Limitations"
},
{
"paragraph_id": 109,
"text": "Some of these drawbacks have prompted the construction of other languages.",
"title": "Limitations"
},
{
"paragraph_id": 110,
"text": "C has both directly and indirectly influenced many later languages such as C++ and Java. The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models or large-scale program structures that differ from those of C, sometimes radically.",
"title": "Related languages"
},
{
"paragraph_id": 111,
"text": "Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.",
"title": "Related languages"
},
{
"paragraph_id": 112,
"text": "When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.",
"title": "Related languages"
},
{
"paragraph_id": 113,
"text": "The C++ programming language (originally named \"C with Classes\") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax. C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.",
"title": "Related languages"
},
{
"paragraph_id": 114,
"text": "Objective-C was originally a very \"thin\" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.",
"title": "Related languages"
},
{
"paragraph_id": 115,
"text": "In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.",
"title": "Related languages"
}
]
| C is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems, device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems. A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code. Since 2000, C has consistently ranked among the top two languages in the TIOBE index, a measure of the popularity of programming languages. | 2001-11-10T09:40:48Z | 2023-12-31T18:16:36Z | [
"Template:When",
"Template:Code",
"Template:Efn",
"Template:Cite journal",
"Template:Sister project links",
"Template:C programming language",
"Template:N/A",
"Template:Integrated development environments",
"Template:Programming languages",
"Template:Authority control",
"Template:Notelist",
"Template:Cite report",
"Template:Cite magazine",
"Template:Small",
"Template:Distinguish",
"Template:Div col",
"Template:Rquote",
"Template:Harvtxt",
"Template:Short description",
"Template:Citation needed",
"Template:Anchor",
"Template:See also",
"Template:More citations needed section",
"Template:Clarify",
"Template:Reflist",
"Template:Cite news",
"Template:Codes",
"Template:Webarchive",
"Template:Div col end",
"Template:Portal",
"Template:Main",
"Template:Use mdy dates",
"Template:Infobox programming language",
"Template:Outdated-inline",
"Template:Sfnp",
"Template:Pp-pc",
"Template:Cite web",
"Template:Cite book",
"Template:Cite tech report",
"Template:Cite conference",
"Template:IPAc-en"
]
| https://en.wikipedia.org/wiki/C_(programming_language) |
6,023 | Castle of the Winds | Castle of the Winds is a tile-based roguelike video game for Microsoft Windows. It was developed by Rick Saada in 1989 and distributed by Epic MegaGames in 1993. The game was released around 1998 as a freeware download by the author. Though it is secondary to its hack and slash gameplay, Castle of the Winds has a plot loosely based on Norse mythology, told with setting changes, unique items, and occasional passages of text. The game is composed of two parts: A Question of Vengeance, released as shareware, and Lifthransir's Bane, sold commercially. A combined license for both parts was also sold.
The game differs from most roguelikes in a number of ways. Its interface is mouse-dependent, but supports keyboard shortcuts (such as 'g' to get an item). Castle of the Winds also allows the player to restore saved games after dying.
The game favors the use of magic in combat, as spells are the only weapons that work from a distance. The player character automatically gains a spell with each experience level, and can permanently gain others using corresponding books, until all thirty spells available are learned. There are two opposing pairs of elements: cold vs. fire and lightning vs. acid/poison. Spells are divided into six categories: attack, defense, healing, movement, divination, and miscellaneous.
Castle of the Winds possesses an inventory system that limits a player's load based on weight and bulk, rather than by number of items. It allows the character to use different containers, including packs, belts, chests, and bags. Other items include weapons, armor, protective clothing, purses, and ornamental jewellery. Almost every item in the game can be normal, cursed, or enchanted, with curses and enchantments working in a manner similar to NetHack. Although items do not break with use, they may already be broken or rusted when found. Most objects that the character currently carries can be renamed.
Wherever the player goes before entering the dungeon, there is always a town which offers the basic services of a temple for healing and curing curses, a junk store where anything can be sold for a few copper coins, a sage who can identify items and (from the second town onwards) a bank for storing the total capacity of coins to lighten the player's load. Other services that differ and vary in what they sell are outfitters, weaponsmiths, armoursmiths, magic shops and general stores.
The game tracks how much time has been spent playing the game. Although story events are not triggered by the passage of time, it does determine when merchants rotate their stock. Victorious players are listed as "Valhalla's Champions" in the order of time taken, from fastest to slowest. If the player dies, they are still put on the list, but are categorized as "Dead", with their experience point total listed as at the final killing blow. The amount of time spent also determines the difficulty of the last boss.
The player begins in a tiny hamlet, near which they used to live. Their farm has been destroyed and godparents killed. After clearing out an abandoned mine, the player finds a scrap of parchment that reveals the death of the player's godparents was ordered by an unknown enemy. The player then returns to the hamlet to find it pillaged, and decides to travel to Bjarnarhaven.
Once in Bjarnarhaven, the player explores the levels beneath a nearby fortress, eventually facing Hrungnir, the Hill Giant Lord, responsible for ordering the player's godparents' death. Hrungnir carries the Enchanted Amulet of Kings. Upon activating the amulet, the player is informed of their past by their dead father, after which the player is transported to the town of Crossroads, and Part I ends. The game can be imported or started over in Part II.
The town of Crossroads is run by a Jarl who at first does not admit the player, but later (on up to three occasions) provides advice and rewards. The player then enters the nearby ruined titular Castle of the Winds. There the player meets his/her deceased grandfather, who instructs them to venture into the dungeons below, defeat Surtur, and reclaim their birthright. Venturing deeper, the player encounters monsters run rampant, a desecrated crypt, a necromancer, and the installation of various special rooms for elementals. The player eventually meets and defeats the Wolf-Man leader, Bear-Man leader, the four Jotun kings, a Demon Lord, and finally Surtur. Upon defeating Surtur and escaping the dungeons, the player sits upon the throne, completing the game.
Inspired by his love of RPGs, Rick Saada designed and completed Castle of the Winds while learning Windows programming in the 1980s. The game sold 13,500 copies. By 1998, Saada decided to distribute the entirety of Castle of the Winds free of charge.
The game is public domain per Rick Saada's words:
Rick Saada, creator of Castle of the Winds, decided to give permission for anyone to distribute it for free. Epic doesn't have an exclusive license to sell it.
All terrain tiles, some landscape features, all monsters and objects, and some spell/effect graphics take the form of Windows 3.1 icons and were done by Paul Canniff. Multi-tile graphics, such as ball spells and town buildings, are bitmaps included in the executable file. No graphics use colors other than the Windows-standard 16-color palette, plus transparency. They exist in monochrome versions as well, meaning that the game will display well on monochrome monitors.
The map view is identical to the playing-field view, except for scaling to fit on one screen. A simplified map view is available to improve performance on slower computers. The latter functionality also presents a cleaner display, as the aforementioned scaling routine does not always work correctly.
Computer Gaming World rated the gameplay as good and the graphics simple but effective; while it noted the lack of audio, it regarded the game itself as enjoyable.
{
"paragraph_id": 0,
"text": "Castle of the Winds is a tile-based roguelike video game for Microsoft Windows. It was developed by Rick Saada in 1989 and distributed by Epic MegaGames in 1993. The game was released around 1998 as a freeware download by the author. Though it is secondary to its hack and slash gameplay, Castle of the Winds has a plot loosely based on Norse mythology, told with setting changes, unique items, and occasional passages of text. The game is composed of two parts: A Question of Vengeance, released as shareware, and Lifthransir's Bane, sold commercially. A combined license for both parts was also sold.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The game differs from most roguelikes in a number of ways. Its interface is mouse-dependent, but supports keyboard shortcuts (such as 'g' to get an item). Castle of the Winds also allows the player to restore saved games after dying.",
"title": "Gameplay"
},
{
"paragraph_id": 2,
"text": "The game favors the use of magic in combat, as spells are the only weapons that work from a distance. The player character automatically gains a spell with each experience level, and can permanently gain others using corresponding books, until all thirty spells available are learned. There are two opposing pairs of elements: cold vs. fire and lightning vs. acid/poison. Spells are divided into six categories: attack, defense, healing, movement, divination, and miscellaneous.",
"title": "Gameplay"
},
{
"paragraph_id": 3,
"text": "Castle of the Winds possesses an inventory system that limits a player's load based on weight and bulk, rather than by number of items. It allows the character to use different containers, including packs, belts, chests, and bags. Other items include weapons, armor, protective clothing, purses, and ornamental jewellery. Almost every item in the game can be normal, cursed, or enchanted, with curses and enchantments working in a manner similar to NetHack. Although items do not break with use, they may already be broken or rusted when found. Most objects that the character currently carries can be renamed.",
"title": "Gameplay"
},
{
"paragraph_id": 4,
"text": "Wherever the player goes before entering the dungeon, there is always a town which offers the basic services of a temple for healing and curing curses, a junk store where anything can be sold for a few copper coins, a sage who can identify items and (from the second town onwards) a bank for storing the total capacity of coins to lighten the player's load. Other services that differ and vary in what they sell are outfitters, weaponsmiths, armoursmiths, magic shops and general stores.",
"title": "Gameplay"
},
{
"paragraph_id": 5,
"text": "The game tracks how much time has been spent playing the game. Although story events are not triggered by the passage of time, it does determine when merchants rotate their stock. Victorious players are listed as \"Valhalla's Champions\" in the order of time taken, from fastest to slowest. If the player dies, they are still put on the list, but are categorized as \"Dead\", with their experience point total listed as at the final killing blow. The amount of time spent also determines the difficulty of the last boss.",
"title": "Gameplay"
},
{
"paragraph_id": 6,
"text": "The player begins in a tiny hamlet, near which they used to live. Their farm has been destroyed and godparents killed. After clearing out an abandoned mine, the player finds a scrap of parchment that reveals the death of the player's godparents was ordered by an unknown enemy. The player then returns to the hamlet to find it pillaged, and decides to travel to Bjarnarhaven.",
"title": "Plot"
},
{
"paragraph_id": 7,
"text": "Once in Bjarnarhaven, the player explores the levels beneath a nearby fortress, eventually facing Hrungnir, the Hill Giant Lord, responsible for ordering the player's godparents' death. Hrungnir carries the Enchanted Amulet of Kings. Upon activating the amulet, the player is informed of their past by their dead father, after which the player is transported to the town of Crossroads, and Part I ends. The game can be imported or started over in Part II.",
"title": "Plot"
},
{
"paragraph_id": 8,
"text": "The town of Crossroads is run by a Jarl who at first does not admit the player, but later (on up to three occasions) provides advice and rewards. The player then enters the nearby ruined titular Castle of the Winds. There the player meets his/her deceased grandfather, who instructs them to venture into the dungeons below, defeat Surtur, and reclaim their birthright. Venturing deeper, the player encounters monsters run rampant, a desecrated crypt, a necromancer, and the installation of various special rooms for elementals. The player eventually meets and defeats the Wolf-Man leader, Bear-Man leader, the four Jotun kings, a Demon Lord, and finally Surtur. Upon defeating Surtur and escaping the dungeons, the player sits upon the throne, completing the game.",
"title": "Plot"
},
{
"paragraph_id": 9,
"text": "Inspired by his love of RPGs and while learning Windows programming in the 80s, Rick Saada designed and completed Castle of the Winds. The game sold 13,500 copies. By 1998, the game's author, Rick Saada, decided to distribute the entirety of Castle of the Winds free of charge.",
"title": "Development"
},
{
"paragraph_id": 10,
"text": "The game is public domain per Rick Saada's words:",
"title": "Development"
},
{
"paragraph_id": 11,
"text": "Rick Saada, creator of Castle of the Winds, decided to give permission for anyone to distribute it for free. Epic doesn't have an exclusive license to sell it.",
"title": "Development"
},
{
"paragraph_id": 12,
"text": "All terrain tiles, some landscape features, all monsters and objects, and some spell/effect graphics take the form of Windows 3.1 icons and were done by Paul Canniff. Multi-tile graphics, such as ball spells and town buildings, are bitmaps included in the executable file. No graphics use colors other than the Windows-standard 16-color palette, plus transparency. They exist in monochrome versions as well, meaning that the game will display well on monochrome monitors.",
"title": "Development"
},
{
"paragraph_id": 13,
"text": "The map view is identical to the playing-field view, except for scaling to fit on one screen. A simplified map view is available to improve performance on slower computers. The latter functionality also presents a cleaner display, as the aforementioned scaling routine does not always work correctly.",
"title": "Development"
},
{
"paragraph_id": 14,
"text": "Computer Gaming World rated the gameplay as good and the graphics simple but effective, while noticing the lack of audio, but regarded the game itself enjoyable.",
"title": "Reception"
}
]
| Castle of the Winds is a tile-based roguelike video game for Microsoft Windows. It was developed by Rick Saada in 1989 and distributed by Epic MegaGames in 1993.
The game was released around 1998 as a freeware download by the author. Though it is secondary to its hack and slash gameplay, Castle of the Winds has a plot loosely based on Norse mythology, told with setting changes, unique items, and occasional passages of text. The game is composed of two parts: A Question of Vengeance, released as shareware, and Lifthrasir's Bane, sold commercially. A combined license for both parts was also sold. | 2002-02-25T15:51:15Z | 2023-12-25T14:10:40Z | [
"Template:Blockquote",
"Template:Reflist",
"Template:Cite web",
"Template:Cite magazine",
"Template:Moby game",
"Template:Infobox video game"
]
| https://en.wikipedia.org/wiki/Castle_of_the_Winds |
6,024 | Calvinism | Calvinism, also called Reformed Christianity, is a major branch of Protestantism that follows the theological tradition and forms of Christian practice set down by John Calvin and various other Reformation-era theologians. It emphasizes the sovereignty of God and the authority of the Bible.
Calvinists broke from the Roman Catholic Church in the 16th century. Calvinists differ from Lutherans, another major branch of the Reformation, on the spiritual real presence of Christ in the Lord's Supper, theories of worship, the purpose and meaning of baptism, and the use of God's law for believers, among other points.
The namesake and founder of the movement, French reformer John Calvin, embraced Protestant beliefs in the late 1520s or early 1530s, as the earliest notions of the later Reformed tradition were already espoused by Huldrych Zwingli. The movement was first called "Calvinism" in the early 1550s by Lutherans who opposed it; however, many in the tradition find it either a nondescript or an inappropriate term and prefer the term Reformed.
The most important Reformed theologians include Calvin, Zwingli, Martin Bucer, William Farel, Heinrich Bullinger, Thomas Cranmer, Nicholas Ridley, Peter Martyr Vermigli, Theodore Beza, John Knox, and John à Lasco. In the 20th century, Abraham Kuyper, Herman Bavinck, B. B. Warfield, J. Gresham Machen, Louis Berkhof, Karl Barth, Martyn Lloyd-Jones, Cornelius Van Til, R. C. Sproul, and J. I. Packer were influential. More contemporary Reformed theologians include the late Tim Keller, Desiring God Ministries founder John Piper, as well as Joel Beeke and Michael Horton.
The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominations. Several forms of ecclesiastical polity are exercised by a group of Reformed churches, including presbyterian, congregationalist, and some episcopal. The biggest Reformed association is the World Communion of Reformed Churches, with more than 100 million members in 211 member denominations around the world. More conservative Reformed federations include the World Reformed Fellowship and the International Conference of Reformed Churches.
Calvinism is named after John Calvin. Calvin denounced the designation himself:
They could attach us no greater insult than this word, Calvinism. It is not hard to guess where such a deadly hatred comes from that they hold against me.
Since the Arminian controversy, the Reformed tradition as a branch of Protestantism is distinguished from Lutheranism and divided into two groups, Arminians and Calvinists.
The first wave of reformist theologians included Huldrych Zwingli (1484–1531), Martin Bucer (1491–1551), Wolfgang Capito (1478–1541), John Oecolampadius (1482–1531), and Guillaume Farel (1489–1565). While from diverse academic backgrounds, their work already contained key themes within Reformed theology, especially the priority of scripture as a source of authority. Scripture was also viewed as a unified whole, which led to a covenantal theology of the sacraments of baptism and the Lord's Supper as visible signs of the covenant of grace. Another shared perspective was their denial of the Real presence of Christ in the Eucharist. Each understood salvation to be by grace alone and affirmed a doctrine of unconditional election, the teaching that some people are chosen by God to be saved. Martin Luther and his successor, Philipp Melanchthon, were significant influences on these theologians, and to a larger extent, on those who followed. The doctrine of justification by faith alone, also known as sola fide, was a direct inheritance from Luther.
The second generation featured John Calvin (1509–1564), Heinrich Bullinger (1504–1575), Wolfgang Musculus (1497–1563), Peter Martyr Vermigli (1500–1562), Andreas Hyperius (1511–1564) and John à Lasco (1499–1560). Written between 1536 and 1539, Calvin's Institutes of the Christian Religion was one of the most influential works of the era. Toward the middle of the 16th century, these beliefs were formed into one consistent creed, which would shape the future definition of the Reformed faith. The 1549 Consensus Tigurinus unified Zwingli and Bullinger's memorialist theology of the Eucharist, which taught that it was simply a reminder of Christ's death, with Calvin's view of it as a means of grace with Christ actually present, though spiritually rather than bodily as in Catholic doctrine. The document demonstrates the diversity as well as unity in early Reformed theology, giving it a stability that enabled it to spread rapidly throughout Europe. This stands in marked contrast to the bitter controversy experienced by Lutherans prior to the 1577 Formula of Concord.
Due to Calvin's missionary work in France, his program of reform eventually reached the French-speaking provinces of the Netherlands. Calvinism was adopted in the Electorate of the Palatinate under Frederick III, which led to the formulation of the Heidelberg Catechism in 1563. This and the Belgic Confession were adopted as confessional standards in the first synod of the Dutch Reformed Church in 1571.
In 1573, William the Silent joined the Calvinist Church. Calvinism was declared the official religion of the Kingdom of Navarre by the queen regnant Jeanne d'Albret after her conversion in 1560. Leading divines, either Calvinist or those sympathetic to Calvinism, settled in England, including Martin Bucer, Peter Martyr, and John Łaski, as did John Knox in Scotland.
During the First English Civil War, English and Scots Presbyterians produced the Westminster Confession, which became the confessional standard for Presbyterians in the English-speaking world. Having established itself in Europe, the movement continued to spread to areas including North America, South Africa and Korea.
While Calvin did not live to see the foundation of his work grow into an international movement, his death allowed his ideas to spread far beyond their city of origin and to establish their own distinct character.
Although much of Calvin's work was in Geneva, his publications spread his ideas of a correctly Reformed church to many parts of Europe. In Switzerland, some cantons are still Reformed, and some are Catholic. Calvinism became the dominant doctrine within the Church of Scotland, the Dutch Republic, some communities in Flanders, and parts of Germany, especially those adjacent to the Netherlands in the Palatinate, Kassel, and Lippe, spread by Olevianus and Zacharias Ursinus among others. Protected by the local nobility, Calvinism became a significant religion in Eastern Hungary and Hungarian-speaking areas of Transylvania. Today there are about 3.5 million Hungarian Reformed people worldwide.
Calvinism was influential in France, Lithuania, and Poland before being mostly erased during the Counter-Reformation. One of the most important Polish Reformed theologians was John a Lasco, who was also involved in organising churches in East Frisia and the Strangers' Church in London. Later, a faction called the Polish Brethren broke away from Calvinism on January 22, 1556, when Piotr of Goniądz, a Polish student, spoke out against the doctrine of the Trinity during the general synod of the Reformed churches of Poland held in the village of Secemin. Calvinism gained some popularity in Scandinavia, especially Sweden, but was rejected in favor of Lutheranism after the Synod of Uppsala in 1593.
Many 17th century European settlers in the Thirteen Colonies in British America were Calvinists, who emigrated because of arguments over church structure, including the Pilgrim Fathers. Others were forced into exile, including the French Huguenots. Dutch and French Calvinist settlers were also among the first European colonizers of South Africa, beginning in the 17th century, who became known as Boers or Afrikaners.
Sierra Leone was largely colonized by Calvinist settlers from Nova Scotia, many of whom were Black Loyalists who fought for the British Empire during the American War of Independence. John Marrant had organized a congregation there under the auspices of the Huntingdon Connection. Some of the largest Calvinist communions were started by 19th- and 20th-century missionaries; especially large are those in Indonesia, Korea, and Nigeria. In South Korea there are 20,000 Presbyterian congregations with about 9–10 million church members, scattered across more than 100 Presbyterian denominations, making Presbyterianism the largest Christian denomination in the country.
A 2011 report of the Pew Forum on Religion and Public Life estimated that members of Presbyterian or Reformed churches make up 7% of the estimated 801 million Protestants globally, or approximately 56 million people. The broadly defined Reformed faith is much larger, however, as it also includes Congregationalists (0.5%), most of the United and uniting churches (unions of different denominations) (7.2%), and most likely some of the other Protestant denominations (38.2%); all three are categories distinct from Presbyterian or Reformed (7%) in this report.
The Reformed family of churches is one of the largest Christian denominations. According to adherents.com the Reformed/Presbyterian/Congregational/United churches represent 75 million believers worldwide.
The World Communion of Reformed Churches, which includes some United Churches, has 80 million believers. WCRC is the third largest Christian communion in the world, after the Roman Catholic Church and the Eastern Orthodox Churches.
Many conservative Reformed churches which are strongly Calvinistic formed the World Reformed Fellowship, which has about 70 member denominations. Most are not part of the World Communion of Reformed Churches because of its ecumenical character. The International Conference of Reformed Churches is another conservative association.
The Church of Tuvalu is an officially established state church in the Calvinist tradition.
Reformed theologians believe that God communicates knowledge of himself to people through the Word of God. People are not able to know anything about God except through this self-revelation. (With the exception of general revelation of God; "His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made, so that they are without excuse" (Romans 1:20).) Speculation about anything which God has not revealed through his Word is not warranted. The knowledge people have of God is different from that which they have of anything else because God is infinite, and finite people are incapable of comprehending an infinite being. While the knowledge revealed by God to people is never incorrect, it is also never comprehensive.
According to Reformed theologians, God's self-revelation is always through his son Jesus Christ, because Christ is the only mediator between God and people. Revelation of God through Christ comes through two basic channels. The first is creation and providence, which is God's creating and continuing to work in the world. This action of God gives everyone knowledge about God, but this knowledge is only sufficient to make people culpable for their sin; it does not include knowledge of the gospel. The second channel through which God reveals himself is redemption, which is the gospel of salvation from condemnation which is punishment for sin.
In Reformed theology, the Word of God takes several forms. Jesus Christ himself is the Word Incarnate. The prophecies about him said to be found in the Old Testament and the ministry of the apostles who saw him and communicated his message are also the Word of God. Further, the preaching of ministers about God is the very Word of God because God is considered to be speaking through them. God also speaks through human writers in the Bible, which is composed of texts set apart by God for self-revelation. Reformed theologians emphasize the Bible as a uniquely important means by which God communicates with people. People gain knowledge of God from the Bible which cannot be gained in any other way.
Reformed theologians affirm that the Bible is true, but differences emerge among them over the meaning and extent of its truthfulness. Conservative followers of the Princeton theologians take the view that the Bible is true and inerrant, or incapable of error or falsehood, in every place. This view is similar to that of Catholic orthodoxy as well as modern Evangelicalism. Another view, influenced by the teaching of Karl Barth and neo-orthodoxy, is found in the Presbyterian Church (U.S.A.)'s Confession of 1967. Those who take this view believe the Bible to be the primary source of our knowledge of God, but also that some parts of the Bible may be false, not witnesses to Christ, and not normative for today's church. In this view, Christ is the revelation of God, and the scriptures witness to this revelation rather than being the revelation itself.
Reformed theologians use the concept of covenant to describe the way God enters into fellowship with people in history. The concept of covenant is so prominent in Reformed theology that Reformed theology as a whole is sometimes called "covenant theology". However, sixteenth- and seventeenth-century theologians developed a particular theological system called "covenant theology" or "federal theology" which many conservative Reformed churches continue to affirm today. This framework orders God's life with people primarily in two covenants: the covenant of works and the covenant of grace.
The covenant of works is made with Adam and Eve in the Garden of Eden. The terms of the covenant are that God provides a blessed life in the garden on condition that Adam and Eve obey God's law perfectly. Because Adam and Eve broke the covenant by eating the forbidden fruit, they became subject to death and were banished from the garden. This sin was passed down to all mankind because all people are said to be in Adam as a covenantal or "federal" head. Federal theologians usually imply that Adam and Eve would have gained immortality had they obeyed perfectly.
A second covenant, called the covenant of grace, is said to have been made immediately following Adam and Eve's sin. In it, God graciously offers salvation from death on condition of faith in God. This covenant is administered in different ways throughout the Old and New Testaments, but retains the substance of being free of a requirement of perfect obedience.
Through the influence of Karl Barth, many contemporary Reformed theologians have discarded the covenant of works, along with other concepts of federal theology. Barth saw the covenant of works as disconnected from Christ and the gospel, and rejected the idea that God works with people in this way. Instead, Barth argued that God always interacts with people under the covenant of grace, and that the covenant of grace is free of all conditions whatsoever. Barth's theology and that which follows him has been called "mono covenantal" as opposed to the "bi-covenantal" scheme of classical federal theology. Conservative contemporary Reformed theologians, such as John Murray, have also rejected the idea of covenants based on law rather than grace. Michael Horton, however, has defended the covenant of works as combining principles of law and love.
For the most part, the Reformed tradition did not modify the medieval consensus on the doctrine of God. God's character is described primarily using three adjectives: eternal, infinite, and unchangeable. Reformed theologians such as Shirley Guthrie have proposed that rather than conceiving of God in terms of his attributes and freedom to do as he pleases, the doctrine of God is to be based on God's work in history and his freedom to live with and empower people.
Reformed theologians have also traditionally followed the medieval tradition going back to before the early church councils of Nicaea and Chalcedon on the doctrine of the Trinity. God is affirmed to be one God in three persons: Father, Son, and Holy Spirit. The Son (Christ) is held to be eternally begotten by the Father and the Holy Spirit eternally proceeding from the Father and Son. However, contemporary theologians have been critical of aspects of Western views here as well. Drawing on the Eastern tradition, these Reformed theologians have proposed a "social trinitarianism" where the persons of the Trinity only exist in their life together as persons-in-relationship. Contemporary Reformed confessions such as the Barmen Confession and Brief Statement of Faith of the Presbyterian Church (USA) have avoided language about the attributes of God and have emphasized his work of reconciliation and empowerment of people. Feminist theologian Letty Russell used the image of partnership for the persons of the Trinity. According to Russell, thinking this way encourages Christians to interact in terms of fellowship rather than reciprocity. Conservative Reformed theologian Michael Horton, however, has argued that social trinitarianism is untenable because it abandons the essential unity of God in favor of a community of separate beings.
Reformed theologians affirm the historic Christian belief that Christ is eternally one person with a divine and a human nature. Reformed Christians have especially emphasized that Christ truly became human so that people could be saved. Christ's human nature has been a point of contention between Reformed and Lutheran Christology. In accord with the belief that finite humans cannot comprehend infinite divinity, Reformed theologians hold that Christ's human body cannot be in multiple locations at the same time. Because Lutherans believe that Christ is bodily present in the Eucharist, they hold that Christ is bodily present in many locations simultaneously. For Reformed Christians, such a belief denies that Christ actually became human. Some contemporary Reformed theologians have moved away from the traditional language of one person in two natures, viewing it as unintelligible to contemporary people. Instead, theologians tend to emphasize Jesus' context and particularity as a first-century Jew.
John Calvin and many Reformed theologians who followed him describe Christ's work of redemption in terms of three offices: prophet, priest, and king. Christ is said to be a prophet in that he teaches perfect doctrine, a priest in that he intercedes to the Father on believers' behalf and offered himself as a sacrifice for sin, and a king in that he rules the church and fights on believers' behalf. The threefold office links the work of Christ to God's work in ancient Israel. Many, but not all, Reformed theologians continue to make use of the threefold office as a framework because of its emphasis on the connection of Christ's work to Israel. They have, however, often reinterpreted the meaning of each of the offices. For example, Karl Barth interpreted Christ's prophetic office in terms of political engagement on behalf of the poor.
Christians believe Jesus' death and resurrection make it possible for believers to receive forgiveness for sin and reconciliation with God through the atonement. Reformed Protestants generally subscribe to a particular view of the atonement called penal substitutionary atonement, which explains Christ's death as a sacrificial payment for sin. Christ is believed to have died in place of the believer, who is accounted righteous as a result of this sacrificial payment.
In Christian theology, people are created good and in the image of God but have become corrupted by sin, which causes them to be imperfect and overly self-interested. Reformed Christians, following the tradition of Augustine of Hippo, believe that this corruption of human nature was brought on by Adam and Eve's first sin, a doctrine called original sin.
Although earlier Christian authors taught the elements of physical death, moral weakness, and a propensity to sin within original sin, Augustine was the first Christian to add the concept of inherited guilt (reatus) from Adam, whereby every infant is born eternally damned and humans lack any residual ability to respond to God. As a consequence, every one of Adam and Eve's descendants inherited a stain of corruption and depravity, a condition innate to all humans that is known in Christian theology as original sin. Reformed theologians emphasize that this sinfulness affects all of a person's nature, including their will. This view, that sin so dominates people that they are unable to avoid sin, has been called total depravity.
Calvin thought original sin was "a hereditary corruption and depravity of our nature, extending to all the parts of the soul." Calvin asserted people were so warped by original sin that "everything which our mind conceives, meditates, plans, and resolves, is always evil." The depraved condition of every human being is not the result of sins people commit during their lives. Instead, before we are born, while we are in our mother's womb, "we are in God's sight defiled and polluted." Calvin thought people were justly condemned to hell because their corrupted state is "naturally hateful to God."
In colloquial English, the term "total depravity" can easily be misunderstood to mean that people are devoid of any goodness or unable to do any good. However, the Reformed teaching is actually that while people continue to bear God's image and may do things that appear outwardly good, their sinful intentions affect all of their nature and actions so that they are not pleasing to God. From a Calvinist viewpoint, a person who has sinned was predestined to sin, and no matter what a person does, they will go to Heaven or Hell based on that determination. There is no repenting from sin, since the most evil thing is the sinner's own actions, thoughts, and words.
Some contemporary theologians in the Reformed tradition, such as those associated with the Presbyterian Church (USA)'s Confession of 1967, have emphasized the social character of human sinfulness. These theologians have sought to bring attention to issues of environmental, economic, and political justice as areas of human life that have been affected by sin.
Reformed theologians, along with other Protestants, believe salvation from punishment for sin is to be given to all those who have faith in Christ. Faith is not purely intellectual, but involves trust in God's promise to save. Protestants do not hold there to be any other requirement for salvation, but that faith alone is sufficient.
Justification is the part of salvation where God pardons the sin of those who believe in Christ. It is historically held by Protestants to be the most important article of Christian faith, though more recently it is sometimes given less importance out of ecumenical concerns. People are not on their own able to fully repent of their sin or prepare themselves to repent because of their sinfulness. Therefore, justification is held to arise solely from God's free and gracious act.
Sanctification is the part of salvation in which God makes believers holy, by enabling them to exercise greater love for God and for other people. The good works accomplished by believers as they are sanctified are considered to be the necessary outworking of the believer's salvation, though they do not cause the believer to be saved. Sanctification, like justification, is by faith, because doing good works is simply living as the child of God one has become.
Reformed theologians teach that sin so affects human nature that they are unable even to exercise faith in Christ by their own will. While people are said to retain will, in that they willfully sin, they are unable not to sin because of the corruption of their nature due to original sin. Reformed Christians believe that God predestined some people to be saved and others were predestined to eternal damnation. This choice by God to save some is held to be unconditional and not based on any characteristic or action on the part of the person chosen. This view is opposed to the Arminian view that God's choice of whom to save is conditional or based on his foreknowledge of who would respond positively to God.
Karl Barth reinterpreted the Reformed doctrine of predestination to apply only to Christ. Individual people are only said to be elected through their being in Christ. Reformed theologians who followed Barth, including Jürgen Moltmann, David Migliore, and Shirley Guthrie, have argued that the traditional Reformed concept of predestination is speculative and have proposed alternative models. These theologians claim that a properly trinitarian doctrine emphasizes God's freedom to love all people, rather than choosing some for salvation and others for damnation. God's justice towards and condemnation of sinful people is spoken of by these theologians as out of his love for them and a desire to reconcile them to himself.
Much attention surrounding Calvinism focuses on the "Five Points of Calvinism" (also called the doctrines of grace). The five points have been summarized under the acrostic TULIP. The five points are popularly said to summarize the Canons of Dort; however, there is no historical relationship between them, and some scholars argue that their language distorts the meaning of the Canons, Calvin's theology, and the theology of 17th-century Calvinistic orthodoxy, particularly in the language of total depravity and limited atonement. The five points were more recently popularized in the 1963 booklet The Five Points of Calvinism Defined, Defended, Documented by David N. Steele and Curtis C. Thomas. The origins of the five points and the acrostic are uncertain, but they appear to be outlined in the Counter Remonstrance of 1611, a lesser-known Reformed reply to the Arminians, which was written prior to the Canons of Dort. The acrostic was used by Cleland Boyd McAfee as early as circa 1905. An early printed appearance of the acrostic can be found in Loraine Boettner's 1932 book, The Reformed Doctrine of Predestination.
The central assertion of TULIP is that God saves every person upon whom he has mercy, and that his efforts are not frustrated by the unrighteousness or inability of humans.
Reformed Christians see the Christian Church as the community with which God has made the covenant of grace, a promise of eternal life and relationship with God. This covenant extends to those under the "old covenant" whom God chose, beginning with Abraham and Sarah. The church is conceived of as both invisible and visible. The invisible church is the body of all believers, known only to God. The visible church is the institutional body which contains both members of the invisible church as well as those who appear to have faith in Christ, but are not truly part of God's elect.
In order to identify the visible church, Reformed theologians have spoken of certain marks of the Church. For some, the only mark is the pure preaching of the gospel of Christ. Others, including John Calvin, also include the right administration of the sacraments. Others, such as those following the Scots Confession, include a third mark of rightly administered church discipline, or exercise of censure against unrepentant sinners. These marks allowed the Reformed to identify the church based on its conformity to the Bible rather than the Magisterium or church tradition.
The regulative principle of worship is a teaching shared by some Calvinists and Anabaptists on how the Bible orders public worship. The substance of the doctrine regarding worship is that God institutes in the Scriptures everything he requires for worship in the Church and that everything else is prohibited. As reflected in Calvin's own thought, the regulative principle is driven by his evident antipathy toward the Roman Catholic Church and its worship practices, and it associates musical instruments with icons, which he considered violations of the Ten Commandments' prohibition of graven images.
On this basis, many early Calvinists also eschewed musical instruments and advocated a cappella exclusive psalmody in worship, though Calvin himself allowed other scriptural songs as well as psalms, and this practice typified Presbyterian worship and the worship of other Reformed churches for some time. The original Lord's Day service designed by John Calvin was a highly liturgical service with the Creed, alms, confession and absolution, the Lord's Supper, doxologies, prayers, sung psalms, the sung Lord's Prayer, and benedictions.
Since the 19th century, however, some of the Reformed churches have modified their understanding of the regulative principle and make use of musical instruments, believing that Calvin and his early followers went beyond the biblical requirements and that such things are circumstances of worship requiring biblically rooted wisdom, rather than an explicit command. Despite the protestations of those who hold to a strict view of the regulative principle, today hymns and musical instruments are in common use, as are contemporary worship music styles with elements such as worship bands.
The Westminster Confession of Faith limits the sacraments to baptism and the Lord's Supper. Sacraments are denoted "signs and seals of the covenant of grace." Westminster speaks of "a sacramental relation, or a sacramental union, between the sign and the thing signified; whence it comes to pass that the names and effects of the one are attributed to the other." Baptism is for infant children of believers as well as believers, as it is for all the Reformed except Baptists and some Congregationalists. Baptism admits the baptized into the visible church, and in it all the benefits of Christ are offered to the baptized. On the Lord's supper, the Westminster Confession takes a position between Lutheran sacramental union and Zwinglian memorialism: worthy receivers "really and indeed, yet not carnally and corporally, but spiritually, receive and feed upon Christ crucified, and all benefits of his death: the body and blood of Christ being then not corporally or carnally in, with, or under the bread and wine; yet, as really, but spiritually, present to the faith of believers in that ordinance as the elements themselves are to their outward senses."
The 1689 London Baptist Confession of Faith does not use the term sacrament, but describes baptism and the Lord's supper as ordinances, as do most Baptists, Calvinist or otherwise. Baptism is only for those who "actually profess repentance towards God", and not for the children of believers. Baptists also insist on immersion or dipping, in contradistinction to other Reformed Christians. The Baptist Confession describes the Lord's supper as "the body and blood of Christ being then not corporally or carnally, but spiritually present to the faith of believers in that ordinance", similarly to the Westminster Confession. There is significant latitude in Baptist congregations regarding the Lord's supper, and many hold the Zwinglian view.
There are two schools of thought regarding the logical order of God's decree to ordain the fall of man: supralapsarianism (from the Latin: supra, "above", here meaning "before" + lapsus, "fall") and infralapsarianism (from the Latin: infra, "beneath", here meaning "after" + lapsus, "fall"). The former view, sometimes called "high Calvinism", argues that the Fall occurred partly to facilitate God's purpose to choose some individuals for salvation and some for damnation. Infralapsarianism, sometimes called "low Calvinism", is the position that, while the Fall was indeed planned, it was not planned with reference to who would be saved.
Supralapsarianism is based on the belief that God chose which individuals to save logically prior to the decision to allow the race to fall and that the Fall serves as the means of realization of that prior decision to send some individuals to hell and others to heaven (that is, it provides the grounds of condemnation in the reprobate and the need for salvation in the elect). In contrast, infralapsarians hold that God planned the race to fall logically prior to the decision to save or damn any individuals because, it is argued, in order to be "saved", one must first need to be saved from something and therefore the decree of the Fall must precede predestination to salvation or damnation.
These two views vied with each other at the Synod of Dort, an international body representing Calvinist Christian churches from around Europe, and the judgments that came out of that council sided with infralapsarianism (Canons of Dort, First Point of Doctrine, Article 7). The Westminster Confession of Faith also teaches (in Hodge's words "clearly impl[ies]") the infralapsarian view, but is sensitive to those holding to supralapsarianism. The Lapsarian controversy has a few vocal proponents on each side today, but overall it does not receive much attention among modern Calvinists.
The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominational families.
Considered to be the oldest and most orthodox bearers of the Reformed faith, the continental Reformed Churches uphold the Helvetic Confessions and Heidelberg Catechism, which were adopted in Zurich and Heidelberg, respectively. In the United States, immigrants belonging to the continental Reformed churches joined the Dutch Reformed Church there, as well as the Anglican Church.
The Congregational churches are a part of the Reformed tradition founded under the influence of New England Puritanism. The Savoy Declaration is the confession of faith held by the Congregationalist churches. An example of a Christian denomination belonging to the Congregationalist tradition is the Conservative Congregational Christian Conference.
The Presbyterian churches are part of the Reformed tradition and were influenced by John Knox's teachings in the Church of Scotland. Presbyterianism upholds the Westminster Confession of Faith.
Historic Anglicanism is a part of the wider Reformed tradition, as "the founding documents of the Anglican church—the Book of Homilies, the Book of Common Prayer, and the Thirty-Nine Articles of Religion—expresses a theology in keeping with the Reformed theology of the Swiss and South German Reformation." The Most Rev. Peter Robinson, presiding bishop of the United Episcopal Church of North America, writes:
Cranmer's personal journey of faith left its mark on the Church of England in the form of a Liturgy that remains to this day more closely allied to Lutheran practice, but that liturgy is coupled to a doctrinal stance that is broadly, but decidedly Reformed. ... The 42 Articles of 1552 and the 39 Articles of 1563, both commit the Church of England to the fundamentals of the Reformed Faith. Both sets of Articles affirm the centrality of Scripture, and take a monergist position on Justification. Both sets of Articles affirm that the Church of England accepts the doctrine of predestination and election as a 'comfort to the faithful' but warn against over much speculation concerning that doctrine. Indeed a casual reading of the Wurttemburg Confession of 1551, the Second Helvetic Confession, the Scots Confession of 1560, and the XXXIX Articles of Religion reveal them to be cut from the same bolt of cloth.
Reformed Baptist churches are Baptists (a Christian denominational family that teaches credobaptism rather than infant baptism) who adhere to Reformed theology as explicated in the 1689 Baptist Confession of Faith or other Reformed Baptist Confessions.
Calvinistic Baptist churches are Baptists who accept reformed soteriology as summarized in the acronym TULIP, but do not necessarily hold to a specific confession, or to covenant theology. This group is much less defined than other groups of Reformed Churches since the group subscribes to fewer specific standards.
Amyraldism (or sometimes Amyraldianism, also known as the School of Saumur, hypothetical universalism, post redemptionism, moderate Calvinism, or four-point Calvinism) is the belief that God, prior to his decree of election, decreed Christ's atonement for all alike if they believe, but seeing that none would believe on their own, he then elected those whom he will bring to faith in Christ, thereby preserving the Calvinist doctrine of unconditional election. The efficacy of the atonement remains limited to those who believe.
Named after its formulator Moses Amyraut, this doctrine is still viewed as a variety of Calvinism in that it maintains the particularity of sovereign grace in the application of the atonement. However, detractors like B. B. Warfield have termed it "an inconsistent and therefore unstable form of Calvinism."
Hyper-Calvinism first referred to a view that appeared among the early English Particular Baptists in the 18th century. Their system denied that the call of the gospel to "repent and believe" is directed to every single person and that it is the duty of every person to trust in Christ for salvation. The term also occasionally appears in both theological and secular controversial contexts, where it usually connotes a negative opinion about some variety of theological determinism, predestination, or a version of Evangelical Christianity or Calvinism that is deemed by the critic to be unenlightened, harsh, or extreme.
The Westminster Confession of Faith says that the gospel is to be freely offered to sinners, and the Larger Catechism makes clear that the gospel is offered to the non-elect.
Neo-Calvinism, a form of Dutch Calvinism that began in the 1880s, is the movement initiated by the theologian and former Dutch prime minister Abraham Kuyper. James Bratt has identified a number of different types of Dutch Calvinism: the Seceders, split into the Reformed Church "West" and the Confessionalists, and the Neo-Calvinists, split into the Positives and the Antithetical Calvinists. The Seceders were largely infralapsarian and the Neo-Calvinists usually supralapsarian.
Kuyper wanted to awaken the church from what he viewed as its pietistic slumber. He declared:
No single piece of our mental world is to be sealed off from the rest and there is not a square inch in the whole domain of human existence over which Christ, who is sovereign over all, does not cry: 'Mine!'
This refrain has become something of a rallying call for Neo-Calvinists.
Christian Reconstructionism is a fundamentalist Calvinist theonomic movement that has remained rather obscure. Founded by R. J. Rushdoony, the movement has had an important influence on the Christian Right in the United States. The movement peaked in the 1990s. However, it lives on in small denominations such as the Reformed Presbyterian Church in the United States and as a minority position in other denominations. Christian Reconstructionists are usually postmillennialists and followers of the presuppositional apologetics of Cornelius Van Til. They tend to support a decentralized political order resulting in laissez-faire capitalism.
New Calvinism is a growing perspective within conservative Evangelicalism that embraces the fundamentals of 16th century Calvinism while also trying to be relevant in the present day world. In March 2009, Time magazine described the New Calvinism as one of the "10 ideas changing the world". Some of the major figures who have been associated with the New Calvinism are John Piper, Mark Driscoll, Al Mohler, Mark Dever, C. J. Mahaney, and Tim Keller. New Calvinists have been criticized for blending Calvinist soteriology with popular Evangelical positions on the sacraments and continuationism and for rejecting tenets seen as crucial to the Reformed faith such as confessionalism and covenant theology.
Calvin expressed himself on usury in a 1545 letter to a friend, Claude de Sachin, in which he criticized the use of certain passages of scripture invoked by people opposed to the charging of interest. He reinterpreted some of these passages, and suggested that others of them had been rendered irrelevant by changed conditions. He also dismissed the argument (based upon the writings of Aristotle) that it is wrong to charge interest for money because money itself is barren. He said that the walls and the roof of a house are barren, too, but it is permissible to charge someone for allowing him to use them. In the same way, money can be made fruitful.
He qualified his view, however, by saying that money should be lent to people in dire need without hope of interest, while a modest interest rate of 5% should be permitted in relation to other borrowers.
In The Protestant Ethic and the Spirit of Capitalism, Max Weber wrote that capitalism in Northern Europe evolved when the Protestant (particularly Calvinist) ethic influenced large numbers of people to engage in work in the secular world, developing their own enterprises and engaging in trade and the accumulation of wealth for investment. In other words, the Protestant work ethic was an important force behind the unplanned and uncoordinated emergence of modern capitalism.
Researchers and authors have referred to the United States as a "Protestant nation" or as "founded on Protestant principles," specifically emphasizing its Calvinist heritage.
Calvin's concepts of God and man led to ideas which were gradually put into practice after his death, in particular in the fields of politics and society. After their fight for independence from Spain (1579), the Netherlands, under Calvinist leadership, granted asylum to religious minorities, including French Huguenots, English Independents (Congregationalists), and Jews from Spain and Portugal. The ancestors of the philosopher Baruch Spinoza were Portuguese Jews. Aware of the trial against Galileo, René Descartes lived in the Netherlands, out of reach of the Inquisition, from 1628 to 1649. Pierre Bayle, a Reformed Frenchman, also felt safer in the Netherlands than in his home country. He was the first prominent philosopher who demanded tolerance for atheists. Hugo Grotius (1583–1645) was able to publish a rather liberal interpretation of the Bible and his ideas about natural law in the Netherlands. Moreover, the Calvinist Dutch authorities allowed the printing of books that could not be published elsewhere, such as Galileo's Discorsi (1638).
Alongside the liberal development of the Netherlands came the rise of modern democracy in England and North America. In the Middle Ages, state and church had been closely connected. Martin Luther's doctrine of the two kingdoms separated state and church in principle. His doctrine of the priesthood of all believers raised the laity to the same level as the clergy. Going one step further, Calvin included elected laymen (church elders, presbyters) in his concept of church government. The Huguenots added synods whose members were also elected by the congregations. The other Reformed churches took over this system of church self-government, which was essentially a representative democracy. Baptists, Quakers, and Methodists are organized in a similar way. These denominations and the Anglican Church were influenced by Calvin's theology in varying degrees.
In another factor in the rise of democracy in the Anglo-American world, Calvin favored a mixture of democracy and aristocracy as the best form of government (mixed government). He appreciated the advantages of democracy. His political thought aimed to safeguard the rights and freedoms of ordinary men and women. In order to minimize the misuse of political power he suggested dividing it among several institutions in a system of checks and balances (separation of powers). Finally, Calvin taught that if worldly rulers rise up against God they should be put down. In this way, he and his followers stood in the vanguard of resistance to political absolutism and furthered the cause of democracy. The Congregationalists who founded Plymouth Colony (1620) and Massachusetts Bay Colony (1628) were convinced that the democratic form of government was the will of God. Enjoying self-rule, they practiced separation of powers. Rhode Island, Connecticut, and Pennsylvania, founded by Roger Williams, Thomas Hooker, and William Penn, respectively, combined democratic government with a limited freedom of religion that did not extend to Catholics (Congregationalism being the established, tax-supported religion in Connecticut). These colonies became safe havens for persecuted religious minorities, including Jews.
In England, Baptists Thomas Helwys (c. 1575–c. 1616), and John Smyth (c. 1554–c. 1612) influenced the liberal political thought of the Presbyterian poet and politician John Milton (1608–1674) and of the philosopher John Locke (1632–1704), who in turn had both a strong impact on the political development in their home country (English Civil War of 1642–1651, Glorious Revolution of 1688) as well as in North America. The ideological basis of the American Revolution was largely provided by the radical Whigs, who had been inspired by Milton, Locke, James Harrington (1611–1677), Algernon Sidney (1623–1683), and other thinkers. The Whigs' "perceptions of politics attracted widespread support in America because they revived the traditional concerns of a Protestantism that had always verged on Puritanism". The United States Declaration of Independence, the United States Constitution and (American) Bill of Rights initiated a tradition of human and civil rights that continued in the French Declaration of the Rights of Man and of the Citizen and the constitutions of numerous countries around the world, e.g. Latin America, Japan, India, Germany, and other European countries. It is also echoed in the United Nations Charter and the Universal Declaration of Human Rights.
In the 19th century, churches based on or influenced by Calvin's theology became deeply involved in social reforms, e.g. the abolition of slavery (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln, and others), women's suffrage, and prison reform. Members of these churches formed co-operatives to help the impoverished masses. The founders of the Red Cross Movement, including Henry Dunant, were Reformed Christians. Their movement also initiated the Geneva Conventions.
Others view Calvinist influence as not always being solely positive. The Boers and Afrikaner Calvinists combined ideas from Calvinism and Kuyperian theology to justify apartheid in South Africa. As late as 1974 the majority of the Dutch Reformed Church in South Africa was convinced that their theological stances (including the story of the Tower of Babel) could justify apartheid. In 1990 the Dutch Reformed Church document Church and Society maintained that although they were changing their stance on apartheid, they believed that within apartheid and under God's sovereign guidance, "...everything was not without significance, but was of service to the Kingdom of God." These views were not universal and were condemned by many Calvinists outside South Africa. Pressure from both outside and inside the Dutch Reformed Calvinist church helped reverse apartheid in South Africa.
Throughout the world, the Reformed churches operate hospitals, homes for the disabled or elderly, and educational institutions at all levels. For example, American Congregationalists founded Harvard (1636), Yale (1701), and about a dozen other colleges. A particular stream of Calvinist influence concerns art. Visual art helped cement society in the first modern nation state, the Netherlands, and Neo-Calvinism also put much weight on this aspect of life; Hans Rookmaaker is the most prolific example. In literature, Marilynne Robinson is a notable example: in her non-fiction she demonstrates the modernity of Calvin's thinking, calling him a humanist scholar (The Death of Adam, p. 174). | [
{
"paragraph_id": 0,
"text": "Calvinism, also called Reformed Christianity, is a major branch of Protestantism that follows the theological tradition and forms of Christian practice set down by John Calvin and various other Reformation-era theologians. It emphasizes the sovereignty of God and the authority of the Bible.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Calvinists broke from the Roman Catholic Church in the 16th century. Calvinists differ from Lutherans, another major branch of the Reformation, on the spiritual real presence of Christ in the Lord's Supper, theories of worship, the purpose and meaning of baptism, and the use of God's law for believers, among other points.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The namesake and founder of the movement, French reformer John Calvin, embraced Protestant beliefs in the late 1520s or early 1530s, as the earliest notions of later Reformed tradition were already espoused by Huldrych Zwingli. The movement was first called \"Calvinism\" in the early 1550s by Lutherans who opposed it, however many in the tradition find it either a nondescript or inappropriate term and prefer the term Reformed.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The most important Reformed theologians include Calvin, Zwingli, Martin Bucer, William Farel, Heinrich Bullinger, Thomas Cranmer, Nicholas Ridley, Peter Martyr Vermigli, Theodore Beza, John Knox, and John à Lasco. In the 20th century, Abraham Kuyper, Herman Bavinck, B. B. Warfield, J. Gresham Machen, Louis Berkhof, Karl Barth, Martyn Lloyd-Jones, Cornelius Van Til, R. C. Sproul, and J. I. Packer were influential. More contemporary Reformed theologians include the late Tim Keller, Desiring God Ministries founder John Piper, as well as Joel Beeke and Michael Horton.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominations. Several forms of ecclesiastical polity are exercised by a group of Reformed churches, including presbyterian, congregationalist, and some episcopal. The biggest Reformed association is the World Communion of Reformed Churches, with more than 100 million members in 211 member denominations around the world. More conservative Reformed federations include the World Reformed Fellowship and the International Conference of Reformed Churches.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Calvinism is named after John Calvin. Calvin denounced the designation himself:",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "They could attach us no greater insult than this word, Calvinism. It is not hard to guess where such a deadly hatred comes from that they hold against me.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "Since the Arminian controversy, the Reformed tradition as a branch of Protestantism is distinguished from Lutheranism and divided into two groups, Arminians and Calvinists.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "The first wave of reformist theologians include Huldrych Zwingli (1484–1531), Martin Bucer (1491–1551), Wolfgang Capito (1478–1541), John Oecolampadius (1482–1531), and Guillaume Farel (1489–1565). While from diverse academic backgrounds, their work already contained key themes within Reformed theology, especially the priority of scripture as a source of authority. Scripture was also viewed as a unified whole, which led to a covenantal theology of the sacraments of baptism and the Lord's Supper as visible signs of the covenant of grace. Another shared perspective was their denial of the Real presence of Christ in the Eucharist. Each understood salvation to be by grace alone and affirmed a doctrine of unconditional election, the teaching that some people are chosen by God to be saved. Martin Luther and his successor, Philipp Melanchthon were significant influences on these theologians, and to a larger extent, those who followed. The doctrine of justification by faith alone, also known as sola fide, was a direct inheritance from Luther.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The second generation featured John Calvin (1509–1564), Heinrich Bullinger (1504–1575), Wolfgang Musculus (1497–1563), Peter Martyr Vermigli (1500–1562), Andreas Hyperius (1511–1564) and John à Lasco (1499–1560). Written between 1536 and 1539, Calvin's Institutes of the Christian Religion was one of the most influential works of the era. Toward the middle of the 16th century, these beliefs were formed into one consistent creed, which would shape the future definition of the Reformed faith. The 1549 Consensus Tigurinus unified Zwingli and Bullinger's memorialist theology of the Eucharist, which taught that it was simply a reminder of Christ's death, with Calvin's view of it as a means of grace with Christ actually present, though spiritually rather than bodily as in Catholic doctrine. The document demonstrates the diversity as well as unity in early Reformed theology, giving it a stability that enabled it to spread rapidly throughout Europe. This stands in marked contrast to the bitter controversy experienced by Lutherans prior to the 1579 Formula of Concord.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Due to Calvin's missionary work in France, his program of reform eventually reached the French-speaking provinces of the Netherlands. Calvinism was adopted in the Electorate of the Palatinate under Frederick III, which led to the formulation of the Heidelberg Catechism in 1563. This and the Belgic Confession were adopted as confessional standards in the first synod of the Dutch Reformed Church in 1571.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1573, William the Silent joined the Calvinist Church. Calvinism was declared the official religion of the Kingdom of Navarre by the queen regnant Jeanne d'Albret after her conversion in 1560. Leading divines, either Calvinist or those sympathetic to Calvinism, settled in England, including Martin Bucer, Peter Martyr, and John Łaski, as did John Knox in Scotland.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "During the First English Civil War, English and Scots Presbyterians produced the Westminster Confession, which became the confessional standard for Presbyterians in the English-speaking world. Having established itself in Europe, the movement continued to spread to areas including North America, South Africa and Korea.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "While Calvin did not live to see the foundation of his work grow into an international movement, his death allowed his ideas to spread far beyond their city of origin and their borders and to establish their own distinct character.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Although much of Calvin's work was in Geneva, his publications spread his ideas of a correctly Reformed church to many parts of Europe. In Switzerland, some cantons are still Reformed, and some are Catholic. Calvinism became the dominant doctrine within the Church of Scotland, the Dutch Republic, some communities in Flanders, and parts of Germany, especially those adjacent to the Netherlands in the Palatinate, Kassel, and Lippe, spread by Olevianus and Zacharias Ursinus among others. Protected by the local nobility, Calvinism became a significant religion in Eastern Hungary and Hungarian-speaking areas of Transylvania. Today there are about 3.5 million Hungarian Reformed people worldwide.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Calvinism was influential in France, Lithuania, and Poland before being mostly erased during the Counter Reformation. One of the most important Polish reformed theologists was John a Lasco, who was also involved into organising churches in East Frisia and Stranger's Church in London. Later, a faction called the Polish Brethren broke away from Calvinism on January 22, 1556, when Piotr of Goniądz, a Polish student, spoke out against the doctrine of the Trinity during the general synod of the Reformed churches of Poland held in the village of Secemin. Calvinism gained some popularity in Scandinavia, especially Sweden, but was rejected in favor of Lutheranism after the Synod of Uppsala in 1593.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Many 17th century European settlers in the Thirteen Colonies in British America were Calvinists, who emigrated because of arguments over church structure, including the Pilgrim Fathers. Others were forced into exile, including the French Huguenots. Dutch and French Calvinist settlers were also among the first European colonizers of South Africa, beginning in the 17th century, who became known as Boers or Afrikaners.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Sierra Leone was largely colonized by Calvinist settlers from Nova Scotia, many of whom were Black Loyalists who fought for the British Empire during the American War of Independence. John Marrant had organized a congregation there under the auspices of the Huntingdon Connection. Some of the largest Calvinist communions were started by 19th- and 20th-century missionaries. Especially large are those in Indonesia, Korea and Nigeria. In South Korea there are 20,000 Presbyterian congregations with about 9–10 million church members, scattered in more than 100 Presbyterian denominations. In South Korea, Presbyterianism is the largest Christian denomination.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "A 2011 report of the Pew Forum on Religious and Public Life estimated that members of Presbyterian or Reformed churches make up 7% of the estimated 801 million Protestants globally, or approximately 56 million people. Though the broadly defined Reformed faith is much larger, as it constitutes Congregationalist (0.5%), most of the United and uniting churches (unions of different denominations) (7.2%) and most likely some of the other Protestant denominations (38.2%). All three are distinct categories from Presbyterian or Reformed (7%) in this report.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Reformed family of churches is one of the largest Christian denominations. According to adherents.com the Reformed/Presbyterian/Congregational/United churches represent 75 million believers worldwide.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The World Communion of Reformed Churches, which includes some United Churches, has 80 million believers. WCRC is the third largest Christian communion in the world, after the Roman Catholic Church and the Eastern Orthodox Churches.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Many conservative Reformed churches which are strongly Calvinistic formed the World Reformed Fellowship which has about 70 member denominations. Most are not part of the World Communion of Reformed Churches because of its ecumenical attire. The International Conference of Reformed Churches is another conservative association.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Church of Tuvalu is an officially established state church in the Calvinist tradition.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Reformed theologians believe that God communicates knowledge of himself to people through the Word of God. People are not able to know anything about God except through this self-revelation. (With the exception of general revelation of God; \"His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made, so that they are without excuse\" (Romans 1:20).) Speculation about anything which God has not revealed through his Word is not warranted. The knowledge people have of God is different from that which they have of anything else because God is infinite, and finite people are incapable of comprehending an infinite being. While the knowledge revealed by God to people is never incorrect, it is also never comprehensive.",
"title": "Theology"
},
{
"paragraph_id": 24,
"text": "According to Reformed theologians, God's self-revelation is always through his son Jesus Christ, because Christ is the only mediator between God and people. Revelation of God through Christ comes through two basic channels. The first is creation and providence, which is God's creating and continuing to work in the world. This action of God gives everyone knowledge about God, but this knowledge is only sufficient to make people culpable for their sin; it does not include knowledge of the gospel. The second channel through which God reveals himself is redemption, which is the gospel of salvation from condemnation which is punishment for sin.",
"title": "Theology"
},
{
"paragraph_id": 25,
"text": "In Reformed theology, the Word of God takes several forms. Jesus Christ himself is the Word Incarnate. The prophecies about him said to be found in the Old Testament and the ministry of the apostles who saw him and communicated his message are also the Word of God. Further, the preaching of ministers about God is the very Word of God because God is considered to be speaking through them. God also speaks through human writers in the Bible, which is composed of texts set apart by God for self-revelation. Reformed theologians emphasize the Bible as a uniquely important means by which God communicates with people. People gain knowledge of God from the Bible which cannot be gained in any other way.",
"title": "Theology"
},
{
"paragraph_id": 26,
"text": "Reformed theologians affirm that the Bible is true, but differences emerge among them over the meaning and extent of its truthfulness. Conservative followers of the Princeton theologians take the view that the Bible is true and inerrant, or incapable of error or falsehood, in every place. This view is similar to that of Catholic orthodoxy as well as modern Evangelicalism. Another view, influenced by the teaching of Karl Barth and neo-orthodoxy, is found in the Presbyterian Church (U.S.A.)'s Confession of 1967. Those who take this view believe the Bible to be the primary source of our knowledge of God, but also that some parts of the Bible may be false, not witnesses to Christ, and not normative for today's church. In this view, Christ is the revelation of God, and the scriptures witness to this revelation rather than being the revelation itself.",
"title": "Theology"
},
{
"paragraph_id": 27,
"text": "Reformed theologians use the concept of covenant to describe the way God enters into fellowship with people in history. The concept of covenant is so prominent in Reformed theology that Reformed theology as a whole is sometimes called \"covenant theology\". However, sixteenth- and seventeenth-century theologians developed a particular theological system called \"covenant theology\" or \"federal theology\" which many conservative Reformed churches continue to affirm today. This framework orders God's life with people primarily in two covenants: the covenant of works and the covenant of grace.",
"title": "Theology"
},
{
"paragraph_id": 28,
"text": "The covenant of works is made with Adam and Eve in the Garden of Eden. The terms of the covenant are that God provides a blessed life in the garden on condition that Adam and Eve obey God's law perfectly. Because Adam and Eve broke the covenant by eating the forbidden fruit, they became subject to death and were banished from the garden. This sin was passed down to all mankind because all people are said to be in Adam as a covenantal or \"federal\" head. Federal theologians usually imply that Adam and Eve would have gained immortality had they obeyed perfectly.",
"title": "Theology"
},
{
"paragraph_id": 29,
"text": "A second covenant, called the covenant of grace, is said to have been made immediately following Adam and Eve's sin. In it, God graciously offers salvation from death on condition of faith in God. This covenant is administered in different ways throughout the Old and New Testaments, but retains the substance of being free of a requirement of perfect obedience.",
"title": "Theology"
},
{
"paragraph_id": 30,
"text": "Through the influence of Karl Barth, many contemporary Reformed theologians have discarded the covenant of works, along with other concepts of federal theology. Barth saw the covenant of works as disconnected from Christ and the gospel, and rejected the idea that God works with people in this way. Instead, Barth argued that God always interacts with people under the covenant of grace, and that the covenant of grace is free of all conditions whatsoever. Barth's theology and that which follows him has been called \"mono covenantal\" as opposed to the \"bi-covenantal\" scheme of classical federal theology. Conservative contemporary Reformed theologians, such as John Murray, have also rejected the idea of covenants based on law rather than grace. Michael Horton, however, has defended the covenant of works as combining principles of law and love.",
"title": "Theology"
},
{
"paragraph_id": 31,
"text": "For the most part, the Reformed tradition did not modify the medieval consensus on the doctrine of God. God's character is described primarily using three adjectives: eternal, infinite, and unchangeable. Reformed theologians such as Shirley Guthrie have proposed that rather than conceiving of God in terms of his attributes and freedom to do as he pleases, the doctrine of God is to be based on God's work in history and his freedom to live with and empower people.",
"title": "Theology"
},
{
"paragraph_id": 32,
"text": "Reformed theologians have also traditionally followed the medieval tradition going back to before the early church councils of Nicaea and Chalcedon on the doctrine of the Trinity. God is affirmed to be one God in three persons: Father, Son, and Holy Spirit. The Son (Christ) is held to be eternally begotten by the Father and the Holy Spirit eternally proceeding from the Father and Son. However, contemporary theologians have been critical of aspects of Western views here as well. Drawing on the Eastern tradition, these Reformed theologians have proposed a \"social trinitarianism\" where the persons of the Trinity only exist in their life together as persons-in-relationship. Contemporary Reformed confessions such as the Barmen Confession and Brief Statement of Faith of the Presbyterian Church (USA) have avoided language about the attributes of God and have emphasized his work of reconciliation and empowerment of people. Feminist theologian Letty Russell used the image of partnership for the persons of the Trinity. According to Russell, thinking this way encourages Christians to interact in terms of fellowship rather than reciprocity. Conservative Reformed theologian Michael Horton, however, has argued that social trinitarianism is untenable because it abandons the essential unity of God in favor of a community of separate beings.",
"title": "Theology"
},
{
"paragraph_id": 33,
"text": "Reformed theologians affirm the historic Christian belief that Christ is eternally one person with a divine and a human nature. Reformed Christians have especially emphasized that Christ truly became human so that people could be saved. Christ's human nature has been a point of contention between Reformed and Lutheran Christology. In accord with the belief that finite humans cannot comprehend infinite divinity, Reformed theologians hold that Christ's human body cannot be in multiple locations at the same time. Because Lutherans believe that Christ is bodily present in the Eucharist, they hold that Christ is bodily present in many locations simultaneously. For Reformed Christians, such a belief denies that Christ actually became human. Some contemporary Reformed theologians have moved away from the traditional language of one person in two natures, viewing it as unintelligible to contemporary people. Instead, theologians tend to emphasize Jesus' context and particularity as a first-century Jew.",
"title": "Theology"
},
{
"paragraph_id": 34,
"text": "John Calvin and many Reformed theologians who followed him describe Christ's work of redemption in terms of three offices: prophet, priest, and king. Christ is said to be a prophet in that he teaches perfect doctrine, a priest in that he intercedes to the Father on believers' behalf and offered himself as a sacrifice for sin, and a king in that he rules the church and fights on believers' behalf. The threefold office links the work of Christ to God's work in ancient Israel. Many, but not all, Reformed theologians continue to make use of the threefold office as a framework because of its emphasis on the connection of Christ's work to Israel. They have, however, often reinterpreted the meaning of each of the offices. For example, Karl Barth interpreted Christ's prophetic office in terms of political engagement on behalf of the poor.",
"title": "Theology"
},
{
"paragraph_id": 35,
"text": "Christians believe Jesus' death and resurrection make it possible for believers to receive forgiveness for sin and reconciliation with God through the atonement. Reformed Protestants generally subscribe to a particular view of the atonement called penal substitutionary atonement, which explains Christ's death as a sacrificial payment for sin. Christ is believed to have died in place of the believer, who is accounted righteous as a result of this sacrificial payment.",
"title": "Theology"
},
{
"paragraph_id": 36,
"text": "In Christian theology, people are created good and in the image of God but have become corrupted by sin, which causes them to be imperfect and overly self-interested. Reformed Christians, following the tradition of Augustine of Hippo, believe that this corruption of human nature was brought on by Adam and Eve's first sin, a doctrine called original sin.",
"title": "Theology"
},
{
"paragraph_id": 37,
"text": "Although earlier Christian authors taught the elements of physical death, moral weakness, and a sin propensity within original sin, Augustine was the first Christian to add the concept of inherited guilt (reatus) from Adam whereby every infant is born eternally damned and humans lack any residual ability to respond to God. Reformed theologians emphasize that this sinfulness affects all of a person's nature, including their will. This view, that sin so dominates people that they are unable to avoid sin, has been called total depravity. As a consequence, every one of their descendants inherited a stain of corruption and depravity. This condition, innate to all humans, is known in Christian theology as original sin.",
"title": "Theology"
},
{
"paragraph_id": 38,
"text": "Calvin thought original sin was \"a hereditary corruption and depravity of our nature, extending to all the parts of the soul.\" Calvin asserted people were so warped by original sin that \"everything which our mind conceives, meditates, plans, and resolves, is always evil.\" The depraved condition of every human being is not the result of sins people commit during their lives. Instead, before we are born, while we are in our mother's womb, \"we are in God's sight defiled and polluted.\" Calvin thought people were justly condemned to hell because their corrupted state is \"naturally hateful to God.\"",
"title": "Theology"
},
{
"paragraph_id": 39,
"text": "In colloquial English, the term \"total depravity\" can be easily misunderstood to mean that people are absent of any goodness or unable to do any good. However the Reformed teaching is actually that while people continue to bear God's image and may do things that appear outwardly good, their sinful intentions affect all of their nature and actions so that they are not pleasing to God. From a Calvinist viewpoint, a person who has sinned was predestined to sin, and no matter what a person does, they will go to Heaven or Hell based on that determination. There is no repenting from sin since the most evil thing is the sinner's own actions, thoughts, and words.",
"title": "Theology"
},
{
"paragraph_id": 40,
"text": "Some contemporary theologians in the Reformed tradition, such as those associated with the Presbyterian Church (USA)'s Confession of 1967, have emphasized the social character of human sinfulness. These theologians have sought to bring attention to issues of environmental, economic, and political justice as areas of human life that have been affected by sin.",
"title": "Theology"
},
{
"paragraph_id": 41,
"text": "Reformed theologians, along with other Protestants, believe salvation from punishment for sin is to be given to all those who have faith in Christ. Faith is not purely intellectual, but involves trust in God's promise to save. Protestants do not hold there to be any other requirement for salvation, but that faith alone is sufficient.",
"title": "Theology"
},
{
"paragraph_id": 42,
"text": "Justification is the part of salvation where God pardons the sin of those who believe in Christ. It is historically held by Protestants to be the most important article of Christian faith, though more recently it is sometimes given less importance out of ecumenical concerns. People are not on their own able to fully repent of their sin or prepare themselves to repent because of their sinfulness. Therefore, justification is held to arise solely from God's free and gracious act.",
"title": "Theology"
},
{
"paragraph_id": 43,
"text": "Sanctification is the part of salvation in which God makes believers holy, by enabling them to exercise greater love for God and for other people. The good works accomplished by believers as they are sanctified are considered to be the necessary outworking of the believer's salvation, though they do not cause the believer to be saved. Sanctification, like justification, is by faith, because doing good works is simply living as the child of God one has become.",
"title": "Theology"
},
{
"paragraph_id": 44,
"text": "Reformed theologians teach that sin so affects human nature that they are unable even to exercise faith in Christ by their own will. While people are said to retain will, in that they willfully sin, they are unable not to sin because of the corruption of their nature due to original sin. Reformed Christians believe that God predestined some people to be saved and others were predestined to eternal damnation. This choice by God to save some is held to be unconditional and not based on any characteristic or action on the part of the person chosen. This view is opposed to the Arminian view that God's choice of whom to save is conditional or based on his foreknowledge of who would respond positively to God.",
"title": "Theology"
},
{
"paragraph_id": 45,
"text": "Karl Barth reinterpreted the Reformed doctrine of predestination to apply only to Christ. Individual people are only said to be elected through their being in Christ. Reformed theologians who followed Barth, including Jürgen Moltmann, David Migliore, and Shirley Guthrie, have argued that the traditional Reformed concept of predestination is speculative and have proposed alternative models. These theologians claim that a properly trinitarian doctrine emphasizes God's freedom to love all people, rather than choosing some for salvation and others for damnation. God's justice towards and condemnation of sinful people is spoken of by these theologians as out of his love for them and a desire to reconcile them to himself.",
"title": "Theology"
},
{
"paragraph_id": 46,
"text": "Much attention surrounding Calvinism focuses on the \"Five Points of Calvinism\" (also called the doctrines of grace). The five points have been summarized under the acrostic TULIP. The five points are popularly said to summarize the Canons of Dort; however, there is no historical relationship between them, and some scholars argue that their language distorts the meaning of the Canons, Calvin's theology, and the theology of 17th-century Calvinistic orthodoxy, particularly in the language of total depravity and limited atonement. The five points were more recently popularized in the 1963 booklet The Five Points of Calvinism Defined, Defended, Documented by David N. Steele and Curtis C. Thomas. The origins of the five points and the acrostic are uncertain, but they appear to be outlined in the Counter Remonstrance of 1611, a lesser-known Reformed reply to the Arminians, which was written prior to the Canons of Dort. The acrostic was used by Cleland Boyd McAfee as early as circa 1905. An early printed appearance of the acrostic can be found in Loraine Boettner's 1932 book, The Reformed Doctrine of Predestination.",
"title": "Theology"
},
{
"paragraph_id": 47,
"text": "The central assertion of TULIP is that God saves every person upon whom he has mercy, and that his efforts are not frustrated by the unrighteousness or inability of humans.",
"title": "Theology"
},
{
"paragraph_id": 48,
"text": "Reformed Christians see the Christian Church as the community with which God has made the covenant of grace, a promise of eternal life and relationship with God. This covenant extends to those under the \"old covenant\" whom God chose, beginning with Abraham and Sarah. The church is conceived of as both invisible and visible. The invisible church is the body of all believers, known only to God. The visible church is the institutional body which contains both members of the invisible church as well as those who appear to have faith in Christ, but are not truly part of God's elect.",
"title": "Theology"
},
{
"paragraph_id": 49,
"text": "In order to identify the visible church, Reformed theologians have spoken of certain marks of the Church. For some, the only mark is the pure preaching of the gospel of Christ. Others, including John Calvin, also include the right administration of the sacraments. Others, such as those following the Scots Confession, include a third mark of rightly administered church discipline, or exercise of censure against unrepentant sinners. These marks allowed the Reformed to identify the church based on its conformity to the Bible rather than the Magisterium or church tradition.",
"title": "Theology"
},
{
"paragraph_id": 50,
"text": "The regulative principle of worship is a teaching shared by some Calvinists and Anabaptists on how the Bible orders public worship. The substance of the doctrine regarding worship is that God institutes in the Scriptures everything he requires for worship in the Church and that everything else is prohibited. As the regulative principle is reflected in Calvin's own thought, it is driven by his evident antipathy toward the Roman Catholic Church and its worship practices, and it associates musical instruments with icons, which he considered violations of the Ten Commandments' prohibition of graven images.",
"title": "Theology"
},
{
"paragraph_id": 51,
"text": "On this basis, many early Calvinists also eschewed musical instruments and advocated a cappella exclusive psalmody in worship, though Calvin himself allowed other scriptural songs as well as psalms, and this practice typified Presbyterian worship and the worship of other Reformed churches for some time. The original Lord's Day service designed by John Calvin was a highly liturgical service with the Creed, Alms, Confession and Absolution, the Lord's supper, Doxologies, prayers, Psalms being sung, the Lords prayer being sung, and Benedictions.",
"title": "Theology"
},
{
"paragraph_id": 52,
"text": "Since the 19th century, however, some of the Reformed churches have modified their understanding of the regulative principle and make use of musical instruments, believing that Calvin and his early followers went beyond the biblical requirements and that such things are circumstances of worship requiring biblically rooted wisdom, rather than an explicit command. Despite the protestations of those who hold to a strict view of the regulative principle, today hymns and musical instruments are in common use, as are contemporary worship music styles with elements such as worship bands.",
"title": "Theology"
},
{
"paragraph_id": 53,
"text": "The Westminster Confession of Faith limits the sacraments to baptism and the Lord's Supper. Sacraments are denoted \"signs and seals of the covenant of grace.\" Westminster speaks of \"a sacramental relation, or a sacramental union, between the sign and the thing signified; whence it comes to pass that the names and effects of the one are attributed to the other.\" Baptism is for infant children of believers as well as believers, as it is for all the Reformed except Baptists and some Congregationalists. Baptism admits the baptized into the visible church, and in it all the benefits of Christ are offered to the baptized. On the Lord's supper, the Westminster Confession takes a position between Lutheran sacramental union and Zwinglian memorialism: \"the Lord's supper really and indeed, yet not carnally and corporally, but spiritually, receive and feed upon Christ crucified, and all benefits of his death: the body and blood of Christ being then not corporally or carnally in, with, or under the bread and wine; yet, as really, but spiritually, present to the faith of believers in that ordinance as the elements themselves are to their outward senses.\"",
"title": "Theology"
},
{
"paragraph_id": 54,
"text": "The 1689 London Baptist Confession of Faith does not use the term sacrament, but describes baptism and the Lord's supper as ordinances, as do most Baptists, Calvinist or otherwise. Baptism is only for those who \"actually profess repentance towards God\", and not for the children of believers. Baptists also insist on immersion or dipping, in contradistinction to other Reformed Christians. The Baptist Confession describes the Lord's supper as \"the body and blood of Christ being then not corporally or carnally, but spiritually present to the faith of believers in that ordinance\", similarly to the Westminster Confession. There is significant latitude in Baptist congregations regarding the Lord's supper, and many hold the Zwinglian view.",
"title": "Theology"
},
{
"paragraph_id": 55,
"text": "There are two schools of thought regarding the logical order of God's decree to ordain the fall of man: supralapsarianism (from the Latin: supra, \"above\", here meaning \"before\" + lapsus, \"fall\") and infralapsarianism (from the Latin: infra, \"beneath\", here meaning \"after\" + lapsus, \"fall\"). The former view, sometimes called \"high Calvinism\", argues that the Fall occurred partly to facilitate God's purpose to choose some individuals for salvation and some for damnation. Infralapsarianism, sometimes called \"low Calvinism\", is the position that, while the Fall was indeed planned, it was not planned with reference to who would be saved.",
"title": "Theology"
},
{
"paragraph_id": 56,
"text": "Supralapsarianism is based on the belief that God chose which individuals to save logically prior to the decision to allow the race to fall and that the Fall serves as the means of realization of that prior decision to send some individuals to hell and others to heaven (that is, it provides the grounds of condemnation in the reprobate and the need for salvation in the elect). In contrast, infralapsarians hold that God planned the race to fall logically prior to the decision to save or damn any individuals because, it is argued, in order to be \"saved\", one must first need to be saved from something and therefore the decree of the Fall must precede predestination to salvation or damnation.",
"title": "Theology"
},
{
"paragraph_id": 57,
"text": "These two views vied with each other at the Synod of Dort, an international body representing Calvinist Christian churches from around Europe, and the judgments that came out of that council sided with infralapsarianism (Canons of Dort, First Point of Doctrine, Article 7). The Westminster Confession of Faith also teaches (in Hodge's words \"clearly impl[ies]\") the infralapsarian view, but is sensitive to those holding to supralapsarianism. The Lapsarian controversy has a few vocal proponents on each side today, but overall it does not receive much attention among modern Calvinists.",
"title": "Theology"
},
{
"paragraph_id": 58,
"text": "The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominational families.",
"title": "Reformed churches"
},
{
"paragraph_id": 59,
"text": "Considered to be the oldest and most orthodox bearers of the Reformed faith, the continental Reformed Churches uphold the Helvetic Confessions and Heidelberg Catechism, which were adopted in Zurich and Heidelberg, respectively. In the United States, immigrants belonging to the continental Reformed churches joined the Dutch Reformed Church there, as well as the Anglican Church.",
"title": "Reformed churches"
},
{
"paragraph_id": 60,
"text": "The Congregational churches are a part of the Reformed tradition founded under the influence of New England Puritanism. The Savoy Declaration is the confession of faith held by the Congregationalist churches. An example of a Christian denomination belonging to the Congregationalist tradition is the Conservative Congregational Christian Conference.",
"title": "Reformed churches"
},
{
"paragraph_id": 61,
"text": "The Presbyterian churches are part of the Reformed tradition and were influenced by John Knox's teachings in the Church of Scotland. Presbyterianism upholds the Westminster Confession of Faith.",
"title": "Reformed churches"
},
{
"paragraph_id": 62,
"text": "Historic Anglicanism is a part of the wider Reformed tradition, as \"the founding documents of the Anglican church—the Book of Homilies, the Book of Common Prayer, and the Thirty-Nine Articles of Religion—expresses a theology in keeping with the Reformed theology of the Swiss and South German Reformation.\" The Most Rev. Peter Robinson, presiding bishop of the United Episcopal Church of North America, writes:",
"title": "Reformed churches"
},
{
"paragraph_id": 63,
"text": "Cranmer's personal journey of faith left its mark on the Church of England in the form of a Liturgy that remains to this day more closely allied to Lutheran practice, but that liturgy is couple to a doctrinal stance that is broadly, but decidedly Reformed. ... The 42 Articles of 1552 and the 39 Articles of 1563, both commit the Church of England to the fundamentals of the Reformed Faith. Both sets of Articles affirm the centrality of Scripture, and take a monergist position on Justification. Both sets of Articles affirm that the Church of England accepts the doctrine of predestination and election as a 'comfort to the faithful' but warn against over much speculation concerning that doctrine. Indeed a casual reading of the Wurttemburg Confession of 1551, the Second Helvetic Confession, the Scots Confession of 1560, and the XXXIX Articles of Religion reveal them to be cut from the same bolt of cloth.",
"title": "Reformed churches"
},
{
"paragraph_id": 64,
"text": "Reformed Baptist churches are Baptists (a Christian denominational family that teaches credobaptism rather than infant baptism) who adhere to Reformed theology as explicated in the 1689 Baptist Confession of Faith or other Reformed Baptist Confessions.",
"title": "Reformed churches"
},
{
"paragraph_id": 65,
"text": "Calvinistic Baptist churches are Baptists who accept reformed soteriology as summarized in the acronym TULIP, but do not necessarily hold to a specific confession, or to covenant theology. This group is much less defined than other groups of Reformed Churches since the group subscribes to fewer specific standards.",
"title": "Reformed churches"
},
{
"paragraph_id": 66,
"text": "Amyraldism (or sometimes Amyraldianism, also known as the School of Saumur, hypothetical universalism, post redemptionism, moderate Calvinism, or four-point Calvinism) is the belief that God, prior to his decree of election, decreed Christ's atonement for all alike if they believe, but seeing that none would believe on their own, he then elected those whom he will bring to faith in Christ, thereby preserving the Calvinist doctrine of unconditional election. The efficacy of the atonement remains limited to those who believe.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 67,
"text": "Named after its formulator Moses Amyraut, this doctrine is still viewed as a variety of Calvinism in that it maintains the particularity of sovereign grace in the application of the atonement. However, detractors like B. B. Warfield have termed it \"an inconsistent and therefore unstable form of Calvinism.\"",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 68,
"text": "Hyper-Calvinism first referred to a view that appeared among the early English Particular Baptists in the 18th century. Their system denied that the call of the gospel to \"repent and believe\" is directed to every single person and that it is the duty of every person to trust in Christ for salvation. The term also occasionally appears in both theological and secular controversial contexts, where it usually connotes a negative opinion about some variety of theological determinism, predestination, or a version of Evangelical Christianity or Calvinism that is deemed by the critic to be unenlightened, harsh, or extreme.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 69,
"text": "The Westminster Confession of Faith says that the gospel is to be freely offered to sinners, and the Larger Catechism makes clear that the gospel is offered to the non-elect.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 70,
"text": "Beginning in the 1880s, Neo-Calvinism, a form of Dutch Calvinism, is the movement initiated by the theologian and former Dutch prime minister Abraham Kuyper. James Bratt has identified a number of different types of Dutch Calvinism: The Seceders—split into the Reformed Church \"West\" and the Confessionalists; and the Neo-Calvinists—the Positives and the Antithetical Calvinists. The Seceders were largely infralapsarian and the Neo-Calvinists usually supralapsarian.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 71,
"text": "Kuyper wanted to awaken the church from what he viewed as its pietistic slumber. He declared:",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 72,
"text": "No single piece of our mental world is to be sealed off from the rest and there is not a square inch in the whole domain of human existence over which Christ, who is sovereign over all, does not cry: 'Mine!'",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 73,
"text": "This refrain has become something of a rallying call for Neo-Calvinists.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 74,
"text": "Christian Reconstructionism is a fundamentalist Calvinist theonomic movement that has remained rather obscure. Founded by R. J. Rushdoony, the movement has had an important influence on the Christian Right in the United States. The movement peaked in the 1990s. However, it lives on in small denominations such as the Reformed Presbyterian Church in the United States and as a minority position in other denominations. Christian Reconstructionists are usually postmillennialists and followers of the presuppositional apologetics of Cornelius Van Til. They tend to support a decentralized political order resulting in laissez-faire capitalism.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 75,
"text": "New Calvinism is a growing perspective within conservative Evangelicalism that embraces the fundamentals of 16th century Calvinism while also trying to be relevant in the present day world. In March 2009, Time magazine described the New Calvinism as one of the \"10 ideas changing the world\". Some of the major figures who have been associated with the New Calvinism are John Piper, Mark Driscoll, Al Mohler, Mark Dever, C. J. Mahaney, and Tim Keller. New Calvinists have been criticized for blending Calvinist soteriology with popular Evangelical positions on the sacraments and continuationism and for rejecting tenets seen as crucial to the Reformed faith such as confessionalism and covenant theology.",
"title": "Variants in Reformed theology"
},
{
"paragraph_id": 76,
"text": "Calvin expressed himself on usury in a 1545 letter to a friend, Claude de Sachin, in which he criticized the use of certain passages of scripture invoked by people opposed to the charging of interest. He reinterpreted some of these passages, and suggested that others of them had been rendered irrelevant by changed conditions. He also dismissed the argument (based upon the writings of Aristotle) that it is wrong to charge interest for money because money itself is barren. He said that the walls and the roof of a house are barren, too, but it is permissible to charge someone for allowing him to use them. In the same way, money can be made fruitful.",
"title": "Social and economic influences"
},
{
"paragraph_id": 77,
"text": "He qualified his view, however, by saying that money should be lent to people in dire need without hope of interest, while a modest interest rate of 5% should be permitted in relation to other borrowers.",
"title": "Social and economic influences"
},
{
"paragraph_id": 78,
"text": "In The Protestant Ethic and the Spirit of Capitalism, Max Weber wrote that capitalism in Northern Europe evolved when the Protestant (particularly Calvinist) ethic influenced large numbers of people to engage in work in the secular world, developing their own enterprises and engaging in trade and the accumulation of wealth for investment. In other words, the Protestant work ethic was an important force behind the unplanned and uncoordinated emergence of modern capitalism.",
"title": "Social and economic influences"
},
{
"paragraph_id": 79,
"text": "Expert researchers and authors have referred to the United States as a \"Protestant nation\" or \"founded on Protestant principles,\" specifically emphasizing its Calvinist heritage.",
"title": "Social and economic influences"
},
{
"paragraph_id": 80,
"text": "Calvin's concepts of God and man led to ideas which were gradually put into practice after his death, in particular in the fields of politics and society. After their fight for independence from Spain (1579), the Netherlands, under Calvinist leadership, granted asylum to religious minorities, including French Huguenots, English Independents (Congregationalists), and Jews from Spain and Portugal. The ancestors of the philosopher Baruch Spinoza were Portuguese Jews. Aware of the trial against Galileo, René Descartes lived in the Netherlands, out of reach of the Inquisition, from 1628 to 1649. Pierre Bayle, a Reformed Frenchman, also felt safer in the Netherlands than in his home country. He was the first prominent philosopher who demanded tolerance for atheists. Hugo Grotius (1583–1645) was able to publish a rather liberal interpretation of the Bible and his ideas about natural law in the Netherlands. Moreover, the Calvinist Dutch authorities allowed the printing of books that could not be published elsewhere, such as Galileo's Discorsi (1638).",
"title": "Politics and society"
},
{
"paragraph_id": 81,
"text": "Alongside the liberal development of the Netherlands came the rise of modern democracy in England and North America. In the Middle Ages, state and church had been closely connected. Martin Luther's doctrine of the two kingdoms separated state and church in principle. His doctrine of the priesthood of all believers raised the laity to the same level as the clergy. Going one step further, Calvin included elected laymen (church elders, presbyters) in his concept of church government. The Huguenots added synods whose members were also elected by the congregations. The other Reformed churches took over this system of church self-government, which was essentially a representative democracy. Baptists, Quakers, and Methodists are organized in a similar way. These denominations and the Anglican Church were influenced by Calvin's theology in varying degrees.",
"title": "Politics and society"
},
{
"paragraph_id": 82,
"text": "In another factor in the rise of democracy in the Anglo-American world, Calvin favored a mixture of democracy and aristocracy as the best form of government (mixed government). He appreciated the advantages of democracy. His political thought aimed to safeguard the rights and freedoms of ordinary men and women. In order to minimize the misuse of political power he suggested dividing it among several institutions in a system of checks and balances (separation of powers). Finally, Calvin taught that if worldly rulers rise up against God they should be put down. In this way, he and his followers stood in the vanguard of resistance to political absolutism and furthered the cause of democracy. The Congregationalists who founded Plymouth Colony (1620) and Massachusetts Bay Colony (1628) were convinced that the democratic form of government was the will of God. Enjoying self-rule, they practiced separation of powers. Rhode Island, Connecticut, and Pennsylvania, founded by Roger Williams, Thomas Hooker, and William Penn, respectively, combined democratic government with a limited freedom of religion that did not extend to Catholics (Congregationalism being the established, tax-supported religion in Connecticut). These colonies became safe havens for persecuted religious minorities, including Jews.",
"title": "Politics and society"
},
{
"paragraph_id": 83,
"text": "In England, Baptists Thomas Helwys (c. 1575–c. 1616), and John Smyth (c. 1554–c. 1612) influenced the liberal political thought of the Presbyterian poet and politician John Milton (1608–1674) and of the philosopher John Locke (1632–1704), who in turn had both a strong impact on the political development in their home country (English Civil War of 1642–1651, Glorious Revolution of 1688) as well as in North America. The ideological basis of the American Revolution was largely provided by the radical Whigs, who had been inspired by Milton, Locke, James Harrington (1611–1677), Algernon Sidney (1623–1683), and other thinkers. The Whigs' \"perceptions of politics attracted widespread support in America because they revived the traditional concerns of a Protestantism that had always verged on Puritanism\". The United States Declaration of Independence, the United States Constitution and (American) Bill of Rights initiated a tradition of human and civil rights that continued in the French Declaration of the Rights of Man and of the Citizen and the constitutions of numerous countries around the world, e.g. Latin America, Japan, India, Germany, and other European countries. It is also echoed in the United Nations Charter and the Universal Declaration of Human Rights.",
"title": "Politics and society"
},
{
"paragraph_id": 84,
"text": "In the 19th century, churches based on or influenced by Calvin's theology became deeply involved in social reforms, e.g. the abolition of slavery (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln, and others), women suffrage, and prison reforms. Members of these churches formed co-operatives to help the impoverished masses. The founders of the Red Cross Movement, including Henry Dunant, were Reformed Christians. Their movement also initiated the Geneva Conventions.",
"title": "Politics and society"
},
{
"paragraph_id": 85,
"text": "Others view Calvinist influence as not always being solely positive. The Boers and Afrikaner Calvinists combined ideas from Calvinism and Kuyperian theology to justify apartheid in South Africa. As late as 1974 the majority of the Dutch Reformed Church in South Africa was convinced that their theological stances (including the story of the Tower of Babel) could justify apartheid. In 1990 the Dutch Reformed Church document Church and Society maintained that although they were changing their stance on apartheid, they believed that within apartheid and under God's sovereign guidance, \"...everything was not without significance, but was of service to the Kingdom of God.\" These views were not universal and were condemned by many Calvinists outside South Africa. Pressure from both outside and inside the Dutch Reformed Calvinist church helped reverse apartheid in South Africa.",
"title": "Politics and society"
},
{
"paragraph_id": 86,
"text": "Throughout the world, the Reformed churches operate hospitals, homes for handicapped or elderly people, and educational institutions on all levels. For example, American Congregationalists founded Harvard (1636), Yale (1701), and about a dozen other colleges. A particular stream of influence of Calvinism concerns art. Visual art cemented society in the first modern nation state, the Netherlands, and also Neo-Calvinism put much weight on this aspect of life. Hans Rookmaaker is the most prolific example. In literature one can think of Marilynne Robinson. In her non-fiction she powerfully demonstrates the modernity of Calvin's thinking, calling him a humanist scholar (p. 174, The Death of Adam).",
"title": "Politics and society"
}
]
| Calvinism, also called Reformed Christianity, is a major branch of Protestantism that follows the theological tradition and forms of Christian practice set down by John Calvin and various other Reformation-era theologians. It emphasizes the sovereignty of God and the authority of the Bible. Calvinists broke from the Roman Catholic Church in the 16th century. Calvinists differ from Lutherans, another major branch of the Reformation, on the spiritual real presence of Christ in the Lord's Supper, theories of worship, the purpose and meaning of baptism, and the use of God's law for believers, among other points. The namesake and founder of the movement, French reformer John Calvin, embraced Protestant beliefs in the late 1520s or early 1530s, as the earliest notions of later Reformed tradition were already espoused by Huldrych Zwingli. The movement was first called "Calvinism" in the early 1550s by Lutherans who opposed it, however many in the tradition find it either a nondescript or inappropriate term and prefer the term Reformed. The most important Reformed theologians include Calvin, Zwingli, Martin Bucer, William Farel, Heinrich Bullinger, Thomas Cranmer, Nicholas Ridley, Peter Martyr Vermigli, Theodore Beza, John Knox, and John à Lasco. In the 20th century, Abraham Kuyper, Herman Bavinck, B. B. Warfield, J. Gresham Machen, Louis Berkhof, Karl Barth, Martyn Lloyd-Jones, Cornelius Van Til, R. C. Sproul, and J. I. Packer were influential. More contemporary Reformed theologians include the late Tim Keller, Desiring God Ministries founder John Piper, as well as Joel Beeke and Michael Horton. The Reformed tradition is largely represented by the Continental Reformed, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominations. Several forms of ecclesiastical polity are exercised by a group of Reformed churches, including presbyterian, congregationalist, and some episcopal. The biggest Reformed association is the World Communion of Reformed Churches, with more than 100 million members in 211 member denominations around the world. More conservative Reformed federations include the World Reformed Fellowship and the International Conference of Reformed Churches. | 2001-08-20T04:03:50Z | 2023-12-29T22:19:05Z | [
"Template:Portal",
"Template:Notelist",
"Template:Cite book",
"Template:Short description",
"Template:Redirect",
"Template:Protestantism",
"Template:Blockquote",
"Template:Citation needed",
"Template:Cite web",
"Template:Refend",
"Template:Authority control",
"Template:Further",
"Template:Cite wikisource",
"Template:Cite conference",
"Template:Use American English",
"Template:Christianity",
"Template:Efn",
"Template:Main",
"Template:Comparison among Protestants",
"Template:Cite magazine",
"Template:Refbegin",
"Template:Christianity footer",
"Template:Calvinism",
"Template:TULIP",
"Template:Cite journal",
"Template:Citation",
"Template:Sister project links",
"Template:Use dmy dates",
"Template:Quotation",
"Template:Reign",
"Template:Cite news",
"Template:Harvnb",
"Template:In Our Time",
"Template:Religion topics",
"Template:Blacklisted-links",
"Template:See also",
"Template:Circa",
"Template:Reflist",
"Template:Heresies condemned by the Catholic Church",
"Template:Sfn",
"Template:Cite encyclopedia"
]
| https://en.wikipedia.org/wiki/Calvinism |
6,026 | Countable set | In mathematics, a set is countable if either it is finite or it can be made in one-to-one correspondence with the set of natural numbers. Equivalently, a set is countable if there exists an injective function from it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements.
In more technical terms, assuming the axiom of countable choice, a set is countable if its cardinality (the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite.
The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example, the set of the real numbers.
Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal. An alternative style uses countable to mean what is here called countably infinite, and at most countable to mean what is here called countable. To avoid ambiguity, one may limit oneself to the terms "at most countable" and "countably infinite", although with respect to concision this is the worst of both worlds. The reader is advised to check the definition in use when encountering the term "countable" in the literature.
The terms enumerable and denumerable may also be used, e.g. referring to countable and countably infinite respectively, but as definitions vary the reader is once again advised to check the definition in use.
A set $S$ is countable if: there exists an injective function from $S$ to the natural numbers $\mathbb{N}$; or, equivalently, there exists a bijection between $S$ and a subset of $\mathbb{N}$; or, equivalently, $S$ is either finite or can be put in one-to-one correspondence with $\mathbb{N}$; or, equivalently, its cardinality $|S|$ is at most $\aleph_0$.
All of these definitions are equivalent.
A set $S$ is countably infinite if: there exists a bijection between $S$ and all of $\mathbb{N}$; or, equivalently, the elements of $S$ can be listed in an infinite sequence $a_0, a_1, a_2, \dots$ in which every element of $S$ appears exactly once; or, equivalently, its cardinality is exactly $\aleph_0$.
A set is uncountable if it is not countable, i.e. its cardinality is greater than $\aleph_0$.
In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable, thus showing that not all infinite sets are countable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.
A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted $\{3,4,5\}$, called roster form. This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example, $\{1,2,3,\dots,100\}$ presumably denotes the set of integers from 1 to 100. Even in this case, however, it is still possible to list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up to $n$, this gives us the usual definition of "sets of size $n$".
Some sets are infinite; these sets have more than $n$ elements where $n$ is any integer that can be specified. (No matter how large the specified integer $n$ is, such as $n = 10^{1000}$, infinite sets have more than $n$ elements.) For example, the set of natural numbers, denotable by $\{0,1,2,3,4,5,\dots\}$, has infinitely many elements, and we cannot use any natural number to give its size. It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:
or, more generally, $n \rightarrow 2n$ (see picture). What we have done here is arrange the integers and the even integers into a one-to-one correspondence (or bijection), which is a function that maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integers countably infinite and say they have cardinality $\aleph_0$.
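To make the pairing concrete, here is a minimal Python sketch (the names f and g are ours, purely illustrative, not from the article): $f(n) = 2n$ pairs every integer with a distinct even integer, and $g$ halves it back, which is exactly what a one-to-one correspondence requires.

```python
# Sketch: the bijection n -> 2n between the integers and the even integers.
def f(n: int) -> int:
    """Pair the integer n with the even integer 2n."""
    return 2 * n

def g(m: int) -> int:
    """Invert f: recover n from the even integer m = 2n."""
    return m // 2  # exact for even m, including negative values

for n in range(-3, 4):
    assert g(f(n)) == n  # f has a two-sided inverse on the evens: a bijection

print([(n, f(n)) for n in range(-3, 4)])
# [(-3, -6), (-2, -4), (-1, -2), (0, 0), (1, 2), (2, 4), (3, 6)]
```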
Georg Cantor showed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable.
By definition, a set $S$ is countable if there exists a bijection between $S$ and a subset of the natural numbers $\mathbb{N} = \{0,1,2,\dots\}$. For example, define the correspondence $a \leftrightarrow 1$, $b \leftrightarrow 2$, $c \leftrightarrow 3$.
Since every element of S = {a, b, c} is paired with precisely one element of {1, 2, 3}, and vice versa, this defines a bijection, and shows that S is countable. Similarly we can show all finite sets are countable.
As for the case of infinite sets, a set S is countably infinite if there is a bijection between S and all of ℕ. As examples, consider the sets A = {1, 2, 3, …}, the set of positive integers, and B = {0, 2, 4, 6, …}, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignments n ↔ n + 1 and n ↔ 2n, so that
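1 ↔ 0, 2 ↔ 1, 3 ↔ 2, 4 ↔ 3, … and 0 ↔ 0, 2 ↔ 1, 4 ↔ 2, 6 ↔ 3, …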
Every countably infinite set is countable, and every infinite countable set is countably infinite. Furthermore, any subset of the natural numbers is countable, and more generally:
Theorem — A subset of a countable set is countable.
The set of all ordered pairs of natural numbers (the Cartesian product of two sets of natural numbers, ℕ × ℕ) is countably infinite, as can be seen by following a path like the one in the picture:
The resulting mapping proceeds as follows:
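0 ↔ (0, 0), 1 ↔ (1, 0), 2 ↔ (0, 1), 3 ↔ (2, 0), 4 ↔ (1, 1), 5 ↔ (0, 2), 6 ↔ (3, 0), …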
This mapping covers all such ordered pairs.
This form of triangular mapping recursively generalizes to n-tuples of natural numbers, i.e., (a_1, a_2, a_3, …, a_n) where a_i and n are natural numbers, by repeatedly mapping the first two elements of an n-tuple to a natural number. For example, (0, 2, 3) can be written as ((0, 2), 3). Then (0, 2) maps to 5, so ((0, 2), 3) maps to (5, 3); then (5, 3) maps to 39. Since distinct pairs (a, b) map to distinct natural numbers, two n-tuples that differ in even a single element are mapped to different natural numbers. This establishes an injection from the set of n-tuples to the set of natural numbers ℕ. For the set of n-tuples made by the Cartesian product of finitely many different countable sets, each element in each tuple corresponds to a natural number, so every tuple can be rewritten as a tuple of natural numbers, and the same argument then proves the theorem.
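The values quoted above, (0, 2) ↦ 5 and (5, 3) ↦ 39, are reproduced by the Cantor pairing function π(a, b) = (a + b)(a + b + 1)/2 + b, one standard realization of the triangular path (the exact enumeration is fixed by the article's picture, so this particular formula is an illustrative assumption). A minimal Python sketch of the tuple encoding:

    def pair(a, b):
        # Cantor pairing function: a bijection from N x N to N,
        # consistent with the diagonal enumeration described above.
        return (a + b) * (a + b + 1) // 2 + b

    def encode(t):
        # Collapse an n-tuple to a single natural number by repeatedly
        # pairing its first two elements, as described in the text.
        while len(t) > 1:
            t = (pair(t[0], t[1]),) + t[2:]
        return t[0]

    assert pair(0, 2) == 5
    assert pair(5, 3) == 39
    assert encode((0, 2, 3)) == 39  # ((0, 2), 3) -> (5, 3) -> 39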
Theorem — The Cartesian product of finitely many countable sets is countable.
The set of all integers ℤ and the set of all rational numbers ℚ may intuitively seem much bigger than ℕ. But looks can be deceiving. If a pair is treated as the numerator and denominator of a vulgar fraction (a fraction in the form a/b, where a and b are integers and b ≠ 0), then for every positive fraction we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural number n is also a fraction n/1. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below.
Theorem — ℤ (the set of all integers) and ℚ (the set of all rational numbers) are countable.
In a similar manner, the set of algebraic numbers is countable.
Sometimes more than one mapping is useful: to show that a set A is countable, one first maps A one-to-one (an injection) into another set B; then A is proved countable if B can be one-to-one mapped to the set of natural numbers. For example, the set of positive rational numbers can easily be one-to-one mapped to the set of natural number pairs (2-tuples), because p/q maps to (p, q). Since the set of natural number pairs is one-to-one mapped (in fact, in one-to-one correspondence, or bijection) to the set of natural numbers, as shown above, the set of positive rational numbers is proved countable.
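As a worked instance (using the Cantor pairing function sketched above, which is one possible choice): the fraction 3/4 maps to the pair (3, 4), and (3, 4) maps to (3 + 4)(3 + 4 + 1)/2 + 4 = 32; distinct positive fractions receive distinct natural numbers.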
Theorem — Any finite union of countable sets is countable.
With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is both "yes" and "no": we can extend it, but we need to assume a new axiom to do so.
Theorem — (Assuming the axiom of countable choice) The union of countably many countable sets is countable.
For example, given countable sets a, b, c, …, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:
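Writing the j-th element of the i-th set as the tuple (i, j), the tuples are enumerated along the same diagonal path as before: (0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), …, so that every element of every set eventually receives an index.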
We need the axiom of countable choice to index all the sets a, b, c, … simultaneously.
Theorem — The set of all finite-length sequences of natural numbers is countable.
This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (a finite Cartesian product). So we are talking about a countable union of countable sets, which is countable by the previous theorem.
Theorem — The set of all finite subsets of the natural numbers is countable.
The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so there are also only countably many finite subsets.
Theorem — Let S and T be sets.
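1. If there exists an injective function from S to T and T is countable, then S is countable.
2. If there exists a surjective function from S to T and S is countable, then T is countable.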
These follow from the definitions of countable set as injective / surjective functions.
Cantor's theorem asserts that if A is a set and P(A) is its power set, i.e. the set of all subsets of A, then there is no surjective function from A to P(A). A proof is given in the article Cantor's theorem. As an immediate consequence of this and the Basic Theorem above we have:
Proposition — The set P(ℕ) is not countable; i.e. it is uncountable.
For an elaboration of this result see Cantor's diagonal argument.
The set of real numbers is uncountable, and so is the set of all infinite sequences of natural numbers.
If there is a set that is a standard model (see inner model) of ZFC set theory, then there is a minimal standard model (see Constructible universe). The Löwenheim–Skolem theorem can be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this model M contains elements that are:
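subsets of M, hence countable, but uncountable from the point of view of M,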
was seen as paradoxical in the early days of set theory; see Skolem's paradox for more.
The minimal standard model includes all the algebraic numbers and all effectively computable transcendental numbers, as well as many other kinds of numbers.
Countable sets can be totally ordered in various ways, for example:
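Well orders: the usual order on the natural numbers (0, 1, 2, 3, …); and the integers in the order 0, 1, 2, 3, …, −1, −2, −3, …. Non-well orders: the usual order on the integers (…, −3, −2, −1, 0, 1, 2, 3, …); and the usual order on the rational numbers.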
In both examples of well orders here, any subset has a least element; and in both examples of non-well orders, some subsets do not have a least element. This is the key definition that determines whether a total order is also a well order.
| In mathematics, a set is countable if either it is finite or it can be put in one-to-one correspondence with the set of natural numbers. Equivalently, a set is countable if there exists an injective function from it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming the axiom of countable choice, a set is countable if its cardinality is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite. The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example the set of the real numbers. | 2001-08-12T18:46:09Z | 2023-11-13T06:44:37Z
"Template:Short description",
"Template:Harvard citation no brackets",
"Template:Portal bar",
"Template:Set theory",
"Template:Hatnote group",
"Template:Efn",
"Template:Harvnb",
"Template:Cite web",
"Template:ISBN",
"Template:Mathematical logic",
"Template:Cn",
"Template:Notelist",
"Template:Reflist",
"Template:Citation",
"Template:Wiktionary",
"Template:Math theorem",
"Template:Cite book",
"Template:Number systems"
]
| https://en.wikipedia.org/wiki/Countable_set
6,034 | Cahn–Ingold–Prelog priority rules | In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules (also the CIP priority convention; named after Robert Sidney Cahn, Christopher Kelk Ingold, and Vladimir Prelog) are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an R or S descriptor to each stereocenter and an E or Z descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with an integer n describing the number of stereocenters will usually have 2^n stereoisomers, and 2^(n−1) diastereomers, each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of fewer than 4 (but including ligancy of 6 as well, this term referring to the "number of neighboring atoms" bonded to a center).
The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitutes the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and… [extended to the case of] ligancy 6… [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases."
A recent paper argues for changes to some of the rules (sequence rules 1b and 2) to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable.
The steps for naming molecules using the CIP system are often presented as:
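1. Identification of the stereocenters and double bonds in the molecule;
2. Assignment of priorities to the groups attached to each stereocenter or double-bonded atom; and
3. Assignment of the R/S and E/Z descriptors.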
R/S and E/Z descriptors are assigned by using a system for ranking priority of the groups attached to each stereocenter. This procedure, often known as the sequence rules, is the heart of the CIP system. The overview in this section omits some rules that are needed only in rare cases.
If two groups differ only in isotopes, then the larger atomic mass is used to set the priority.
If an atom, A, is double-bonded to another atom, then atom A should be treated as though it is "connected to the same atom twice". An atom that is double-bonded has a higher priority than an atom that is single-bonded. When dealing with double-bonded priority groups, one is allowed to visit the same atom twice as one creates an arc.
When B is replaced with a list of attached atoms, A itself, but not its "phantom", is excluded in accordance with the general principle of not doubling back along a bond that has just been followed. A triple bond is handled the same way except that A and B are each connected to two phantom atoms of the other.
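For example, in formaldehyde (H2C=O) the carbonyl carbon is treated as if bonded to H, H, O and a duplicate (phantom) O, while the oxygen is treated as bonded to C and a duplicate C.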
If two substituents on an atom are geometric isomers of each other, the Z-isomer has higher priority than the E-isomer. A stereoisomer that contains two higher priority groups on the same face of the double bond (cis) is classified as "Z." The stereoisomer with two higher priority groups on opposite sides of a carbon-carbon double bond (trans) is classified as "E."
To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a phantom atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as phantoms, some not) in the tree.
A chiral sp³-hybridized center contains four different substituents. All four substituents are assigned priorities based on their atomic numbers. After the substituents of a stereocenter have been assigned their priorities, the molecule is oriented in space so that the group with the lowest priority is pointed away from the observer. If the substituents are numbered from 1 (highest priority) to 4 (lowest priority), then the sense of rotation of a curve passing through 1, 2 and 3 distinguishes the stereoisomers. In a configurational isomer, the lowest-priority group (most often hydrogen) is positioned behind the plane, on the hatched bond going away from the reader. An arc is then drawn from the highest-priority group through the remaining groups, finishing at the group of third priority. An arc drawn clockwise gives the rectus (R) assignment; an arc drawn counterclockwise gives the sinister (S) assignment. The names are derived from the Latin for 'right' and 'left', respectively. When naming an organic isomer, the abbreviation for the rectus or sinister assignment is placed in front of the name in parentheses. For example, 3-methyl-1-pentene with a rectus assignment is formatted as (R)-3-methyl-1-pentene.
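For example, in bromochlorofluoromethane (CHFClBr) the priorities by atomic number are Br > Cl > F > H. With the hydrogen (lowest priority) pointing away from the viewer, an arc drawn Br → Cl → F clockwise corresponds to the (R) enantiomer, and counterclockwise to the (S) enantiomer.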
A practical method of determining whether an enantiomer is R or S is by using the right-hand rule: one wraps the molecule with the fingers in the direction 1 → 2 → 3. If the thumb points in the direction of the fourth substituent, the enantiomer is R; otherwise, it is S.
It is possible in rare cases that two substituents on an atom differ only in their absolute configuration (R or S). If the relative priorities of these substituents need to be established, R takes priority over S. When this happens, the descriptor of the stereocenter is a lowercase letter (r or s) instead of the uppercase letter normally used.
For double-bonded molecules, the Cahn–Ingold–Prelog priority rules (CIP rules) are followed to determine the priority of the substituents of the double bond. If both of the high-priority groups are on the same side of the double bond (cis configuration), then the stereoisomer is assigned the configuration Z (zusammen, German for "together"). If the high-priority groups are on opposite sides of the double bond (trans configuration), then the stereoisomer is assigned the configuration E (entgegen, German for "opposed").
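For example, in 2-butene each doubly bonded carbon bears a methyl group and a hydrogen, and CH3 outranks H; the isomer with both methyl groups on the same side (cis-2-butene) is therefore (Z)-2-butene, while the trans isomer is (E)-2-butene.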
In some cases where stereogenic centers are formed, the configuration must be specified. Without the presence of a non-covalent interaction, a compound is achiral. Some professionals have proposed a new rule to account for this. This rule states that "non-covalent interactions have a fictitious number between 0 and 1" when assigning priority. Compounds in which this occurs are referred to as coordination compounds.
Spiro structures can be chiral molecules with no asymmetric center. The rings of a spiro structure lie at right angles to each other. The mirror images of spiro structures are non-superimposable, and they are therefore enantiomers.
Optical isomers are compounds that have four different substituents attached to a central carbon. Optical isomers play a significant role in biological activity. Optical isomers have the ability to rotate plane-polarized light clockwise or counterclockwise. When optical isomers form two enantiomers, one rotates the light clockwise while the other rotates it counterclockwise. A racemic (equal) mixture of the two isomers, however, will not rotate plane-polarized light. The two isomers may be identical chemically, but they are distinguishable by their optical activity.
The following are examples of application of the nomenclature.
If a compound has more than one chiral stereocenter, each center is denoted by either R or S. For example, ephedrine exists in (1R,2S) and (1S,2R) stereoisomers, which are distinct mirror-image forms of each other, making them enantiomers. This compound also exists as the two enantiomers written (1R,2R) and (1S,2S), which are named pseudoephedrine rather than ephedrine. All four of these isomers are named 2-methylamino-1-phenyl-1-propanol in systematic nomenclature. However, ephedrine and pseudoephedrine are diastereomers, or stereoisomers that are not enantiomers because they are not related as mirror-image copies. Pseudoephedrine and ephedrine are given different names because, as diastereomers, they have different chemical properties, even for racemic mixtures of each.
More generally, for any pair of enantiomers, all of the descriptors are opposite: (R,R) and (S,S) are enantiomers, as are (R,S) and (S,R). Diastereomers have at least one descriptor in common; for example (R,S) and (R,R) are diastereomers, as are (S,R) and (S,S). This holds true also for compounds having more than two stereocenters: if two stereoisomers have at least one descriptor in common, they are diastereomers. If all the descriptors are opposite, they are enantiomers.
A meso compound is an achiral molecule, despite having two or more stereogenic centers. A meso compound is "superimposable" on its mirror image, and it therefore reduces the number of stereoisomers predicted by the 2^n rule. This occurs because the molecule possesses an internal plane of symmetry. One example is meso-tartaric acid, in which (R,S) is the same as the (S,R) form; its symmetry can be seen by rotating the molecule around the central carbon–carbon bond. In meso compounds the R and S stereocenters occur in symmetrically positioned pairs.
The relative configuration of two stereoisomers may be denoted by the descriptors R and S with an asterisk (*). (R*,R*) means two centers having identical configurations, (R,R) or (S,S); (R*,S*) means two centers having opposite configurations, (R,S) or (S,R). To begin, the lowest-numbered (according to IUPAC systematic numbering) stereogenic center is given the R* descriptor.
To designate two anomers, the relative stereodescriptors alpha (α) and beta (β) are used. In the α anomer the anomeric carbon atom and the reference atom have opposite configurations, (R,S) or (S,R), whereas in the β anomer they have the same configuration, (R,R) or (S,S).
Stereochemistry also plays a role in assigning faces to trigonal molecules such as ketones. A nucleophile in a nucleophilic addition can approach the carbonyl group from two opposite sides or faces. When an achiral nucleophile attacks acetone, both faces are identical and there is only one reaction product. When the nucleophile attacks butanone, the faces are not identical (they are enantiotopic) and a racemic product results. When the nucleophile is a chiral molecule, diastereoisomers are formed. When one face of a molecule is shielded by substituents or geometric constraints compared to the other face, the faces are called diastereotopic. The same rules that determine the stereochemistry of a stereocenter (R or S) also apply when assigning the face of a molecular group. The faces are then called the Re-face and Si-face. In the example displayed on the right, the compound acetophenone is viewed from the Re-face. Hydride addition, as in a reduction process, from this side will form the (S)-enantiomer, and attack from the opposite Si-face will give the (R)-enantiomer. However, one should note that adding a chemical group to the prochiral center from the Re-face will not always lead to an (S)-stereocenter, as the priority of the chemical group has to be taken into account. That is, the absolute stereochemistry of the product is determined on its own and not by considering which face it was attacked from. In the above-mentioned example, if chloride (Z = 17) were added to the prochiral center from the Re-face, this would result in an (R)-enantiomer.
| In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an R or S descriptor to each stereocenter and an E or Z descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with an integer n describing the number of stereocenters will usually have 2^n stereoisomers, and 2^(n−1) diastereomers each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of fewer than 4. The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitutes the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and… [extended to the case of] ligancy 6… [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases." A recent paper argues for changes to some of the rules to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable. | 2001-08-06T00:08:40Z | 2023-11-14T00:02:16Z
"Template:Nowrap",
"Template:Reflist",
"Template:Cite web",
"Template:Navbox stereochemistry",
"Template:Main article",
"Template:BlueBook2013",
"Template:Dead link",
"Template:Short description",
"Template:Citation needed",
"Template:Rp",
"Template:Mvar",
"Template:Math",
"Template:Cite book",
"Template:Cite journal",
"Template:GoldBookRef",
"Template:More citations needed",
"Template:According to whom",
"Template:Cite conference"
]
| https://en.wikipedia.org/wiki/Cahn%E2%80%93Ingold%E2%80%93Prelog_priority_rules
6,035 | Celibacy | Celibacy (from Latin caelibatus) is the state of voluntarily being unmarried, sexually abstinent, or both, usually for religious reasons. It is often associated with the role of a religious official or devotee. In its narrow sense, the term celibacy is applied only to those for whom the unmarried state is the result of a sacred vow, act of renunciation, or religious conviction. In a wider sense, it is commonly understood to mean only abstinence from sexual activity.
Celibacy has existed in one form or another throughout history, in virtually all the major religions of the world, and views on it have varied.
Classical Hindu culture encouraged asceticism and celibacy in the later stages of life, after one has met one's societal obligations. Jainism, on the other hand, preached complete celibacy even for young monks and considered celibacy to be an essential behavior to attain moksha. Buddhism is similar to Jainism in this respect. There were, however, significant cultural differences in the various areas where Buddhism spread, which affected the local attitudes toward celibacy. A somewhat similar situation existed in Japan, where the Shinto tradition also opposed celibacy. In most native African and Native American religious traditions, celibacy has been viewed negatively as well, although there were exceptions like periodic celibacy practiced by some Mesoamerican warriors.
The Romans viewed celibacy as an aberration and legislated fiscal penalties against it, with the exception of the Vestal Virgins, who took a 30-year vow of chastity in order to devote themselves to the study and correct observance of state rituals.
In Christianity, celibacy means the promise to live either virginal or celibate in the future. Such a "vow of celibacy" has been normal for some centuries for Catholic priests, Catholic and Eastern Orthodox monks, and nuns. In addition, a promise or vow of celibacy may be made in the Anglican Communion and some Protestant churches or communities—such as the Shakers, for example—for members of religious orders, hermits, consecrated virgins, and deaconesses.
Judaism and Islam have denounced celibacy, as both religions emphasize marriage and family life. However, the priests of the Essenes, a Jewish sect during the Second Temple period, practised celibacy. Several hadiths indicate that the Islamic prophet Muhammad denounced celibacy.
The English word celibacy derives from the Latin caelibatus, "state of being unmarried", from Latin caelebs, meaning "unmarried". This word derives from two Proto-Indo-European stems, *kaiwelo- "alone" and *lib(h)s- "living".
The words abstinence and celibacy are often used interchangeably, but are not necessarily the same thing. Sexual abstinence, also known as continence, is abstaining from some or all aspects of sexual activity, often for some limited period of time, while celibacy may be defined as a voluntary religious vow not to marry or engage in sexual activity. Asexuality is commonly conflated with celibacy and sexual abstinence, but it is considered distinct from the two, as celibacy and sexual abstinence are behavioral and those who use those terms for themselves are generally motivated by factors such as an individual's personal or religious beliefs.
A. W. Richard Sipe, while focusing on the topic of celibacy in Catholicism, states that "the most commonly assumed definition of celibate is simply an unmarried or single person, and celibacy is perceived as synonymous with sexual abstinence or restraint." Sipe adds that even in the relatively uniform milieu of Catholic priests in the United States there seems to be "simply no clear operational definition of celibacy". Elizabeth Abbott commented on the terminology in her A History of Celibacy (2001) writing that she "drafted a definition of celibacy that discarded the rigidly pedantic and unhelpful distinctions between celibacy, chastity, and virginity..."
The concept of "new" celibacy was introduced by Gabrielle Brown in her 1980 book The New Celibacy. In a revised version (1989) of her book, she claims abstinence to be "a response on the outside to what's going on, and celibacy is a response from the inside". According to her definition, celibacy (even short-term celibacy that is pursued for non-religious reasons) is much more than not having sex. It is more intentional than abstinence, and its goal is personal growth and empowerment. Although Brown repeatedly states that celibacy is a matter of choice, she clearly suggests that those who do not choose this route are somehow missing out. This new perspective on celibacy is echoed by several authors including Elizabeth Abbott, Wendy Keller, and Wendy Shalit.
The rule of celibacy in the Buddhist religion, whether Mahayana or Theravada, has a long history. Celibacy was advocated as an ideal rule of life for all monks and nuns by Gautama Buddha, except in Japan where it is not strictly followed due to historical and political developments following the Meiji Restoration. In Japan, celibacy was an ideal among Buddhist clerics for hundreds of years. But violations of clerical celibacy were so common for so long that finally, in 1872, state laws made marriage legal for Buddhist clerics. Subsequently, ninety percent of Buddhist monks/clerics married. An example is Higashifushimi Kunihide, a prominent Buddhist priest of Japanese royal ancestry who was married and a father whilst serving as a monk for most of his lifetime.
Gautama, later known as the Buddha, is known for his renunciation of his wife, Princess Yasodharā, and son, Rahula. In order to pursue an ascetic life, he needed to renounce aspects of the impermanent world, including his wife and son. Later on, both his wife and son joined the ascetic community and are mentioned in the Buddhist texts as having become enlightened. In another sense, a buddhavacana recorded the Zen patriarch Vimalakirti as an advocate of marital continence instead of monastic renunciation. This sutra became somewhat popular due to its brash humour as well as its integration of the role of women in lay and spiritual life.
In the religious movement of Brahma Kumaris, celibacy is also promoted for peace and to defeat the power of lust.
There is no commandment in the New Testament that Jesus Christ's disciples have to live in celibacy, although it is a general view that Christ himself lived a life of perfect chastity, thus "Voluntary chastity is the imitation of him who was the virgin Son of a virgin Mother". One of his invocations is "King of virgins and lover of stainless chastity" (Rex virginum, amator castitatis). Furthermore, Christ says the following in Matthew 19, verse 12: "There are those who choose to live like eunuchs for the sake of the kingdom of heaven. The one who can accept this should accept it." Many supporters of priestly celibacy rely on this passage.
While eunuchs were not generally celibate, over subsequent centuries this statement has come to be interpreted as referring to celibacy.
Paul the Apostle emphasized the importance of overcoming the desires of the flesh and saw the state of celibacy being superior to that of marriage. Paul made parallels between the relations between spouses and God's relationship with the church. "Husbands, love your wives even as Christ loved the church. Husbands should love their wives as their own bodies" (Ephesians 5:25–28). Paul himself was celibate and said that his wish was "that all of you were as I am" (1 Corinthians 7:7). In fact, this entire chapter is a defense of and a call to celibacy.
The early Christians lived in the belief that the end of the world would soon come upon them, and saw no point in planning new families and having children. According to Chadwick, this was why Paul encouraged both celibate and marital lifestyles among the members of the Corinthian congregation, regarding celibacy as the preferable of the two.
In the counsels of perfection (evangelical counsels), Jesus Christ "gave the rule of the higher life founded upon his own most perfect life. According to these counsels persons may be called to voluntary celibacy".
A number of early Christian martyrs were women or girls who had given themselves to Christ in perpetual virginity, such as Saint Agnes and Saint Lucy. According to most Christian thought, the first sacred virgin was Mary, the mother of Jesus, who was consecrated by the Holy Spirit during the Annunciation. Tradition also has it that the Apostle Matthew consecrated virgins. In the Catholic Church and the Orthodox churches, a consecrated virgin is a woman who has been consecrated by the church to a life of perpetual virginity in the service of the church.
The Desert Fathers were Christian hermits and ascetics who had a major influence on the development of Christianity and celibacy. Paul of Thebes is often credited with being the first hermit or anchorite to go to the desert, but it was Anthony the Great who launched the movement that became the Desert Fathers. Sometime around AD 270, Anthony heard a Sunday sermon stating that perfection could be achieved by selling all of one's possessions, giving the proceeds to the poor, and following Christ (Matthew 19:21). He followed the advice and made the further step of moving deep into the desert to seek complete solitude.
Over time, the model of Anthony and other hermits attracted many followers, who lived alone in the desert or in small groups. They chose a life of extreme asceticism, renouncing all the pleasures of the senses, rich food, baths, rest, and anything that made them comfortable. Thousands joined them in the desert, mostly men but also a handful of women. Religious seekers also began going to the desert seeking advice and counsel from the early Desert Fathers. By the time of Anthony's death, there were so many men and women living in the desert in celibacy that it was described as "a city" by Anthony's biographer.
The first Conciliar document on clerical celibacy of the Western Church (Synod of Elvira, c. 305 can. xxxiii) states that the discipline of celibacy is to refrain from the use of marriage, i.e. refrain from having carnal contact with one's spouse.
According to the later St. Jerome (c. 347 – 420), celibacy is a moral virtue, consisting of living in the flesh, but outside the flesh, and so being not corrupted by it (vivere in carne praeter carnem). Celibacy excludes not only libidinous acts, but also sinful thoughts or desires of the flesh. Jerome referred to marriage prohibition for priests when he claimed in Against Jovinianus that Peter and the other apostles had been married before they were called, but subsequently gave up their marital relations.
In the Catholic, Orthodox and Oriental Orthodox traditions, bishops are required to be celibate. In the Eastern Catholic and Orthodox traditions, priests and deacons are allowed to be married, yet have to remain celibate if they are unmarried at the time of ordination.
In the early Church, higher clerics lived in marriages. Augustine taught that the original sin of Adam and Eve was either an act of foolishness (insipientia) followed by pride and disobedience to God, or else inspired by pride. The first couple disobeyed God, who had told them not to eat of the tree of the knowledge of good and evil (Gen 2:17). The tree was a symbol of the order of creation. Self-centeredness made Adam and Eve eat of it, thus failing to acknowledge and respect the world as it was created by God, with its hierarchy of beings and values. They would not have fallen into pride and lack of wisdom, if Satan had not sown into their senses "the root of evil" (radix mali). Their nature was wounded by concupiscence or libido, which affected human intelligence and will, as well as affections and desires, including sexual desire. The sin of Adam is inherited by all human beings. Already in his pre-Pelagian writings, Augustine taught that original sin was transmitted by concupiscence, which he regarded as the passion of both soul and body, making humanity a massa damnata (mass of perdition, condemned crowd) and much enfeebling, though not destroying, the freedom of the will.
In the early 3rd century, the Canons of the Apostolic Constitutions decreed that only lower clerics might still marry after their ordination, but marriage of bishops, priests, and deacons was not allowed.
One explanation for the origin of obligatory celibacy is that it is based on the writings of Saint Paul, who wrote of the advantages celibacy allowed a man in serving the Lord. Celibacy was popularised by early Christian theologians such as Saint Augustine of Hippo and Origen. Another possible explanation for the origins of obligatory celibacy revolves around a more practical reason: "the need to avoid claims on church property by priests' offspring". It remains a matter of Canon Law (and often a criterion for certain religious orders, especially Franciscans) that priests may not own land and therefore cannot pass it on to legitimate or illegitimate children. The land belongs to the Church through the local diocese as administered by the Local Ordinary (usually a bishop), who is often an ex officio corporation sole. Celibacy is viewed differently by the Catholic Church and the various Protestant communities. It includes clerical celibacy, celibacy of the consecrated life, voluntary lay celibacy, and celibacy outside of marriage.
The Protestant Reformation rejected celibate life and sexual continence for preachers. Protestant celibate communities have nonetheless emerged, especially from Anglican and Lutheran backgrounds. A few minor Christian sects have advocated celibacy as a better way of life; these groups have included the Shakers, the Harmony Society, and the Ephrata Cloister.
Many evangelicals prefer the term "abstinence" to "celibacy". Assuming everyone will marry, they focus their discussion on refraining from premarital sex and focusing on the joys of a future marriage. But some evangelicals, particularly older singles, desire a positive message of celibacy that moves beyond the "wait until marriage" message of abstinence campaigns. They seek a new understanding of celibacy that is focused on God rather than a future marriage or a lifelong vow to the Church.
There are also many Pentecostal churches which practice celibate ministry. For instance, the full-time ministers of the Pentecostal Mission are celibate and generally single. Married couples who enter full-time ministry may become celibate and could be sent to different locations.
During the first three or four centuries, no law was promulgated prohibiting clerical marriage. Celibacy was a matter of choice for bishops, priests, and deacons.
Statutes forbidding clergy from having wives were written beginning with the Council of Elvira (306), but these early statutes were not universal and were often defied by clerics and then retracted by the hierarchy. The Synod of Gangra (345) condemned a false asceticism whereby worshipers boycotted celebrations presided over by married clergy. The Apostolic Constitutions (c. 400) excommunicated a priest or bishop who left his wife "under the pretense of piety" (Mansi, 1:51).
"A famous letter of Synesius of Cyrene (c. 414) is evidence both for the respecting of personal decision in the matter and for contemporary appreciation of celibacy. For priests and deacons clerical marriage continued to be in vogue".
"The Second Lateran Council (1139) seems to have enacted the first written law making sacred orders a direct impediment to marriage for the universal Church." Celibacy was first required of some clerics in 1123 at the First Lateran Council. Because clerics resisted it, the celibacy mandate was restated at the Second Lateran Council (1139) and the Council of Trent (1545–64). In places, coercion and enslavement of clerical wives and children was apparently involved in the enforcement of the law. "The earliest decree in which the children [of clerics] were declared to be slaves and never to be enfranchised [freed] seems to have been a canon of the Synod of Pavia in 1018. Similar penalties were promulgated against wives and concubines (see the Synod of Melfi, 1189 can. xii), who by the very fact of their unlawful connexion with a subdeacon or clerk of higher rank became liable to be seized by the over-lord".
In the Roman Catholic Church, the Twelve Apostles are considered to have been the first priests and bishops of the Church. Some say the call to be eunuchs for the sake of Heaven in Matthew 19 was a call to be sexually continent and that this developed into celibacy for priests as the successors of the apostles. Others see the call to be sexually continent in Matthew 19 to be a caution for men who were too readily divorcing and remarrying.
The view of the Church is that celibacy is a reflection of life in Heaven, a source of detachment from the material world which aids in one's relationship with God. Celibacy is designed so that ministers "consecrate themselves with undivided heart to the Lord and to 'the affairs of the Lord'; they give themselves entirely to God and to men. It is a sign of this new life to the service of which the Church's minister is consecrated; accepted with a joyous heart celibacy radiantly proclaims the Reign of God." In contrast, Saint Peter, whom the Church considers its first Pope, was married, given that he had a mother-in-law whom Christ healed (Matthew 8). But some argue that Peter was a widower, because this passage does not mention his wife, and it is his mother-in-law who serves Christ and the apostles after she is healed. Furthermore, Peter himself states: "Then Peter spoke up, 'We have left everything to follow you!' 'Truly I tell you', Jesus replied, 'no one who has left home or brothers or sisters or mother or father or children or fields for me and the gospel will fail to receive a hundred times as much'." (Mark 10:28–30).
Usually, only celibate men are ordained as priests in the Latin Church. Married clergy who have converted from other Christian denominations can be ordained Roman Catholic priests without becoming celibate. Priestly celibacy is not a doctrine of the Church (such as the belief in the Assumption of Mary) but a matter of discipline, like the use of the vernacular (local) language in Mass or Lenten fasting and abstinence. As such, it can theoretically change at any time, though it must still be obeyed by Catholics until any such change takes place. The Eastern Catholic Churches ordain both celibate and married men. However, in both the East and the West, bishops are chosen from among those who are celibate. In Ireland, several priests have fathered children, the two most prominent being Bishop Eamonn Casey and Father Michael Cleary.
The classical heritage flourished throughout the Middle Ages in both the Byzantine Greek East and the Latin West. When discerning the population of Christendom in Medieval Europe during the Middle Ages, Will Durant, referring to Plato's ideal community, stated on the oratores (clergy):
"The clergy, like Plato's guardians, were placed in authority not by the suffrages of the people, but by their talent as shown in ecclesiastical studies and administration, by their disposition to a life of meditation and simplicity, and (perhaps it should be added) by the influence of their relatives with the powers of state and church. In the latter half of the period in which they ruled [AD 800 onwards], the clergy were as free from family cares as even Plato could desire; and in some cases it would seem they enjoyed no little of the reproductive freedom accorded to the guardians. Celibacy was part of the psychological structure of the power of the clergy; for on the one hand they were unimpeded by the narrowing egoism of the family, and on the other their apparent superiority to the call of the flesh added to the awe in which lay sinners held them …"
With respect to clerical celibacy, Richard P. O'Brien stated in 1995 that, in his opinion, "greater understanding of human psychology has led to questions regarding the impact of celibacy on the human development of the clergy. The realization that many non-European countries view celibacy negatively has prompted questions concerning the value of retaining celibacy as an absolute and universal requirement for ordained ministry in the Roman Catholic Church".
Some homosexual Christians choose to be celibate following their denomination's teachings on homosexuality.
In 2014, the American Association of Christian Counselors amended its code of ethics to eliminate the promotion of conversion therapy for homosexuals and encouraged them to be celibate instead.
In Hinduism, celibacy is usually associated with the sadhus ("holy men"), ascetics who withdraw from society and renounce all worldly ties. Celibacy, termed brahmacharya in Vedic scripture, is the fourth of the yamas and the word literally translated means "dedicated to the Divinity of Life". The word is often used in yogic practice to refer to celibacy or denying pleasure, but this is only a small part of what brahmacharya represents. The purpose of practicing brahmacharya is to keep a person focused on the purpose in life, the things that instill a feeling of peace and contentment. It is also used to cultivate occult powers and many supernatural feats, called siddhi.
Islamic attitudes toward celibacy have been complex: Muhammad denounced it, yet some Sufi orders embrace it. Islam does not promote celibacy; rather, it condemns premarital and extramarital sex. According to Islam, marriage enables one to attain the highest form of righteousness within a sacred spiritual bond, but the Qur'an does not state it as an obligation. The Qur'an (Q57:27) states, "But the Monasticism which they (who followed Jesus) invented for themselves, We did not prescribe for them but only to please God therewith, but that they did not observe it with the right observance." Religion is therefore not a reason to stay unmarried, although people are allowed to live their lives however they are comfortable; but relationships and sex outside of marriage, let alone forced marriage, are definitely sins: "Oh you who believe! You are forbidden to inherit women against their will" (Q4:19). In addition, marriage partners can be a distraction from practicing religion: "Your mates and children are only a trial for you" (Q64:15); however, that does not mean Islam discourages marriage for those who have sexual desires and are willing to marry. Anyone who does not (intend to) get married in this life can always do so in the Hereafter instead.
Celibacy appears as a peculiarity among some Sufis.
Celibacy was practiced by women saints in Sufism. Celibacy was debated along with women's roles in Sufism in medieval times.
Celibacy, poverty, meditation, and mysticism within an ascetic context, along with worship centered around saints' tombs, were promoted by the Qadiriyya Sufi order among Hui Muslims in China. In China, unlike other Muslim sects, the leaders (Shaikhs) of the Qadiriyya Sufi order are celibate. Unlike other Sufi orders in China, the leadership within the order is not a hereditary position; rather, one of the disciples of the celibate Shaikh is chosen by the Shaikh to succeed him. The 92-year-old celibate Shaikh Yang Shijun was the leader of the Qadiriyya order in China as of 1998.
Celibacy is practiced by Haydariya Sufi dervishes.
The spiritual teacher Meher Baba stated that "[F]or the [spiritual] aspirant a life of strict celibacy is preferable to married life, if restraint comes to him easily without undue sense of self-repression. Such restraint is difficult for most persons and sometimes impossible, and for them married life is decidedly more helpful than a life of celibacy. For ordinary persons, married life is undoubtedly advisable unless they have a special aptitude for celibacy". Baba also asserted that "The value of celibacy lies in the habit of restraint and the sense of detachment and independence which it gives" and that "The aspirant must choose one of the two courses which are open to him. He must take to the life of celibacy or to the married life, and he must avoid at all costs a cheap compromise between the two. Promiscuity in sex gratification is bound to land the aspirant in a most pitiful and dangerous chaos of ungovernable lust."
In Sparta and many other Greek cities, failure to marry was grounds for loss of citizenship, and could be prosecuted as a crime. Both Cicero and Dionysius of Halicarnassus stated that Roman law forbade celibacy. There are no records of such a prosecution, nor is the Roman punishment for refusing to marry known.
Pythagoreanism was the system of esoteric and metaphysical beliefs held by Pythagoras and his followers. Pythagorean thinking was dominated by a profoundly mystical view of the world. The Pythagorean code further restricted its members from eating meat, fish, and beans, a practice they followed for religious, ethical, and ascetic reasons, in particular the idea of metempsychosis – the transmigration of souls into the bodies of other animals. "Pythagoras himself established a small community that set a premium on study, vegetarianism, and sexual restraint or abstinence. Later philosophers believed that celibacy would be conducive to the detachment and equilibrium required by the philosopher's calling."
The tradition of sworn virgins developed out of the Kanuni i Lekë Dukagjinit (English: The Code of Lekë Dukagjini, or simply the Kanun). The Kanun is not a religious document – many groups follow this code, including Roman Catholics, the Albanian Orthodox, and Muslims.
Women who become sworn virgins make a vow of celibacy, and are allowed to take on the social role of men: inheriting land, wearing male clothing, etc.
| 2001-08-06T02:33:49Z | 2023-12-23T11:53:51Z | [
"Template:About-distinguish",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:See also",
"Template:Wikiquote",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:Cite CCC",
"Template:Cite news",
"Template:Qref",
"Template:Redirect",
"Template:Lang",
"Template:Main",
"Template:Circa",
"Template:Snd",
"Template:CathEncy",
"Template:Cite web",
"Template:Cite magazine",
"Template:Sex",
"Template:PIE",
"Template:Lang-en",
"Template:Citation",
"Template:Wiktionary",
"Template:EB1911 Poster",
"Template:Reflist",
"Template:Cite journal",
"Template:ISBN",
"Template:Human sexuality"
]
| https://en.wikipedia.org/wiki/Celibacy |
6,036 | Coalition government | A coalition government is a form of government in which political parties cooperate to form a government. The usual reason for such an arrangement is that no single party has achieved an absolute majority after an election, an atypical outcome in nations with majoritarian electoral systems, but common under proportional representation. A coalition government might also be created in a time of national difficulty or crisis (for example, during wartime or economic crisis) to give a government a high degree of perceived political legitimacy or collective identity; it can also play a role in diminishing internal political strife. In such times, parties have formed all-party coalitions (national unity governments, grand coalitions). If a coalition collapses, the Prime Minister and cabinet may be ousted by a vote of no confidence, call snap elections, form a new majority coalition, or continue as a minority government.
In multi-party states, a coalition agreement is an agreement negotiated between the parties that form a coalition government. It codifies the most important shared goals and objectives of the cabinet. It is often written by the leaders of the parliamentary groups. Coalitions that have a written agreement are more productive than those that do not.
Countries which often operate with coalition cabinets include: the Nordic countries, the Benelux countries, Australia, Austria, Brazil, Chile, Cyprus, France, Germany, Greece, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Kosovo, Latvia, Lebanon, Lesotho, Lithuania, Malaysia, Nepal, New Zealand, Pakistan, Thailand, Spain, Trinidad and Tobago, Turkey, and Ukraine. Switzerland has been ruled by a consensus government with a coalition of the four strongest parties in parliament since 1959, called the "Magic Formula". Between 2010 and 2015, the United Kingdom also operated a formal coalition between the Conservative and the Liberal Democrat parties, but this was unusual: the UK usually has a single-party majority government. Not every parliament forms a coalition government; the European Parliament, for example, does not.
Armenia became an independent state in 1991, following the collapse of the Soviet Union. Since then, many political parties have been formed, and they mainly work with each other to form coalition governments. The country was governed by the My Step Alliance coalition after it won a majority in the National Assembly of Armenia in the 2018 Armenian parliamentary election.
In federal Australian politics, the conservative Liberal, National, Country Liberal and Liberal National parties are united in a coalition, known simply as the Coalition.
While nominally two parties, the Coalition has become so stable, at least at the federal level, that in practice the lower house of Parliament has become a two-party system, with the Coalition and the Labor Party being the major parties. This coalition is also found in the states of New South Wales and Victoria. In South Australia and Western Australia the Liberal and National parties compete separately, while in the Northern Territory and Queensland the two parties have merged, forming the Country Liberal Party, in 1978, and the Liberal National Party, in 2008, respectively.
Coalition governments involving the Labor Party and the Australian Greens have occurred at state and territory level, for example following the 2010 Tasmanian state election and the 2016 and 2020 Australian Capital Territory elections.
In Belgium, a nation internally divided along linguistic lines (primarily between Dutch-speaking Flanders in the north and French-speaking Wallonia in the south, with Brussels also being by and large Francophone), each main political disposition (social democracy, liberalism, right-wing populism, etc.) is, with the exception of the far-left Workers' Party of Belgium, split between Francophone and Dutch-speaking parties (e.g. the Dutch-speaking Vooruit and the French-speaking Socialist Party being the two social-democratic parties). In the 2019 federal election, no party got more than 17% of the vote. Forming a coalition government is thus an expected and necessary part of Belgian politics. In Belgium, coalition governments containing ministers from six or more parties are not uncommon; consequently, government formation can take an exceptionally long time. Between 2007 and 2011, Belgium operated under a caretaker government for extended periods, as no coalition could be formed.
In Canada, the Great Coalition was formed in 1864 by the Clear Grits, Parti bleu, and Liberal-Conservative Party. During the First World War, Prime Minister Robert Borden attempted to form a coalition with the opposition Liberals to broaden support for controversial conscription legislation. The Liberal Party refused the offer but some of their members did cross the floor and join the government. Although sometimes referred to as a coalition government, according to the definition above, it was not. It was disbanded after the end of the war.
During the 2008–09 Canadian parliamentary dispute, two of Canada's opposition parties signed an agreement to form what would become the country's second federal coalition government since Confederation if the minority Conservative government was defeated on a vote of non-confidence, unseating Stephen Harper as Prime Minister. The agreement outlined a formal coalition consisting of two opposition parties, the Liberal Party and the New Democratic Party. The Bloc Québécois agreed to support the proposed coalition on confidence matters for 18 months. In the end, parliament was prorogued by the Governor General, and the coalition dispersed before parliament was reconvened.
According to historian Christopher Moore, coalition governments in Canada became much less likely in 1919, when party leaders ceased to be chosen by elected MPs and instead began to be chosen by party members, a manner of leadership election that had never been tried in any parliamentary system before. According to Moore, as long as that kind of leadership selection process remains in place and concentrates power in the hands of the leader, as opposed to backbenchers, coalition governments will be very difficult to form. Moore shows that the diffusion of power within a party tends also to lead to a diffusion of power in the parliament in which that party operates, thereby making coalitions more likely.
Several coalition governments have been formed within provincial politics. As a result of the 1919 Ontario election, the United Farmers of Ontario and the Labour Party, together with three independent MLAs, formed a coalition that governed Ontario until 1923.
In British Columbia, the governing Liberals formed a coalition with the opposition Conservatives in order to prevent the surging, left-wing Cooperative Commonwealth Federation from taking power in the 1941 British Columbia general election. Liberal premier Duff Pattullo refused to form a coalition with the third-place Conservatives, so his party removed him. The Liberal–Conservative coalition introduced a winner-take-all preferential voting system (the "Alternative Vote") in the hopes that their supporters would rank the other party as their second preference; however, this strategy backfired in the subsequent 1952 British Columbia general election where, to the surprise of many, the right-wing populist BC Social Credit Party won a minority. They were able to win a majority in the subsequent election as Liberal and Conservative supporters shifted their anti-CCF vote to Social Credit.
Manitoba has had more formal coalition governments than any other province. Following gains by the United Farmers/Progressive movement elsewhere in the country, the United Farmers of Manitoba unexpectedly won the 1921 election. Like their counterparts in Ontario, they had not expected to win and did not have a leader. They asked John Bracken, a professor of animal husbandry, to become leader and premier. Bracken changed the party's name to the Progressive Party of Manitoba. During the Great Depression, Bracken survived at a time when other premiers were being defeated, by forming a coalition government with the Manitoba Liberals (eventually, the two parties would merge into the Liberal-Progressive Party of Manitoba, and decades later the party would change its name to the Manitoba Liberal Party). In 1940, Bracken formed a wartime coalition government with almost every party in the Manitoba Legislature (the Conservatives, the CCF, and Social Credit; the CCF, however, broke with the coalition after a few years over policy differences). The only party not included was the small, communist Labor-Progressive Party, which had a handful of seats.
In Saskatchewan, NDP premier Roy Romanow formed a formal coalition with the Saskatchewan Liberals in 1999 after being reduced to a minority. After two years, the newly elected Liberal leader David Karwacki ordered that the coalition be disbanded; the Liberal caucus disagreed with him and left the Liberals to run as New Democrats in the upcoming election. The Saskatchewan NDP was re-elected with a majority under its new leader Lorne Calvert, while the Saskatchewan Liberals lost their remaining seats and have not been competitive in the province since.
From the creation of the Folketing in 1849 through the introduction of proportional representation in 1918, there were only single-party governments in Denmark. Thorvald Stauning formed his second government, and Denmark's first coalition government, in 1929. Since then the norm has been coalition governments, though there have been periods where single-party governments were frequent, such as the decade after the end of World War II, during the 1970s, and in the late 2010s. Every government from 1982 until the 2015 election was a coalition. While Mette Frederiksen's first government consisted only of her own Social Democrats, her second government is a coalition of the Social Democrats, Venstre, and the Moderates.
The Social Democrats' 46% of the vote under Stauning in the 1935 election remains the closest any party has come to winning an outright majority in parliament since 1918. No single party has thus ever held a majority alone, and even one-party governments have needed confidence agreements with at least one other party in order to govern. For example, though Frederiksen's first government consisted only of the Social Democrats, it also relied on the support of the Social Liberal Party, the Socialist People's Party, and the Red–Green Alliance.
In Finland, no party has had an absolute majority in the parliament since independence, and multi-party coalitions have been the norm. Finland experienced its most stable government since independence (Lipponen I and II) with a five-party governing coalition, a so-called "rainbow government". The Lipponen cabinets set the stability record and were unusual in that both the centre-left (SDP) and the radical left-wing (Left Alliance) parties sat in government with the major centre-right party (National Coalition). The Katainen cabinet was also a rainbow coalition, comprising a total of five parties.
In Germany, coalition governments are the norm, as it is rare for any single party to win a majority in parliament. The German political system makes extensive use of the constructive vote of no confidence, which requires governments to control an absolute majority of seats. Every government since the foundation of the Federal Republic in 1949 has involved at least two political parties. Typically, governments involve one of the two major parties forming a coalition with a smaller party. For example, from 1982 to 1998, the country was governed by a coalition of the CDU/CSU with the minor Free Democratic Party (FDP); from 1998 to 2005, a coalition of the Social Democratic Party of Germany (SPD) and the minor Greens held power. The CDU/CSU comprises an alliance of the Christian Democratic Union of Germany and Christian Social Union in Bavaria, described as "sister parties" which form a joint parliamentary group, and for this purpose are always considered a single party. Coalition arrangements are often given names based on the colours of the parties involved, such as "red-green" for the SPD and Greens. Coalitions of three parties are often named after countries whose flags contain those colours, such as the black-yellow-green Jamaica coalition.
Grand coalitions of the two major parties also occur, but these are relatively rare, as they typically prefer to associate with smaller ones. However, if the major parties are unable to assemble a majority, a grand coalition may be the only practical option. This was the case following the 2005 federal election, in which the incumbent SPD–Green government was defeated but the opposition CDU/CSU–FDP coalition also fell short of a majority. A grand coalition government was subsequently formed between the CDU/CSU and the SPD. Partnerships like these typically involve carefully structured cabinets: Angela Merkel of the CDU/CSU became Chancellor while the SPD was granted the majority of cabinet posts.
Coalition formation became increasingly complex as voters migrated away from the major parties during the 2000s and 2010s. While coalitions of more than two parties were extremely rare in preceding decades, they have become common at the state level. These often include the liberal FDP and the Greens alongside one of the major parties, or "red–red–green" coalitions of the SPD, Greens, and The Left. In the eastern states, dwindling support for moderate parties has seen the rise of new forms of grand coalitions, such as the Kenya coalition. The rise of populist parties also increases the time it takes for a successful coalition to form. By 2016, the Greens were participating in eleven governing coalitions at the state level, in seven different constellations. During campaigns, parties often declare which coalitions or partners they prefer or reject. This tendency toward fragmentation has also spread to the federal level, particularly at the 2021 federal election, which saw the CDU/CSU and SPD fall short of a combined majority of votes for the first time in history.
After India's Independence on 15 August 1947, the Indian National Congress, the major political party instrumental in the Indian independence movement, ruled the nation. The first Prime Minister, Jawaharlal Nehru, his successor Lal Bahadur Shastri, and the third Prime Minister, Indira Gandhi, were all members of the Congress party. However, Raj Narain, who had unsuccessfully contested an election against Indira from the constituency of Rae Bareilly in 1971, lodged a case alleging electoral malpractice. In June 1975, Indira was found guilty and barred by the High Court from holding public office for six years. In response, a state of emergency was declared under the pretext of national security. The next election resulted in the formation of India's first ever national coalition government under the prime ministership of Morarji Desai, which was also the first non-Congress national government. It existed from 24 March 1977 to 15 July 1979, headed by the Janata Party, an amalgam of political parties opposed to the emergency imposed between 1975 and 1977. As the popularity of the Janata Party dwindled, Desai had to resign, and Chaudhary Charan Singh, a rival of his, became the fifth Prime Minister. However, due to lack of support, this coalition government did not complete its five-year term.
Congress returned to power in 1980 under Indira Gandhi, and later under Rajiv Gandhi as the sixth Prime Minister. However, the general election of 1989 once again brought a coalition government under National Front, which lasted until 1991, with two Prime Ministers, the second one being supported by Congress. The 1991 election resulted in a Congress-led stable minority government for five years. The eleventh parliament produced three Prime Ministers in two years and forced the country back to the polls in 1998. The first successful coalition government in India which completed a whole five-year term was the Bharatiya Janata Party (BJP)-led National Democratic Alliance with Atal Bihari Vajpayee as Prime Minister from 1999 to 2004. Then another coalition, the Congress-led United Progressive Alliance, consisting of 13 separate parties, ruled India for two terms from 2004 to 2014 with Manmohan Singh as PM. However, in the 16th general election in May 2014, the BJP secured a majority on its own (becoming the first party to do so since the 1984 election), and the National Democratic Alliance came into power, with Narendra Modi as Prime Minister. In 2019, Narendra Modi was re-elected as Prime Minister as the National Democratic Alliance again secured a majority in the 17th general election.
As a result of the toppling of Suharto, political freedom increased significantly. Whereas only three parties were allowed to exist in the New Order era, a total of 48 political parties participated in the 1999 election, and more than 10 parties have contested each election since. No party has won a majority in any of those elections, making coalition governments inevitable. The current government is a coalition of seven parties led by the major centre-left PDIP, governing together as the big-tent Onward Indonesia Coalition.
In Ireland, coalition governments are common; not since 1977 has a single party formed a majority government. Coalition governments to date have been led by either Fianna Fáil or Fine Gael. They have been joined in government by one or more smaller parties or independent members of parliament (TDs).
Ireland's first coalition government was formed after the 1948 general election, with five parties and independents represented at cabinet. Before 1989, Fianna Fáil had opposed participation in coalition governments, preferring single-party minority government instead. It formed a coalition government with the Progressive Democrats in that year.
The Labour Party has been in government on eight occasions. On all but one of those occasions, it was as a junior coalition party to Fine Gael. The exception was a government with Fianna Fáil from 1993 to 1994. The 29th Government of Ireland (2011–16) was a grand coalition of the two largest parties, as Fianna Fáil had fallen to third place in the Dáil.
The current government is a coalition of Fianna Fáil, Fine Gael and the Green Party. It is the first time Fianna Fáil and Fine Gael have served in government together, the two parties having derived from opposing sides in the Irish Civil War (1922–23).
A similar situation exists in Israel, which typically has at least 10 parties holding representation in the Knesset. The only faction to ever gain the majority of Knesset seats was Alignment, an alliance of the Labor Party and Mapam that held an absolute majority for a brief period from 1968 to 1969. Historically, control of the Israeli government has alternated between periods of rule by the right-wing Likud in coalition with several right-wing and religious parties and periods of rule by the center-left Labor in coalition with several left-wing parties. Ariel Sharon's formation of the centrist Kadima party in 2006 drew support from former Labor and Likud members, and Kadima ruled in coalition with several other parties.
Israel also formed a national unity government from 1984 to 1988. The premiership and the foreign ministry portfolio were each held by the head of one of the two parties for two years, and the leaders switched roles in 1986.
In Japan, controlling a majority in the House of Representatives is enough to decide the election of the prime minister: the prime minister is chosen by recorded, two-round votes in both houses of the National Diet, but the House of Representatives' decision automatically overrides a dissenting House of Councillors vote once the mandatory conference committee procedure fails, which, by precedent, it does without any real attempt to reconcile the different votes. Therefore, a party that controls the lower house can form a government on its own, and it can also pass a budget on its own. But passing any law (including important budget-related laws) requires either majorities in both houses of the legislature or, with the drawback of longer legislative proceedings, a two-thirds majority in the House of Representatives.
In recent decades, single-party full legislative control has been rare and coalition governments have been the norm: most governments of Japan since the 1990s, and, as of 2020, all since 1999, have been coalition governments, though some of them still fell short of a legislative majority. The Liberal Democratic Party (LDP) held a legislative majority of its own in the National Diet until 1989 (when it initially continued to govern alone) and between the 2016 and 2019 elections (when it remained in its previous ruling coalition). The Democratic Party of Japan (through accessions in the House of Councillors) briefly controlled a single-party legislative majority for a few weeks before it lost the 2010 election (it, too, continued to govern as part of its previous ruling coalition).
From the constitutional establishment of parliamentary cabinets and the introduction of the new, now directly elected upper house of parliament in 1947 until the formation of the LDP and the reunification of the Japanese Socialist Party in 1955, no single party formally controlled a legislative majority on its own. Only a few formal coalition governments (the 46th, the 47th, and initially the 49th cabinet) alternated with technical minority governments and cabinets without technical control of the House of Councillors (later called "twisted Diets", nejire kokkai, when they were not only technically but actually divided). During most of that period, however, the centrist Ryokufūkai was the strongest or decisive cross-bench group in the House of Councillors, and it was willing to cooperate with both centre-left and centre-right governments even when it was not formally part of the cabinet; and in the House of Representatives, minority governments of Liberals or Democrats (or their precursors, loose and indirect successors to the two major pre-war parties) could usually count on support from some members of the other major conservative party or from smaller conservative parties and independents. Finally, in 1955, when Hatoyama Ichirō's Democratic Party minority government called early House of Representatives elections and, despite substantial seat gains, remained in the minority, the Liberal Party refused to cooperate until negotiations on a long-debated "conservative merger" of the two parties were agreed upon and eventually concluded successfully.
After it was founded in 1955, the Liberal Democratic Party dominated Japan's governments for a long period: the new party governed alone without interruption until 1983, again from 1986 to 1993, and most recently between 1996 and 1999. The LDP first entered a coalition government after losing its House of Representatives majority for the third time, at the 1983 House of Representatives general election. The LDP-New Liberal Club coalition government lasted until 1986, when the LDP won landslide victories in simultaneous double elections to both houses of parliament.
There have been coalition cabinets where the post of prime minister was given to a junior coalition partner. The JSP-DP-Cooperativist coalition government of 1948 was led by prime minister Ashida Hitoshi (DP), who took over after his JSP predecessor Tetsu Katayama had been toppled by the left wing of his own party. The JSP-Renewal-Kōmei-DSP-JNP-Sakigake-SDF-DRP coalition of 1993, with Morihiro Hosokawa (JNP) as compromise PM, was the Ichirō Ozawa-negotiated rainbow coalition that removed the LDP from power for the first time, only to break up in less than a year. And the LDP-JSP-Sakigake government was formed in 1994 when the LDP agreed, amid internal turmoil and some defections, to bury the main post-war partisan rivalry and support the election of JSP prime minister Tomiichi Murayama in exchange for a return to government.
Ever since Malaysia gained independence in 1957, none of its federal governments has been controlled by a single political party. Owing to the communal nature of the country's society, the first federal government was formed by a three-party Alliance coalition composed of the United Malays National Organisation (UMNO), the Malaysian Chinese Association (MCA), and the Malaysian Indian Congress (MIC). It was later expanded and rebranded as Barisan Nasional (BN), which includes parties representing the Malaysian states of Sabah and Sarawak.
The 2018 Malaysian general election saw the first non-BN coalition federal government in the country's electoral history, formed through an alliance between the Pakatan Harapan (PH) coalition and the Sabah Heritage Party (WARISAN). The federal government formed after the 2020–2022 Malaysian political crisis was the first to be established through coordination between multiple political coalitions, when the newly formed Perikatan Nasional (PN) coalition partnered with BN and Gabungan Parti Sarawak (GPS). In 2022, after its registration, the Sabah-based Gabungan Rakyat Sabah (GRS) formally joined the government (though it had been part of an informal coalition since 2020). The current government led by Prime Minister Anwar Ibrahim is composed of four political coalitions and 19 parties.
Mixed-member proportional representation (MMP) was introduced in New Zealand at the 1996 election. To get into power, parties need a majority of the approximately 120 seats in parliament (there can be more if an overhang seat exists), usually 61. Since it is rare for a party to win a full majority, parties must form coalitions with other parties. For example, after the 2017 general election, Labour won 46 seats and New Zealand First won nine; the two formed a coalition government with confidence and supply from the Green Party, which won eight seats.
Since 2015, coalition governments have become much more common in Spain's municipalities and autonomous regions and, since 2020 (following the November 2019 Spanish general election), in the Spanish Government. There are two ways of forming them, both based on a shared programme and its institutional architecture: one distributes the different areas of government among the parties making up the coalition; in the other, as in the Valencian Community, each ministry is staffed with members of all the coalition parties, so that any conflicts that arise concern competences rather than fights between parties.
Coalition governments in Spain had already existed during the Second Republic, and they have been common in some Autonomous Communities since the 1980s. Nonetheless, the overall dominance of two big parties has been eroded, and the need for coalitions appears to be the new normal since around 2015.
Turkey's first coalition government was formed after the 1961 general election, with two political parties and independents represented in cabinet. It was also Turkey's first grand coalition, uniting the two largest political parties of opposing political ideologies (the Republican People's Party and the Justice Party). Between 1960 and 2002, 17 coalition governments were formed in Turkey. The media and the general public view coalition governments as unfavorable and unstable due to their lack of effectiveness and short lifespans. Following Turkey's transition to a presidential system in 2017, political parties have focused more on forming electoral alliances. Because of the separation of powers, the government does not have to be formed by parliamentarians and is therefore not obliged to be a coalition government. However, the parliament can dissolve the cabinet if the parliamentary opposition is in the majority.
In the United Kingdom, coalition governments (sometimes known as "national governments") have usually been formed only at times of national crisis. The most prominent was the National Government of 1931 to 1940, and there were multi-party coalitions during both world wars. Apart from this, when no party has had a majority, minority governments have normally been formed, with one or more opposition parties agreeing to vote in favour of the legislation which governments need to function: for instance, the Labour government of James Callaghan formed a pact with the Liberals from March 1977 until July 1978, after a series of by-election defeats had eroded the majority of three seats which Labour had gained at the October 1974 election. In the run-up to the 1997 general election, Labour opposition leader Tony Blair was in talks with Liberal Democrat leader Paddy Ashdown about forming a coalition government if Labour failed to win a majority at the election, but there proved to be no need for a coalition, as Labour won the election by a landslide. The 2010 general election resulted in a hung parliament (Britain's first for 36 years), and the Conservatives, led by David Cameron, who had won the largest number of seats, formed a coalition with the Liberal Democrats in order to gain a parliamentary majority, ending 13 years of Labour government. It was the first time that the Conservatives and Liberal Democrats had made a power-sharing deal at Westminster, and the first full coalition in Britain since 1945, formed virtually 70 years to the day after the establishment of Winston Churchill's wartime coalition. Labour and the Liberal Democrats have entered into coalition twice in the Scottish Parliament, as well as twice in the Welsh Assembly.
Since the 1989 election, there have been four coalition governments, all of them including at least the conservative National Party and the liberal Colorado Party. The first was formed after the election of the blanco Luis Alberto Lacalle and lasted only until 1992 due to policy disagreements. The longest-lasting coalition was the Colorado-led one under the second government of Julio María Sanguinetti, in which the National Party leader Alberto Volonté was frequently described as a "Prime Minister". The next coalition, under president Jorge Batlle, was also Colorado-led, but it lasted only until after the 2002 Uruguay banking crisis, when the blancos abandoned the government. Following the 2019 Uruguayan general election, the blanco Luis Lacalle Pou formed the coalición multicolor, composed of his own National Party, the liberal Colorado Party, the eclectic Open Cabildo, and the centre-left Independent Party.
Advocates of proportional representation suggest that a coalition government leads to more consensus-based politics, as a government comprising differing parties (often based on different ideologies) needs to compromise on governmental policy. Another stated advantage is that a coalition government better reflects the popular opinion of the electorate within a country, because the political system contains just one majority-based mechanism. Contrast this with district voting, in which the majority mechanism occurs twice: first, the majority of voters pick the representative, and second, the body of representatives makes a subsequent majority decision. The doubled majority decision undermines voter support for that decision; the benefit of proportional representation is that it contains that majority mechanism just once. Additionally, coalition partnership may play an important role in moderating the level of affective polarization over parties, that is, the animosity and hostility between the identifiers and supporters of opposing parties.
Those who disapprove of coalition governments believe that such governments have a tendency to be fractious and prone to disharmony, as their component parties hold differing beliefs and thus may not always agree on policy. Sometimes the results of an election mean that the coalitions which are mathematically most probable are ideologically infeasible, for example in Flanders or Northern Ireland. A second difficulty might be the ability of minor parties to play "kingmaker" and, particularly in close elections, gain far more power in exchange for their support than the size of their vote would otherwise justify.
Germany was the largest nation to have proportional representation during the interbellum. After World War II, the German system, district-based but proportionally adjusted afterward, has included a threshold that keeps the number of parties limited. The threshold is set at five percent, so that the parties winning seats each carry at least a minimum amount of political weight.
Coalition governments have also been criticized for sustaining a consensus on issues when disagreement and the consequent discussion would be more fruitful. To forge a consensus, the leaders of the ruling coalition parties can agree to silence their disagreements on an issue in order to unify the coalition against the opposition. The coalition partners, if they control the parliamentary majority, can collude to make the parliamentary discussion on the issue irrelevant by consistently disregarding the arguments of the opposition and voting against the opposition's proposals, even if there is disagreement within the ruling parties about the issue. However, in winner-take-all systems this seems always to be the case.
Powerful parties can also act in an oligocratic way, forming an alliance to stifle the growth of emerging parties. Such an event is rare in coalition governments when compared to two-party systems, which typically exist because of the stifling of emerging parties' growth, often through discriminatory nomination regulations, plurality voting systems, and the like.
A single, more powerful party can shape the policies of a coalition disproportionately. Smaller or less powerful parties can be intimidated into not disagreeing openly; in order to maintain the coalition, they may have to vote against their own party's platform in parliament. If they do not, the party has to leave the government and loses executive power. However, this is contradicted by the "kingmaker" factor mentioned above.
Finally, a strength that can also be seen as a weakness is that proportional representation puts the emphasis on collaboration. All parties involved tend to view the other parties in the best light possible, since they may be (future) coalition partners. The pendulum may therefore swing less far between political extremes. Still, external issues may then also be approached from a collaborative perspective, even when the outside force is not benevolent.
A legislative coalition or voting coalition is when political parties in a legislature align on voting to push forward specific policies or legislation, but do not engage in power-sharing of the executive branch like in coalition governments.
In a parliamentary system, political parties may form a confidence and supply arrangement, pledging to support the governing party on legislative bills and motions that carry a vote of confidence. Unlike a coalition government, which is a more formalised partnership characterised by the sharing of the executive branch, a confidence and supply arrangement does not entail executive "power-sharing". Instead, it involves the governing party supporting specific proposals and priorities of the other parties in the arrangement, in return for their continued support on motions of confidence.
In the United States, political parties have formed legislative coalitions in the past in order to push forward specific policies or legislation in the United States Congress. In 1855, a coalition was formed between members of the American Party, the Opposition Party and the Republican Party to elect Nathaniel P. Banks speaker of the House. The most recent legislative coalition took place in 1917, when a coalition was formed between members of the Democratic Party, the Progressive Party and the Socialist Party of America to elect Champ Clark speaker.
A coalition government, in which executive offices are shared among parties, has never occurred in the United States; the norms that allow coalition governments to form and persist do not exist there. | [
"title": "Distribution"
},
{
"paragraph_id": 36,
"text": "The 2018 Malaysian general election saw the first non-BN coalition federal government in the country's electoral history, formed through an alliance between the Pakatan Harapan (PH) coalition and the Sabah Heritage Party (WARISAN). The federal government formed after the 2020–2022 Malaysian political crisis was the first to be established through coordination between multiple political coalitions. This occurred when the newly formed Perikatan Nasional (PN) coalition partnered with BN and Gabungan Parti Sarawak (GPS). In 2022 after its registration, Sabah-based Gabungan Rakyat Sabah (GRS) formally joined the government (though it had been a part of an informal coalition since 2020). The current government led by Prime Minister Anwar Ibrahim is composed of four political coalitions and 19 parties.",
"title": "Distribution"
},
{
"paragraph_id": 37,
"text": "MMP was introduced in New Zealand in the 1996 election. In order to get into power, parties need to get a total of 50% of the approximately (there can be more if an Overhang seat exists) 120 seats in parliament – 61. Since it is rare for a party to win a full majority, they must form coalitions with other parties. For example, during the 2017 general election, Labour won 46 seats and New Zealand First won nine. The two formed a Coalition Government with confidence and supply from the Green Party who won eight seats.",
"title": "Distribution"
},
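The seat arithmetic in the preceding entry reduces to a one-line majority rule. A minimal sketch, assuming nothing beyond integer division; the seat numbers are the ones quoted above:

```python
# Minimal sketch of MMP seat arithmetic: the majority threshold is the
# smallest whole number of seats exceeding half the house, and it grows
# when overhang seats enlarge the house beyond 120.

def majority_threshold(total_seats: int) -> int:
    """Smallest seat count strictly greater than half of the house."""
    return total_seats // 2 + 1

print(majority_threshold(120))  # 61, the usual New Zealand case
print(majority_threshold(121))  # 61, house enlarged by one overhang seat
print(majority_threshold(122))  # 62, with two overhang seats

# 2017 example from the text: Labour (46) + NZ First (9) = 55 seats,
# short of 61; Green confidence and supply (8) lifts the bloc to 63.
print(46 + 9 + 8 >= majority_threshold(120))  # True
```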
{
"paragraph_id": 38,
"text": "Since 2015, there are many more coalition governments than previously in municipalities, autonomous regions and, since 2020 (coming from the November 2019 Spanish general election), in the Spanish Government. There are two ways of conforming them: all of them based on a program and its institutional architecture, one consists on distributing the different areas of government between the parties conforming the coalition and the other one is, like in the Valencian Community, where the ministries are structured with members of all the political parties being represented, so that conflicts that may occur are regarding competences and not fights between parties.",
"title": "Distribution"
},
{
"paragraph_id": 39,
"text": "Coalition governments in Spain had already existed during the 2nd Republic, and have been common in some specific Autonomous Communities since the 1980s. Nonetheless, the prevalence of two big parties overall has been eroded and the need for coalitions appears to be the new normal since around 2015.",
"title": "Distribution"
},
{
"paragraph_id": 40,
"text": "Turkey's first coalition government was formed after the 1961 general election, with two political parties and independents represented at cabinet. It was also Turkey's first grand coalition as the two largest political parties of opposing political ideologies (Republican People's Party and Justice Party) united. Between 1960 and 2002, 17 coalition governments were formed in Turkey. The media and the general public view coalition governments as unfavorable and unstable due to their lack of effectiveness and short lifespan. Following Turkey's transition to a presidential system in 2017, political parties focussed more on forming electoral alliances. Due to separation of powers, the government doesn't have to be formed by parliamentarians and therefore not obliged to result in a coalition government. However, the parliament can dissolve the cabinet if the parliamentary opposition is in majority.",
"title": "Distribution"
},
{
"paragraph_id": 41,
"text": "In the United Kingdom, coalition governments (sometimes known as \"national governments\") usually have only been formed at times of national crisis. The most prominent was the National Government of 1931 to 1940. There were multi-party coalitions during both world wars. Apart from this, when no party has had a majority, minority governments normally have been formed with one or more opposition parties agreeing to vote in favour of the legislation which governments need to function: for instance the Labour government of James Callaghan formed a pact with the Liberals from March 1977 until July 1978, following a series of by-election defeats had eroded Labour's majority of three seats which had been gained at the October 1974 election. However, in the run-up to the 1997 general election, Labour opposition leader Tony Blair was in talks with Liberal Democrat leader Paddy Ashdown about forming a coalition government if Labour failed to win a majority at the election; but there proved to be no need for a coalition as Labour won the election by a landslide. The 2010 general election resulted in a hung parliament (Britain's first for 36 years), and the Conservatives, led by David Cameron, which had won the largest number of seats, formed a coalition with the Liberal Democrats in order to gain a parliamentary majority, ending 13 years of Labour government. This was the first time that the Conservatives and Lib Dems had made a power-sharing deal at Westminster. It was also the first full coalition in Britain since 1945, having been formed 70 years virtually to the day after the establishment of Winston Churchill's wartime coalition, Labour and the Liberal Democrats have entered into a coalition twice in the Scottish Parliament, as well as twice in the Welsh Assembly.",
"title": "Distribution"
},
{
"paragraph_id": 42,
"text": "Since the 1989 election, there have been 4 coalition governments, all including at least both the conservative National Party and the liberal Colorado Party. The first one was after the election of the blanco Luis Alberto Lacalle and lasted until 1992 due to policy disagreements, the longest lasting coalition was the Colorado-led coalition under the second government of Julio María Sanguinetti, in which the national leader Alberto Volonté was frequently described as a \"Prime Minister\", the next coalition (under president Jorge Batlle) was also Colorado-led, but it lasted only until after the 2002 Uruguay banking crisis, when the blancos abandoned the government. Following the 2019 Uruguayan general election, the blanco Luis Lacalle Pou formed the coalición multicolor, composed of his own National Party, the liberal Colorado Party, the eclectic Open Cabildo and the center left Independent Party.",
"title": "Distribution"
},
{
"paragraph_id": 43,
"text": "Advocates of proportional representation suggest that a coalition government leads to more consensus-based politics, as a government comprising differing parties (often based on different ideologies) need to compromise about governmental policy. Another stated advantage is that a coalition government better reflects the popular opinion of the electorate within a country; this means, for instance, that the political system contains just one majority-based mechanism. Contrast this with district voting in which the majority mechanism occurs twice: first, the majority of voters pick the representative and, second, the body of representatives make a subsequent majority decision. The doubled majority decision undermines voter support for that decision. The benefit of proportional representation is that it contains that majority mechanism just once. Additionally, coalition partnership may play an important role in moderating the level of affective polarization over parties, that is, the animosity and hostility against the opponent party identifiers/supporters.",
"title": "Support and criticism"
},
{
"paragraph_id": 44,
"text": "Those who disapprove of coalition governments believe that such governments have a tendency to be fractious and prone to disharmony, as their component parties hold differing beliefs and thus may not always agree on policy. Sometimes the results of an election mean that the coalitions which are mathematically most probable are ideologically infeasible, for example in Flanders or Northern Ireland. A second difficulty might be the ability of minor parties to play \"kingmaker\" and, particularly in close elections, gain far more power in exchange for their support than the size of their vote would otherwise justify.",
"title": "Support and criticism"
},
{
"paragraph_id": 45,
"text": "Germany is the largest nation ever to have had proportional representation during the interbellum. After WW II, the German system, district based but then proportionally adjusted afterward, contains a threshold that keeps the number of parties limited. The threshold is set at five percent, resulting in empowered parties with at least a minimum amount of political gravity.",
"title": "Support and criticism"
},
{
"paragraph_id": 46,
"text": "Coalition governments have also been criticized for sustaining a consensus on issues when disagreement and the consequent discussion would be more fruitful. To forge a consensus, the leaders of ruling coalition parties can agree to silence their disagreements on an issue to unify the coalition against the opposition. The coalition partners, if they control the parliamentary majority, can collude to make the parliamentary discussion on the issue irrelevant by consistently disregarding the arguments of the opposition and voting against the opposition's proposals — even if there is disagreement within the ruling parties about the issue. However, in winner-take-all this seems always to be the case.",
"title": "Support and criticism"
},
{
"paragraph_id": 47,
"text": "Powerful parties can also act in an oligocratic way to form an alliance to stifle the growth of emerging parties. Of course, such an event is rare in coalition governments when compared to two-party systems, which typically exist because of stifling of the growth of emerging parties, often through discriminatory nomination rules regulations and plurality voting systems, and so on.",
"title": "Support and criticism"
},
{
"paragraph_id": 48,
"text": "A single, more powerful party can shape the policies of the coalition disproportionately. Smaller or less powerful parties can be intimidated to not openly disagree. In order to maintain the coalition, they would have to vote against their own party's platform in the parliament. If they do not, the party has to leave the government and loses executive power. However, this is contradicted by the \"kingmaker\" factor mentioned above.",
"title": "Support and criticism"
},
{
"paragraph_id": 49,
"text": "Finally, a strength that can also be seen as a weakness is that proportional representation puts the emphasis on collaboration. All parties involved are looking at the other parties in the best light possible, since they may be (future) coalition partners. The pendulum may therefore show less of a swing between political extremes. Still, facing external issues may then also be approached from a collaborative perspective, even when the outside force is not benevolent.",
"title": "Support and criticism"
},
{
"paragraph_id": 50,
"text": "A legislative coalition or voting coalition is when political parties in a legislature align on voting to push forward specific policies or legislation, but do not engage in power-sharing of the executive branch like in coalition governments.",
"title": "Legislative coalitions and agreements"
},
{
"paragraph_id": 51,
"text": "In a parliamentary system, political parties may form a confidence and supply arrangement, pledging to support the governing party on legislative bills and motions that carry a vote of confidence. Unlike a coalition government, which is a more formalised partnership characterised by the sharing of the executive branch, a confidence and supply arrangement does not entail executive \"power-sharing\". Instead, it involves the governing party supporting specific proposals and priorities of the other parties in the arrangement, in return for their continued support on motions of confidence.",
"title": "Legislative coalitions and agreements"
},
{
"paragraph_id": 52,
"text": "In the United States, political parties have formed legislative coalitions in the past in order to push forward specific policies or legislation in the United States Congress. In 1855, a coalition was formed between members of the American party, Opposition Party and Republican Party to elect Nathaniel P. Banks speaker of the House. The most recent legislative coalition took place in 1917, a coalition was formed between members of the Democratic Party, Progressive Party and Socialist Party of America to elect Champ Clark speaker.",
"title": "Legislative coalitions and agreements"
},
{
"paragraph_id": 53,
"text": "A coalition government, in which \"power-sharing\" of executive offices is performed, has not occurred in the United States. The norms that allow coalition governments to form and persist do not exist in the United States.",
"title": "Legislative coalitions and agreements"
}
]
| A coalition government is a form of government in which political parties cooperate to form a government. The usual reason for such an arrangement is that no single party has achieved an absolute majority after an election, an atypical outcome in nations with majoritarian electoral systems, but common under proportional representation. A coalition government might also be created in a time of national difficulty or crisis to give a government the high degree of perceived political legitimacy or collective identity; it can also play a role in diminishing internal political strife. In such times, parties have formed all-party coalitions. If a coalition collapses, the Prime Minister and cabinet may be ousted by a vote of no confidence, call snap elections, form a new majority coalition, or continue as a minority government. | 2002-02-25T15:51:15Z | 2023-12-28T23:22:33Z | [
"Template:Lang",
"Template:More citations needed section",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Harvnb",
"Template:Short description",
"Template:Party politics",
"Template:See also",
"Template:Main",
"Template:Div col end",
"Template:Cite magazine",
"Template:More citations needed",
"Template:Cite journal",
"Template:Politics",
"Template:By whom",
"Template:Div col",
"Template:Cite book",
"Template:Coalition Spectrum navbox",
"Template:Citation needed"
]
| https://en.wikipedia.org/wiki/Coalition_government |
6,038 | Chemical engineering | Chemical engineering is an engineering field which deals with the study of operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.
A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in the United States: An Encyclopedia puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, "chemical engineer," was already in common use in Britain and the United States.
In the 1940s, it became clear that unit operations alone were insufficient for developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics".
Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade, as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.
Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.
Chemical engineering involves the application of several principles. Key concepts are presented below.
Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.
Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time (which requires additional training and job skills), or may act as a consultant to the project group. In the USA, baccalaureate chemical engineering programs accredited by ABET do not usually stress project engineering education, which can be obtained through specialized training, electives, or graduate programs. Project engineering jobs are some of the largest employers of chemical engineers.
A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. A unit process, on the other hand, is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.
Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.
The first 3 or 4 years of study for a chemical engineering degree stress the principles and practices of process design. The same skills are used in existing chemical plants to evaluate their efficiency and make recommendations for improvements.
Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.
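As an illustration of the kind of applied mathematics involved, the sketch below integrates the one-dimensional transient heat-conduction equation, $u_t = \alpha\, u_{xx}$, with an explicit finite-difference scheme. It is a minimal sketch: the diffusivity, grid, boundary temperatures, and step count are illustrative assumptions, not values from any particular process.

```python
import numpy as np

# Explicit finite-difference march for 1D transient heat conduction,
# u_t = alpha * u_xx, on a rod with fixed (Dirichlet) end temperatures.

alpha = 1e-4               # thermal diffusivity (m^2/s), assumed
L, n = 1.0, 51             # rod length (m) and number of grid points, assumed
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # time step within the stability limit dt <= dx^2 / (2*alpha)

u = np.zeros(n)            # initial temperature profile
u[0], u[-1] = 100.0, 0.0   # boundary conditions: hot left end, cold right end

for _ in range(2000):      # march forward in time; only interior points update
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u[n // 2])           # temperature at the midpoint after the final step
```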
Chemical engineers "develop economic ways of using materials and energy". Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics on a large-scale, industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers.
Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles. | [
{
"paragraph_id": 0,
"text": "Chemical engineering is an engineering field which deals with the study of operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead, it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in United States: An Encyclopedia puts the use of the term around 1890. \"Chemical engineering\", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, \"chemical engineer,\" was already in common use in Britain and the United States.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "In the 1940s, it became clear that unit operations alone were insufficient in developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a \"second paradigm\" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved way for the \"age of plastics\".",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These incidents, along with other incidents, affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Chemical engineering involves the application of several principles. Key concepts are presented below.",
"title": "Concepts"
},
{
"paragraph_id": 7,
"text": "Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.",
"title": "Concepts"
},
{
"paragraph_id": 8,
"text": "Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires additional training and job skills or act as a consultant to the project group. In the USA the education of chemical engineering graduates from the Baccalaureate programs accredited by ABET do not usually stress project engineering education, which can be obtained by specialized training, as electives, or from graduate programs. Project engineering jobs are some of the largest employers for chemical engineers.",
"title": "Concepts"
},
{
"paragraph_id": 9,
"text": "A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purifying and separating its products, recycling unspent reactants, and controlling energy transfer in reactors. On the other hand, a unit process is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.",
"title": "Concepts"
},
{
"paragraph_id": 10,
"text": "Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.",
"title": "Concepts"
},
{
"paragraph_id": 11,
"text": "Education for chemical engineers in the first college degree 3 or 4 years of study stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate the efficiency and make recommendations for improvements.",
"title": "Concepts"
},
{
"paragraph_id": 12,
"text": "Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.",
"title": "Concepts"
},
{
"paragraph_id": 13,
"text": "Chemical engineers \"develop economic ways of using materials and energy\". Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics on a large-scale, industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers.",
"title": "Applications and practice"
},
{
"paragraph_id": 14,
"text": "Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles.",
"title": "Applications and practice"
}
]
| Chemical engineering is an engineering field which deals with the study of operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions. Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents. | 2001-09-01T02:55:08Z | 2023-11-30T04:54:03Z | [
"Template:Authority control",
"Template:Chemical engineering",
"Template:Cite news",
"Template:Chemical engg",
"Template:Cite web",
"Template:Citation",
"Template:Branches of chemistry",
"Template:Div col",
"Template:Div col end",
"Template:ISBN",
"Template:Citation needed",
"Template:Refend",
"Template:Engineering fields",
"Template:Portal",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:Refbegin",
"Template:Short description",
"Template:Sfn",
"Template:Main"
]
| https://en.wikipedia.org/wiki/Chemical_engineering |
6,041 | List of comedians | A comedian is one who entertains through comedy, such as jokes and other forms of humour. Following is a list of comedians, comedy groups, and comedy writers.
(sorted alphabetically by surname)
(sorted alphabetically by surname)
Lists of comedians by nationality
Other related lists | [
{
"paragraph_id": 0,
"text": "A comedian is one who entertains through comedy, such as jokes and other forms of humour. Following is a list of comedians, comedy groups, and comedy writers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "(sorted alphabetically by surname)",
"title": "Comedians"
},
{
"paragraph_id": 2,
"text": "(sorted alphabetically by surname)",
"title": "Comedy writers"
},
{
"paragraph_id": 3,
"text": "Lists of comedians by nationality",
"title": "See also"
},
{
"paragraph_id": 4,
"text": "Other related lists",
"title": "See also"
},
{
"paragraph_id": 5,
"text": "",
"title": "See also"
}
]
| A comedian is one who entertains through comedy, such as jokes and other forms of humour. Following is a list of comedians, comedy groups, and comedy writers. | 2001-08-07T14:54:10Z | 2023-12-31T16:55:04Z | [
"Template:Horizontal TOC",
"Template:Div col",
"Template:Div col end",
"Template:Portal",
"Template:Clear",
"Template:Short description",
"Template:Dynamic list"
]
| https://en.wikipedia.org/wiki/List_of_comedians |
6,042 | Compact space | In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers $\mathbb{Q}$ is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers $\mathbb{R}$ is not compact either, because it excludes the two limiting values $+\infty$ and $-\infty$. However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.
One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space. For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0 (while others accumulate to 1). Since neither 0 nor 1 are members of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering $\mathbb{R}^{1}$ (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number.
Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that "cover" the space in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally — that is, in a neighborhood of each point — into corresponding statements that hold throughout the space, and many theorems are of this character.
The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected. The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts — until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà. The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt. For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence — or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis. In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it. The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue (1904), who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov & Urysohn (1929) showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0,1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, ... get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0,∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary — without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point within the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.
Various definitions of compactness may apply, depending on the level of generality. A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness — originally called bicompactness — is defined using covers consisting of open sets (see Open cover definition below). That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally — in a neighbourhood of each point of the space — and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.
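In symbols, the local-to-global passage just mentioned can be written out as follows; this is the standard statement of uniform continuity on a compact interval, supplied here for illustration rather than quoted from the article.

```latex
% Continuity gives a delta at each point; compactness of [a,b] lets a
% single delta work uniformly across the whole interval.
\[
f \in C([a,b]) \;\Longrightarrow\; \forall \varepsilon > 0 \;\exists \delta > 0 \;
\forall x, y \in [a,b] : \; |x - y| < \delta \implies |f(x) - f(y)| < \varepsilon .
\]
```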
Formally, a topological space X is called compact if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that $X = \bigcup_{S \in C} S$,
there is a finite subcollection F ⊆ C such that $X = \bigcup_{S \in F} S$.
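As a worked instance of this definition (a standard textbook example, not drawn from this article's text), the open interval (0, 1) fails compactness: the cover below admits no finite subcover.

```latex
\[
\mathcal{C} = \left\{ \left( \tfrac{1}{n}, 1 \right) : n = 2, 3, 4, \ldots \right\},
\qquad
\bigcup_{n \geq 2} \left( \tfrac{1}{n}, 1 \right) = (0, 1).
\]
% Any finite subcollection has a largest index N, so its union is
% (1/N, 1), which misses every point of (0, 1/N]; hence no finite
% subcover exists and (0,1) is not compact.
```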
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that $K \subseteq \bigcup_{S \in C} S$,
there is a finite subcollection F ⊆ C such that $K \subseteq \bigcup_{S \in F} S$.
Compactness is a "topological" property. That is, if K ⊂ Z ⊂ Y {\displaystyle K\subset Z\subset Y} , with subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.
If X is a topological space then the following are equivalent:
Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).
For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.
As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball.
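To illustrate the closed-and-bounded test in practice, here are three standard examples, supplied for illustration rather than taken from the article:

```latex
\[
[0,1] \cup [2,3] \subset \mathbb{R}: \ \text{closed and bounded} \implies \text{compact};
\]
\[
\{ (x,y) \in \mathbb{R}^2 : xy = 1 \}: \ \text{closed but unbounded} \implies \text{not compact};
\]
\[
(0,1] \subset \mathbb{R}: \ \text{bounded but not closed} \implies \text{not compact}.
\]
```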
For any metric space (X, d), the following are equivalent (assuming countable choice):
A compact metric space (X, d) also satisfies the following properties:
For an ordered space (X, <) (i.e. a totally ordered set equipped with the order topology), the following are equivalent:
An ordered space satisfying (any one of) these conditions is called a complete lattice.
In addition, the following are equivalent for all ordered spaces (X, <), and (assuming countable choice) are true whenever (X, <) is compact (the converse in general fails if (X, <) is not also metrizable):
Let X be a topological space and C(X) the ring of real continuous functions on X. For each p ∈ X, the evaluation map $\operatorname{ev}_p \colon C(X) \to \mathbb{R}$ given by $\operatorname{ev}_p(f) = f(p)$ is a ring homomorphism. The kernel of $\operatorname{ev}_p$ is a maximal ideal, since the residue field $C(X)/\ker \operatorname{ev}_p$ is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.
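Concretely, the maximal ideal attached to a point consists of the functions vanishing there; the instance with X = [0, 1] and p = 1/2 below is an illustrative assumption, not an example from the article.

```latex
\[
\ker \operatorname{ev}_p = \{ f \in C(X) : f(p) = 0 \},
\qquad
C(X) / \ker \operatorname{ev}_p \cong \mathbb{R}.
\]
% Example: X = [0,1], p = 1/2. Then f(x) = x - 1/2 lies in the ideal,
% and the coset of any g in the quotient is determined by the single
% real number g(1/2).
```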
In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point x of the natural extension *X is infinitely close to a point $x_0$ of X (more precisely, x is contained in the monad of $x_0$).
A space X is compact if its hyperreal extension *X (constructed, for example, by the ultrapower construction) has the property that every point of *X is infinitely close to some point of X ⊂ *X. For example, an open real interval X = (0, 1) is not compact because its hyperreal extension *(0,1) contains infinitesimals, which are infinitely close to 0, which is not a point of X.
Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum. (Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.
Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification. By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.
A nonempty compact subset of the real numbers has a greatest element and a least element.
Let X be a simply ordered set endowed with the order topology. Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).
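A quick contrast showing this criterion at work, under the order topology (standard examples, added for illustration):

```latex
\[
X = [0,1]: \ \text{every } A \subseteq X \text{ has } \sup A, \inf A \in X
\implies X \text{ is a complete lattice, hence compact};
\]
\[
X = (0,1): \ \sup X \text{ does not exist in } X
\implies X \text{ is not a complete lattice, hence not compact}.
\]
```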
This article incorporates material from Examples of compact spaces on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | [
{
"paragraph_id": 0,
"text": "In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no \"punctures\" or \"missing endpoints\", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers Q {\\displaystyle \\mathbb {Q} } is not compact, because it has infinitely many \"punctures\" corresponding to the irrational numbers, and the space of real numbers R {\\displaystyle \\mathbb {R} } is not compact either, because it excludes the two limiting values + ∞ {\\displaystyle +\\infty } and − ∞ {\\displaystyle -\\infty } . However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.",
"title": ""
},
{
"paragraph_id": 1,
"text": "One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space. For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0 (while others accumulate to 1). Since neither 0 nor 1 are members of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering R 1 {\\displaystyle \\mathbb {R} ^{1}} (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that \"cover\" the space in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally — that is, in a neighborhood of each point — into corresponding statements that hold throughout the space, and many theorems are of this character.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected. The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts — until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.",
"title": "Historical development"
},
{
"paragraph_id": 5,
"text": "In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà. The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's \"limit point\". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt. For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence — or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).",
"title": "Historical development"
},
{
"paragraph_id": 6,
"text": "However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis. In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it. The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.",
"title": "Historical development"
},
{
"paragraph_id": 7,
"text": "This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue (1904), who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov & Urysohn (1929) showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.",
"title": "Historical development"
},
{
"paragraph_id": 8,
"text": "Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0,1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, ... get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0,∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number.",
"title": "Basic examples"
},
{
"paragraph_id": 9,
"text": "In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary — without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point within the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.",
"title": "Basic examples"
},
{
"paragraph_id": 10,
"text": "Various definitions of compactness may apply, depending on the level of generality. A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.",
"title": "Definitions"
},
{
"paragraph_id": 11,
"text": "In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness — originally called bicompactness — is defined using covers consisting of open sets (see Open cover definition below). That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally — in a neighbourhood of each point of the space — and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.",
"title": "Definitions"
},
{
"paragraph_id": 12,
"text": "Formally, a topological space X is called compact if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that",
"title": "Definitions"
},
{
"paragraph_id": 13,
"text": "there is a finite subcollection F ⊆ C such that",
"title": "Definitions"
},
{
"paragraph_id": 14,
"text": "Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.",
"title": "Definitions"
},
{
"paragraph_id": 15,
"text": "A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that",
"title": "Definitions"
},
{
"paragraph_id": 16,
"text": "there is a finite subcollection F ⊆ C such that",
"title": "Definitions"
},
{
"paragraph_id": 17,
"text": "Compactness is a \"topological\" property. That is, if K ⊂ Z ⊂ Y {\\displaystyle K\\subset Z\\subset Y} , with subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.",
"title": "Definitions"
},
{
"paragraph_id": 18,
"text": "If X is a topological space then the following are equivalent:",
"title": "Definitions"
},
{
"paragraph_id": 19,
"text": "Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).",
"title": "Definitions"
},
{
"paragraph_id": 20,
"text": "For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.",
"title": "Definitions"
},
{
"paragraph_id": 21,
"text": "As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball.",
"title": "Definitions"
},
{
"paragraph_id": 22,
"text": "For any metric space (X, d), the following are equivalent (assuming countable choice):",
"title": "Definitions"
},
{
"paragraph_id": 23,
"text": "A compact metric space (X, d) also satisfies the following properties:",
"title": "Definitions"
},
{
"paragraph_id": 24,
"text": "For an ordered space (X, <) (i.e. a totally ordered set equipped with the order topology), the following are equivalent:",
"title": "Definitions"
},
{
"paragraph_id": 25,
"text": "An ordered space satisfying (any one of) these conditions is called a complete lattice.",
"title": "Definitions"
},
{
"paragraph_id": 26,
"text": "In addition, the following are equivalent for all ordered spaces (X, <), and (assuming countable choice) are true whenever (X, <) is compact. (The converse in general fails if (X, <) is not also metrizable.):",
"title": "Definitions"
},
{
"paragraph_id": 27,
"text": "Let X be a topological space and C(X) the ring of real continuous functions on X. For each p ∈ X, the evaluation map ev p : C ( X ) → R {\\displaystyle \\operatorname {ev} _{p}\\colon C(X)\\to \\mathbb {R} } given by evp(f) = f(p) is a ring homomorphism. The kernel of evp is a maximal ideal, since the residue field C(X)/ker evp is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.",
"title": "Definitions"
},
{
"paragraph_id": 28,
"text": "In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point x of the natural extension *X is infinitely close to a point x0 of X (more precisely, x is contained in the monad of x0).",
"title": "Definitions"
},
{
"paragraph_id": 29,
"text": "A space X is compact if its hyperreal extension *X (constructed, for example, by the ultrapower construction) has the property that every point of *X is infinitely close to some point of X ⊂ *X. For example, an open real interval X = (0, 1) is not compact because its hyperreal extension *(0,1) contains infinitesimals, which are infinitely close to 0, which is not a point of X.",
"title": "Definitions"
},
{
"paragraph_id": 30,
"text": "Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum. (Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.",
"title": "Properties of compact spaces"
},
{
"paragraph_id": 31,
"text": "Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification. By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.",
"title": "Properties of compact spaces"
},
{
"paragraph_id": 32,
"text": "A nonempty compact subset of the real numbers has a greatest element and a least element.",
"title": "Properties of compact spaces"
},
{
"paragraph_id": 33,
"text": "Let X be a simply ordered set endowed with the order topology. Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).",
"title": "Properties of compact spaces"
},
{
"paragraph_id": 34,
"text": "This article incorporates material from Examples of compact spaces on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.",
"title": "External links"
}
]
| In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers Q is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers R is not compact either, because it excludes the two limiting values + ∞ and − ∞ . However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces. One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space. For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0. Since neither 0 nor 1 are members of the open unit interval, those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering R 1 , the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number. Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that "cover" the space in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally — that is, in a neighborhood of each point — into corresponding statements that hold throughout the space, and many theorems are of this character. The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space. | 2001-09-30T15:47:13Z | 2023-11-28T13:12:11Z | [
"Template:Topology",
"Template:Efn",
"Template:Notelist",
"Template:Math",
"Template:Nobr",
"Template:Closed-closed",
"Template:Nowrap",
"Template:Cite arXiv",
"Template:Sfn",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:Springer",
"Template:Mvar",
"Template:Div col",
"Template:Cite book",
"Template:Harvnb",
"Template:PlanetMath attribution",
"Template:Redirect",
"Template:Closed-open",
"Template:Div col end",
"Template:Cite web",
"Template:Howes Modern Analysis and Topology 1995",
"Template:Refend",
"Template:Short description",
"Template:Harvtxt",
"Template:Open-open",
"Template:Refbegin"
]
| https://en.wikipedia.org/wiki/Compact_space |
6,045 | Clodius | Clodius is an alternate form of the Roman nomen Claudius, a patrician gens that was traditionally regarded as Sabine in origin. The alternation of o and au is characteristic of the Sabine dialect. The feminine form is Clodia.
During the Late Republic, the spelling Clodius is most prominently associated with Publius Clodius Pulcher, a popularis politician who gave up his patrician status through adoption into a plebeian family in order to qualify for the office of tribune of the plebs. Clodius positioned himself as a champion of the urban plebs, supporting free grain for the poor and the right of association in guilds (collegia); because of his ideology, Clodius has often been taken as a more "plebeian" spelling and a gesture of political solidarity. Clodius's two elder brothers, the Appius Claudius Pulcher who was consul in 54 BC and the C. Claudius Pulcher who was praetor in 56 BC, conducted more conventional political careers and are referred to in contemporary sources with the traditional spelling.
The view that Clodius represents a plebeian or politicized form has been questioned by Clodius's chief modern-era biographer. In The Patrician Tribune, W. Jeffrey Tatum points out that the spelling is also associated with Clodius's sisters and that "the political explanation … is almost certainly wrong." A plebeian branch of the gens, the Claudii Marcelli, retained the supposedly patrician spelling, while there is some inscriptional evidence that the -o- form may also have been used on occasion by close male relatives of the "patrician tribune" Clodius. Tatum argues that the use of -o- by the "chic" Clodia was a fashionable affectation, and that Clodius, whose perhaps inordinately loving relationship with his sister was the subject of much gossip and insinuation, was imitating his stylish sibling. The linguistic variation of o for au was characteristic of the Umbrian language, of which Sabine was a branch. Forms using o were considered archaic or rustic in the 50s BC, and the use of Clodius would have been either a whimsical gesture of pastoral fantasy, or a trendy assertion of antiquarian authenticity.
In addition to Clodius, Clodii from the Republican era include:
Women of the Claudii Marcelli branch were often called "Clodia" in the late Republic.
People using the name Clodius during the period of the Roman Empire include:
The Clodii Celsini continued to practice the traditional religions of antiquity in the face of Christian hegemony through at least the 4th century, when Clodius Celsinus Adelphius (see below) converted. Members of this branch include: | [
{
"paragraph_id": 0,
"text": "Clodius is an alternate form of the Roman nomen Claudius, a patrician gens that was traditionally regarded as Sabine in origin. The alternation of o and au is characteristic of the Sabine dialect. The feminine form is Clodia.",
"title": ""
},
{
"paragraph_id": 1,
"text": "During the Late Republic, the spelling Clodius is most prominently associated with Publius Clodius Pulcher, a popularis politician who gave up his patrician status through an order in order to qualify for the office of tribune of the plebs. Clodius positioned himself as a champion of the urban plebs, supporting free grain for the poor and the right of association in guilds (collegia); because of this individual's ideology, Clodius has often been taken as a more \"plebeian\" spelling and a gesture of political solidarity. Clodius's two elder brothers, the Appius Claudius Pulcher who was consul in 54 BC and the C. Claudius Pulcher who was praetor in 56 BC, conducted more conventional political careers and are referred to in contemporary sources with the traditional spelling.",
"title": "Republican era"
},
{
"paragraph_id": 2,
"text": "The view that Clodius represents a plebeian or politicized form has been questioned by Clodius's chief modern-era biographer. In The Patrician Tribune, W. Jeffrey Tatum points out that the spelling is also associated with Clodius's sisters and that \"the political explanation … is almost certainly wrong.\" A plebeian branch of the gens, the Claudii Marcelli, retained the supposedly patrician spelling, while there is some inscriptional evidence that the -o- form may also have been used on occasion by close male relatives of the \"patrician tribune\" Clodius. Tatum argues that the use of -o- by the \"chic\" Clodia was a fashionable affectation, and that Clodius, whose perhaps inordinately loving relationship with his sister was the subject of much gossip and insinuation, was imitating his stylish sibling. The linguistic variation of o for au was characteristic of the Umbrian language, of which Sabine was a branch. Forms using o were considered archaic or rustic in the 50s BC, and the use of Clodius would have been either a whimsical gesture of pastoral fantasy, or a trendy assertion of antiquarian authenticity.",
"title": "Republican era"
},
{
"paragraph_id": 3,
"text": "In addition to Clodius, Clodii from the Republican era include:",
"title": "Republican era"
},
{
"paragraph_id": 4,
"text": "Women of the Claudii Marcelli branch were often called \"Clodia\" in the late Republic.",
"title": "Republican era"
},
{
"paragraph_id": 5,
"text": "People using the name Clodius during the period of the Roman Empire include:",
"title": "Imperial era"
},
{
"paragraph_id": 6,
"text": "The Clodii Celsini continued to practice the traditional religions of antiquity in the face of Christian hegemony through at least the 4th century, when Clodius Celsinus Adelphius (see below) converted. Members of this branch include:",
"title": "Imperial era"
}
]
| Clodius is an alternate form of the Roman nomen Claudius, a patrician gens that was traditionally regarded as Sabine in origin. The alternation of o and au is characteristic of the Sabine dialect. The feminine form is Clodia. | 2023-03-17T16:25:20Z | [
"Template:Other uses",
"Template:Main",
"Template:Reflist",
"Template:Cite journal",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/Clodius |
|
6,046 | Cicero | Marcus Tullius Cicero (/ˈsɪsəroʊ/ SISS-ə-roh; Latin: [ˈmaːrkʊs ˈtʊlli.ʊs ˈkɪkɛroː]; 3 January 106 BC – 7 December 43 BC) was a Roman statesman, lawyer, scholar, philosopher, writer and Academic skeptic, who tried to uphold optimate principles during the political crises that led to the establishment of the Roman Empire. His extensive writings include treatises on rhetoric, philosophy and politics. He is considered one of Rome's greatest orators and prose stylists and the innovator of what became known as "Ciceronian rhetoric". Cicero was educated in Rome and in Greece. He came from a wealthy municipal family of the Roman equestrian order, and served as consul in 63 BC.
His influence on the Latin language was immense. He wrote more than three-quarters of extant Latin literature that is known to have existed in his lifetime, and it has been said that subsequent prose was either a reaction against or a return to his style, not only in Latin but in European languages up to the 19th century. Cicero introduced into Latin the arguments of the chief schools of Hellenistic philosophy and created a large amount of Latin philosophical vocabulary via lexical innovation (e.g. neologisms such as evidentia, generator, humanitas, infinitio, qualitas, quantitas), almost 150 of which were introduced through his translation of Greek philosophical terms, demonstrating himself to be both an adept scholar of philosophy and a skilled translator.
Though he was an accomplished orator and successful lawyer, Cicero believed his political career was his most important achievement. It was during his consulship that the Catiline conspiracy attempted to overthrow the government through an attack on the city by outside forces, and Cicero suppressed the revolt by summarily and controversially executing five conspirators without trial. During the chaotic middle period of the first century BC, marked by civil wars and the dictatorship of Julius Caesar, Cicero championed a return to the traditional republican government. Following Caesar's death, Cicero became an enemy of Mark Antony in the ensuing power struggle, attacking him in a series of speeches. He was proscribed as an enemy of the state by the Second Triumvirate and consequently executed by soldiers operating on their behalf in 43 BC, having been intercepted during an attempted flight from the Italian peninsula. His severed hands and head were then, as a final revenge of Mark Antony, displayed on the Rostra.
Petrarch's rediscovery of Cicero's letters is often credited for initiating the 14th-century Renaissance in public affairs, humanism, and classical Roman culture. According to Polish historian Tadeusz Zieliński, "the Renaissance was above all things a revival of Cicero, and only after him and through him of the rest of Classical antiquity." The peak of Cicero's authority and prestige came during the 18th-century Enlightenment, and his impact on leading Enlightenment thinkers and political theorists such as John Locke, David Hume, Montesquieu, and Edmund Burke was substantial. His works rank among the most influential in global culture, and today still constitute one of the most important bodies of primary material for the writing and revision of Roman history, especially the last days of the Roman Republic.
Marcus Tullius Cicero was born on 3 January 106 BC in Arpinum, a hill town 100 kilometers (62 mi) southeast of Rome. He belonged to the tribus Cornelia. His father was a well-to-do member of the equestrian order and possessed good connections in Rome. However, being a semi-invalid, he could not enter public life and studied extensively to compensate. Although little is known about Cicero's mother, Helvia, it was common for the wives of important Roman citizens to be responsible for the management of the household. Cicero's brother Quintus wrote in a letter that she was a thrifty housewife.
Cicero's cognomen, a hereditary nickname, comes from the Latin for chickpea, cicer. Plutarch explains that the name was originally given to one of Cicero's ancestors who had a cleft in the tip of his nose resembling a chickpea. Romans often chose down-to-earth personal surnames. The famous family names of Fabius, Lentulus, and Piso come from the Latin names of beans, lentils, and peas, respectively. Plutarch writes that Cicero was urged to change this deprecatory name when he entered politics, but refused, saying that he would make Cicero more glorious than Scaurus ("Swollen-ankled") and Catulus ("Puppy").
At the age of 15, in 90 BC, Cicero started serving under Pompey Strabo and later Sulla in the Social War between Rome and its Italian allies. When in Rome during the turbulent plebeian tribunate of Publius Sulpicius Rufus in 88 BC, which saw a short bout of fighting between Sulpicius and Sulla, who had been elected consul for that year, Cicero found himself greatly impressed by Sulpicius' oratory even if he disagreed with his politics. He continued his studies at Rome, writing a pamphlet titled On Invention relating to rhetorical argumentation and studying philosophy with Greek academics who had fled the ongoing First Mithridatic War.
During this period in Roman history, "cultured" meant being able to speak both Latin and Greek. Cicero was therefore educated in the teachings of the ancient Greek philosophers, poets and historians; he obtained much of his understanding of the theory and practice of rhetoric from the Greek poet Archias. Cicero used his knowledge of Greek to translate many of the theoretical concepts of Greek philosophy into Latin, thus translating Greek philosophical works for a larger audience. It was precisely his broad education that tied him to the traditional Roman elite.
Cicero's interest in philosophy figured heavily in his later career and led to him providing a comprehensive account of Greek philosophy for a Roman audience, including creating a philosophical vocabulary in Latin. In 87 BC, Philo of Larissa, the head of the Platonic Academy that had been founded by Plato in Athens about 300 years earlier, arrived in Rome. Cicero, "inspired by an extraordinary zeal for philosophy", sat enthusiastically at his feet and absorbed Carneades' Academic Skeptic philosophy.
Cicero said of Plato's Dialogues that if Zeus were to speak, he would use their language. He would, in due course, honor them with his own convivial dialogues.
According to Plutarch, Cicero was an extremely talented student, whose learning attracted attention from all over Rome, affording him the opportunity to study Roman law under Quintus Mucius Scaevola. Cicero's fellow students were Gaius Marius Minor, Servius Sulpicius Rufus (who became a famous lawyer, one of the few whom Cicero considered superior to himself in legal matters), and Titus Pomponius. The latter two became Cicero's friends for life, and Pomponius (who later received the nickname "Atticus", and whose sister married Cicero's brother) would become, in Cicero's own words, "as a second brother", with both maintaining a lifelong correspondence.
In 79 BC, Cicero left for Greece, Asia Minor and Rhodes. This was perhaps to avoid the potential wrath of Sulla, as Plutarch claims, though Cicero himself says it was to hone his skills and improve his physical fitness. In Athens he studied philosophy with Antiochus of Ascalon, the 'Old Academic' and initiator of Middle Platonism. In Asia Minor, he met the leading orators of the region and continued to study with them. Cicero then journeyed to Rhodes to meet his former teacher, Apollonius Molon, who had taught him in Rome. Molon helped Cicero hone the excesses in his style, as well as train his body and lungs for the demands of public speaking. Charting a middle path between the competing Attic and Asiatic styles, Cicero would ultimately become considered second only to Demosthenes among history's orators.
While Cicero had feared that the law courts would be closed forever, they were reopened in the aftermath of Sulla's civil war and the purging of Sulla's political opponents in the proscriptions. Many of the orators whom Cicero admired in his youth were now dead from age or political violence. His first major appearance in the courts was in 81 BC at the age of 26 when he delivered Pro Quinctio, a speech defending certain commercial transactions which Cicero had recorded and disseminated.
His more famous speech defending Sextus Roscius of Ameria – Pro Roscio Amerino – on charges of parricide in 80 BC was his first appearance in criminal court. In this high-profile case, Cicero accused a freedman of the dictator Sulla, Chrysogonus, of fabricating Roscius' father's proscription to obtain Roscius' family's property. Successful in his defence, Cicero tactfully avoided incriminating Sulla of any wrongdoing and developed a positive oratorical reputation for himself.
While Plutarch claims that Cicero left Rome shortly thereafter out of fear of Sulla's response, "most scholars now dismiss this suggestion" because Cicero left Rome after Sulla resigned his dictatorship. Cicero, for his part, later claimed that he left Rome, headed for Asia, to develop his physique and hone his oratory. After marrying his wife, Terentia, in 80 BC, he eventually left for Asia Minor with his brother Quintus, his friend Titus Atticus, and others on a long trip spanning most of 79 through 77 BC. Returning to Rome in 77 BC, Cicero again busied himself with legal defence.
In 76 BC, at the quaestorian elections, Cicero was elected to the post of quaestor at the minimum age required – 30 years – in the first returns from the comitia tributa. Ex officio, he also became a member of the Senate. In the quaestorian lot, he was assigned to Sicily for 75 BC. The post, largely one of financial administration in support of the state or provincial governors, proved for Cicero an important place where he could gain clients in the provinces. His time in Sicily saw him balance his duties – largely in terms of sending more grain back to Rome – with his support for the provincials, Roman businessmen in the area, and local potentates. Adeptly balancing those responsibilities, he won their gratitude.
Promising to lend the Sicilians his oratorical voice, he was called on a few years after his quaestorship to prosecute the Roman province's governor Gaius Verres, for abuse of power and corruption. In 70 BC, at the age of 36, Cicero launched his first high-profile prosecution against Verres, an emblem of the corrupt Sullan supporters who had risen in the chaos of the civil war.
The prosecution of Gaius Verres was a great forensic success for Cicero. While Verres hired the prominent lawyer, Quintus Hortensius, after a lengthy period in Sicily collecting testimonials and evidence and persuading witnesses to come forward, Cicero returned to Rome and won the case in a series of dramatic court battles. His unique style of oratory set him apart from the flamboyant Hortensius. On the conclusion of this case, Cicero came to be considered the greatest orator in Rome. It is plausible that Cicero took the case partly for reasons of his own: Hortensius was, at this point, known as the best lawyer in Rome; to beat him would guarantee much success and the prestige that Cicero needed to start his career. Cicero's oratorical ability is shown in his character assassination of Verres and various other techniques of persuasion used on the jury. One such example is found in the speech In Verrem, where he states "with you on this bench, gentlemen, with Marcus Acilius Glabrio as your president, I do not understand what Verres can hope to achieve". Oratory was considered a great art in ancient Rome and an important tool for disseminating knowledge and promoting oneself in elections, in part because there were no regular newspapers or mass media. Cicero was neither a patrician nor a plebeian noble; his rise to political office despite his relatively humble origins has traditionally been attributed to his brilliance as an orator.
Cicero grew up in a time of civil unrest and war. Sulla's victory in the first of a series of civil wars led to a new constitutional framework that undermined libertas (liberty), the fundamental value of the Roman Republic. Nonetheless, Sulla's reforms strengthened the position of the equestrian class, contributing to that class's growing political power. Cicero was both an Italian eques and a novus homo, but more importantly he was a Roman constitutionalist. His social class and loyalty to the Republic ensured that he would "command the support and confidence of the people as well as the Italian middle classes". The optimates faction never truly accepted Cicero, and this undermined his efforts to reform the Republic while preserving the constitution. Nevertheless, he successfully ascended the cursus honorum, holding each magistracy at or near the youngest possible age: quaestor in 75 BC (age 30), aedile in 69 BC (age 36), and praetor in 66 BC (age 39), when he served as president of the "Reclamation" (or extortion) Court. He was then elected consul at age 42.
Cicero, seizing the opportunity offered by optimate fear of reform, was elected consul for the year 63 BC; he was elected with the support of every unit of the centuriate assembly, rival members of the post-Sullan establishment, and the leaders of municipalities throughout post-Social War Italy. His co-consul for the year, Gaius Antonius Hybrida, played a minor role.
He began his consular year by opposing a land bill proposed by a plebeian tribune which would have appointed commissioners with semi-permanent authority over land reform. Cicero was also active in the courts, defending Gaius Rabirius from accusations of participating in the unlawful killing of plebeian tribune Lucius Appuleius Saturninus in 100 BC. The prosecution occurred before the comitia centuriata and threatened to reopen conflict between the Marian and Sullan factions at Rome. Cicero defended the use of force as being authorised by a senatus consultum ultimum, which would prove similar to his own use of force under such conditions.
Most famously – in part because of his own publicity – he thwarted a conspiracy led by Lucius Sergius Catilina to overthrow the Roman Republic with the help of foreign armed forces. Cicero procured a senatus consultum ultimum (a recommendation from the senate attempting to legitimise the use of force) and drove Catiline from the city with four vehement speeches (the Catilinarian orations), which remain outstanding examples of his rhetorical style. The Orations listed Catiline and his followers' debaucheries, and denounced Catiline's senatorial sympathizers as roguish and dissolute debtors clinging to Catiline as a final and desperate hope. Cicero demanded that Catiline and his followers leave the city. At the conclusion of Cicero's first speech (which was made in the Temple of Jupiter Stator), Catiline hurriedly left the Senate. In his following speeches, Cicero did not directly address Catiline. He delivered the second and third orations before the people, and the last one again before the Senate. By these speeches, Cicero wanted to prepare the Senate for the worst possible case; he also delivered more evidence against Catiline.
Catiline fled and left behind his followers to start the revolution from within while he himself assaulted the city with an army of "moral and financial bankrupts, or of honest fanatics and adventurers". It is alleged that Catiline had attempted to involve the Allobroges, a tribe of Transalpine Gaul, in the plot, but Cicero, working with the Gauls, was able to seize letters that incriminated the five conspirators and forced them to confess in front of the Senate. The senate then deliberated upon the conspirators' punishment. As it was the dominant advisory body to the various legislative assemblies rather than a judicial body, there were limits to its power; however, martial law was in effect, and it was feared that simple house arrest or exile – the standard options – would not remove the threat to the state. At first Decimus Junius Silanus spoke for the "extreme penalty"; many were swayed by Julius Caesar, who decried the precedent it would set and argued in favor of life imprisonment in various Italian towns. Cato the Younger rose in defense of the death penalty and the entire Senate finally agreed on the matter. Cicero had the conspirators taken to the Tullianum, the notorious Roman prison, where they were strangled. Cicero himself accompanied the former consul Publius Cornelius Lentulus Sura, one of the conspirators, to the Tullianum.
Cicero received the honorific "pater patriae" for his efforts to suppress the conspiracy, but lived thereafter in fear of trial or exile for having put Roman citizens to death without trial. While the senatus consultum ultimum gave some legitimacy to the use of force against the conspirators, Cicero also argued that Catiline's conspiracy, by virtue of its treason, made the conspirators enemies of the state and forfeited the protections intrinsically possessed by Roman citizens. The consuls moved decisively. Antonius Hybrida was dispatched to defeat Catiline in battle that year, preventing Crassus or Pompey from exploiting the situation for their own political aims.
After the suppression of the conspiracy, Cicero was proud of his accomplishment. Some of his political enemies argued that though the act gained Cicero popularity, he exaggerated the extent of his success. He overestimated his popularity again several years later after being exiled from Italy and then allowed back from exile. At this time, he claimed that the republic would be restored along with him.
Shortly after completing his consulship, in late 62 BC, Cicero arranged the purchase of a large townhouse on the Palatine Hill previously owned by Rome's richest citizen, Marcus Licinius Crassus. To finance the purchase, Cicero borrowed some two million sesterces from Publius Cornelius Sulla, whom he had previously defended from court. Cicero boasted his house was "in conspectu prope totius urbis" ("in sight of nearly the whole city"), only a short walk from the Roman Forum.
In 60 BC, Julius Caesar invited Cicero to be the fourth member of his existing partnership with Pompey and Marcus Licinius Crassus, an assembly that would eventually be called the First Triumvirate. Cicero refused the invitation because he suspected it would undermine the Republic.
During Caesar's consulship of 59 BC, the triumvirate had achieved many of their goals of land reform, publicani debt forgiveness, ratification of Pompeian conquests, etc. With Caesar leaving for his provinces, they wished to maintain their hold on politics. They engineered the adoption of patrician Publius Clodius Pulcher into a plebeian family and had him elected as one of the ten tribunes of the plebs for 58 BC. Clodius used the triumvirate's backing to push through legislation that benefited them. He introduced several laws (the leges Clodiae) that made him popular with the people, strengthening his power base, then he turned on Cicero by threatening exile to anyone who executed a Roman citizen without a trial. Cicero, having executed members of the Catiline conspiracy four years previously without formal trial, was clearly the intended target. Furthermore, many believed that Clodius acted in concert with the triumvirate who feared that Cicero would seek to abolish many of Caesar's accomplishments while consul the year before. Cicero argued that the senatus consultum ultimum indemnified him from punishment, and he attempted to gain the support of the senators and consuls, especially of Pompey.
Cicero grew out his hair, dressed in mourning and toured the streets. Clodius' gangs dogged him, hurling abuse, stones and even excrement. Hortensius, trying to rally to his old rival's support, was almost lynched. The Senate and the consuls were cowed. Caesar, who was still encamped near Rome, was apologetic but said he could do nothing when Cicero brought himself to grovel in the proconsul's tent. Everyone seemed to have abandoned Cicero.
After Clodius passed a law to deny to Cicero fire and water (i.e. shelter) within four hundred miles of Rome, Cicero went into exile. He arrived at Thessalonica, on 23 May 58 BC. In his absence, Clodius, who lived next door to Cicero on the Palatine, arranged for Cicero's house to be confiscated by the state, and was even able to purchase a part of the property in order to extend his own house. After demolishing Cicero's house, Clodius had the land consecrated and symbolically erected a temple of Liberty (aedes Libertatis) on the vacant land.
Cicero's exile caused him to fall into depression. He wrote to Atticus: "Your pleas have prevented me from committing suicide. But what is there to live for? Don't blame me for complaining. My afflictions surpass any you ever heard of earlier". After the intervention of recently elected tribune Titus Annius Milo, acting on the behalf of Pompey who wanted Cicero as a client, the Senate voted in favor of recalling Cicero from exile. Clodius cast the single vote against the decree. Cicero returned to Italy on 5 August 57 BC, landing at Brundisium. He was greeted by a cheering crowd, and, to his delight, his beloved daughter Tullia. In his Oratio De Domo Sua Ad Pontifices, Cicero convinced the College of Pontiffs to rule that the consecration of his land was invalid, thereby allowing him to regain his property and rebuild his house on the Palatine.
Cicero tried to re-enter politics as an independent operator, but his attempts to attack portions of Caesar's legislation were unsuccessful and encouraged Caesar to re-solidify his political alliance with Pompey and Crassus. The conference at Luca in 56 BC left the three-man alliance in domination of the republic's politics; this forced Cicero to recant and support the triumvirate out of fear from being entirely excluded from public life. After the conference Cicero lavishly praised Caesar's achievements, got the Senate to vote a thanksgiving for Caesar's victories and grant money to pay his troops. He also delivered a speech 'On the consular provinces' (Latin: de provinciis consularibus) which checked an attempt by Caesar's enemies to strip him of his provinces in Gaul. After this, a cowed Cicero concentrated on his literary works. It is uncertain whether he was directly involved in politics for the following few years.
In 51 BC he reluctantly accepted a promagistracy (as proconsul) in Cilicia for the year; there were few other former consuls eligible as a result of a legislative requirement enacted by Pompey in 52 BC specifying an interval of five years between a consulship or praetorship and a provincial command. He served as proconsul of Cilicia from May 51 BC, arriving in the province three months later, around August.
In 53 BC Marcus Licinius Crassus had been defeated by the Parthians at the Battle of Carrhae. This opened the Roman East for a Parthian invasion, causing unrest in Syria and Cilicia. Cicero restored calm by his mild system of government. He discovered that a great amount of public property had been embezzled by corrupt previous governors and members of their staff, and did his utmost to restore it. Thus he greatly improved the condition of the cities. He retained the civil rights of, and exempted from penalties, the men who gave the property back. Besides this, he was extremely frugal in his outlays for staff and private expenses during his governorship, and this made him highly popular among the natives.
Besides his activity in ameliorating the hard pecuniary situation of the province, Cicero was also creditably active in the military sphere. Early in his governorship he received information that prince Pacorus, son of Orodes II the king of the Parthians, had crossed the Euphrates, and was ravaging the Syrian countryside and had even besieged Cassius (the interim Roman commander in Syria) in Antioch. Cicero eventually marched with two understrength legions and a large contingent of auxiliary cavalry to Cassius's relief. Pacorus and his army had already given up on besieging Antioch and were heading south through Syria, ravaging the countryside again. Cassius and his legions followed them, harrying them wherever they went, eventually ambushing and defeating them near Antigonea.
Another large troop of Parthian horsemen was defeated by Cicero's cavalry who happened to run into them while scouting ahead of the main army. Cicero next defeated some robbers who were based on Mount Amanus and was hailed as imperator by his troops. Afterwards he led his army against the independent Cilician mountain tribes, besieging their fortress of Pindenissum. It took him 47 days to reduce the place, which fell in December. On 30 July 50 BC Cicero left the province to his brother Quintus, who had accompanied him on his governorship as his legate. On his way back to Rome he stopped in Rhodes and then went to Athens, where he caught up with his old friend Titus Pomponius Atticus and met men of great learning.
Cicero arrived in Rome on 4 January 49 BC. He stayed outside the pomerium, to retain his promagisterial powers: either in expectation of a triumph or to retain his independent command authority in the coming civil war. The struggle between Pompey and Julius Caesar grew more intense in 50 BC. Cicero favored Pompey, seeing him as a defender of the senate and Republican tradition, but at that time avoided openly alienating Caesar. When Caesar invaded Italy in 49 BC, Cicero fled Rome. Caesar, seeking an endorsement by a senior senator, courted Cicero's favor, but even so Cicero slipped out of Italy and traveled to Dyrrhachium where Pompey's staff was situated. Cicero traveled with the Pompeian forces to Pharsalus in Macedonia in 48 BC, though he was quickly losing faith in the competence and righteousness of the Pompeian side. Eventually, he provoked the hostility of his fellow senator Cato, who told him that he would have been of more use to the cause of the optimates if he had stayed in Rome. After Caesar's victory at the Battle of Pharsalus on 9 August, Cicero refused to take command of the Pompeian forces and continue the war. He returned to Rome, still as a promagistrate with his lictors, in 47 BC, and dismissed them upon his crossing the pomerium and renouncing his command.
In a letter to Varro on c. 20 April 46 BC, Cicero outlined his strategy under Caesar's dictatorship. Cicero, however, was taken by surprise when the Liberatores assassinated Caesar on the ides of March, 44 BC. Cicero was not included in the conspiracy, even though the conspirators were sure of his sympathy. Marcus Junius Brutus called out Cicero's name, asking him to restore the republic when he lifted his bloodstained dagger after the assassination. A letter Cicero wrote in February 43 BC to Trebonius, one of the conspirators, began, "How I could wish that you had invited me to that most glorious banquet on the Ides of March!" Cicero became a popular leader during the period of instability following the assassination. He had no respect for Mark Antony, who was scheming to take revenge upon Caesar's murderers. In exchange for amnesty for the assassins, he arranged for the Senate to agree not to declare Caesar to have been a tyrant, which allowed the Caesarians to have lawful support and kept Caesar's reforms and policies intact.
In April 43 BC, "diehard republicans" may have revived the ancient position of princeps senatus (leader of the senate) for Cicero. This position had been very prestigious until the constitutional reforms of Sulla in 82–80 BC, which removed most of its importance.
On the other side, Antony was consul and leader of the Caesarian faction, and unofficial executor of Caesar's public will. Relations between the two were never friendly and worsened after Cicero claimed that Antony was taking liberties in interpreting Caesar's wishes and intentions. Octavian was Caesar's adopted son and heir. After he returned to Italy, Cicero began to play him against Antony. He praised Octavian, declaring he would not make the same mistakes as his father. He attacked Antony in a series of speeches he called the Philippics, after Demosthenes's denunciations of Philip II of Macedon. At the time, Cicero's popularity as a public figure was unrivalled.
Cicero supported Decimus Junius Brutus Albinus as governor of Cisalpine Gaul (Gallia Cisalpina) and urged the Senate to name Antony an enemy of the state. The speech of Lucius Piso, Caesar's father-in-law, delayed proceedings against Antony. Antony was later declared an enemy of the state when he refused to lift the siege of Mutina, which was in the hands of Decimus Brutus. Cicero's plan to drive out Antony failed. Antony and Octavian reconciled and allied with Lepidus to form the Second Triumvirate after the successive battles of Forum Gallorum and Mutina. The alliance came into official existence with the lex Titia, passed on 27 November 43 BC, which gave each triumvir a consular imperium for five years. The Triumvirate immediately began a proscription of their enemies, modeled after that of Sulla in 82 BC. Cicero and all of his contacts and supporters were numbered among the enemies of the state, even though Octavian argued for two days against Cicero being added to the list.
Cicero was one of the most viciously and doggedly hunted among the proscribed. He was viewed with sympathy by a large segment of the public and many people refused to report that they had seen him. He was caught on 7 December 43 BC leaving his villa in Formiae in a litter heading to the seaside, where he hoped to embark on a ship destined for Macedonia. When his killers – Herennius (a Centurion) and Popilius (a Tribune) – arrived, Cicero's own slaves said they had not seen him, but he was given away by Philologus, a freedman of his brother Quintus Cicero.
As reported by Seneca the Elder, according to the historian Aufidius Bassus, Cicero's last words are said to have been:
Ego vero consisto. Accede, veterane, et, si hoc saltim potes recte facere, incide cervicem.
I go no further: approach, veteran soldier, and, if you can at least do so much properly, sever this neck.
He bowed to his captors, leaning his head out of the litter in a gladiatorial gesture to ease the task. By baring his neck and throat to the soldiers, he was indicating that he would not resist. According to Plutarch, Herennius first slew him, then cut off his head. On Antony's instructions his hands, which had penned the Philippics against Antony, were cut off as well; these were nailed along with his head on the Rostra in the Forum Romanum according to the tradition of Marius and Sulla, both of whom had displayed the heads of their enemies in the Forum. Cicero was the only victim of the proscriptions who was displayed in that manner. According to Cassius Dio, in a story often mistakenly attributed to Plutarch, Antony's wife Fulvia took Cicero's head, pulled out his tongue, and jabbed it repeatedly with her hairpin in final revenge against Cicero's power of speech.
Cicero's son, Marcus Tullius Cicero Minor, avenged his father's death to some extent when, during his year as consul in 30 BC, he announced to the Senate Mark Antony's naval defeat by Octavian at Actium in 31 BC.
In later times, within his family circle, Octavian is reported to have praised Cicero as a patriot and a scholar. However, it was Octavian's acquiescence that had allowed Cicero to be killed, as Cicero had been condemned by the new triumvirate.
Cicero's career as a statesman was marked by inconsistencies and a tendency to shift his position in response to changes in the political climate. His indecision may be attributed to his sensitive and impressionable personality; he was prone to overreaction in the face of political and private change.
"Would that he had been able to endure prosperity with greater self-control, and adversity with more fortitude!" wrote C. Asinius Pollio, a contemporary Roman statesman and historian.
Cicero married Terentia probably at the age of 27, in 79 BC. According to the upper-class mores of the day it was a marriage of convenience, but it lasted harmoniously for nearly 30 years. Terentia's family was wealthy, probably the plebeian noble house of Terenti Varrones, thus meeting the needs of Cicero's political ambitions in both economic and social terms. She had a half-sister named Fabia, who as a child had become a Vestal Virgin, a great honour. Terentia was a strong-willed woman and, in Plutarch's words, "took more interest in her husband's political career than she allowed him to take in household affairs".
In the 50s BC, Cicero's letters to Terentia became shorter and colder. He complained to his friends that Terentia had betrayed him, but did not specify in what sense. Perhaps the marriage simply could not outlast the strain of the political upheaval in Rome, Cicero's involvement in it, and various other disputes between the two. The divorce appears to have taken place in 51 BC or shortly before. In 46 or 45 BC, Cicero married a young girl, Publilia, who had been his ward. It is thought that Cicero needed her money, particularly after having to repay the dowry of Terentia, who came from a wealthy family.
Although his marriage to Terentia was one of convenience, Cicero is well known to have dearly loved his daughter Tullia. She had seemingly recovered after giving birth to a son in January 45 BC, but fell suddenly ill in February and died; Cicero was stunned. "I have lost the one thing that bound me to life," he wrote to Atticus. Atticus invited him to visit during the first weeks of his bereavement, so that he could comfort him when his pain was at its greatest. In Atticus's large library, Cicero read everything that the Greek philosophers had written about overcoming grief, "but my sorrow defeats all consolation." Caesar and Brutus, as well as Servius Sulpicius Rufus, sent him letters of condolence.
Cicero hoped that his son Marcus would become a philosopher like him, but Marcus himself wished for a military career. He joined the army of Pompey in 49 BC, and after Pompey's defeat at Pharsalus in 48 BC, he was pardoned by Caesar. Cicero sent him to Athens to study as a disciple of the Peripatetic philosopher Kratippos in 48 BC, but he used this absence from "his father's vigilant eye" to "eat, drink, and be merry." After Cicero's death, he joined the army of the Liberatores but was later pardoned by Augustus. Augustus's bad conscience over not having objected to Cicero's being put on the proscription list during the Second Triumvirate led him to advance Marcus Minor's career considerably. Marcus became an augur and was nominated consul in 30 BC together with Augustus; as consul, he announced the revocation of Mark Antony's honors, thereby taking a measure of revenge on the man responsible for the proscription. Later he was appointed proconsul of Syria and the province of Asia.
Cicero has been traditionally considered the master of Latin prose, with Quintilian declaring that Cicero was "not the name of a man, but of eloquence itself." The English words Ciceronian (meaning "eloquent") and cicerone (meaning "local guide") derive from his name. He is credited with transforming Latin from a modest utilitarian language into a versatile literary medium capable of expressing abstract and complicated thoughts with clarity. Julius Caesar praised Cicero's achievement by saying "it is more important to have greatly extended the frontiers of the Roman spirit than the frontiers of the Roman empire". According to John William Mackail, "Cicero's unique and imperishable glory is that he created the language of the civilized world, and used that language to create a style which nineteen centuries have not replaced, and in some respects have hardly altered."
Cicero was also an energetic writer with an interest in a wide variety of subjects, in keeping with the Hellenistic philosophical and rhetorical traditions in which he was trained. The quality and ready accessibility of Ciceronian texts favored very wide distribution and inclusion in teaching curricula, as suggested by a graffito at Pompeii, admonishing: "You will like Cicero, or you will be whipped".
Cicero was greatly admired by influential Church Fathers such as Augustine of Hippo, who credited Cicero's lost Hortensius for his eventual conversion to Christianity, and St. Jerome, who had a feverish vision in which he was accused of being "a follower of Cicero and not of Christ" before the judgment seat.
This influence further increased after the Early Middle Ages in Europe, where more of his writings survived than those of any other Latin author. Medieval philosophers were influenced by Cicero's writings on natural law and innate rights.
Petrarch's rediscovery of Cicero's letters provided the impetus for searches for ancient Greek and Latin writings scattered throughout European monasteries, and the subsequent rediscovery of classical antiquity led to the Renaissance. Subsequently, Cicero became synonymous with classical Latin to such an extent that a number of humanist scholars began to assert that no Latin word or phrase should be used unless it appeared in Cicero's works, a stance criticised by Erasmus.
His voluminous correspondence, much of it addressed to his friend Atticus, has been especially influential, introducing the art of refined letter writing to European culture. Cornelius Nepos, the first-century BC biographer of Atticus, remarked that Cicero's letters contained such a wealth of detail "concerning the inclinations of leading men, the faults of the generals, and the revolutions in the government" that their reader had little need for a history of the period.
Among Cicero's admirers were Desiderius Erasmus, Martin Luther, and John Locke. Following the invention of Johannes Gutenberg's printing press, De Officiis was the second book printed in Europe, after the Gutenberg Bible. Scholars note Cicero's influence on the rebirth of religious toleration in the 17th century.
Cicero was especially popular with the Philosophes of the 18th century, including Edward Gibbon, Diderot, David Hume, Montesquieu, and Voltaire. Gibbon wrote of his first experience reading the author's collective works thus: "I tasted the beauty of the language; I breathed the spirit of freedom; and I imbibed from his precepts and examples the public and private sense of a man...after finishing the great author, a library of eloquence and reason, I formed a more extensive plan of reviewing the Latin classics..."
Voltaire called Cicero "the greatest as well as the most elegant of Roman philosophers" and even staged a play based on Cicero's role in the Catilinarian conspiracy, called Rome Sauvée, ou Catilina, to "make young people who go to the theatre acquainted with Cicero." Voltaire was spurred to pen the drama as a rebuff to his rival Claude Prosper Jolyot de Crébillon's own play Catilina, which had portrayed Cicero as a coward and villain who hypocritically married his own daughter to Catiline.
Montesquieu produced his "Discourse on Cicero" in 1717, in which he heaped praise on the author because he rescued "philosophy from the hands of scholars, and freed it from the confusion of a foreign language". Montesquieu went on to declare that Cicero was "of all the ancients, the one who had the most personal merit, and whom I would prefer to resemble."
Internationally, Cicero the republican inspired the Founding Fathers of the United States and the revolutionaries of the French Revolution. John Adams said, "As all the ages of the world have not produced a greater statesman and philosopher united than Cicero, his authority should have great weight." Thomas Jefferson named Cicero as one of a handful of major figures who contributed to a tradition "of public right" that informed his draft of the Declaration of Independence and shaped American understandings of "the common sense" basis for the right of revolution. Camille Desmoulins said of the French republicans in 1789 that they were "mostly young people who, nourished by the reading of Cicero at school, had become passionate enthusiasts for liberty".
Jim Powell starts his book on the history of liberty with the sentence: "Marcus Tullius Cicero expressed principles that became the bedrock of liberty in the modern world."
Conversely, no other ancient personality has inspired as much venomous dislike as Cicero, especially in more modern times. His commitment to the values of the Republic accommodated a hatred of the poor and persistent opposition to the advocates and mechanisms of popular representation. Friedrich Engels referred to him as "the most contemptible scoundrel in history" for upholding republican "democracy" while at the same time denouncing land and class reforms. Cicero has faced criticism for exaggerating the democratic qualities of republican Rome, and for defending the Roman oligarchy against the popular reforms of Caesar. Michael Parenti admits Cicero's abilities as an orator, but finds him a vain, pompous and hypocritical personality who, when it suited him, could show public support for popular causes that he privately despised. Parenti presents Cicero's prosecution of the Catiline conspiracy as legally flawed at the least, and possibly unlawful.
Cicero also had an influence on modern astronomy. Nicolaus Copernicus, searching for ancient views on earth motion, said that he "first ... found in Cicero that Hicetas supposed the earth to move."
Notably, "Cicero" was the name attributed to size 12 font in typesetting table drawers. For ease of reference, type sizes 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, and 20 were all given different names.
Cicero was declared a righteous pagan by the Early Church, and therefore many of his works were deemed worthy of preservation. Subsequent Roman and medieval Christian writers quoted liberally from his works De re publica (On the Commonwealth) and De Legibus (On the Laws), and much of his work has been recreated from these surviving fragments. Cicero also articulated an early, abstract conceptualization of rights, based on ancient law and custom. Of Cicero's books, six on rhetoric have survived, as well as parts of seven on philosophy. Of his speeches, 88 were recorded, but only 52 survive.
Cicero's great repute in Italy has led to numerous ruins being identified as having belonged to him, though none has been substantiated with absolute certainty. In Formia, two Roman-era ruins are popularly believed to be Cicero's mausoleum, the Tomba di Cicerone, and the villa where he was assassinated in 43 BC. The latter building is built around a central hall with Doric columns and a coffered vault, with a separate nymphaeum, on five acres of land near Formia. A modern villa was built on the site after the Rubino family purchased the land from Ferdinand II of the Two Sicilies in 1868. Cicero's supposed tomb is a 24-meter (79 ft) tall tower on an opus quadratum base on the ancient Via Appia outside of Formia. Some suggest that it is not in fact Cicero's tomb, but a monument built on the spot where Cicero was intercepted and assassinated while trying to reach the sea.
In Pompeii, a large villa excavated in the mid-18th century just outside the Herculaneum Gate was widely believed to have been Cicero's, who was known to have owned a holiday villa in Pompeii that he called his Pompeianum. The villa was stripped of its fine frescoes and mosaics and then re-buried after 1763; it has yet to be re-excavated. However, the excavators' contemporaneous descriptions of the building conflict with Cicero's own references to his Pompeianum, making it unlikely that the villa was actually his.
In Rome, the location of Cicero's house has been roughly identified from excavations of the Republican-era stratum on the northwestern slope of the Palatine Hill. Cicero's domus has long been known to have stood in the area, according to his own descriptions and those of later authors, but there is some debate about whether it stood near the base of the hill, very close to the Roman Forum, or nearer to the summit. During his life the area was the most desirable in Rome, densely occupied with Patrician houses including the Domus Publica of Julius Caesar and the home of Cicero's mortal enemy Clodius.
In Dante's 1320 poem the Divine Comedy, the author encounters Cicero, among other philosophers, in Limbo. Ben Jonson dramatised the conspiracy of Catiline in his play Catiline His Conspiracy, featuring Cicero as a character. Cicero also appears as a minor character in William Shakespeare's play Julius Caesar.
Cicero was portrayed on the motion picture screen by British actor Alan Napier in the 1953 film Julius Caesar, based on Shakespeare's play. He has also been played by such noted actors as Michael Hordern (in Cleopatra), and André Morell (in the 1970 Julius Caesar). Most recently, Cicero was portrayed by David Bamber in the HBO series Rome (2005–2007) and appeared in both seasons.
In the historical novel series Masters of Rome, Colleen McCullough presents a not-so-flattering depiction of Cicero's career, showing him struggling with an inferiority complex and vanity, morally flexible and fatally indiscreet, while his rival Julius Caesar is shown in a more approving light. Cicero is portrayed as a hero in the novel A Pillar of Iron by Taylor Caldwell (1965). Robert Harris' novels Imperium, Lustrum (published under the name Conspirata in the United States) and Dictator comprise a three-part series based on the life of Cicero. In these novels Cicero's character is depicted in a more favorable way than in those of McCullough, with his positive traits equaling or outweighing his weaknesses (while conversely Caesar is depicted as more sinister than in McCullough). Cicero is a major recurring character in the Roma Sub Rosa series of mystery novels by Steven Saylor. He also appears several times as a peripheral character in John Maddox Roberts' SPQR series.
Samuel Barnett portrays Cicero in a 2017 audio drama series pilot produced by Big Finish Productions. A full series was released the following year. All episodes are written by David Llewellyn and directed and produced by Scott Handcock.
Works by Cicero
Biographies and descriptions of Cicero's time
Plutarch's biography of Cicero contained in the Parallel Lives
"title": "Legacy"
},
{
"paragraph_id": 66,
"text": "Cicero also had an influence on modern astronomy. Nicolaus Copernicus, searching for ancient views on earth motion, said that he \"first ... found in Cicero that Hicetas supposed the earth to move.\"",
"title": "Legacy"
},
{
"paragraph_id": 67,
"text": "Notably, \"Cicero\" was the name attributed to size 12 font in typesetting table drawers. For ease of reference, type sizes 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, and 20 were all given different names.",
"title": "Legacy"
},
{
"paragraph_id": 68,
"text": "Cicero was declared a righteous pagan by the Early Church, and therefore many of his works were deemed worthy of preservation. Subsequent Roman and medieval Christian writers quoted liberally from his works De re publica (On the Commonwealth) and De Legibus (On the Laws), and much of his work has been recreated from these surviving fragments. Cicero also articulated an early, abstract conceptualization of rights, based on ancient law and custom. Of Cicero's books, six on rhetoric have survived, as well as parts of seven on philosophy. Of his speeches, 88 were recorded, but only 52 survive.",
"title": "Works"
},
{
"paragraph_id": 69,
"text": "Cicero's great repute in Italy has led to numerous ruins being identified as having belonged to him, though none have been substantiated with absolute certainty. In Formia, two Roman-era ruins are popularly believed to be Cicero's mausoleum, the Tomba di Cicerone, and the villa where he was assassinated in 43 BC. The latter building is centered around a central hall with Doric columns and a coffered vault, with a separate nymphaeum, on five acres of land near Formia. A modern villa was built on the site after the Rubino family purchased the land from Ferdinand II of the Two Sicilies in 1868. Cicero's supposed tomb is a 24-meter (79 feet) tall tower on an opus quadratum base on the ancient Via Appia outside of Formia. Some suggest that it is not in fact Cicero's tomb, but a monument built on the spot where Cicero was intercepted and assassinated while trying to reach the sea.",
"title": "In archaeology"
},
{
"paragraph_id": 70,
"text": "In Pompeii, a large villa excavated in the mid 18th century just outside the Herculaneum Gate was widely believed to have been Cicero's, who was known to have owned a holiday villa in Pompeii he called his Pompeianum. The villa was stripped of its fine frescoes and mosaics and then re-buried after 1763 – it has yet to be re-excavated. However, contemporaneous descriptions of the building from the excavators combined with Cicero's own references to his Pompeianum differ, making it unlikely that it is Cicero's villa.",
"title": "In archaeology"
},
{
"paragraph_id": 71,
"text": "In Rome, the location of Cicero's house has been roughly identified from excavations of the Republican-era stratum on the northwestern slope of the Palatine Hill. Cicero's domus has long been known to have stood in the area, according to his own descriptions and those of later authors, but there is some debate about whether it stood near the base of the hill, very close to the Roman Forum, or nearer to the summit. During his life the area was the most desirable in Rome, densely occupied with Patrician houses including the Domus Publica of Julius Caesar and the home of Cicero's mortal enemy Clodius.",
"title": "In archaeology"
},
{
"paragraph_id": 72,
"text": "In Dante's 1320 poem the Divine Comedy, the author encounters Cicero, among other philosophers, in Limbo. Ben Jonson dramatised the conspiracy of Catiline in his play Catiline His Conspiracy, featuring Cicero as a character. Cicero also appears as a minor character in William Shakespeare's play Julius Caesar.",
"title": "Notable fictional portrayals"
},
{
"paragraph_id": 73,
"text": "Cicero was portrayed on the motion picture screen by British actor Alan Napier in the 1953 film Julius Caesar, based on Shakespeare's play. He has also been played by such noted actors as Michael Hordern (in Cleopatra), and André Morell (in the 1970 Julius Caesar). Most recently, Cicero was portrayed by David Bamber in the HBO series Rome (2005–2007) and appeared in both seasons.",
"title": "Notable fictional portrayals"
},
{
"paragraph_id": 74,
"text": "In the historical novel series Masters of Rome, Colleen McCullough presents a not-so-flattering depiction of Cicero's career, showing him struggling with an inferiority complex and vanity, morally flexible and fatally indiscreet, while his rival Julius Caesar is shown in a more approving light. Cicero is portrayed as a hero in the novel A Pillar of Iron by Taylor Caldwell (1965). Robert Harris' novels Imperium, Lustrum (published under the name Conspirata in the United States) and Dictator comprise a three-part series based on the life of Cicero. In these novels Cicero's character is depicted in a more favorable way than in those of McCullough, with his positive traits equaling or outweighing his weaknesses (while conversely Caesar is depicted as more sinister than in McCullough). Cicero is a major recurring character in the Roma Sub Rosa series of mystery novels by Steven Saylor. He also appears several times as a peripheral character in John Maddox Roberts' SPQR series.",
"title": "Notable fictional portrayals"
},
{
"paragraph_id": 75,
"text": "Samuel Barnett portrays Cicero in a 2017 audio drama series pilot produced by Big Finish Productions. A full series was released the following year. All episodes are written by David Llewellyn and directed and produced by Scott Handcock.",
"title": "Notable fictional portrayals"
},
{
"paragraph_id": 76,
"text": "Works by Cicero",
"title": "External links"
},
{
"paragraph_id": 77,
"text": "Biographies and descriptions of Cicero's time",
"title": "External links"
},
{
"paragraph_id": 78,
"text": "Plutarch's biography of Cicero contained in the Parallel Lives",
"title": "External links"
}
]
| Marcus Tullius Cicero was a Roman statesman, lawyer, scholar, philosopher, writer and Academic skeptic, who tried to uphold optimate principles during the political crises that led to the establishment of the Roman Empire. His extensive writings include treatises on rhetoric, philosophy and politics. He is considered one of Rome's greatest orators and prose stylists and the innovator of what became known as "Ciceronian rhetoric". Cicero was educated in Rome and in Greece. He came from a wealthy municipal family of the Roman equestrian order, and served as consul in 63 BC. His influence on the Latin language was immense. He wrote more than three-quarters of extant Latin literature that is known to have existed in his lifetime, and it has been said that subsequent prose was either a reaction against or a return to his style, not only in Latin but in European languages up to the 19th century. Cicero introduced into Latin the arguments of the chief schools of Hellenistic philosophy and created a large amount of Latin philosophical vocabulary via lexical innovation, almost 150 terms of which were introduced through the translation of Greek philosophical terms, demonstrating himself to be both an adept scholar of philosophy and a skilled translator. Though he was an accomplished orator and successful lawyer, Cicero believed his political career was his most important achievement. It was during his consulship that the Catiline conspiracy attempted to overthrow the government through an attack on the city by outside forces, and Cicero suppressed the revolt by summarily and controversially executing five conspirators without trial. During the chaotic middle period of the first century BC, marked by civil wars and the dictatorship of Julius Caesar, Cicero championed a return to the traditional republican government. Following Caesar's death, Cicero became an enemy of Mark Antony in the ensuing power struggle, attacking him in a series of speeches. He was proscribed as an enemy of the state by the Second Triumvirate and consequently executed by soldiers operating on their behalf in 43 BC, having been intercepted during an attempted flight from the Italian peninsula. His severed hands and head were then, as a final revenge of Mark Antony, displayed on the Rostra. Petrarch's rediscovery of Cicero's letters is often credited with initiating the 14th-century Renaissance in public affairs, humanism, and classical Roman culture. According to Polish historian Tadeusz Zieliński, "the Renaissance was above all things a revival of Cicero, and only after him and through him of the rest of Classical antiquity." The peak of Cicero's authority and prestige came during the 18th-century Enlightenment, and his impact on leading Enlightenment thinkers and political theorists such as John Locke, David Hume, Montesquieu, and Edmund Burke was substantial. His works rank among the most influential in global culture, and today still constitute one of the most important bodies of primary material for the writing and revision of Roman history, especially the last days of the Roman Republic. | 2001-10-13T06:08:37Z | 2023-12-14T20:32:22Z | [
"Template:Additional citation needed",
"Template:S-end",
"Template:Convert",
"Template:Div col",
"Template:Reflist",
"Template:Encyclopaedia Iranica",
"Template:Wikisourcelang",
"Template:EB1911 poster",
"Template:Efn",
"Template:Blockquote",
"Template:Portal",
"Template:Refbegin",
"Template:Short description",
"Template:Sfn",
"Template:Cite encyclopedia",
"Template:Wikisource author",
"Template:StandardEbooks",
"Template:Ethics",
"Template:Page needed",
"Template:ISBN",
"Template:Authority control",
"Template:About",
"Template:Infobox person",
"Template:Div col end",
"Template:Commons",
"Template:Wikiversity",
"Template:Internet Archive author",
"Template:Sep entry",
"Template:Ancient Rome topics",
"Template:Notelist",
"Template:Cite web",
"Template:Harvnb",
"Template:Catholic virtue ethics",
"Template:Cite news",
"Template:Tcmdb title",
"Template:S-aft",
"Template:Republicanism sidebar",
"Template:Cite journal",
"Template:OEtymD",
"Template:Plutarch",
"Template:IPA-la",
"Template:Main",
"Template:Lang-la",
"Template:Use dmy dates",
"Template:See",
"Template:Cicero",
"Template:S-start",
"Template:Social and political philosophy",
"Template:Citation needed",
"Template:Wikiquote",
"Template:Cite book",
"Template:Platonists",
"Template:Lang",
"Template:Rhetoric",
"Template:Snd",
"Template:S-ttl",
"Template:Respell",
"Template:Gutenberg author",
"Template:S-bef",
"Template:Cn",
"Template:Refend",
"Template:Library resources box",
"Template:Librivox author",
"Template:S-off",
"Template:Ancient Rome and the fall of the Republic",
"Template:IPAc-en",
"Template:Circa"
]
| https://en.wikipedia.org/wiki/Cicero |
6,047 | Consul | Consul (abbrev. cos.; Latin plural consules) was the title of one of the two chief magistrates of the Roman Republic, and subsequently also an important title under the Roman Empire. The title was used in other European city-states through antiquity and the Middle Ages, in particular in the Republics of Genoa and Pisa, then revived in modern states, notably in the First French Republic. The related adjective is consular, from the Latin consularis.
This usage contrasts with modern terminology, where a consul is a type of diplomat.
A consul held the highest elected political office of the Roman Republic (509 to 27 BC), and ancient Romans considered the consulship the highest level of the cursus honorum (an ascending sequence of public offices to which politicians aspired). Consuls were elected to office and held power for one year. There were always two consuls in power at any time.
It was not uncommon for an organization under Roman private law to copy the terminology of state and city institutions for its own statutory agents. The founding statute, or contract, of such an organisation was called lex, 'law'. The people elected each year were patricians, members of the upper class.
While many cities, including the Gallic states and the Carthaginian Republic, had a double-headed chief magistracy, another title was often used, such as the Punic sufet, Duumvir, or native styles like Meddix.
The city-state of Genoa, unlike ancient Rome, bestowed the title of consul on various state officials, not necessarily restricted to the highest. Among these were Genoese officials stationed in various Mediterranean ports, whose role included helping Genoese merchants and sailors in difficulties with the local authorities. Great Britain reciprocated by appointing consuls to Genoa from 1722. This institution, with its name, was later emulated by other powers and is reflected in the modern usage of the word (see Consul (representative)).
In addition to the Genoese Republic, the Republic of Pisa also adopted the title of consul in the early stages of its government. The Consulate of the Republic of Pisa was the major government institution in Pisa between the 11th and 12th centuries. Although the office lost ground within the government from 1190 onward in favor of the Podestà, for some periods of the 13th century some citizens were again elected as consuls.
Throughout most of southern France, a consul (French: consul or consule) was an office equivalent to the échevins [fr] of the north and roughly similar to English aldermen. The most prominent were those of Bordeaux and Toulouse, which came to be known as jurats and capitouls, respectively. The capitouls of Toulouse were granted transmittable nobility. In many other smaller towns the first consul was the equivalent of a mayor today, assisted by a variable number of secondary consuls and jurats. His main task was to levy and collect tax.
The Dukes of Gaeta also often used the title of "consul" in its Greek form "Hypatos" (see List of Hypati and Dukes of Gaeta).
After Napoleon Bonaparte staged a coup against the Directory government in November 1799, the French Republic adopted a constitution which conferred executive powers upon three consuls, elected for a period of ten years. In reality, the first consul, Bonaparte, dominated his two colleagues and held supreme power, soon making himself consul for life (1802) and eventually, in 1804, emperor.
The office was held by:
The short-lived Bolognese Republic, proclaimed in 1796 as a French client republic in the Central Italian city of Bologna, had a government consisting of nine consuls and its head of state was the Presidente del Magistrato, i.e., chief magistrate, a presiding office held for four months by one of the consuls. Bologna already had consuls at some parts of its Medieval history.
The French-sponsored Roman Republic (15 February 1798 – 23 June 1800) was headed by multiple consuls:
Consular rule was interrupted by the Neapolitan occupation (27 November – 12 December 1798), which installed a Provisional Government:
Rome was occupied by France (11 July – 28 September 1799) and again by Naples (30 September 1799 – 23 June 1800), bringing an end to the Roman Republic.
Among the many petty local republics that were formed during the first year of the Greek Revolution, prior to the creation of a unified Provisional Government at the First National Assembly at Epidaurus, were:
Note: in Greek, the term for "consul" is "hypatos" (ὕπατος), which translates as "supreme one", and hence does not necessarily imply a joint office.
In between a series of juntas and various other short-lived regimes, the young republic was governed by "consuls of the republic", with two consuls alternating in power every 4 months:
After a few presidents of the Provisional Junta, there were again consuls of the republic, 14 March 1841 – 13 March 1844 (ruling jointly, but occasionally styled "first consul", "second consul"): Carlos Antonio López Ynsfrán (b. 1792 – d. 1862) and Mariano Roque Alonzo Romero (d. 1853), the last of the aforementioned juntistas and Commandant-General of the Army. Thereafter all republican rulers were styled "president".
In modern terminology, a consul is a type of diplomat. The American Heritage Dictionary defines consul as "an official appointed by a government to reside in a foreign country and represent its interests there." The Devil's Dictionary defines Consul as "in American politics, a person who having failed to secure an office from the people is given one by the Administration on condition that he leave the country".
In most governments, the consul is the head of the consular section of an embassy, and is responsible for all consular services such as immigrant and non-immigrant visas, passports, and citizen services for expatriates living or traveling in the host country.
A less common modern usage is when the consul of one country takes a governing role in the host country.
Differently named, but same function
Modern UN System
Specific | [
{
"paragraph_id": 0,
"text": "Consul (abbrev. cos.; Latin plural consules) was the title of one of the two chief magistrates of the Roman Republic, and subsequently also an important title under the Roman Empire. The title was used in other European city-states through antiquity and the Middle Ages, in particular in the Republics of Genoa and Pisa, then revived in modern states, notably in the First French Republic. The related adjective is consular, from the Latin consularis.",
"title": ""
},
{
"paragraph_id": 1,
"text": "This usage contrasts with modern terminology, where a consul is a type of diplomat.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A consul held the highest elected political office of the Roman Republic (509 to 27 BC), and ancient Romans considered the consulship the highest level of the cursus honorum (an ascending sequence of public offices to which politicians aspired). Consuls were elected to office and held power for one year. There were always two consuls in power at any time.",
"title": "Roman consul"
},
{
"paragraph_id": 3,
"text": "It was not uncommon for an organization under Roman private law to copy the terminology of state and city institutions for its own statutory agents. The founding statute, or contract, of such an organisation was called lex, 'law'. The people elected each year were patricians, members of the upper class.",
"title": "Other uses in antiquity"
},
{
"paragraph_id": 4,
"text": "While many cities, including the Gallic states and the Carthaginian Republic, had a double-headed chief magistracy, another title was often used, such as the Punic sufet, Duumvir, or native styles like Meddix.",
"title": "Other uses in antiquity"
},
{
"paragraph_id": 5,
"text": "The city-state of Genoa, unlike ancient Rome, bestowed the title of consul on various state officials, not necessarily restricted to the highest. Among these were Genoese officials stationed in various Mediterranean ports, whose role included helping Genoese merchants and sailors in difficulties with the local authorities. Great Britain reciprocated by appointing consuls to Genoa from 1722. This institution, with its name, was later emulated by other powers and is reflected in the modern usage of the word (see Consul (representative)).",
"title": "Medieval city-states, communes and municipalities"
},
{
"paragraph_id": 6,
"text": "In addition to the Genoese Republic, the Republic of Pisa also took the form of \"Consul\" in the early stages of its government. The Consulate of the Republic of Pisa was the major government institution present in Pisa between the 11th and 12th centuries. Despite losing space within the government since 1190 in favor of the Podestà, for some periods of the 13th century some citizens were again elected as consuls.",
"title": "Medieval city-states, communes and municipalities"
},
{
"paragraph_id": 7,
"text": "Throughout most of southern France, a consul (French: consul or consule) was an office equivalent to the échevins [fr] of the north and roughly similar with English aldermen. The most prominent were those of Bordeaux and Toulouse, which came to be known as jurats and capitouls, respectively. The capitouls of Toulouse were granted transmittable nobility. In many other smaller towns the first consul was the equivalent of a mayor today, assisted by a variable number of secondary consuls and jurats. His main task was to levy and collect tax.",
"title": "Medieval city-states, communes and municipalities"
},
{
"paragraph_id": 8,
"text": "The Dukes of Gaeta often used also the title of \"consul\" in its Greek form \"Hypatos\" (see List of Hypati and Dukes of Gaeta).",
"title": "Medieval city-states, communes and municipalities"
},
{
"paragraph_id": 9,
"text": "",
"title": "Medieval city-states, communes and municipalities"
},
{
"paragraph_id": 10,
"text": "After Napoleon Bonaparte staged a coup against the Directory government in November 1799, the French Republic adopted a constitution which conferred executive powers upon three consuls, elected for a period of ten years. In reality, the first consul, Bonaparte, dominated his two colleagues and held supreme power, soon making himself consul for life (1802) and eventually, in 1804, emperor.",
"title": "French Revolution"
},
{
"paragraph_id": 11,
"text": "The office was held by:",
"title": "French Revolution"
},
{
"paragraph_id": 12,
"text": "The short-lived Bolognese Republic, proclaimed in 1796 as a French client republic in the Central Italian city of Bologna, had a government consisting of nine consuls and its head of state was the Presidente del Magistrato, i.e., chief magistrate, a presiding office held for four months by one of the consuls. Bologna already had consuls at some parts of its Medieval history.",
"title": "French Revolution"
},
{
"paragraph_id": 13,
"text": "The French-sponsored Roman Republic (15 February 1798 – 23 June 1800) was headed by multiple consuls:",
"title": "French Revolution"
},
{
"paragraph_id": 14,
"text": "Consular rule was interrupted by the Neapolitan occupation (27 November – 12 December 1798), which installed a Provisional Government:",
"title": "French Revolution"
},
{
"paragraph_id": 15,
"text": "Rome was occupied by France (11 July – 28 September 1799) and again by Naples (30 September 1799 – 23 June 1800), bringing an end to the Roman Republic.",
"title": "French Revolution"
},
{
"paragraph_id": 16,
"text": "Among the many petty local republics that were formed during the first year of the Greek Revolution, prior to the creation of a unified Provisional Government at the First National Assembly at Epidaurus, were:",
"title": "Revolutionary Greece, 1821"
},
{
"paragraph_id": 17,
"text": "Note: in Greek, the term for \"consul\" is \"hypatos\" (ὕπατος), which translates as \"supreme one\", and hence does not necessarily imply a joint office.",
"title": "Revolutionary Greece, 1821"
},
{
"paragraph_id": 18,
"text": "In between a series of juntas and various other short-lived regimes, the young republic was governed by \"consuls of the republic\", with two consuls alternating in power every 4 months:",
"title": "Paraguay, 1813–1844"
},
{
"paragraph_id": 19,
"text": "After a few presidents of the Provisional Junta, there were again consuls of the republic, 14 March 1841 – 13 March 1844 (ruling jointly, but occasionally styled \"first consul\", \"second consul\"): Carlos Antonio López Ynsfrán (b. 1792 – d. 1862) + Mariano Roque Alonzo Romero (d. 1853) (the lasts of the aforementioned juntistas, Commandant-General of the Army) Thereafter all republican rulers were styled \"president\".",
"title": "Paraguay, 1813–1844"
},
{
"paragraph_id": 20,
"text": "In modern terminology, a consul is a type of diplomat. The American Heritage Dictionary defines consul as \"an official appointed by a government to reside in a foreign country and represent its interests there.\" The Devil's Dictionary defines Consul as \"in American politics, a person who having failed to secure an office from the people is given one by the Administration on condition that he leave the country\".",
"title": "Modern uses of the term"
},
{
"paragraph_id": 21,
"text": "In most governments, the consul is the head of the consular section of an embassy, and is responsible for all consular services such as immigrant and non-immigrant visas, passports, and citizen services for expatriates living or traveling in the host country.",
"title": "Modern uses of the term"
},
{
"paragraph_id": 22,
"text": "A less common modern usage is when the consul of one country takes a governing role in the host country.",
"title": "Modern uses of the term"
},
{
"paragraph_id": 23,
"text": "Differently named, but same function",
"title": "See also"
},
{
"paragraph_id": 24,
"text": "Modern UN System",
"title": "See also"
},
{
"paragraph_id": 25,
"text": "Specific",
"title": "Sources and references"
}
]
| Consul was the title of one of the two chief magistrates of the Roman Republic, and subsequently also an important title under the Roman Empire. The title was used in other European city-states through antiquity and the Middle Ages, in particular in the Republics of Genoa and Pisa, then revived in modern states, notably in the First French Republic. The related adjective is consular, from the Latin consularis. This usage contrasts with modern terminology, where a consul is a type of diplomat. | 2001-08-08T21:39:55Z | 2023-07-30T12:08:54Z | [
"Template:Wiktionary",
"Template:Lang-fr",
"Template:Lang",
"Template:Main",
"Template:Reflist",
"Template:Short description",
"Template:About",
"Template:Cite journal",
"Template:Cite book",
"Template:Main article",
"Template:Interlanguage link multi"
]
| https://en.wikipedia.org/wiki/Consul |
6,050 | List of equations in classical mechanics | Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space.
Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these.
This article lists equations from Newtonian mechanics; see analytical mechanics for the more general formulation of classical mechanics (which includes Lagrangian and Hamiltonian mechanics).
Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to U:
In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use θ, but this does not have to be the polar angle used in polar coordinate systems. The unit axial vector
defines the axis of rotation, $\mathbf{\hat{e}}_r$ = unit vector in direction of r, $\mathbf{\hat{e}}_\theta$ = unit vector tangential to the angle.
The precession angular speed of a spinning top is given by:
where w is the weight of the spinning flywheel.
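In standard treatments, the precession rate of a fast-spinning top is the gravitational torque divided by the spin angular momentum, Ω = wr / (Iω). The sketch below evaluates this numerically; the function name and the values used are illustrative assumptions, not taken from the article.

```python
def precession_rate(weight, r, inertia, omega_spin):
    """Precession angular speed (rad/s) of a fast-spinning top.

    weight     -- w = m*g, weight of the flywheel (N)
    r          -- distance from the pivot to the center of mass (m)
    inertia    -- moment of inertia about the spin axis (kg*m^2)
    omega_spin -- spin angular speed (rad/s)
    """
    # Gravitational torque (w*r) divided by spin angular momentum (I*omega_spin).
    return weight * r / (inertia * omega_spin)

m, g = 0.5, 9.81   # illustrative mass (kg) and gravitational acceleration (m/s^2)
print(precession_rate(m * g, r=0.05, inertia=2.5e-4, omega_spin=300.0))  # ~3.27 rad/s
```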
The mechanical work done by an external agent on a system is equal to the change in kinetic energy of the system:
The work done W by an external agent which exerts a force F (at r) and torque τ on an object along a curved path C is:
where θ is the angle of rotation about an axis defined by a unit vector n.
The change in kinetic energy for an object initially traveling at speed $v_0$ and later at speed $v$ is:
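In standard form this change is $\Delta E_k = \tfrac{1}{2}m(v^2 - v_0^2)$, which by the work-energy theorem equals the net work done on the object. A minimal sketch with made-up values, checking the two sides against each other for a constant force:

```python
m, v0, F, d = 2.0, 3.0, 4.0, 5.0        # mass (kg), initial speed (m/s), force (N), distance (m)

work = F * d                             # work done by a constant force along the path (J)
v = (v0**2 + 2 * (F / m) * d) ** 0.5     # final speed from v^2 = v0^2 + 2*a*d
delta_ke = 0.5 * m * (v**2 - v0**2)      # change in kinetic energy (J)

print(work, delta_ke)                    # both approximately 20.0 J
```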
For a stretched spring fixed at one end obeying Hooke's law, the elastic potential energy is
where r2 and r1 are collinear coordinates of the free end of the spring, in the direction of the extension/compression, and k is the spring constant.
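In standard form this elastic potential energy is $U = \tfrac{1}{2}k(r_2 - r_1)^2$. A minimal sketch with illustrative numbers (the function name is an assumption, not from the article):

```python
def spring_energy(k, r1, r2):
    """Elastic potential energy (J) of a Hooke's-law spring.

    k  -- spring constant (N/m)
    r1 -- coordinate of the free end at natural length (m)
    r2 -- coordinate of the free end when stretched or compressed (m)
    """
    extension = r2 - r1
    return 0.5 * k * extension**2

print(spring_energy(k=200.0, r1=0.10, r2=0.13))  # about 0.09 J for a 3 cm extension
```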
Euler also worked out analogous laws of motion to those of Newton, see Euler's laws of motion. These extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is:
where I is the moment of inertia tensor.
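In conventional treatments, Euler's equation for rigid-body rotation takes the following form in a body-fixed frame:

$$ \boldsymbol{\tau} = \mathbf{I}\,\boldsymbol{\alpha} + \boldsymbol{\omega} \times \left( \mathbf{I}\,\boldsymbol{\omega} \right) $$

where $\boldsymbol{\tau}$ is the applied torque, $\boldsymbol{\omega}$ the angular velocity, and $\boldsymbol{\alpha} = d\boldsymbol{\omega}/dt$ the angular acceleration.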
The previous equations for planar motion can be used here: corollaries of momentum, angular momentum etc. can immediately follow by applying the above definitions. For any object moving in any path in a plane,
the following general results apply to the particle.
For a massive body moving in a central potential due to another object, which depends only on the radial separation between the centers of masses of the two objects, the equation of motion is:
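In standard treatments, once the angular coordinate has been eliminated using the conserved angular momentum $L$, the radial equation of motion for two bodies of reduced mass $\mu$ interacting through a central potential $V(r)$ reads:

$$ \mu \frac{d^{2}r}{dt^{2}} - \frac{L^{2}}{\mu r^{3}} = -\frac{\partial V(r)}{\partial r} $$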
These equations can be used only when acceleration is constant. If acceleration is not constant then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration (see above).
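A minimal sketch of the constant-acceleration relations with illustrative numbers, checking $v^2 = v_0^2 + 2a\,\Delta x$ against the other two equations:

```python
x0, v0, a, t = 0.0, 2.0, 1.5, 4.0      # initial position (m), initial speed (m/s),
                                       # acceleration (m/s^2), elapsed time (s)
v = v0 + a * t                         # v = v0 + a*t
x = x0 + v0 * t + 0.5 * a * t**2       # x = x0 + v0*t + (1/2)*a*t^2

print(v, x)                            # 8.0 20.0
print(v**2, v0**2 + 2 * a * (x - x0))  # both print 64.0
```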
For classical (Galileo-Newtonian) mechanics, the transformation law from one inertial or accelerating (including rotation) frame (reference frame traveling at constant velocity - including zero) to another is the Galilean transform.
Unprimed quantities refer to position, velocity and acceleration in one frame F; primed quantities refer to position, velocity and acceleration in another frame F' moving at translational velocity V or angular velocity Ω relative to F. Conversely F moves at velocity (−V or −Ω) relative to F'. The situation is similar for relative accelerations.
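A minimal sketch of the Galilean transform for pure translation at constant velocity V (no rotation), using illustrative one-dimensional values:

```python
V = 3.0                             # velocity of frame F' relative to F (m/s)
x, v, a, t = 10.0, 5.0, 0.0, 2.0    # particle state in frame F at time t

x_prime = x - V * t                 # position in F'
v_prime = v - V                     # velocity in F'
a_prime = a                         # acceleration is invariant under a Galilean boost

print(x_prime, v_prime, a_prime)    # 4.0 2.0 0.0
```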
SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator respectively. | [
{
"paragraph_id": 0,
"text": "Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these.",
"title": ""
},
{
"paragraph_id": 2,
"text": "This article lists equations from Newtonian mechanics, see analytical mechanics for the more general formulation of classical mechanics (which includes Lagrangian and Hamiltonian mechanics).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to U:",
"title": "Classical mechanics"
},
{
"paragraph_id": 4,
"text": "In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use θ, but this does not have to be the polar angle used in polar coordinate systems. The unit axial vector",
"title": "Kinematics"
},
{
"paragraph_id": 5,
"text": "defines the axis of rotation, e ^ r {\\displaystyle \\scriptstyle \\mathbf {\\hat {e}} _{r}} = unit vector in direction of r, e ^ θ {\\displaystyle \\scriptstyle \\mathbf {\\hat {e}} _{\\theta }} = unit vector tangential to the angle.",
"title": "Kinematics"
},
{
"paragraph_id": 6,
"text": "The precession angular speed of a spinning top is given by:",
"title": "Dynamics"
},
{
"paragraph_id": 7,
"text": "where w is the weight of the spinning flywheel.",
"title": "Dynamics"
},
{
"paragraph_id": 8,
"text": "The mechanical work done by an external agent on a system is equal to the change in kinetic energy of the system:",
"title": "Energy"
},
{
"paragraph_id": 9,
"text": "The work done W by an external agent which exerts a force F (at r) and torque τ on an object along a curved path C is:",
"title": "Energy"
},
{
"paragraph_id": 10,
"text": "where θ is the angle of rotation about an axis defined by a unit vector n.",
"title": "Energy"
},
{
"paragraph_id": 11,
"text": "The change in kinetic energy for an object initially traveling at speed v 0 {\\displaystyle v_{0}} and later at speed v {\\displaystyle v} is:",
"title": "Energy"
},
{
"paragraph_id": 12,
"text": "For a stretched spring fixed at one end obeying Hooke's law, the elastic potential energy is",
"title": "Energy"
},
{
"paragraph_id": 13,
"text": "where r2 and r1 are collinear coordinates of the free end of the spring, in the direction of the extension/compression, and k is the spring constant.",
"title": "Energy"
},
{
"paragraph_id": 14,
"text": "Euler also worked out analogous laws of motion to those of Newton, see Euler's laws of motion. These extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is:",
"title": "Euler's equations for rigid body dynamics"
},
{
"paragraph_id": 15,
"text": "where I is the moment of inertia tensor.",
"title": "Euler's equations for rigid body dynamics"
},
{
"paragraph_id": 16,
"text": "The previous equations for planar motion can be used here: corollaries of momentum, angular momentum etc. can immediately follow by applying the above definitions. For any object moving in any path in a plane,",
"title": "General planar motion"
},
{
"paragraph_id": 17,
"text": "the following general results apply to the particle.",
"title": "General planar motion"
},
{
"paragraph_id": 18,
"text": "For a massive body moving in a central potential due to another object, which depends only on the radial separation between the centers of masses of the two objects, the equation of motion is:",
"title": "General planar motion"
},
{
"paragraph_id": 19,
"text": "These equations can be used only when acceleration is constant. If acceleration is not constant then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration (see above).",
"title": "Equations of motion (constant acceleration)"
},
{
"paragraph_id": 20,
"text": "For classical (Galileo-Newtonian) mechanics, the transformation law from one inertial or accelerating (including rotation) frame (reference frame traveling at constant velocity - including zero) to another is the Galilean transform.",
"title": "Galilean frame transforms"
},
{
"paragraph_id": 21,
"text": "Unprimed quantities refer to position, velocity and acceleration in one frame F; primed quantities refer to position, velocity and acceleration in another frame F' moving at translational velocity V or angular velocity Ω relative to F. Conversely F moves at velocity (—V or —Ω) relative to F'. The situation is similar for relative accelerations.",
"title": "Galilean frame transforms"
},
{
"paragraph_id": 22,
"text": "SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator respectively.",
"title": "Mechanical oscillators"
}
]
| Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space. Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these. This article lists equations from Newtonian mechanics, see analytical mechanics for the more general formulation of classical mechanics. | 2001-08-21T03:13:18Z | 2023-12-13T18:29:20Z | [
"Template:Div col end",
"Template:Reflist",
"Template:Math",
"Template:Plainlist",
"Template:Harvnb",
"Template:Anchor",
"Template:See also",
"Template:Cite book",
"Template:Classical mechanics derived SI units",
"Template:Short description",
"Template:Endplainlist",
"Template:Cite web",
"Template:Citation",
"Template:Main article",
"Template:Div col"
]
| https://en.wikipedia.org/wiki/List_of_equations_in_classical_mechanics |
6,051 | Cursus honorum | The cursus honorum (Latin for 'course of honors', or more colloquially 'ladder of offices'; Latin: [ˈkʊrsʊs hɔˈnoːrũː]) was the sequential order of public offices held by aspiring politicians in the Roman Republic and the early Roman Empire. It was designed for men of senatorial rank. The cursus honorum comprised a mixture of military and political administration posts; the ultimate prize for winning election to each "rung" in the sequence was to become one of the two consuls in a given year.
These rules were altered and flagrantly ignored in the course of the last century of the Republic. For example, Gaius Marius held consulships for five years in a row between 104 BC and 100 BC. He was consul seven times in all, also serving in 107 and 86. Officially presented as opportunities for public service, the offices often became mere opportunities for self-aggrandizement. The constitutional reforms of Sulla between 82 and 79 BC required a ten-year interval before holding the same office again for another term.
To have held each office at the youngest possible age (suo anno, 'in his year') was considered a great political success. For instance, to miss out on a praetorship at 39 meant that one could not become consul at 42. Cicero expressed extreme pride not only in being a novus homo ('new man'; comparable to a "self-made man") who became consul even though none of his ancestors had ever served as a consul, but also in having become consul "in his year".
Prior to entering political life and the cursus honorum, a young man of senatorial rank was expected to serve around ten years of military duty. The years of service were intended to be mandatory in order to qualify for political office.
Advancement and honors would improve his political prospects, and a successful military career might culminate in the office of military tribune, to which 24 men were elected by the Tribal Assembly each year. The rank of military tribune is sometimes described as the first office of the cursus honorum.
The first official post was that of quaestor. Ever since the reforms of Sulla, candidates had to be at least 30 years old to hold the office. From the time of Augustus onwards, twenty quaestors served in the financial administration at Rome or as second-in-command to a governor in the provinces. They could also serve as the paymaster for a legion.
At 36 years of age, a promagistrate could stand for election to one of the aedile (pronounced /ˈiːdaɪl/ EE-dyle, from aedes, "temple edifice") positions. Of these aediles, two were plebeian and two were patrician, with the patrician aediles called curule aediles. The plebeian aediles were elected by the Plebeian Council and the curule aediles were either elected by the Tribal Assembly or appointed by the reigning consul. The aediles had administrative responsibilities in Rome. They had to take care of the temples (whence their title, from the Latin aedes, "temple"), organize games, and be responsible for the maintenance of the public buildings in Rome. Moreover, they took charge of Rome's water and food supplies; in their capacity as market superintendents, they sometimes served as judges in mercantile affairs.
The aedile was the supervisor of public works; the words "edifice" and "edification" stem from the same root. He oversaw the public works, temples and markets. Therefore, the aediles would have been in some cooperation with the current censors, who had similar or related duties. Also, they oversaw the organization of festivals and games (ludi), which made this a very sought-after office for a career-minded politician of the late Republic, as it was a good means of gaining popularity by staging spectacles.
Curule aediles were added at a later date in the 4th century BC; their duties do not differ substantially from plebeian aediles. However, unlike plebeian aediles, curule aediles were allowed certain symbols of rank—the sella curulis or 'curule chair,' for example—and only patricians could stand for election to curule aedile. This later changed, and both plebeians and patricians could stand for curule aedileship.
The elections for curule aedile were at first alternated between patricians and plebeians, until late in the 2nd century BC, when the practice was abandoned and both classes became free to run during all years.
While part of the cursus honorum, this step was optional and not required to hold future offices. Though the office was usually held after the quaestorship and before the praetorship, there are some cases with former praetors serving as aediles.
After serving either as quaestor or as aedile, a man of 39 years could run for praetor. During the reign of Augustus this requirement was lowered to 30, at the request of Gaius Maecenas. The number of praetors elected varied through history, generally increasing with time. During the republic, six or eight were generally elected each year to serve judicial functions throughout Rome and other governmental responsibilities. In the absence of the consuls, a praetor would be given command of the garrison in Rome or in Italy. Also, a praetor could exercise the functions of the consuls throughout Rome, but their main function was that of a judge. They would preside over trials involving criminal acts, grant court orders and validate "illegal" acts as acts of administering justice. A praetor was escorted by six lictors, and wielded imperium. After a term as praetor, the magistrate could serve as a provincial governor with the title of propraetor, wielding propraetor imperium, commanding the province's legions, and possessing ultimate authority within his province(s).
Two of the praetors were more prestigious than the others. The first was the Praetor Peregrinus, who was the chief judge in trials involving one or more foreigners. The other was the Praetor Urbanus, the chief judicial officer in Rome. He had the power to overturn any verdict by any other court, and served as judge in cases involving criminal charges against provincial governors. The Praetor Urbanus was not allowed to leave the city for more than ten days. If one of these two praetors was absent from Rome, the other would perform the duties of both.
The office of consul was the most prestigious of all of the offices on the cursus honorum, and represented the summit of a successful career. The minimum age was 42. Years were identified by the names of the two consuls elected for a particular year; for instance, M. Messalla et M. Pisone consulibus, "in the consulship of Messalla and Piso", dates an event to 61 BC. Consuls were responsible for the city's political agenda, commanded large-scale armies and controlled important provinces. The consuls served for only a year (a restriction intended to limit the amassing of power by individuals) and could only rule when they agreed, because each consul could veto the other's decision.
The consuls would alternate monthly as the chairman of the Senate. They also were the supreme commanders in the Roman army, with each being granted two legions during their consular year. Consuls also exercised the highest juridical power in the Republic, being the only office with the power to override the decisions of the Praetor Urbanus. Only laws and the decrees of the Senate or the People's assembly limited their powers, and only the veto of a fellow consul or a tribune of the plebs could supersede their decisions.
A consul was escorted by twelve lictors, held imperium and wore the toga praetexta. Because the consul was the highest executive office within the Republic, they had the power to veto any action or proposal by any other magistrate, save that of the Tribune of the Plebs. After a consulship, a consul was assigned one of the more important provinces and acted as the governor in the same way that a propraetor did, only owning proconsular imperium. A second consulship could only be attempted after an interval of 10 years to prevent one man holding too much power.
Although not part of the cursus honorum, upon completing a term as either praetor or consul, an officer was required to serve a term as propraetor or proconsul, respectively, in one of Rome's many provinces. These propraetors and proconsuls held near-autocratic authority within their selected province or provinces. Because each governor held equal imperium to the equivalent magistrate, they were escorted by the same number of lictors (12) and could only be vetoed by a reigning consul or praetor. Their abilities to govern were only limited by the decrees of the Senate or the people's assemblies, and the Tribune of the Plebs was unable to veto their acts as long as the governor remained at least a mile outside of Rome.
After a term as consul, the final step in the cursus honorum was the office of censor. This was the only office in the Roman Republic whose term was a period of eighteen months instead of the usual twelve. Censors were elected every five years and although the office held no military imperium, it was considered a great honour. The censors took a regular census of the people and then apportioned the citizens into voting classes on the basis of income and tribal affiliation. The censors enrolled new citizens in tribes and voting classes as well. The censors were also in charge of the membership roll of the Senate, every five years adding new senators who had been elected to the requisite offices. Censors could also remove unworthy members from the Senate. This ability was lost during the dictatorship of Sulla. Censors were also responsible for construction of public buildings and the moral status of the city.
Censors also had financial duties, in that they had to put out to tender projects that were to be financed by the state. Also, the censors were in charge of the leasing out of conquered land for public use and auction. Though this office owned no imperium, meaning no lictors for protection, they were allowed to wear the toga praetexta.
The office of Tribune of the Plebs was an important step in the political career of plebeians. Patricians could not hold the office. It was not an official step in the cursus honorum. The Tribune was an office first created to protect the right of the common man in Roman politics and served as the head of the Plebeian Council. In the mid-to-late Republic, however, plebeians were often just as, and sometimes more, wealthy and powerful than patricians. Those who held the office were granted sacrosanctity (the right to be legally protected from any physical harm), the power to rescue any plebeian from the hands of a patrician magistrate, and the right to veto any act or proposal of any magistrate, including another tribune of the people and the consuls. The tribune also had the power to exercise capital punishment against any person who interfered in the performance of his duties. The tribunes could even convene a Senate meeting and lay legislation before it and arrest magistrates. Their houses had to remain open for visitors even during the night, and they were not allowed to be more than a day's journey from Rome. Due to their unique power of sacrosanctity, the Tribune had no need for lictors for protection and owned no imperium, nor could they wear the toga praetexta. For a period after Sulla's reforms, a person who had held the office of Tribune of the Plebs could no longer qualify for any other office, and the powers of the tribunes were more limited, but these restrictions were subsequently lifted.
Another office not officially a step in the cursus honorum was the princeps senatus, an extremely prestigious office for a patrician. The princeps senatus served as the leader of the Senate and was chosen to serve a five-year term by each pair of Censors every five years. Censors could, however, confirm a princeps senatus for a period of another five years. The princeps senatus was chosen from all Patricians who had served as a Consul, with former Censors usually holding the office. The office originally granted the holder the ability to speak first at session on the topic presented by the presiding magistrate, but eventually gained the power to open and close the senate sessions, decide the agenda, decide where the session should take place, impose order and other rules of the session, meet in the name of the senate with embassies of foreign countries, and write in the name of the senate letters and dispatches. This office, like the Tribune, did not own imperium, was not escorted by lictors, and could not wear the toga praetexta.
Of all the offices within the Roman Republic, none granted as much power and authority as the position of dictator, known as the Master of the People. In times of emergency, the Senate would declare that a dictator was required, and the current consuls would appoint a dictator. This was the only decision that could not be vetoed by the Tribune of the Plebs. The dictator was the sole exception to the Roman legal principles of having multiple magistrates in the same office and being legally able to be held to answer for actions in office. Essentially by definition, only one dictator could serve at a time, and no dictator could ever be held legally responsible for any action during his time in office for any reason.
The dictator was the highest magistrate in degree of imperium and was attended by twenty-four lictors (as were the former Kings of Rome). Although his term lasted only six months instead of twelve (except for the Dictatorships of Sulla and Caesar), all other magistrates reported to the dictator (except for the tribunes of the plebs - although they could not veto any of the dictator's acts), granting the dictator absolute authority in both civil and military matters throughout the Republic. The dictator was free from the control of the Senate in all that he did, could execute anyone without a trial for any reason, and could ignore any law in the performance of his duties. The dictator was the sole magistrate under the Republic that was truly independent in discharging his duties. All of the other offices were extensions of the Senate's executive authority and thus answerable to the Senate. Since the dictator exercised his own authority, he did not suffer this limitation, which was the cornerstone of the office's power.
When a dictator entered office, he appointed to serve as his second-in-command a magister equitum, the Master of the Horse, whose office ceased to exist once the dictator left office. The magister equitum held praetorian imperium, was attended by six lictors, and was charged with assisting the dictator in managing the State. When the dictator was away from Rome, the magister equitum usually remained behind to administer the city. The magister equitum, like the dictator, had unchallengeable authority in all civil and military affairs, with his decisions only being overturned by the dictator himself.
The dictatorship was definitively abolished in 44 BC after the assassination of Gaius Julius Caesar (Lex Antonia). | [
{
"paragraph_id": 0,
"text": "The cursus honorum (Latin for 'course of honors', or more colloquially 'ladder of offices'; Latin: [ˈkʊrsʊs hɔˈnoːrũː]) was the sequential order of public offices held by aspiring politicians in the Roman Republic and the early Roman Empire. It was designed for men of senatorial rank. The cursus honorum comprised a mixture of military and political administration posts; the ultimate prize for winning election to each \"rung\" in the sequence was to become one of the two consuls in a given year.",
"title": ""
},
{
"paragraph_id": 1,
"text": "These rules were altered and flagrantly ignored in the course of the last century of the Republic. For example, Gaius Marius held consulships for five years in a row between 104 BC and 100 BC. He was consul seven times in all, also serving in 107 and 86. Officially presented as opportunities for public service, the offices often became mere opportunities for self-aggrandizement. The constitutional reforms of Sulla between 82 and 79 BC required a ten-year interval before holding the same office again for another term.",
"title": ""
},
{
"paragraph_id": 2,
"text": "To have held each office at the youngest possible age (suo anno, 'in his year') was considered a great political success. For instance, to miss out on a praetorship at 39 meant that one could not become consul at 42. Cicero expressed extreme pride not only in being a novus homo ('new man'; comparable to a \"self-made man\") who became consul even though none of his ancestors had ever served as a consul, but also in having become consul \"in his year\".",
"title": ""
},
{
"paragraph_id": 3,
"text": "Prior to entering political life and the cursus honorum, a young man of senatorial rank was expected to serve around ten years of military duty. The years of service were intended to be mandatory in order to qualify for political office.",
"title": "Military service"
},
{
"paragraph_id": 4,
"text": "Advancement and honors would improve his political prospects, and a successful military career might culminate in the office of military tribune, to which 24 men were elected by the Tribal Assembly each year. The rank of military tribune is sometimes described as the first office of the cursus honorum.",
"title": "Military service"
},
{
"paragraph_id": 5,
"text": "The first official post was that of quaestor. Ever since the reforms of Sulla, candidates had to be at least 30 years old to hold the office. From the time of Augustus onwards, twenty quaestors served in the financial administration at Rome or as second-in-command to a governor in the provinces. They could also serve as the paymaster for a legion.",
"title": "Quaestor"
},
{
"paragraph_id": 6,
"text": "At 36 years of age, a promagistrate could stand for election to one of the aediles (pronounced /ˈiːdaɪl/ EE-dyle, from aedes, \"temple edifice\") positions. Of these aediles, two were plebeian and two were patrician, with the patrician aediles called curule aediles. The plebeian aediles were elected by the Plebeian Council and the curule aediles were either elected by the Tribal Assembly or appointed by the reigning consul. The aediles had administrative responsibilities in Rome. They had to take care of the temples (whence their title, from the Latin aedes, \"temple\"), organize games, and be responsible for the maintenance of the public buildings in Rome. Moreover, they took charge of Rome's water and food supplies; in their capacity as market superintendents, they served sometimes as judges in mercantile affairs.",
"title": "Aedile"
},
{
"paragraph_id": 7,
"text": "The aedile was the supervisor of public works; the words \"edifice\" and \"edification\" stem from the same root. He oversaw the public works, temples and markets. Therefore, the aediles would have been in some cooperation with the current censors, who had similar or related duties. Also, they oversaw the organization of festivals and games (ludi), which made this a very sought-after office for a career minded politician of the late Republic, as it was a good means of gaining popularity by staging spectacles.",
"title": "Aedile"
},
{
"paragraph_id": 8,
"text": "Curule aediles were added at a later date in the 4th century BC; their duties do not differ substantially from plebeian aediles. However, unlike plebeian aediles, curule aediles were allowed certain symbols of rank—the sella curulis or 'curule chair,' for example—and only patricians could stand for election to curule aedile. This later changed, and both plebeians and patricians could stand for curule aedileship.",
"title": "Aedile"
},
{
"paragraph_id": 9,
"text": "The elections for curule aedile were at first alternated between patricians and plebeians, until late in the 2nd century BC, when the practice was abandoned and both classes became free to run during all years.",
"title": "Aedile"
},
{
"paragraph_id": 10,
"text": "While part of the cursus honorum, this step was optional and not required to hold future offices. Though the office was usually held after the quaestorship and before the praetorship, there are some cases with former praetors serving as aediles.",
"title": "Aedile"
},
{
"paragraph_id": 11,
"text": "After serving either as quaestor or as aedile, a man of 39 years could run for praetor. During the reign of Augustus this requirement was lowered to 30, at the request of Gaius Maecenas. The number of praetors elected varied through history, generally increasing with time. During the republic, six or eight were generally elected each year to serve judicial functions throughout Rome and other governmental responsibilities. In the absence of the consuls, a praetor would be given command of the garrison in Rome or in Italy. Also, a praetor could exercise the functions of the consuls throughout Rome, but their main function was that of a judge. They would preside over trials involving criminal acts, grant court orders and validate \"illegal\" acts as acts of administering justice. A praetor was escorted by six lictors, and wielded imperium. After a term as praetor, the magistrate could serve as a provincial governor with the title of propraetor, wielding propraetor imperium, commanding the province's legions, and possessing ultimate authority within his province(s).",
"title": "Praetor"
},
{
"paragraph_id": 12,
"text": "Two of the praetors were more prestigious than the others. The first was the Praetor Peregrinus, who was the chief judge in trials involving one or more foreigners. The other was the Praetor Urbanus, the chief judicial office in Rome. He had the power to overturn any verdict by any other courts, and served as judge in cases involving criminal charges against provincial governors. The Praetor Urbanus was not allowed to leave the city for more than ten days. If one of these two praetors was absent from Rome, the other would perform the duties of both.",
"title": "Praetor"
},
{
"paragraph_id": 13,
"text": "The office of consul was the most prestigious of all of the offices on the cursus honorum, and represented the summit of a successful career. The minimum age was 42. Years were identified by the names of the two consuls elected for a particular year; for instance, M. Messalla et M. Pisone consulibus, \"in the consulship of Messalla and Piso\", dates an event to 61 BC. Consuls were responsible for the city's political agenda, commanded large-scale armies and controlled important provinces. The consuls served for only a year (a restriction intended to limit the amassing of power by individuals) and could only rule when they agreed, because each consul could veto the other's decision.",
"title": "Consul"
},
{
"paragraph_id": 14,
"text": "The consuls would alternate monthly as the chairman of the Senate. They also were the supreme commanders in the Roman army, with each being granted two legions during their consular year. Consuls also exercised the highest juridical power in the Republic, being the only office with the power to override the decisions of the Praetor Urbanus. Only laws and the decrees of the Senate or the People's assembly limited their powers, and only the veto of a fellow consul or a tribune of the plebs could supersede their decisions.",
"title": "Consul"
},
{
"paragraph_id": 15,
"text": "A consul was escorted by twelve lictors, held imperium and wore the toga praetexta. Because the consul was the highest executive office within the Republic, they had the power to veto any action or proposal by any other magistrate, save that of the Tribune of the Plebs. After a consulship, a consul was assigned one of the more important provinces and acted as the governor in the same way that a propraetor did, only owning proconsular imperium. A second consulship could only be attempted after an interval of 10 years to prevent one man holding too much power.",
"title": "Consul"
},
{
"paragraph_id": 16,
"text": "Although not part of the cursus honorum, upon completing a term as either praetor or consul, an officer was required to serve a term as Propraetor and proconsul, respectively, in one of Rome's many provinces. These propraetors and proconsuls held near autocratic authority within their selected province or provinces. Because each governor held equal imperium to the equivalent magistrate, they were escorted by the same number of lictors (12) and could only be vetoed by a reigning consul or praetor. Their abilities to govern were only limited by the decrees of the Senate or the people's assemblies, and the Tribune of the Plebs was unable to veto their acts as long as the governor remained at least a mile outside of Rome.",
"title": "Governor"
},
{
"paragraph_id": 17,
"text": "After a term as consul, the final step in the cursus honorum was the office of censor. This was the only office in the Roman Republic whose term was a period of eighteen months instead of the usual twelve. Censors were elected every five years and although the office held no military imperium, it was considered a great honour. The censors took a regular census of the people and then apportioned the citizens into voting classes on the basis of income and tribal affiliation. The censors enrolled new citizens in tribes and voting classes as well. The censors were also in charge of the membership roll of the Senate, every five years adding new senators who had been elected to the requisite offices. Censors could also remove unworthy members from the Senate. This ability was lost during the dictatorship of Sulla. Censors were also responsible for construction of public buildings and the moral status of the city.",
"title": "Censor"
},
{
"paragraph_id": 18,
"text": "Censors also had financial duties, in that they had to put out to tender projects that were to be financed by the state. Also, the censors were in charge of the leasing out of conquered land for public use and auction. Though this office owned no imperium, meaning no lictors for protection, they were allowed to wear the toga praetexta.",
"title": "Censor"
},
{
"paragraph_id": 19,
"text": "The office of Tribune of the Plebs was an important step in the political career of plebeians. Patricians could not hold the office. They were not an official step in the cursus honorum. The Tribune was an office first created to protect the right of the common man in Roman politics and served as the head of the Plebeian Council. In the mid-to-late Republic, however, plebeians were often just as, and sometimes more, wealthy and powerful than patricians. Those who held the office were granted sacrosanctity (the right to be legally protected from any physical harm), the power to rescue any plebeian from the hands of a patrician magistrate, and the right to veto any act or proposal of any magistrate, including another tribune of the people and the consuls. The tribune also had the power to exercise capital punishment against any person who interfered in the performance of his duties. The tribunes could even convene a Senate meeting and lay legislation before it and arrest magistrates. Their houses had to remain open for visitors even during the night, and they were not allowed to be more than a day's journey from Rome. Due to their unique power of sacrosanctity, the Tribune had no need for lictors for protection and owned no imperium, nor could they wear the toga praetexta. For a period after Sulla's reforms, a person who had held the office of Tribune of the Plebs could no longer qualify for any other office, and the powers of the tribunes were more limited, but these restrictions were subsequently lifted.",
"title": "Tribune of the Plebs"
},
{
"paragraph_id": 20,
"text": "Another office not officially a step in the cursus honorum was the princeps senatus, an extremely prestigious office for a patrician. The princeps senatus served as the leader of the Senate and was chosen to serve a five-year term by each pair of Censors every five years. Censors could, however, confirm a princeps senatus for a period of another five years. The princeps senatus was chosen from all Patricians who had served as a Consul, with former Censors usually holding the office. The office originally granted the holder the ability to speak first at session on the topic presented by the presiding magistrate, but eventually gained the power to open and close the senate sessions, decide the agenda, decide where the session should take place, impose order and other rules of the session, meet in the name of the senate with embassies of foreign countries, and write in the name of the senate letters and dispatches. This office, like the Tribune, did not own imperium, was not escorted by lictors, and could not wear the toga praetexta.",
"title": "Princeps senatus"
},
{
"paragraph_id": 21,
"text": "Of all the offices within the Roman Republic, none granted as much power and authority as the position of dictator, known as the Master of the People. In times of emergency, the Senate would declare that a dictator was required, and the current consuls would appoint a dictator. This was the only decision that could not be vetoed by the Tribune of the Plebs. The dictator was the sole exception to the Roman legal principles of having multiple magistrates in the same office and being legally able to be held to answer for actions in office. Essentially by definition, only one dictator could serve at a time, and no dictator could ever be held legally responsible for any action during his time in office for any reason.",
"title": "Dictator and magister equitum"
},
{
"paragraph_id": 22,
"text": "The dictator was the highest magistrate in degree of imperium and was attended by twenty-four lictors (as were the former Kings of Rome). Although his term lasted only six months instead of twelve (except for the Dictatorships of Sulla and Caesar), all other magistrates reported to the dictator (except for the tribunes of the plebs - although they could not veto any of the dictator's acts), granting the dictator absolute authority in both civil and military matters throughout the Republic. The dictator was free from the control of the Senate in all that he did, could execute anyone without a trial for any reason, and could ignore any law in the performance of his duties. The dictator was the sole magistrate under the Republic that was truly independent in discharging his duties. All of the other offices were extensions of the Senate's executive authority and thus answerable to the Senate. Since the dictator exercised his own authority, he did not suffer this limitation, which was the cornerstone of the office's power.",
"title": "Dictator and magister equitum"
},
{
"paragraph_id": 23,
"text": "When a dictator entered office, he appointed to serve as his second-in-command a magister equitum, the Master of the Horse, whose office ceased to exist once the dictator left office. The magister equitum held praetorian imperium, was attended by six lictors, and was charged with assisting the dictator in managing the State. When the dictator was away from Rome, the magister equitum usually remained behind to administer the city. The magister equitum, like the dictator, had unchallengeable authority in all civil and military affairs, with his decisions only being overturned by the dictator himself.",
"title": "Dictator and magister equitum"
},
{
"paragraph_id": 24,
"text": "The dictatorship was definitively abolished in 44 BC after the assassination of Gaius Julius Caesar (Lex Antonia).",
"title": "Dictator and magister equitum"
}
]
| The cursus honorum was the sequential order of public offices held by aspiring politicians in the Roman Republic and the early Roman Empire. It was designed for men of senatorial rank. The cursus honorum comprised a mixture of military and political administration posts; the ultimate prize for winning election to each "rung" in the sequence was to become one of the two consuls in a given year. These rules were altered and flagrantly ignored in the course of the last century of the Republic. For example, Gaius Marius held consulships for five years in a row between 104 BC and 100 BC. He was consul seven times in all, also serving in 107 and 86. Officially presented as opportunities for public service, the offices often became mere opportunities for self-aggrandizement. The constitutional reforms of Sulla between 82 and 79 BC required a ten-year interval before holding the same office again for another term. To have held each office at the youngest possible age was considered a great political success. For instance, to miss out on a praetorship at 39 meant that one could not become consul at 42. Cicero expressed extreme pride not only in being a novus homo who became consul even though none of his ancestors had ever served as a consul, but also in having become consul "in his year". | 2001-09-01T17:56:34Z | 2023-12-31T22:21:47Z | [
"Template:For",
"Template:Main",
"Template:IPAc-en",
"Template:Citation needed",
"Template:Cite web",
"Template:See also",
"Template:IPA-la",
"Template:Authority control",
"Template:Short description",
"Template:Ancient Rome topics",
"Template:Cite journal",
"Template:More citations needed",
"Template:Cleanup",
"Template:Langnf",
"Template:Respell",
"Template:Reflist",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:Italics title",
"Template:Roman government"
]
| https://en.wikipedia.org/wiki/Cursus_honorum |
6,056 | Continental drift | Continental drift is the hypothesis that the Earth's continents have moved over geologic time relative to each other, thus appearing to have "drifted" across the ocean bed. The idea of continental drift has been subsumed into the science of plate tectonics, which studies the movement of the continents as they ride on plates of the Earth's lithosphere.
The speculation that continents might have "drifted" was first put forward by Abraham Ortelius in 1596. A pioneer of the modern view of mobilism was the Austrian geologist Otto Ampferer. The concept was independently and more fully developed by Alfred Wegener in his 1915 publication, "The Origin of Continents and Oceans". However, at that time the hypothesis was rejected by many for lack of any motive mechanism. The English geologist Arthur Holmes later proposed mantle convection for that mechanism.
Abraham Ortelius (Ortelius 1596), Theodor Christoph Lilienthal (1756), Alexander von Humboldt (1801 and 1845), Antonio Snider-Pellegrini (Snider-Pellegrini 1858), and others had noted earlier that the shapes of continents on opposite sides of the Atlantic Ocean (most notably, Africa and South America) seem to fit together. W. J. Kious described Ortelius' thoughts in this way:
Abraham Ortelius in his work Thesaurus Geographicus ... suggested that the Americas were "torn away from Europe and Africa ... by earthquakes and floods" and went on to say: "The vestiges of the rupture reveal themselves if someone brings forward a map of the world and considers carefully the coasts of the three [continents]."
In 1889, Alfred Russel Wallace remarked, "It was formerly a very general belief, even amongst geologists, that the great features of the earth's surface, no less than the smaller ones, were subject to continual mutations, and that during the course of known geological time the continents and great oceans had, again and again, changed places with each other." He quotes Charles Lyell as saying, "Continents, therefore, although permanent for whole geological epochs, shift their positions entirely in the course of ages," and claims that the first to throw doubt on this was James Dwight Dana in 1849.
In his Manual of Geology (1863), Dana wrote, "The continents and oceans had their general outline or form defined in earliest time. This has been proved with regard to North America from the position and distribution of the first beds of the Lower Silurian, – those of the Potsdam epoch. The facts indicate that the continent of North America had its surface near tide-level, part above and part below it (p.196); and this will probably be proved to be the condition in Primordial time of the other continents also. And, if the outlines of the continents were marked out, it follows that the outlines of the oceans were no less so". Dana was enormously influential in America—his Manual of Mineralogy is still in print in revised form—and the theory became known as the Permanence theory.
This appeared to be confirmed by the exploration of the deep sea beds conducted by the Challenger expedition, 1872–1876, which showed that contrary to expectation, land debris brought down by rivers to the ocean is deposited comparatively close to the shore on what is now known as the continental shelf. This suggested that the oceans were a permanent feature of the Earth's surface, rather than them having "changed places" with the continents.
Eduard Suess had proposed a supercontinent Gondwana in 1885 and the Tethys Ocean in 1893, assuming a land-bridge between the present continents submerged in the form of a geosyncline, and John Perry had written an 1895 paper proposing that the Earth's interior was fluid, and disagreeing with Lord Kelvin on the age of the Earth.
Apart from the earlier speculations mentioned above, the idea that the American continents had once formed a single landmass with Eurasia and Africa was postulated by several scientists before Alfred Wegener's 1912 paper. Although Wegener's theory was formed independently and was more complete than those of his predecessors, Wegener later credited a number of past authors with similar ideas: Franklin Coxworthy (between 1848 and 1890), Roberto Mantovani (between 1889 and 1909), William Henry Pickering (1907) and Frank Bursley Taylor (1908).
The similarity of southern continent geological formations had led Roberto Mantovani to conjecture in 1889 and 1909 that all the continents had once been joined into a supercontinent; Wegener noted the similarity of Mantovani's and his own maps of the former positions of the southern continents. In Mantovani's conjecture, this continent broke due to volcanic activity caused by thermal expansion, and the new continents drifted away from each other because of further expansion of the rip-zones, where the oceans now lie. This led Mantovani to propose a now-discredited Expanding Earth theory.
Continental drift without expansion was proposed by Frank Bursley Taylor, who suggested in 1908 (published in 1910) that the continents were moved into their present positions by a process of "continental creep", later proposing a mechanism of increased tidal forces during the Cretaceous dragging the crust towards the equator. He was the first to realize that one of the effects of continental motion would be the formation of mountains, attributing the formation of the Himalayas to the collision of the Indian subcontinent with Asia. Wegener said that of all those theories, Taylor's had the most similarities to his own. For a time in the mid-20th century, the theory of continental drift was referred to as the "Taylor-Wegener hypothesis".
Alfred Wegener first presented his hypothesis to the German Geological Society on 6 January 1912. His hypothesis was that the continents had once formed a single landmass, called Pangaea, before breaking apart and drifting to their present locations.
Wegener was the first to use the phrase "continental drift" (1912, 1915) (in German "die Verschiebung der Kontinente" – translated into English in 1922) and formally publish the hypothesis that the continents had somehow "drifted" apart. Although he presented much evidence for continental drift, he was unable to provide a convincing explanation for the physical processes which might have caused this drift. He suggested that the continents had been pulled apart by the centrifugal pseudoforce (Polflucht) of the Earth's rotation or by a small component of astronomical precession, but calculations showed that the force was not sufficient. The Polflucht hypothesis was also studied by Paul Sophus Epstein in 1920 and found to be implausible.
Although now accepted, the theory of continental drift was rejected for many years, with evidence in its favor considered insufficient. One problem was that a plausible driving force was missing. A second problem was that Wegener's estimate of the speed of continental motion, 250 cm/year, was implausibly high. (The currently accepted rate for the separation of the Americas from Europe and Africa is about 2.5 cm/year). Furthermore, Wegener was treated less seriously because he was not a geologist. Even today, the details of the forces propelling the plates are poorly understood.
The English geologist Arthur Holmes championed the theory of continental drift at a time when it was deeply unfashionable. He proposed in 1931 that the Earth's mantle contained convection cells which dissipated heat produced by radioactive decay and moved the crust at the surface. His Principles of Physical Geology, ending with a chapter on continental drift, was published in 1944.
Geological maps of the time showed huge land bridges spanning the Atlantic and Indian oceans to account for the similarities of fauna and flora and the divisions of the Asian continent in the Permian period, but they failed to account for glaciation in India, Australia and South Africa.
Hans Stille and Leopold Kober opposed the idea of continental drift and worked on a "fixist" geosyncline model with Earth contraction playing a key role in the formation of orogens. Other geologists who opposed continental drift were Bailey Willis, Charles Schuchert, Rollin Chamberlin, Walther Bucher and Walther Penck. In 1939 an international geological conference was held in Frankfurt. This conference came to be dominated by the fixists, especially as those geologists specializing in tectonics were all fixists except Willem van der Gracht. Criticism of continental drift and mobilism was abundant at the conference not only from tectonicists but also from sedimentological (Nölke), paleontological (Nölke), mechanical (Lehmann) and oceanographic (Troll, Wüst) perspectives. Hans Cloos, the organizer of the conference, was also a fixist who together with Troll held the view that, excepting the Pacific Ocean, continents were not radically different from oceans in their behaviour. The mobilist theory of Émile Argand for the Alpine orogeny was criticized by Kurt Leuchs. The few drifters and mobilists at the conference appealed to biogeography (Kirsch, Wittmann), paleoclimatology (Wegener, K), paleontology (Gerth) and geodetic measurements (Wegener, K). F. Bernauer correctly equated Reykjanes in south-west Iceland with the Mid-Atlantic Ridge, arguing on this basis that the floor of the Atlantic Ocean was undergoing extension just like Reykjanes. Bernauer thought this extension had drifted the continents only 100–200 km apart, the approximate width of the volcanic zone in Iceland.
David Attenborough, who attended university in the second half of the 1940s, recounted an incident illustrating the theory's lack of acceptance at the time: "I once asked one of my lecturers why he was not talking to us about continental drift and I was told, sneeringly, that if I could prove there was a force that could move continents, then he might think about it. The idea was moonshine, I was informed."
As late as 1953—just five years before Carey introduced the theory of plate tectonics—the theory of continental drift was rejected by the physicist Scheidegger on the following grounds.
From the 1930s to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Holmes' views were particularly influential: in his bestselling textbook, Principles of Physical Geology, he included a chapter on continental drift, proposing that Earth's mantle contained convection cells which dissipated radioactive heat and moved the crust at the surface. Holmes' proposal resolved the phase disequilibrium objection (the underlying fluid was kept from solidifying by radioactive heating from the core). However, scientific communication in the 1930s and 1940s was inhibited by World War II, and the theory still required work to avoid foundering on the orogeny and isostasy objections. Worse, the most viable forms of the theory predicted the existence of convection cell boundaries reaching deep into the Earth, which had yet to be observed.
In 1947, a team of scientists led by Maurice Ewing confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the sediments was chemically and physically different from continental crust. As oceanographers continued to map the bathymetry of the ocean basins, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift".
Meanwhile, scientists began recognizing odd magnetic variations across the ocean floor using devices developed during World War II to detect submarines. Over the next decade, it became increasingly clear that the magnetization patterns were not anomalies, as had been originally supposed. In a series of papers in 1959–1963, Heezen, Dietz, Hess, Mason, Vine, Matthews, and Morley collectively realized that the magnetization of the ocean floor formed extensive, zebra-like patterns: one stripe would exhibit normal polarity and the adjoining stripes reversed polarity. The best explanation was the "conveyor belt" or Vine–Matthews–Morley hypothesis. New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. The new crust is magnetized by the Earth's magnetic field, which undergoes occasional reversals. Formation of new crust then displaces the magnetized crust apart, akin to a conveyor belt – hence the name.
Without workable alternatives to explain the stripes, geophysicists were forced to conclude that Holmes had been right: ocean rifts were sites of perpetual orogeny at the boundaries of convection cells. By 1967, barely two decades after discovery of the mid-oceanic rifts, and a decade after discovery of the striping, plate tectonics had become axiomatic to modern geophysics.
In addition, Marie Tharp, working in collaboration with Bruce Heezen, who was initially sceptical of her observation that her maps confirmed continental drift theory, provided essential corroboration of the theory through her skills in cartography and her work with seismographic data.
Geophysicist Jack Oliver is credited with providing seismologic evidence supporting plate tectonics which encompassed and superseded continental drift with the article "Seismology and the New Global Tectonics", published in 1968, using data collected from seismologic stations, including those he set up in the South Pacific. The modern theory of plate tectonics, refining Wegener, explains that there are two kinds of crust of different composition: continental crust and oceanic crust, both floating above a much deeper "plastic" mantle. Continental crust is inherently lighter. Oceanic crust is created at spreading centers, and this, along with subduction, drives the system of plates in a chaotic manner, resulting in continuous orogeny and areas of isostatic imbalance.
Evidence for the movement of continents on tectonic plates is now extensive. Similar plant and animal fossils are found around the shores of different continents, suggesting that they were once joined. The fossils of Mesosaurus, a freshwater reptile rather like a small crocodile, found both in Brazil and South Africa, are one example; another is the discovery of fossils of the land reptile Lystrosaurus in rocks of the same age at locations in Africa, India, and Antarctica. There is also living evidence, with the same animals being found on two continents. Some earthworm families (such as Ocnerodrilidae, Acanthodrilidae, Octochaetidae) are found in South America and Africa.
The complementary arrangement of the facing sides of South America and Africa is obvious but a temporary coincidence. In millions of years, slab pull, ridge-push, and other forces of tectonophysics will further separate and rotate those two continents. It was that temporary feature that inspired Wegener to study what he defined as continental drift although he did not live to see his hypothesis generally accepted.
The widespread distribution of Permo-Carboniferous glacial sediments in South America, Africa, Madagascar, Arabia, India, Antarctica and Australia was one of the major pieces of evidence for the theory of continental drift. The continuity of glaciers, inferred from oriented glacial striations and deposits called tillites, suggested the existence of the supercontinent of Gondwana, which became a central element of the concept of continental drift. Striations indicated glacial flow away from the equator and toward the poles, based on continents' current positions and orientations, and supported the idea that the southern continents had previously been in dramatically different locations that were contiguous with one another. | [
{
"paragraph_id": 0,
"text": "Continental drift is the hypothesis that the Earth's continents have moved over geologic time relative to each other, thus appearing to have \"drifted\" across the ocean bed. The idea of continental drift has been subsumed into the science of plate tectonics, which studies the movement of the continents as they ride on plates of the Earth's lithosphere.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The speculation that continents might have \"drifted\" was first put forward by Abraham Ortelius in 1596. A pioneer of the modern view of mobilism was the Austrian geologist Otto Ampferer. The concept was independently and more fully developed by Alfred Wegener in his 1915 publication, \"The Origin of Continents and Oceans\". However, at that time the hypothesis was rejected by many for lack of any motive mechanism. The English geologist Arthur Holmes later proposed mantle convection for that mechanism.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Abraham Ortelius (Ortelius 1596), Theodor Christoph Lilienthal (1756), Alexander von Humboldt (1801 and 1845), Antonio Snider-Pellegrini (Snider-Pellegrini 1858), and others had noted earlier that the shapes of continents on opposite sides of the Atlantic Ocean (most notably, Africa and South America) seem to fit together. W. J. Kious described Ortelius' thoughts in this way:",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Abraham Ortelius in his work Thesaurus Geographicus ... suggested that the Americas were \"torn away from Europe and Africa ... by earthquakes and floods\" and went on to say: \"The vestiges of the rupture reveal themselves if someone brings forward a map of the world and considers carefully the coasts of the three [continents].\"",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1889, Alfred Russel Wallace remarked, \"It was formerly a very general belief, even amongst geologists, that the great features of the earth's surface, no less than the smaller ones, were subject to continual mutations, and that during the course of known geological time the continents and great oceans had, again and again, changed places with each other.\" He quotes Charles Lyell as saying, \"Continents, therefore, although permanent for whole geological epochs, shift their positions entirely in the course of ages.\" and claims that the first to throw doubt on this was James Dwight Dana in 1849.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In his Manual of Geology (1863), Dana wrote, \"The continents and oceans had their general outline or form defined in earliest time. This has been proved with regard to North America from the position and distribution of the first beds of the Lower Silurian, – those of the Potsdam epoch. The facts indicate that the continent of North America had its surface near tide-level, part above and part below it (p.196); and this will probably be proved to be the condition in Primordial time of the other continents also. And, if the outlines of the continents were marked out, it follows that the outlines of the oceans were no less so\". Dana was enormously influential in America—his Manual of Mineralogy is still in print in revised form—and the theory became known as the Permanence theory.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "This appeared to be confirmed by the exploration of the deep sea beds conducted by the Challenger expedition, 1872–1876, which showed that contrary to expectation, land debris brought down by rivers to the ocean is deposited comparatively close to the shore on what is now known as the continental shelf. This suggested that the oceans were a permanent feature of the Earth's surface, rather than them having \"changed places\" with the continents.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Eduard Suess had proposed a supercontinent Gondwana in 1885 and the Tethys Ocean in 1893, assuming a land-bridge between the present continents submerged in the form of a geosyncline, and John Perry had written an 1895 paper proposing that the Earth's interior was fluid, and disagreeing with Lord Kelvin on the age of the Earth.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Apart from the earlier speculations mentioned above, the idea that the American continents had once formed a single landmass with Eurasia and Africa was postulated by several scientists before Alfred Wegener's 1912 paper. Although Wegener's theory was formed independently and was more complete than those of his predecessors, Wegener later credited a number of past authors with similar ideas: Franklin Coxworthy (between 1848 and 1890), Roberto Mantovani (between 1889 and 1909), William Henry Pickering (1907) and Frank Bursley Taylor (1908).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The similarity of southern continent geological formations had led Roberto Mantovani to conjecture in 1889 and 1909 that all the continents had once been joined into a supercontinent; Wegener noted the similarity of Mantovani's and his own maps of the former positions of the southern continents. In Mantovani's conjecture, this continent broke due to volcanic activity caused by thermal expansion, and the new continents drifted away from each other because of further expansion of the rip-zones, where the oceans now lie. This led Mantovani to propose a now-discredited Expanding Earth theory.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Continental drift without expansion was proposed by Frank Bursley Taylor, who suggested in 1908 (published in 1910) that the continents were moved into their present positions by a process of \"continental creep\", later proposing a mechanism of increased tidal forces during the Cretaceous dragging the crust towards the equator. He was the first to realize that one of the effects of continental motion would be the formation of mountains, attributing the formation of the Himalayas to the collision between the Indian subcontinent with Asia. Wegener said that of all those theories, Taylor's had the most similarities to his own. For a time in the mid-20th century, the theory of continental drift was referred to as the \"Taylor-Wegener hypothesis\".",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Alfred Wegener first presented his hypothesis to the German Geological Society on 6 January 1912. His hypothesis was that the continents had once formed a single landmass, called Pangaea, before breaking apart and drifting to their present locations.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Wegener was the first to use the phrase \"continental drift\" (1912, 1915) (in German \"die Verschiebung der Kontinente\" – translated into English in 1922) and formally publish the hypothesis that the continents had somehow \"drifted\" apart. Although he presented much evidence for continental drift, he was unable to provide a convincing explanation for the physical processes which might have caused this drift. He suggested that the continents had been pulled apart by the centrifugal pseudoforce (Polflucht) of the Earth's rotation or by a small component of astronomical precession, but calculations showed that the force was not sufficient. The Polflucht hypothesis was also studied by Paul Sophus Epstein in 1920 and found to be implausible.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Although now accepted, the theory of continental drift was rejected for many years, with evidence in its favor considered insufficient. One problem was that a plausible driving force was missing. A second problem was that Wegener's estimate of the speed of continental motion, 250 cm/year, was implausibly high. (The currently accepted rate for the separation of the Americas from Europe and Africa is about 2.5 cm/year). Furthermore, Wegener was treated less seriously because he was not a geologist. Even today, the details of the forces propelling the plates are poorly understood.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The English geologist Arthur Holmes championed the theory of continental drift at a time when it was deeply unfashionable. He proposed in 1931 that the Earth's mantle contained convection cells which dissipated heat produced by radioactive decay and moved the crust at the surface. His Principles of Physical Geology, ending with a chapter on continental drift, was published in 1944.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Geological maps of the time showed huge land bridges spanning the Atlantic and Indian oceans to account for the similarities of fauna and flora and the divisions of the Asian continent in the Permian period, but failing to account for glaciation in India, Australia and South Africa.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Hans Stille and Leopold Kober opposed the idea of continental drift and worked on a \"fixist\" geosyncline model with Earth contraction playing a key role in the formation of orogens. Other geologists who opposed continental drift were Bailey Willis, Charles Schuchert, Rollin Chamberlin, Walther Bucher and Walther Penck. In 1939 an international geological conference was held in Frankfurt. This conference came to be dominated by the fixists, especially as those geologists specializing in tectonics were all fixists except Willem van der Gracht. Criticism of continental drift and mobilism was abundant at the conference not only from tectonicists but also from sedimentological (Nölke), paleontological (Nölke), mechanical (Lehmann) and oceanographic (Troll, Wüst) perspectives. Hans Cloos, the organizer of the conference, was also a fixist who together with Troll held the view that excepting the Pacific Ocean continents were not radically different from oceans in their behaviour. The mobilist theory of Émile Argand for the Alpine orogeny was criticized by Kurt Leuchs. The few drifters and mobilists at the conference appealed to biogeography (Kirsch, Wittmann), paleoclimatology (Wegener, K), paleontology (Gerth) and geodetic measurements (Wegener, K). F. Bernauer correctly equated Reykjanes in south-west Iceland with the Mid-Atlantic Ridge, arguing with this that the floor of the Atlantic Ocean was undergoing extension just like Reykjanes. Bernauer thought this extension had drifted the continents only 100–200 km apart, the approximate width of the volcanic zone in Iceland.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "David Attenborough, who attended university in the second half of the 1940s, recounted an incident illustrating its lack of acceptance then: \"I once asked one of my lecturers why he was not talking to us about continental drift and I was told, sneeringly, that if I could prove there was a force that could move continents, then he might think about it. The idea was moonshine, I was informed.\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "As late as 1953—just five years before Carey introduced the theory of plate tectonics—the theory of continental drift was rejected by the physicist Scheidegger on the following grounds.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "From the 1930s to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Holmes' views were particularly influential: in his bestselling textbook, Principles of Physical Geology, he included a chapter on continental drift, proposing that Earth's mantle contained convection cells which dissipated radioactive heat and moved the crust at the surface. Holmes' proposal resolved the phase disequilibrium objection (the underlying fluid was kept from solidifying by radioactive heating from the core). However, scientific communication in the 1930s and 1940s was inhibited by World War II, and the theory still required work to avoid foundering on the orogeny and isostasy objections. Worse, the most viable forms of the theory predicted the existence of convection cell boundaries reaching deep into the Earth, that had yet to be observed.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1947, a team of scientists led by Maurice Ewing confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the sediments was chemically and physically different from continental crust. As oceanographers continued to bathymeter the ocean basins, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the \"Great Global Rift\".",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Meanwhile, scientists began recognizing odd magnetic variations across the ocean floor using devices developed during World War II to detect submarines. Over the next decade, it became increasingly clear that the magnetization patterns were not anomalies, as had been originally supposed. In a series of papers in 1959–1963, Heezen, Dietz, Hess, Mason, Vine, Matthews, and Morley collectively realized that the magnetization of the ocean floor formed extensive, zebra-like patterns: one stripe would exhibit normal polarity and the adjoining stripes reversed polarity. The best explanation was the \"conveyor belt\" or Vine–Matthews–Morley hypothesis. New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. The new crust is magnetized by the Earth's magnetic field, which undergoes occasional reversals. Formation of new crust then displaces the magnetized crust apart, akin to a conveyor belt – hence the name.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Without workable alternatives to explain the stripes, geophysicists were forced to conclude that Holmes had been right: ocean rifts were sites of perpetual orogeny at the boundaries of convection cells. By 1967, barely two decades after discovery of the mid-oceanic rifts, and a decade after discovery of the striping, plate tectonics had become axiomatic to modern geophysics.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In addition, Marie Tharp, in collaboration with Bruce Heezen, who was initially sceptical of Tharp's observations that her maps confirmed continental drift theory, provided essential corroboration, using her skills in cartography and seismographic data, to confirm the theory.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Geophysicist Jack Oliver is credited with providing seismologic evidence supporting plate tectonics which encompassed and superseded continental drift with the article \"Seismology and the New Global Tectonics\", published in 1968, using data collected from seismologic stations, including those he set up in the South Pacific. The modern theory of plate tectonics, refining Wegener, explains that there are two kinds of crust of different composition: continental crust and oceanic crust, both floating above a much deeper \"plastic\" mantle. Continental crust is inherently lighter. Oceanic crust is created at spreading centers, and this, along with subduction, drives the system of plates in a chaotic manner, resulting in continuous orogeny and areas of isostatic imbalance.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Evidence for the movement of continents on tectonic plates is now extensive. Similar plant and animal fossils are found around the shores of different continents, suggesting that they were once joined. The fossils of Mesosaurus, a freshwater reptile rather like a small crocodile, found both in Brazil and South Africa, are one example; another is the discovery of fossils of the land reptile Lystrosaurus in rocks of the same age at locations in Africa, India, and Antarctica. There is also living evidence, with the same animals being found on two continents. Some earthworm families (such as Ocnerodrilidae, Acanthodrilidae, Octochaetidae) are found in South America and Africa.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The complementary arrangement of the facing sides of South America and Africa is obvious but a temporary coincidence. In millions of years, slab pull, ridge-push, and other forces of tectonophysics will further separate and rotate those two continents. It was that temporary feature that inspired Wegener to study what he defined as continental drift although he did not live to see his hypothesis generally accepted.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The widespread distribution of Permo-Carboniferous glacial sediments in South America, Africa, Madagascar, Arabia, India, Antarctica and Australia was one of the major pieces of evidence for the theory of continental drift. The continuity of glaciers, inferred from oriented glacial striations and deposits called tillites, suggested the existence of the supercontinent of Gondwana, which became a central element of the concept of continental drift. Striations indicated glacial flow away from the equator and toward the poles, based on continents' current positions and orientations, and supported the idea that the southern continents had previously been in dramatically different locations that were contiguous with one another.",
"title": "History"
}
]
| Continental drift is the hypothesis that the Earth's continents have moved over geologic time relative to each other, thus appearing to have "drifted" across the ocean bed. The idea of continental drift has been subsumed into the science of plate tectonics, which studies the movement of the continents as they ride on plates of the Earth's lithosphere. The speculation that continents might have "drifted" was first put forward by Abraham Ortelius in 1596. A pioneer of the modern view of mobilism was the Austrian geologist Otto Ampferer. The concept was independently and more fully developed by Alfred Wegener in his 1915 publication, "The Origin of Continents and Oceans". However, at that time the hypothesis was rejected by many for lack of any motive mechanism. The English geologist Arthur Holmes later proposed mantle convection for that mechanism. | 2001-10-19T19:29:40Z | 2023-11-10T19:21:25Z | [
"Template:ISBNT",
"Template:Authority control",
"Template:Further",
"Template:Blockquote",
"Template:See also",
"Template:Cite book",
"Template:Library resources box",
"Template:About",
"Template:Distinguish",
"Template:Main",
"Template:Annotated link",
"Template:Cite web",
"Template:Short description",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Wikibooks",
"Template:Harv",
"Template:Citation needed"
]
| https://en.wikipedia.org/wiki/Continental_drift |
6,057 | Commodores | Commodores, often billed as the Commodores, is an American funk and soul group. The group's most successful period was in the late 1970s and early 1980s when Lionel Richie was the co-lead singer.
The members of the group met as mostly freshmen at Tuskegee Institute (now Tuskegee University) in 1968, and signed with Motown in November 1972, having first caught the public eye opening for the Jackson 5 while on tour.
The band's biggest hit singles are ballads such as "Easy", "Three Times a Lady", and "Nightshift"; and funk-influenced dance songs, including "Brick House", "Fancy Dancer", "Lady (You Bring Me Up)", and "Too Hot ta Trot".
Commodores were inducted into the Alabama Music Hall of Fame and Vocal Group Hall of Fame. The band has also won one Grammy Award out of nine nominations. The Commodores have sold over 70 million albums worldwide.
Commodores were formed from two former student groups, the Mystics and the Jays. Richie described some members of the Mystics as "jazz buffs". The new six-man band featured Lionel Richie, Thomas McClary, and William King from the Mystics, and Andre Callahan, Michael Gilbert, and Milan Williams from the Jays. To choose their name, William King opened a dictionary and randomly picked a word. "We lucked in," he remarked with a laugh when telling this story to People magazine. "We almost became 'The Commodes.'"
The bandmembers attended Tuskegee University in Alabama. After winning the university's annual freshman talent contest, they played at fraternity parties as well as a weekend gig at the Black Forest Inn, one of a few clubs in Tuskegee that catered to college students. They performed cover tunes and some original songs with their first singer, James Ingram (not the famous solo artist). Ingram, older than the rest of the band, left to serve in Vietnam, and was later replaced by drummer Walter "Clyde" Orange, who wrote or co-wrote many of their hits. Lionel Richie and Orange alternated as lead singers. Orange was the lead singer on the Top 10 hits "Brick House" (1977) and "Nightshift" (1985).
The early band was managed by Benny Ashburn, who brought them to his family's vacation lodge on Martha's Vineyard in 1971 and 1972. There, Ashburn test-marketed the group by having them play in parking lots and summer festivals.
"Machine Gun" (1974), the instrumental title track from the band's debut album, became a staple at American sporting events, and is also heard in many films, including Boogie Nights and Looking for Mr. Goodbar. It reached No. 22 on the Billboard Hot 100 in 1974. Another 1974 song "I Feel Sanctified" has been called a "prototype" of Wild Cherry's 1976 big hit "Play That Funky Music". Three albums released in 1975 and 1976, Caught in the Act was funk album, but Movin' On and Hot on the Tracks were pop albums. After those recordings the group developed the mellower sound hinted at in their 1976 top-ten hits, "Sweet Love" and "Just to Be Close to You". In 1977, the Commodores released "Easy", which became the group's biggest hit yet, reaching No. 4 in the US, followed by funky single "Brick House", also top 5, both from their album Commodores, as was "Zoom". The group reached No. 1 in 1978 with "Three Times a Lady". In 1979, the Commodores scored another top-five ballad, "Sail On", before reaching the top of the charts once again with another ballad, "Still". In 1981 they released two top-ten hits with "Oh No" (No. 4) and their first upbeat single in almost five years, "Lady (You Bring Me Up)" (No. 8).
Commodores made a brief appearance in the 1978 film Thank God It's Friday. They performed the song "Too Hot ta Trot" during the dance contest; the songs "Brick House" and "Easy" were also played in the movie.
In 1982, Lionel Richie left to pursue a solo career, and Skyler Jett replaced him as co-lead singer. Also in 1982, Ashburn died of a heart attack at the age of 54.
Founding member McClary left in 1984 (shortly after Richie) to pursue a solo career, and to develop a gospel music company. McClary was replaced by guitarist-vocalist Sheldon Reynolds. Then LaPread left in 1986 and moved to Auckland, New Zealand. Reynolds departed for Earth, Wind & Fire in 1987, which prompted trumpeter William "WAK" King to take over primary guitar duties for live performances. Keyboardist Milan Williams exited the band in 1989 after allegedly refusing to tour South Africa.
The group gradually abandoned its funk roots and moved into the more commercial pop arena. In 1984, former Heatwave singer James Dean "J.D." Nicholas assumed co-lead vocal duties with drummer Walter Orange. That line-up was hitless until 1985 when their final Motown album Nightshift, produced by Dennis Lambert (prior albums were produced by James Anthony Carmichael), delivered the title track "Nightshift", a loving tribute to Marvin Gaye and Jackie Wilson, both of whom had died the previous year. "Nightshift" hit no. 3 in the US and won the Commodores their first Grammy for Best R&B Performance by a Duo or Group With Vocals in 1985.
In 2010 a new version was recorded, dedicated to Michael Jackson. The Commodores were on a European tour performing at Wembley Arena, London, on June 25, 2009, when they walked off the stage after they were told that Michael Jackson had died. Initially the band thought it was a hoax. However, back in their dressing rooms they received confirmation and broke down in tears. The next night at Birmingham's NIA Arena, J.D. Nicholas added Jackson's name to the lyrics of the song, and henceforth the Commodores have mentioned Jackson and other deceased R&B singers. Thus came the inspiration upon the one-year anniversary of Jackson's death to re-record, with new lyrics, the hit song "Nightshift" as a tribute.
In 1990, they formed Commodores Records and re-recorded their 20 greatest hits as Commodores Hits Vol. I & II. They have recorded a live album, Commodores Live, along with a DVD of the same name, and a Christmas album titled Commodores Christmas. In 2012, the band was working on new material, with some contributions written by current and former members.
As of 2020, the Commodores consist of Walter "Clyde" Orange, James Dean "J.D." Nicholas, and William "WAK" King, along with their five-piece band The Mean Machine. They continue to perform, playing at arenas, theaters, and festivals around the world.
The Commodores have won one Grammy Award out of ten nominations.
In 1995, the Commodores were inducted into the Alabama Music Hall of Fame.
In 2003, the Commodores were also inducted into the Vocal Group Hall of Fame.
Collagen

Collagen (/ˈkɒlədʒən/) is the main structural protein in the extracellular matrix found in the body's various connective tissues. As the main component of connective tissue, it is the most abundant protein in mammals, making up from 25% to 35% of the whole-body protein content. Collagen consists of amino acids bound together to form elongated fibrils with a triple-helical structure known as a collagen helix. It is mostly found in connective tissue such as cartilage, bones, tendons, ligaments, and skin. Collagen makes up 30% of the protein found in the human body. Vitamin E improves the production of collagen.
Depending upon the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes one to two percent of muscle tissue and accounts for 6% of the weight of the skeletal muscle tissue. The fibroblast is the most common cell that creates collagen. Gelatin, which is used in food and industry, is collagen that has been irreversibly hydrolyzed using heat, basic solutions or weak acids.
The name collagen comes from the Greek κόλλα (kólla), meaning "glue", and suffix -γέν, -gen, denoting "producing".
Over 90% of the collagen in the human body is type I collagen. However, as of 2011, 28 types of human collagen have been identified, described, and divided into several groups according to the structure they form. All of the types contain at least one triple helix. The number of types shows collagen's diverse functionality.
The five most common types are:
The collagenous cardiac skeleton, which includes the four heart valve rings, is histologically, elastically, and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen's contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease.
As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting because its triple helical structure makes it a very strong molecule. It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple helical structure of collagen prevents it from being broken down by enzymes; it enables adhesiveness of cells, and it is important for the proper assembly of the extracellular matrix.
Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally.
Collagens are widely employed in the construction of artificial skin substitutes used in the management of severe burns and wounds. These collagens may be derived from bovine, equine, porcine, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances.
Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. Wound deterioration, followed sometimes by procedures such as amputation, can thus be avoided.
Collagen is a natural product and is thus used as a natural wound dressing and has properties that artificial wound dressings do not have. It is resistant against bacteria, which is of vital importance in a wound dressing. It helps to keep the wound sterile, because of its natural ability to fight infection. When collagen is used as a burn dressing, healthy granulation tissue is able to form very quickly over the burn, helping it to heal rapidly.
Throughout the four phases of wound healing, collagen performs the following functions:
Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models.
The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in the amino acid sequence of collagen are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline. The average amino acid composition for fish and mammal skin is given.
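The Gly-X-Y repeat described above is simple to check mechanically. The short Python sketch below is purely illustrative (the function name and toy sequences are invented for this example, not drawn from any sequence database); it tests whether a peptide string keeps glycine at every third position:

def follows_gly_x_y(seq: str) -> bool:
    # Collagen-like chains place glycine (G) at the first position of
    # every Gly-X-Y triplet, i.e. at residues 0, 3, 6, ... of the repeat.
    triplets = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return all(t.startswith("G") for t in triplets if len(t) == 3)

# Toy fragments: P = proline; "O" is a common shorthand for hydroxyproline
# in the collagen literature. These strings are made up for illustration.
print(follows_gly_x_y("GPOGAOGVOGEO"))  # True: glycine at every third position
print(follows_gly_x_y("PGOGAOGVOGEO"))  # False: the repeat is out of frame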
First, a three-dimensional stranded structure is assembled, with the amino acids glycine and proline as its principal components. This is not yet collagen but its precursor, procollagen. Procollagen is then modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and the formation of the triple helix structure of collagen. Because the hydroxylase enzymes that perform these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by two different enzymes: prolyl-4-hydroxylase and lysyl-hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. The synthesis of collagen occurs inside and outside of the cell. The formation of collagen which results in fibrillary collagen (most common form) is discussed here. Meshwork collagen, which is often involved in the formation of filtration systems, is the other form of collagen. All types of collagens are triple helices, and the differences lie in the make-up of the alpha peptides created in step 2.
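For readers who want the stoichiometry behind the vitamin C requirement, the prolyl 4-hydroxylase reaction mentioned above is commonly summarized as follows. This is a standard biochemistry textbook formulation rather than one taken from this article's sources; ascorbate does not appear in the main equation because it acts by keeping the enzyme's iron center in the reduced Fe(II) state:

    peptidyl-proline + O2 + 2-oxoglutarate -> peptidyl-4-hydroxyproline + succinate + CO2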
Collagen has an unusual amino acid composition and sequence:
Cortisol stimulates degradation of (skin) collagen into amino acids.
Most collagen forms in a similar manner, but the following process is typical for type I:
Vitamin C deficiency causes scurvy, a serious and painful disease in which defective collagen prevents the formation of strong connective tissue. Gums deteriorate and bleed, with loss of teeth; skin discolors, and wounds do not heal. Prior to the 18th century, this condition was notorious among long-duration military, particularly naval, expeditions during which participants were deprived of foods containing vitamin C.
An autoimmune disease such as lupus erythematosus or rheumatoid arthritis may attack healthy collagen fibers.
Many bacteria and viruses secrete virulence factors, such as the enzyme collagenase, which destroys collagen or interferes with its production.
A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline.
A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for 1/3 of the sequence, this means approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed due to the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important with respect to stabilization of the collagen helix, as this allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin.
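As a quick sanity check of these fractions (simple arithmetic, not an additional sourced figure): with glycine at about 1/3 of residues and proline or hydroxyproline at about 1/6, the remaining share is

    1 - (1/3 + 1/6) = 1 - 1/2 = 1/2,

which matches the statement that approximately half of the collagen sequence is none of glycine, proline, or hydroxyproline.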
Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding.
Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than those of most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals lead to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications.
The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern.
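The overlap/gap geometry follows from back-of-the-envelope arithmetic using the figures quoted in this article (a consistency check, not an independent measurement): a tropocollagen molecule about 300 nm long spans 300/67 ≈ 4.5 D-periods, so five staggered molecules cannot tile space end-to-end without leaving a hole of roughly

    5 × 67 nm − 300 nm ≈ 35 nm

between the end of one molecule and the start of the next. This hole is the "gap" region, consistent with the roughly 40 nm gaps described for bone below.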
There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure in situ. These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions.
Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. Gaps of 40 nm between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, hydroxylapatite, approximately Ca10(PO4)6(OH)2. Type I collagen gives bone its tensile strength.
Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production.
In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma.
One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level.
Osteogenesis imperfecta – Caused by a mutation in type 1 collagen; an autosomal dominant disorder that results in weak bones and irregular connective tissue. Some cases can be mild while others can be lethal. Mild cases have lowered levels of collagen type 1 while severe cases have structural defects in collagen.
Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in type 2 collagen, further research is being conducted to confirm this.
Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in collagen type 3.
Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder. Those with the condition have problems with their kidneys and eyes, and hearing loss can also develop during the childhood or adolescent years.
Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link.
Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called collagen fibers are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic.
Collagen has a wide variety of applications, from food to medical. In the medical industry, it is used in cosmetic surgery and burn surgery. In the food sector, one use example is in casings for sausages.
If collagen is subject to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, containing a secondary structure different from the normal collagen polyproline II (PPII), such as random coils. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process.
From the Greek for glue, kolla, the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia.
Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs.
Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen creams are also widely sold, even though collagen cannot penetrate the skin because its fibers are too large. Most research on collagen supplements has been funded by industries that could benefit from a positive study result.
The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer and produced several competing models which, despite their differences, each dealt correctly with the conformation of the individual peptide chains. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century.
The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has long been known to be hexagonal. As with its monomeric structure, several conflicting models propose that the packing arrangement of collagen molecules is either 'sheet-like' or microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed microfibril.
Calvin and Hobbes

Calvin and Hobbes is a daily American comic strip created by cartoonist Bill Watterson that was syndicated from November 18, 1985, to December 31, 1995. Commonly cited as "the last great newspaper comic", Calvin and Hobbes has enjoyed broad and enduring popularity, influence, and academic and philosophical interest.
Calvin and Hobbes follows the humorous antics of the title characters: Calvin, a precocious, mischievous, and adventurous six-year-old boy; and Hobbes, his sardonic stuffed tiger. Set in the contemporary suburban United States of the 1980s and 1990s, the strip depicts Calvin's frequent flights of fancy and friendship with Hobbes. It also examines Calvin's relationships with his long-suffering parents and with his classmates, especially his neighbor Susie Derkins. Hobbes's dual nature is a defining motif for the strip: to Calvin, Hobbes is a living anthropomorphic tiger, while all the other characters seem to see Hobbes as an inanimate stuffed toy—though Watterson has not clarified exactly how Hobbes is perceived by others. Though the series does not frequently mention specific political figures or contemporary events, it does explore broad issues like environmentalism, public education, and philosophical quandaries.
At the height of its popularity, Calvin and Hobbes was featured in over 2,400 newspapers worldwide. In 2010, reruns of the strip appeared in more than 50 countries, and nearly 45 million copies of the Calvin and Hobbes books had been sold worldwide.
"I thought it was perhaps too 'adult,' too literate. When my then-8-year-old son remarked, 'This is the Doonesbury for kids!' I suspected we had something unusual on our hands."
—Lee Salem, Watterson's editor at Universal, recalling his reaction after seeing Watterson's first submission
Calvin and Hobbes was conceived when Bill Watterson, while working in an advertising job he detested, began devoting his spare time to developing a newspaper comic for potential syndication. He explored various strip ideas but all were rejected by the syndicates. United Feature Syndicate finally responded positively to one strip called The Doghouse, which featured a side character (the main character's little brother) who had a stuffed tiger. United identified these characters as the strongest and encouraged Watterson to develop them as the center of their own strip. Though United Feature ultimately rejected the new strip as lacking in marketing potential, Universal Press Syndicate took it up.
The first Calvin and Hobbes strip was published on November 18, 1985, in 35 newspapers. The strip quickly became popular. Within a year of syndication, the strip was published in roughly 250 newspapers and proved to have international appeal with translation and wide circulation outside the United States.
Although Calvin and Hobbes underwent continual artistic development and creative innovation over the period of syndication, the earliest strips demonstrated a remarkable consistency with the latest. Watterson introduced all the major characters within the first three weeks and made no changes to the central cast over the strip's 10-year history.
By April 5, 1987, Watterson was featured in an article in the Los Angeles Times. Calvin and Hobbes earned Watterson the Reuben Award from the National Cartoonists Society in the Outstanding Cartoonist of the Year category, first in 1986 and again in 1988. He was nominated another time in 1992. The Society awarded him the Humor Comic Strip Award for 1988. Calvin and Hobbes has also won several more awards.
As his creation grew in popularity, there was strong interest from the syndicate to merchandise the characters and expand into other forms of media. Watterson's contract with the syndicate allowed the characters to be licensed without the creator's consent, as was standard at the time. Nevertheless, Watterson had leverage by threatening to simply walk away from the comic strip.
This dynamic played out in a long and emotionally draining battle between Watterson and his syndicate editors. By 1991, Watterson had achieved his goal of securing a new contract that granted him legal control over his creation and all future licensing arrangements.
Having achieved his objective of creative control, Watterson found his desire for privacy reasserting itself: he ceased all media interviews, relocated to New Mexico, and largely disappeared from public engagements, refusing to attend the ceremonies of any of the cartooning awards he won. The pressures of the battle over merchandising led to Watterson taking an extended break from May 5, 1991, to February 1, 1992, a move that was virtually unprecedented in the world of syndicated cartoonists.
During Watterson's first sabbatical from the strip, Universal Press Syndicate continued to charge newspapers full price to re-run old Calvin and Hobbes strips. Few editors approved of the move, but the strip was so popular that they had no choice but to continue to run it for fear that competing newspapers might pick it up and draw its fans away. Watterson returned to the strip in 1992 with plans to produce his Sunday strip as an unbreakable half of a newspaper or tabloid page. This made him only the second cartoonist, after Garry Trudeau, with sufficient popularity to demand more space and control over the presentation of his work.
Watterson took a second sabbatical from April 3 through December 31, 1994. His return came with an announcement that Calvin and Hobbes would be concluding at the end of 1995. Stating his belief that he had achieved everything that he wanted to within the medium, he announced his intention to work on future projects at a slower pace with fewer artistic compromises.
The final strip ran on Sunday, December 31, 1995, depicting Calvin and Hobbes sledding down a snowy hill after a fresh snowfall with Calvin exclaiming "Let's go exploring!".
Speaking to NPR in 2005, animation critic Charles Solomon opined that the final strip "left behind a hole in the comics page that no strip has been able to fill."
Syndicated comics were typically published six times a week in black and white, with a Sunday supplement version in a larger, full color format. This larger format version of the strip was constrained by mandatory layout requirements that made it possible for newspaper editors to format the strip for different page sizes and layouts.
Watterson grew increasingly frustrated by the shrinking of the available space for comics in the newspapers and the mandatory panel divisions that restricted his ability to produce better artwork and more creative storytelling. He lamented that without space for anything more than simple dialogue or sparse artwork, comics as an art form were becoming diluted, bland, and unoriginal.
Watterson longed for the artistic freedom allotted to classic strips such as Little Nemo and Krazy Kat, and in 1989 he gave a sample of what could be accomplished with such liberty in the opening pages of the Sunday strip compilation, The Calvin and Hobbes Lazy Sunday Book—an 8-page previously unpublished Calvin story fully illustrated in watercolor. The same book contained an afterword from the artist himself, reflecting on a time when comic strips were allocated a whole page of the newspaper and every comic was like a "color poster".
Within two years, Watterson was ultimately successful in negotiating a deal that provided him more space and creative freedom. Following his 1991 sabbatical, Universal Press announced that Watterson had decided to sell his Sunday strip as an unbreakable half of a newspaper or tabloid page. Many editors and even a few cartoonists, including Bil Keane (The Family Circus) and Bruce Beattie (Snafu), criticized him for what they perceived as arrogance and an unwillingness to abide by the normal practices of the cartoon business. Others, including Bill Amend (Foxtrot), Johnny Hart (BC, Wizard of Id) and Barbara Brandon (Where I'm Coming From) supported him. The American Association of Sunday and Feature Editors even formally requested that Universal reconsider the changes. Watterson's own comment on the matter was that "editors will have to judge for themselves whether or not Calvin and Hobbes deserves the extra space. If they don't think the strip carries its own weight, they don't have to run it." Ultimately only 15 newspapers cancelled the strip in response to the layout changes.
Bill Watterson took two sabbaticals from the daily requirements of producing the strip. The first took place from May 5, 1991, to February 1, 1992, and the second from April 3 through December 31, 1994. These sabbaticals were included in the new contract Watterson negotiated with Universal Features in 1990. The sabbaticals were proposed by the syndicate itself, which, fearing Watterson's complete burnout, endeavored to get another five years of work from its star artist.
Watterson remains only the third cartoonist with sufficient popularity and stature to receive a sabbatical from their syndicate, the first two being Garry Trudeau (Doonesbury) in 1983 and Gary Larson (The Far Side) in 1989. Typically cartoonists are expected to produce sufficient strips to cover any period they may wish to take off. Watterson's lengthy sabbaticals received some mild criticism from his fellow cartoonists, including Greg Evans (Luann); Charles Schulz (Peanuts), one of Watterson's major artistic influences, even called it a "puzzle". Some cartoonists resented the idea that Watterson worked harder than others, while others supported it. At least one newspaper editor noted that the strip was the most popular in the country and stated he "earned it".
Despite the popularity of Calvin and Hobbes, the strip remains notable for the almost complete lack of official product merchandising. Watterson held that comic strips should stand on their own as an art form and although he did not start out completely opposed to merchandising in all forms (or even for all comic strips), he did reject an early syndication deal that involved incorporating a more marketable, licensed character into his strip. In spite of being an unproven cartoonist, and having been flown all the way to New York to discuss the proposal, Watterson reflexively resented the idea of "cartooning by committee" and turned it down.
When Calvin and Hobbes was accepted by Universal Press Syndicate and began to grow in popularity, Watterson found himself at odds with the syndicate, which urged him to begin merchandising the characters and touring the country to promote the first collections of comic strips. Watterson refused, believing that the integrity of the strip and its artist would be undermined by commercialization, which he saw as a major negative influence in the world of cartoon art, and that licensing his character would only violate the spirit of his work. He gave an example of this in discussing his opposition to a Hobbes plush toy: that if the essence of Hobbes' nature in the strip is that it remain unresolved whether he is a real tiger or a stuffed toy, then creating a real stuffed toy would only destroy the magic. However, having signed away control over merchandising in his initial contract with the syndicate, Watterson commenced a lengthy and emotionally draining battle with Universal to gain control over his work. Ultimately Universal did not approve any products against Watterson's wishes, understanding that, unlike other comic strips, it would be nearly impossible to separate the creator from the strip if Watterson chose to walk away.
One estimate places the value of licensing revenue forgone by Watterson at $300–$400 million. Almost no legitimate Calvin and Hobbes merchandise exists. Exceptions produced during the strip's original run include two 16-month calendars (1988–89 and 1989–90), a t-shirt for the Smithsonian Exhibit, Great American Comics: 100 Years of Cartoon Art (1990) and the textbook Teaching with Calvin and Hobbes, which has been described as "perhaps the most difficult piece of official Calvin and Hobbes memorabilia to find." In 2010, Watterson did allow his characters to be included in a series of United States Postal Service stamps honoring five classic American comics. Licensed prints of Calvin and Hobbes were made available and have also been included in various academic works.
The strip's immense popularity has led to the appearance of various counterfeit items such as window decals and T-shirts that often feature crude humor, binge drinking and other themes that are not found in Watterson's work. Images from one strip in which Calvin and Hobbes dance to loud music at night were commonly used for copyright violations. After threat of a lawsuit alleging infringement of copyright and trademark, some sticker makers replaced Calvin with a different boy, while other makers made no changes. Watterson wryly commented, "I clearly miscalculated how popular it would be to show Calvin urinating on a Ford logo," but later added, "long after the strip is forgotten, [they] are my ticket to immortality".
Watterson has expressed admiration for animation as an art form. In a 1989 interview in The Comics Journal he described the appeal of being able to do things with a moving image that cannot be done by a simple drawing: the distortion, the exaggeration and the control over the length of time an event is viewed. However, although the visual possibilities of animation appealed to Watterson, the idea of finding a voice for Calvin made him uncomfortable, as did the idea of working with a team of animators. Ultimately, Calvin and Hobbes was never made into an animated series. Watterson later stated in The Calvin and Hobbes Tenth Anniversary Book that he liked the fact that his strip was a "low-tech, one-man operation," and that he took great pride in the fact that he drew every line and wrote every word on his own. Calls from major Hollywood figures interested in an adaptation of his work, including Jim Henson, George Lucas and Steven Spielberg, were never returned, and in a 2013 interview Watterson stated that he had "zero interest" in an animated adaptation as there was really no upside for him in doing so.
The strip borrows several elements and themes from three major influences: Walt Kelly's Pogo, George Herriman's Krazy Kat and Charles M. Schulz's Peanuts. Schulz and Kelly particularly influenced Watterson's outlook on comics during his formative years.
Notable elements of Watterson's artistic style are his characters' diverse and often exaggerated expressions (particularly those of Calvin), elaborate and bizarre backgrounds for Calvin's flights of imagination, expressions of motion and frequent visual jokes and metaphors. In the later years of the strip, with more panel space available for his use, Watterson experimented more freely with different panel layouts, art styles, stories without dialogue and greater use of white space. He also experimented with his tools, once inking a strip with a stick from his yard in order to achieve a particular look. He also makes a point of not showing certain things explicitly: the "Noodle Incident" and the children's book Hamster Huey and the Gooey Kablooie are left to the reader's imagination, where Watterson was sure they would be "more outrageous" than he could portray.
Watterson's technique started with minimalist sketches drawn with a light pencil (though the larger Sunday strips often required more elaborate work) on a piece of Bristol board, with his brand of choice being Strathmore because he felt it held the drawings better on the page than the cheaper brands (Watterson said he initially used any cheap pad of Bristol board his local supply store had but switched to Strathmore after he found himself growing more and more displeased with the results). He would then use a small sable brush and India ink to fill in the rest of the drawing, saying that he did not want to simply trace over his penciling, which would have made the inking less spontaneous. He lettered dialogue with a Rapidograph fountain pen, and he used a crowquill pen for odds and ends. Mistakes were covered with various forms of correction fluid, including the type used on typewriters. Watterson was careful in his use of color, often spending a great deal of time choosing the right colors to employ for the weekly Sunday strip; his technique was to cut the color tabs the syndicate sent him into individual squares, lay out the colors, paint a watercolor approximation of the strip on tracing paper over the Bristol board, and then mark the strip accordingly before sending it on. When Calvin and Hobbes began there were 64 colors available for the Sunday strips. For the later Sunday strips Watterson had 125 colors as well as the ability to fade the colors into each other.
Calvin, named after the 16th-century theologian John Calvin, is a six-year-old boy with spiky blond hair and a distinctive red-and-black striped shirt, black pants and sneakers. Despite his poor grades in school, Calvin demonstrates his intelligence through a sophisticated vocabulary, philosophical mind and creative/artistic talent. Watterson described Calvin as having "not much of a filter between his brain and his mouth", a "little too intelligent for his age", lacking in restraint and not yet having the experience to "know the things that you shouldn't do." The comic strip largely revolves around Calvin's inner world and his largely antagonistic experiences with those outside of it (fellow students, authority figures and his parents).
From Calvin's point of view, Hobbes is an anthropomorphic tiger much larger than Calvin and full of independent attitudes and ideas. When the scene includes any other human, they see merely a stuffed toy, usually seated at an off-kilter angle and blankly staring into space. The true nature of the character is never resolved, instead as Watterson describes, a 'grown-up' version of reality is juxtaposed against Calvin's, with the reader left to "decide which is truer". Hobbes is based on a cat Watterson owned, a grey tabby named Sprite. Sprite inspired the length of Hobbes' body as well as his personality. Although Hobbes' humor stems from acting like a human, Watterson maintained Sprite's feline attitude.
Hobbes is named after 17th-century philosopher Thomas Hobbes, who held what Watterson describes as "a dim view of human nature." He typically exhibits a greater understanding of consequences than Calvin, but rarely intervenes in Calvin's activities beyond a few oblique warnings. He often likes to sneak up and pounce on Calvin, especially at the front door when Calvin is returning home from school. The friendship between the two characters provides the core dynamic of the strip.
Calvin's mother and father are typical middle-class parents who are relatively down to earth and whose sensible attitudes serve as a foil for Calvin's outlandish behavior. Calvin's father is a patent attorney (like Watterson's own father), while his mother is a stay-at-home mom. Both parents are unnamed throughout the entire strip, as Watterson insists, "As far as the strip is concerned, they are important only as Calvin's mom and dad."
Watterson recounts that some fans are angered by the sometimes sardonic way that Calvin's parents respond to him. In response, Watterson defends what Calvin's parents do, remarking that in the case of parenting a kid like Calvin, "I think they do a better job than I would." Calvin's father is overly concerned with "character building" activities in a number of strips, either in the things he makes Calvin do or in the austere eccentricities of his own lifestyle.
Susie Derkins, who first appears early in the strip and is the only important character with both a first and last name, lives on Calvin's street and is one of his classmates. Her last name apparently derives from the pet beagle owned by Watterson's wife's family.
Susie is studious and polite (though she can be aggressive if sufficiently provoked), and she likes to play house or host tea parties with her stuffed animals. She also plays imaginary games with Calvin in which she acts as a high-powered lawyer or politician and wants Calvin to pretend to be her househusband. Though both of them are typically loath to admit it, Calvin and Susie exhibit many common traits and inclinations. For example, the reader occasionally sees Susie with a stuffed rabbit named "Mr. Bun." Much like Calvin, Susie has a mischievous (and sometimes aggressive) streak as well, which the reader witnesses whenever she subverts Calvin's attempts to cheat on school tests by feeding him incorrect answers, or whenever she fights back after Calvin attacks her with snowballs or water balloons.
Hobbes often openly expresses romantic feelings for Susie, to Calvin's disgust. In contrast, Calvin started a club (of which he and Hobbes are the only members) that he calls G.R.O.S.S. (Get Rid Of Slimy GirlS) and, while holding "meetings" in Calvin's tree house or in the "box of secrecy" in Calvin's room, they usually come up with some plot against Susie. In one instance, Calvin steals one of Susie's dolls and holds it for ransom, only to have Susie retaliate by nabbing Hobbes. Watterson admits that Calvin and Susie have a nascent crush on each other and that Susie is a reference to the type of woman whom Watterson himself found attractive and eventually married.
Susie features as a main character in two of the five storylines that appear in Teaching with Calvin and Hobbes.
Calvin also interacts with a handful of secondary characters. Several of these, including Rosalyn, his babysitter; Miss Wormwood, his teacher; and Moe, the school bully, recur regularly through the duration of the strip.
Watterson used the strip to poke fun at the art world, principally through Calvin's unconventional creations of snowmen but also through other expressions of childhood art. When Miss Wormwood complains that he is wasting class time drawing impossible things (a Stegosaurus in a rocket ship, for example), Calvin proclaims himself "on the cutting edge of the avant-garde." He begins exploring the medium of snow when a warm day melts his snowman. His next sculpture "speaks to the horror of our own mortality, inviting the viewer to contemplate the evanescence of life." In later strips, Calvin's creative instincts diversify to include sidewalk drawings (or, as he terms them, examples of "suburban postmodernism").
Watterson also lampooned the academic world. In one example, Calvin carefully crafts an "artist's statement", claiming that such essays convey more messages than artworks themselves ever do (Hobbes blandly notes, "You misspelled Weltanschauung"). He indulges in what Watterson calls "pop psychobabble" to justify his destructive rampages and shift blame to his parents, citing "toxic codependency." In one instance, he pens a book report based on the theory that the purpose of academic writing is to "inflate weak ideas, obscure poor reasoning and inhibit clarity," entitled The Dynamics of Interbeing and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes. Displaying his creation to Hobbes, he remarks, "Academia, here I come!" Watterson explains that he adapted this jargon (and similar examples from several other strips) from an actual book of art criticism.
Overall, Watterson's satirical essays serve to attack both sides, criticizing both the commercial mainstream and the artists who are supposed to be "outside" it. The strip on Sunday, June 21, 1992, criticized the naming of the Big Bang theory as not evocative of the wonders behind it and coined the term "Horrendous Space Kablooie", an alternative that achieved some informal popularity among scientists and was often shortened to "the HSK." The term has also been referred to in newspapers, books and university courses.
Calvin imagines himself as many great creatures and other people, including dinosaurs, elephants, jungle-farers and superheroes. Three of his alter egos are well-defined and recurrent: Spaceman Spiff, an intrepid interstellar explorer; Tracer Bullet, a hard-boiled private eye; and Stupendous Man, a caped superhero.
Calvin also has several adventures involving corrugated cardboard boxes, which he adapts for many imaginative and elaborate uses. In one strip, when Calvin shows off his Transmogrifier, a device that transforms its user into any desired creature or item, Hobbes remarks, "It's amazing what they do with corrugated cardboard these days." Calvin is able to change the function of the boxes by rewriting the label and flipping the box onto another side. In this way, a box can be used not only for its conventional purposes (a storage container for water balloons, for example), but also as a flying time machine, a duplicator, a transmogrifier or, with the attachment of a few wires and a colander, a "Cerebral Enhance-o-tron."
In the real world, Calvin's antics with his box have had varying effects. When he transmogrified into a tiger, he still appeared as a regular human child to his parents. However, in a story where he made several duplicates of himself, his parents are seen interacting with what does seem like multiple Calvins, including in a strip where two of him are seen in the same panel as his father. It is ultimately unknown what his parents do or do not see, as Calvin tries to hide most of his creations (or conceal their effects) so as not to traumatize them.
In addition, Calvin uses a cardboard box as a sidewalk kiosk to sell things. Often, Calvin offers merchandise no one would want, such as "suicide drink", "a swift kick in the butt" for one dollar or a "frank appraisal of your looks" for fifty cents. In one strip, he sells "happiness" for ten cents, hitting the customer in the face with a water balloon and explaining that he meant his own happiness. In another strip, he sold "insurance", firing a slingshot at those who refused to buy it. In some strips, he tried to sell "great ideas" and, in one earlier strip, he attempted to sell the family car to obtain money for a grenade launcher. In yet another strip, he sells "life" for five cents, where the customer receives nothing in return, which, in Calvin's opinion, is life.
The box has also functioned as an alternate secret meeting place for G.R.O.S.S., as the "Box of Secrecy".
Other kids' games are all such a bore! They've gotta have rules and they gotta keep score! Calvinball is better by far! It's never the same! It's always bizarre! You don't need a team or a referee! You know that it's great, 'cause it's named after me!
—Excerpt from the Calvinball theme song
Calvinball is an improvisational sport/game introduced in a 1990 storyline that involved Calvin's negative experience of joining the school baseball team. Calvinball is a nomic or self-modifying game, a contest of wits, skill and creativity rather than stamina or athletic skill. The game is portrayed as a rebellion against conventional team sports and became a staple of the final five years of the comic. The only consistent rules of the game are that Calvinball may never be played with the same rules twice and that each participant must wear a mask.
When asked how to play, Watterson stated: "It's pretty simple: you make up the rules as you go." In most appearances of the game, a comical array of conventional and non-conventional sporting equipment is involved, including a croquet set, a badminton set, assorted flags, bags, signs, a hobby horse, water buckets and balloons, with humorous allusions to unseen elements such as "time-fracture wickets". Scoring is portrayed as arbitrary and nonsensical ("Q to 12" and "oogy to boogy") and the lack of fixed rules leads to lengthy arguments between the participants as to who scored, where the boundaries are, and when the game is finished. Usually, the contest results in Calvin being outsmarted by Hobbes. The game has been described in one academic work not as a new game based on fragments of an older one, but as the "constant connecting and disconnecting of parts, the constant evasion of rules or guidelines based on collective creativity."
Calvin often creates gruesome, darkly humorous scenes with his snowmen and other snow sculptures. He uses the snowman for social commentary, revenge or pure enjoyment. Examples include Snowman Calvin being yelled at by Snowman Dad to shovel the snow; one snowman eating snow cones scooped out of a second snowman, who is lying on the ground with an ice-cream scoop in his back; a "snowman house of horror"; and snowmen representing people he hates. "The ones I really hate are small, so they'll melt faster," he says. There was even an occasion on which Calvin accidentally brought a snowman to life and it made itself and a small army into "deranged mutant killer monster snow goons."
Calvin's snow art is often used as a commentary on art in general. For example, Calvin has complained more than once about the lack of originality in other people's snow art and compared it with his own grotesque snow sculptures. In one of these instances, Calvin and Hobbes claim to be the sole guardians of high culture; in another, Hobbes admires Calvin's willingness to put artistic integrity above marketability, causing Calvin to reconsider and make an ordinary snowman.
Calvin and Hobbes frequently ride downhill in a wagon or sled (depending on the season), as a device to add some physical comedy to the strip and because, according to Watterson, "it's a lot more interesting ... than talking heads." While the ride is sometimes the focus of the strip, it also frequently serves as a counterpoint or visual metaphor while Calvin ponders the meaning of life, death, God, philosophy or a variety of other weighty subjects. Many of their rides end in spectacular crashes which leave them battered, beaten up and broken, a fact which convinces Hobbes to sometimes hop off before a ride even begins. In the final strip, Calvin and Hobbes depart on their sled to go exploring. This theme is similar (perhaps even an homage) to scenes in Walt Kelly's Pogo. Calvin and Hobbes' sled has been described as the most famous sled in American arts since Citizen Kane.
G.R.O.S.S. (which is a backronym for Get Rid Of Slimy GirlS, "otherwise it doesn't spell anything") is a club in which Calvin and Hobbes are the only members. The club was founded in the garage of their house, but to clear space for its activities, Calvin and (purportedly) Hobbes push Calvin's parents' car, causing it to roll into a ditch (but not suffer damage); the incident prompts the duo to change the club's location to Calvin's treehouse. They hold meetings that involve finding ways to annoy and discomfort Susie Derkins, a girl and enemy of their club. Notable actions include planting a fake secret tape near her in an attempt to draw her into a trap, trapping her in a closet at their house and creating elaborate water balloon traps. Calvin gave himself and Hobbes important positions in the club, Calvin being "Dictator-for-Life" and Hobbes being "President-and-First-Tiger". They go into Calvin's treehouse for their club meetings and often get into fights during them. The password to get into the treehouse is intentionally long and difficult, which has on at least one occasion ruined Calvin's plans. As Hobbes is able to climb the tree without the rope, he is usually the one who comes up with the password, which often involves heaping praise upon tigers. An example of this can be seen in the strip where Calvin, rushing to get into the treehouse to throw things at a passing Susie Derkins, insults Hobbes, who is in the treehouse and thus has to let down the rope. Hobbes forces Calvin to recite the password as penance for the insult; Susie arrives in time to hear Calvin reciting, which makes him stumble, and he is only at "Verse Seven: Tigers are perfect!/The E-pit-o-me/of good looks and grace/and quiet..uh..um..dignity". The opportunity to pelt Susie with something having passed, Calvin threatens to turn Hobbes into a rug.
There are 18 Calvin and Hobbes books, published from 1987 to 1997. These include 11 collections, which form a complete archive of the newspaper strips, except for a single daily strip from November 28, 1985. (The collections do contain a strip for this date, but it is not the same strip that appeared in some newspapers.) Treasuries usually combine the two preceding collections with bonus material and include color reprints of Sunday comics.
Watterson included some new material in the treasuries. In The Essential Calvin and Hobbes, which includes cartoons from the collections Calvin and Hobbes and Something Under the Bed Is Drooling, the back cover features a scene of a giant Calvin rampaging through a town. The scene is based on Watterson's home town of Chagrin Falls, Ohio, and Calvin is holding the Chagrin Falls Popcorn Shop, an iconic candy and ice cream shop overlooking the town's namesake falls. Several of the treasuries incorporate additional poetry; The Indispensable Calvin and Hobbes book features a set of poems, ranging from just a few lines to an entire page, that cover topics such as Calvin's mother's "hindsight" and exploring the woods. In The Essential Calvin and Hobbes, Watterson presents a long poem explaining a night's battle against a monster from Calvin's perspective. The Authoritative Calvin and Hobbes includes a story based on Calvin's use of the Transmogrifier to finish his reading homework.
A complete collection of Calvin and Hobbes strips, in three hardcover volumes totaling 1440 pages, was released on October 4, 2005, by Andrews McMeel Publishing. It includes color prints of the art used on paperback covers, the treasuries' extra illustrated stories and poems and a new introduction by Bill Watterson in which he talks about his inspirations and his story leading up to the publication of the strip. The alternate 1985 strip is still omitted, and three other strips (January 7 and November 24, 1987, and November 25, 1988) have altered dialogue. A four-volume paperback version was released November 13, 2012.
To celebrate the release (which coincided with the strip's 20th anniversary and the tenth anniversary of its absence from newspapers), Bill Watterson answered 15 questions submitted by readers.
Early books were printed in smaller format in black and white. These were later reproduced in color, two collections at a time, in the "Treasuries" (Essential, Authoritative and Indispensable), except for the contents of Attack of the Deranged Mutant Killer Monster Snow Goons. Those Sunday strips were not reprinted in color until the Complete collection was finally published in 2005.
Watterson claims he named the books the "Essential, Authoritative and Indispensable" because, as he says in The Calvin and Hobbes Tenth Anniversary Book, the books are "obviously none of these things."
An officially licensed children's textbook entitled Teaching with Calvin and Hobbes was published in a single print run in Fargo, North Dakota, in 1993. The book is composed of Calvin and Hobbes strips that form story arcs, including "The Binoculars" and "The Bug Collection", followed by lessons based on the stories.
What do you think the principal meant when he said they had "quite a file" on Calvin? -Teaching with Calvin and Hobbes
The book is rare and highly sought. It has been called the "Holy Grail" for Calvin and Hobbes collectors.
Reviewing Calvin and Hobbes in 1990, Entertainment Weekly's Ken Tucker gave the strip an A+ rating, writing "Watterson summons up the pain and confusion of childhood as much as he does its innocence and fun."
In 1993, paleontologist and paleoartist Gregory S. Paul praised Bill Watterson for the scientific accuracy of the dinosaurs appearing in Calvin and Hobbes.
In her 1994 book When Toys Come Alive, Lois Rostow Kuznets theorizes that Hobbes serves both as a figure of Calvin's childish fantasy life and as an outlet for the expression of libidinous desires more associated with adults. Kuznets also analyzes Calvin's other fantasies, suggesting that they are a second tier of fantasies utilized in places like school where transitional objects such as Hobbes would not be socially acceptable.
Political scientist James Q. Wilson, in a paean to Calvin and Hobbes upon Watterson's decision to end the strip in 1995, characterized it as "our only popular explication of the moral philosophy of Aristotle."
Alisa White Coleman analyzed the strip's underlying messages concerning ethics and values in "'Calvin and Hobbes': A Critique of Society's Values," published in the Journal of Mass Media Ethics in 2000.
A collection of original Sunday strips was exhibited at Ohio State University's Billy Ireland Cartoon Library & Museum in 2001. Watterson himself selected the strips and provided his own commentary for the exhibition catalog, which was later published by Andrews McMeel as Calvin and Hobbes: Sunday Pages 1985–1995.
Since the discontinuation of Calvin and Hobbes, individual strips have been licensed for reprint in schoolbooks, including the Christian homeschooling book The Fallacy Detective in 2002, and the university-level philosophy reader Open Questions: Readings for Critical Thinking and Writing in 2005; in the latter, the ethical views of Watterson and his characters Calvin and Hobbes are discussed in relation to the views of professional philosophers. Since 2009, Twitter users have indicated that Calvin and Hobbes strips have appeared in textbooks for subjects in the sciences, social sciences, mathematics, philosophy and foreign language.
In a 2009 evaluation of the entire body of Calvin and Hobbes strips using grounded theory methodology, Christijan D. Draper found that: "Overall, Calvin and Hobbes suggests that meaningful time use is a key attribute of a life well lived," and that "the strip suggests one way to assess the meaning associated with time use is through preemptive retrospection by which a person looks at current experiences through the lens of an anticipated future..."
Jamey Heit's Imagination and Meaning in Calvin and Hobbes, a critical and academic analysis of the strip, was published in 2012.
Calvin and Hobbes strips were again exhibited at the Billy Ireland Cartoon Library & Museum at Ohio State University in 2014, in an exhibition entitled Exploring Calvin and Hobbes. An exhibition catalog by the same title, which also contained an interview with Watterson conducted by Jenny Robb, the curator of the museum, was published by Andrews McMeel in 2015.
"Since its concluding panel in 1995, Calvin and Hobbes has remained one of the most influential and well-loved comic strips of our time."
–The Atlantic, "How Calvin and Hobbes Inspired a Generation," October 25, 2013
Years after its original newspaper run, Calvin and Hobbes has continued to exert influence in entertainment, art, and fandom.
In television, Calvin and Hobbes have been satirically depicted in stop motion animation in the 2006 Robot Chicken episode "Lust for Puppets," and in traditional animation in the 2009 Family Guy episode "Not All Dogs Go to Heaven." In the 2013 Community episode "Paranormal Parentage," the characters Abed Nadir (Danny Pudi) and Troy Barnes (Donald Glover) dress as Calvin and Hobbes, respectively, for Halloween.
British artists, merchandisers, booksellers, and philosophers were interviewed for a 2009 BBC Radio 4 half-hour programme about the abiding popularity of the comic strip, narrated by Phill Jupitus.
The first book-length study of the strip, Looking for Calvin and Hobbes: The Unconventional Story of Bill Watterson and His Revolutionary Comic Strip by Nevin Martell, was first published in 2009; an expanded edition was published in 2010. The book chronicles Martell's quest to tell the story of Calvin and Hobbes and Watterson through research and interviews with people connected to the cartoonist and his work. The director of the later documentary Dear Mr. Watterson referenced Looking for Calvin and Hobbes in discussing the production of the movie, and Martell appears in the film.
The American documentary film Dear Mr. Watterson, released in 2013, explores the impact and legacy of Calvin and Hobbes through interviews with authors, curators, historians, and numerous professional cartoonists.
The enduring significance of Calvin and Hobbes to international cartooning was recognized by the jury of the Angoulême International Comics Festival in 2014 by the awarding of its Grand Prix to Watterson, only the fourth American to ever receive the honor (after Will Eisner, Robert Crumb, and Art Spiegelman).
From 2016 to 2021, author Berkeley Breathed included Calvin and Hobbes in various Bloom County cartoons. He launched the first cartoon on April Fool's Day 2016 and jokingly issued a statement suggesting that he had acquired Calvin and Hobbes from Bill Watterson, who was "out of the Arizona facility, continent and looking forward to some well-earned financial security." While bearing Watterson's signature and drawing style as well as featuring characters from both Calvin and Hobbes and Breathed's Bloom County, it is unclear whether Watterson had any input into these cartoons or not.
Calvin and Hobbes remains the most viewed comic on GoComics, which cycles through old strips with an approximately 30-year delay.
A number of artists and cartoonists have created unofficial works portraying Calvin as a teenager/adult; the concept has also inspired writers.
In 2011, a comic strip appeared by cartoonists Dan and Tom Heyerman called Hobbes and Bacon. The strip depicts Calvin as an adult, married to Susie Derkins with a young daughter named after philosopher Francis Bacon, to whom Calvin gives Hobbes. Though consisting of only four strips originally, Hobbes and Bacon received considerable attention when it appeared and was continued by other cartoonists and artists.
A novel titled Calvin by CLA Young Adult Book Award–winning author Martine Leavitt was published in 2015. The story tells of seventeen-year-old Calvin—who was born on the day that Calvin and Hobbes ended, and who has now been diagnosed with schizophrenia—and his hallucination of Hobbes, his childhood stuffed tiger. With his friend Susie, who might also be a hallucination, Calvin sets off to find Bill Watterson in the hope that the cartoonist can provide aid for Calvin's condition.
The titular character of the comic strip Frazz has been noted for his similar appearance and personality to a grown-up Calvin. Creator Jef Mallett has stated that although Watterson is an inspiration to him, the similarities are unintentional. | [
{
"paragraph_id": 0,
"text": "Calvin and Hobbes is a daily American comic strip created by cartoonist Bill Watterson that was syndicated from November 18, 1985, to December 31, 1995. Commonly cited as \"the last great newspaper comic\", Calvin and Hobbes has enjoyed broad and enduring popularity, influence, and academic and philosophical interest.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Calvin and Hobbes follows the humorous antics of the title characters: Calvin, a precocious, mischievous, and adventurous six-year-old boy; and Hobbes, his sardonic stuffed tiger. Set in the contemporary suburban United States of the 1980s and 1990s, the strip depicts Calvin's frequent flights of fancy and friendship with Hobbes. It also examines Calvin's relationships with his long-suffering parents and with his classmates, especially his neighbor Susie Derkins. Hobbes's dual nature is a defining motif for the strip: to Calvin, Hobbes is a living anthropomorphic tiger, while all the other characters seem to see Hobbes as an inanimate stuffed toy—though Watterson has not clarified exactly how Hobbes is perceived by others. Though the series does not frequently mention specific political figures or contemporary events, it does explore broad issues like environmentalism, public education, and philosophical quandaries.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At the height of its popularity, Calvin and Hobbes was featured in over 2,400 newspapers worldwide. In 2010, reruns of the strip appeared in more than 50 countries, and nearly 45 million copies of the Calvin and Hobbes books had been sold worldwide.",
"title": ""
},
{
"paragraph_id": 3,
"text": "\"I thought it was perhaps too 'adult,' too literate. When my then-8-year-old son remarked, 'This is the Doonesbury for kids!' I suspected we had something unusual on our hands.\"",
"title": "History"
},
{
"paragraph_id": 4,
"text": "—Lee Salem, Watterson's editor at Universal, recalling his reaction after seeing Watterson's first submission",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Calvin and Hobbes was conceived when Bill Watterson, while working in an advertising job he detested, began devoting his spare time to developing a newspaper comic for potential syndication. He explored various strip ideas but all were rejected by the syndicates. United Feature Syndicate finally responded positively to one strip called The Doghouse, which featured a side character (the main character's little brother) who had a stuffed tiger. United identified these characters as the strongest and encouraged Watterson to develop them as the center of their own strip. Though United Feature ultimately rejected the new strip as lacking in marketing potential, Universal Press Syndicate took it up.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first Calvin and Hobbes strip was published on November 18, 1985 in 35 newspapers. The strip quickly became popular. Within a year of syndication, the strip was published in roughly 250 newspapers and proved to have international appeal with translation and wide circulation outside the United States.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Although Calvin and Hobbes underwent continual artistic development and creative innovation over the period of syndication, the earliest strips demonstrated a remarkable consistency with the latest. Watterson introduced all the major characters within the first three weeks and made no changes to the central cast over the strip's 10-year history.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "By April 5, 1987, Watterson was featured in an article in the Los Angeles Times. Calvin and Hobbes earned Watterson the Reuben Award from the National Cartoonists Society in the Outstanding Cartoonist of the Year category, first in 1986 and again in 1988. He was nominated another time in 1992. The Society awarded him the Humor Comic Strip Award for 1988. Calvin and Hobbes has also won several more awards.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "As his creation grew in popularity, there was strong interest from the syndicate to merchandise the characters and expand into other forms of media. Watterson's contract with the syndicate allowed the characters to be licensed without the creator's consent, as was standard at the time. Nevertheless, Watterson had leverage by threatening to simply walk away from the comic strip.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "This dynamic played out in a long and emotionally draining battle between Watterson and his syndicate editors. By 1991, Watterson had achieved his goal of securing a new contract that granted him legal control over his creation and all future licensing arrangements.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Having achieved his objective of creative control, Watterson's desire for privacy subsequently reasserted itself and he ceased all media interviews, relocated to New Mexico, and largely disappeared from public engagements, refusing to attend the ceremonies of any of the cartooning awards he won. The pressures of the battle over merchandising led to Watterson taking an extended break from May 5, 1991, to February 1, 1992, a move that was virtually unprecedented in the world of syndicated cartoonists.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "During Watterson's first sabbatical from the strip, Universal Press Syndicate continued to charge newspapers full price to re-run old Calvin and Hobbes strips. Few editors approved of the move, but the strip was so popular that they had no choice but to continue to run it for fear that competing newspapers might pick it up and draw its fans away. Watterson returned to the strip in 1992 with plans to produce his Sunday strip as an unbreakable half of a newspaper or tabloid page. This made him only the second cartoonist since Garry Trudeau to have sufficient popularity to demand more space and control over the presentation of his work.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Watterson took a second sabbatical from April 3 through December 31, 1994. His return came with an announcement that Calvin and Hobbes would be concluding at the end of 1995. Stating his belief that he had achieved everything that he wanted to within the medium, he announced his intention to work on future projects at a slower pace with fewer artistic compromises.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The final strip ran on Sunday, December 31, 1995, depicting Calvin and Hobbes sledding down a snowy hill after a fresh snowfall with Calvin exclaiming “Let's go exploring!\".",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Speaking to NPR in 2005, animation critic Charles Solomon opined that the final strip \"left behind a hole in the comics page that no strip has been able to fill.\"",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Syndicated comics were typically published six times a week in black and white, with a Sunday supplement version in a larger, full color format. This larger format version of the strip was constrained by mandatory layout requirements that made it possible for newspaper editors to format the strip for different page sizes and layouts.",
"title": "Sunday formatting"
},
{
"paragraph_id": 17,
"text": "Watterson grew increasingly frustrated by the shrinking of the available space for comics in the newspapers and the mandatory panel divisions that restricted his ability to produce better artwork and more creative storytelling. He lamented that without space for anything more than simple dialogue or sparse artwork, comics as an art form were becoming dilute, bland, and unoriginal.",
"title": "Sunday formatting"
},
{
"paragraph_id": 18,
"text": "Watterson longed for the artistic freedom allotted to classic strips such as Little Nemo and Krazy Kat, and in 1989 he gave a sample of what could be accomplished with such liberty in the opening pages of the Sunday strip compilation, The Calvin and Hobbes Lazy Sunday Book—an 8-page previously unpublished Calvin story fully illustrated in watercolor. The same book contained an afterword from the artist himself, reflecting on a time when comic strips were allocated a whole page of the newspaper and every comic was like a \"color poster\".",
"title": "Sunday formatting"
},
{
"paragraph_id": 19,
"text": "Within two years, Watterson was ultimately successful in negotiating a deal that provided him more space and creative freedom. Following his 1991 sabbatical, Universal Press announced that Watterson had decided to sell his Sunday strip as an unbreakable half of a newspaper or tabloid page. Many editors and even a few cartoonists including Bil Keane (The Family Circus) and Bruce Beattie (Snafu) criticized him for what they perceived as arrogance and an unwillingness to abide by the normal practices of the cartoon business. Others, including Bill Amend (Foxtrot), Johnny Hart (BC, Wizard of Id) and Barbara Brandon (Where I'm Coming From) supported him. The American Association of Sunday and Feature Editors even formally requested that Universal reconsider the changes. Watterson's own comments on the matter was that \"editors will have to judge for themselves whether or not Calvin and Hobbes deserves the extra space. If they don't think the strip carries its own weight, they don't have to run it.\" Ultimately only 15 newspapers cancelled the strip in response to the layout changes.",
"title": "Sunday formatting"
},
{
"paragraph_id": 20,
"text": "Bill Watterson took two sabbaticals from the daily requirements of producing the strip. The first took place from May 5, 1991, to February 1, 1992, and the second from April 3 through December 31, 1994. These sabbaticals were included in the new contract Watterson managed to negotiate with Universal Features in 1990. The sabbaticals were proposed by the syndicate themselves, who, fearing Watterson's complete burnout, endeavored to get another five years of work from their star artist.",
"title": "Sabbaticals"
},
{
"paragraph_id": 21,
"text": "Watterson remains only the third cartoonist with sufficient popularity and stature to receive a sabbatical from their syndicate, the first two being Garry Trudeau (Doonesbury) in 1983 and Gary Larson (The Far Side) in 1989. Typically cartoonists are expected to produce sufficient strips to cover any period they may wish to take off. Watterson's lengthy sabbaticals received some mild criticism from his fellow cartoonists including Greg Evans (Luann); and Charles Schulz (Peanuts), one of Watterson's major artistic influences, even called it a \"puzzle\". Some cartoonists resented the idea that Watterson worked harder than others, while others supported it. At least one newspaper editor noted that the strip was the most popular in the country and stated he \"earned it\".",
"title": "Sabbaticals"
},
{
"paragraph_id": 22,
"text": "Despite the popularity of Calvin and Hobbes, the strip remains notable for the almost complete lack of official product merchandising. Watterson held that comic strips should stand on their own as an art form and although he did not start out completely opposed to merchandising in all forms (or even for all comic strips), he did reject an early syndication deal that involved incorporating a more marketable, licensed character into his strip. In spite of being an unproven cartoonist, and having been flown all the way to New York to discuss the proposal, Watterson reflexively resented the idea of \"cartooning by committee\" and turned it down.",
"title": "Merchandising"
},
{
"paragraph_id": 23,
"text": "When Calvin and Hobbes was accepted by Universal Syndicate, and began to grow in popularity, Watterson found himself at odds with the syndicate, which urged him to begin merchandising the characters and touring the country to promote the first collections of comic strips. Watterson refused, believing that the integrity of the strip and its artist would be undermined by commercialization, which he saw as a major negative influence in the world of cartoon art, and that licensing his character would only violate the spirit of his work. He gave an example of this in discussing his opposition to a Hobbes plush toy: that if the essence of Hobbes' nature in the strip is that it remain unresolved whether he is a real tiger or a stuffed toy, then creating a real stuffed toy would only destroy the magic. However, having initially signed away control over merchandising in his initial contract with the syndicate, Watterson commenced a lengthy and emotionally draining battle with Universal to gain control over his work. Ultimately Universal did not approve any products against Watterson's wishes, understanding that, unlike other comic strips, it would be nearly impossible to separate the creator from the strip if Watterson chose to walk away.",
"title": "Merchandising"
},
{
"paragraph_id": 24,
"text": "One estimate places the value of licensing revenue forgone by Watterson at $300–$400 million. Almost no legitimate Calvin and Hobbes merchandise exists. Exceptions produced during the strip's original run include two 16-month calendars (1988–89 and 1989–90), a t-shirt for the Smithsonian Exhibit, Great American Comics: 100 Years of Cartoon Art (1990) and the textbook Teaching with Calvin and Hobbes, which has been described as \"perhaps the most difficult piece of official Calvin and Hobbes memorabilia to find.\" In 2010, Watterson did allow his characters to be included in a series of United States Postal Service stamps honoring five classic American comics. Licensed prints of Calvin and Hobbes were made available and have also been included in various academic works.",
"title": "Merchandising"
},
{
"paragraph_id": 25,
"text": "The strip's immense popularity has led to the appearance of various counterfeit items such as window decals and T-shirts that often feature crude humor, binge drinking and other themes that are not found in Watterson's work. Images from one strip in which Calvin and Hobbes dance to loud music at night were commonly used for copyright violations. After threat of a lawsuit alleging infringement of copyright and trademark, some sticker makers replaced Calvin with a different boy, while other makers made no changes. Watterson wryly commented, \"I clearly miscalculated how popular it would be to show Calvin urinating on a Ford logo,\" but later added, \"long after the strip is forgotten, [they] are my ticket to immortality\".",
"title": "Merchandising"
},
{
"paragraph_id": 26,
"text": "Watterson has expressed admiration for animation as an artform. In a 1989 interview in The Comics Journal he described the appeal of being able to do things with a moving image that cannot be done by a simple drawing: the distortion, the exaggeration and the control over the length of time an event is viewed. However, although the visual possibilities of animation appealed to Watterson, the idea of finding a voice for Calvin made him uncomfortable, as did the idea of working with a team of animators. Ultimately, Calvin and Hobbes was never made into an animated series. Watterson later stated in The Calvin and Hobbes Tenth Anniversary Book that he liked the fact that his strip was a \"low-tech, one-man operation,\" and that he took great pride in the fact that he drew every line and wrote every word on his own. Calls from major Hollywood figures interested in an adaptation of his work, including Jim Henson, George Lucas and Steven Spielberg, were never returned and in a 2013 interview Watterson stated that he had \"zero interest\" in an animated adaptation as there was really no upside for him in doing so.",
"title": "Merchandising"
},
{
"paragraph_id": 27,
"text": "The strip borrows several elements and themes from three major influences: Walt Kelly's Pogo, George Herriman's Krazy Kat and Charles M. Schulz's Peanuts. Schulz and Kelly particularly influenced Watterson's outlook on comics during his formative years.",
"title": "Style and influences"
},
{
"paragraph_id": 28,
"text": "Notable elements of Watterson's artistic style are his characters' diverse and often exaggerated expressions (particularly those of Calvin), elaborate and bizarre backgrounds for Calvin's flights of imagination, expressions of motion and frequent visual jokes and metaphors. In the later years of the strip, with more panel space available for his use, Watterson experimented more freely with different panel layouts, art styles, stories without dialogue and greater use of white space. He also experimented with his tools, once inking a strip with a stick from his yard in order to achieve a particular look. He also makes a point of not showing certain things explicitly: the \"Noodle Incident\" and the children's book Hamster Huey and the Gooey Kablooie are left to the reader's imagination, where Watterson was sure they would be \"more outrageous\" than he could portray.",
"title": "Style and influences"
},
{
"paragraph_id": 29,
"text": "Watterson's technique started with minimalist pencil sketches drawn with a light pencil (though the larger Sunday strips often required more elaborate work) on a piece of Bristol board, with his brand of choice being Strathmore because he felt it held the drawings better on the page as opposed to the cheaper brands (Watterson said he initially used any cheap pad of Bristol board his local supply store had but switched to Strathmore after he found himself growing more and more displeased with the results). He would then use a small sable brush and India ink to fill in the rest of the drawing, saying that he did not want to simply trace over his penciling and thus make the inking more spontaneous. He lettered dialogue with a Rapidograph fountain pen, and he used a crowquill pen for odds and ends. Mistakes were covered with various forms of correction fluid, including the type used on typewriters. Watterson was careful in his use of color, often spending a great deal of time in choosing the right colors to employ for the weekly Sunday strip; his technique was to cut the color tabs the syndicate sent him into individual squares, lay out the colors, and then paint a watercolor approximation of the strip on tracing paper over the Bristol board and then mark the strip accordingly before sending it on. When Calvin and Hobbes began there were 64 colors available for the Sunday strips. For the later Sunday strips Watterson had 125 colors as well as the ability to fade the colors into each other.",
"title": "Production and technique"
},
{
"paragraph_id": 30,
"text": "Calvin, named after the 16th-century theologian John Calvin, is a six-year-old boy with spiky blond hair and a distinctive red-and-black striped shirt, black pants and sneakers. Despite his poor grades in school, Calvin demonstrates his intelligence through a sophisticated vocabulary, philosophical mind and creative/artistic talent. Watterson described Calvin as having \"not much of a filter between his brain and his mouth\", a \"little too intelligent for his age\", lacking in restraint and not yet having the experience to \"know the things that you shouldn't do.\" The comic strip largely revolves around Calvin's inner world and his largely antagonistic experiences with those outside of it (fellow students, authority figures and his parents).",
"title": "Main characters"
},
{
"paragraph_id": 31,
"text": "From Calvin's point of view, Hobbes is an anthropomorphic tiger much larger than Calvin and full of independent attitudes and ideas. When the scene includes any other human, they see merely a stuffed toy, usually seated at an off-kilter angle and blankly staring into space. The true nature of the character is never resolved, instead as Watterson describes, a 'grown-up' version of reality is juxtaposed against Calvin's, with the reader left to \"decide which is truer\". Hobbes is based on a cat Watterson owned, a grey tabby named Sprite. Sprite inspired the length of Hobbes' body as well as his personality. Although Hobbes' humor stems from acting like a human, Watterson maintained Sprite's feline attitude.",
"title": "Main characters"
},
{
"paragraph_id": 32,
"text": "Hobbes is named after 17th-century philosopher Thomas Hobbes, who held what Watterson describes as \"a dim view of human nature.\" He typically exhibits a greater understanding of consequences than Calvin, but rarely intervenes in Calvin's activities beyond a few oblique warnings. He often likes to sneak up and pounce on Calvin, especially at the front door when Calvin is returning home from school. The friendship between the two characters provides the core dynamic of the strip.",
"title": "Main characters"
},
{
"paragraph_id": 33,
"text": "Calvin's mother and father are typical middle-class parents who are relatively down to earth and whose sensible attitudes serve as a foil for Calvin's outlandish behavior. Calvin's father is a patent attorney (like Watterson's own father), while his mother is a stay-at-home mom. Both parents are unnamed throughout the entire strip, as Watterson insists, \"As far as the strip is concerned, they are important only as Calvin's mom and dad.\"",
"title": "Main characters"
},
{
"paragraph_id": 34,
"text": "Watterson recounts that some fans are angered by the sometimes sardonic way that Calvin's parents respond to him. In response, Watterson defends what Calvin's parents do, remarking that in the case of parenting a kid like Calvin, \"I think they do a better job than I would.\" Calvin's father is overly concerned with \"character building\" activities in a number of strips, either in the things he makes Calvin do or in the austere eccentricities of his own lifestyle.",
"title": "Main characters"
},
{
"paragraph_id": 35,
"text": "Susie Derkins, who first appears early in the strip and is the only important character with both a first and last name, lives on Calvin's street and is one of his classmates. Her last name apparently derives from the pet beagle owned by Watterson's wife's family.",
"title": "Main characters"
},
{
"paragraph_id": 36,
"text": "Susie is studious and polite (though she can be aggressive if sufficiently provoked), and she likes to play house or host tea parties with her stuffed animals. She also plays imaginary games with Calvin in which she acts as a high-powered lawyer or politician and wants Calvin to pretend to be her househusband. Though both of them are typically loath to admit it, Calvin and Susie exhibit many common traits and inclinations. For example, the reader occasionally sees Susie with a stuffed rabbit named \"Mr. Bun.\" Much like Calvin, Susie has a mischievous (and sometimes aggressive) streak as well, which the reader witnesses whenever she subverts Calvin's attempts to cheat on school tests by feeding him incorrect answers, or whenever she fights back after Calvin attacks her with snowballs or water balloons.",
"title": "Main characters"
},
{
"paragraph_id": 37,
"text": "Hobbes often openly expresses romantic feelings for Susie, to Calvin's disgust. In contrast, Calvin started a club (of which he and Hobbes are the only members) that he calls G.R.O.S.S. (Get Rid Of Slimy GirlS) and, while holding \"meetings\" in Calvin's tree house or in the \"box of secrecy\" in Calvin's room, they usually come up with some plot against Susie. In one instance, Calvin steals one of Susie's dolls and holds it for ransom, only to have Susie retaliate by nabbing Hobbes. Watterson admits that Calvin and Susie have a nascent crush on each other and that Susie is a reference to the type of woman whom Watterson himself found attractive and eventually married.",
"title": "Main characters"
},
{
"paragraph_id": 38,
"text": "Susie features as a main character in two of the five storylines that appear in Teaching with Calvin and Hobbes.",
"title": "Main characters"
},
{
"paragraph_id": 39,
"text": "Calvin also interacts with a handful of secondary characters. Several of these, including Rosalyn, his babysitter; Miss Wormwood, his teacher; and Moe, the school bully, recur regularly through the duration of the strip.",
"title": "Main characters"
},
{
"paragraph_id": 40,
"text": "Watterson used the strip to poke fun at the art world, principally through Calvin's unconventional creations of snowmen but also through other expressions of childhood art. When Miss Wormwood complains that he is wasting class time drawing impossible things (a Stegosaurus in a rocket ship, for example), Calvin proclaims himself \"on the cutting edge of the avant-garde.\" He begins exploring the medium of snow when a warm day melts his snowman. His next sculpture \"speaks to the horror of our own mortality, inviting the viewer to contemplate the evanescence of life.\" In later strips, Calvin's creative instincts diversify to include sidewalk drawings (or, as he terms them, examples of \"suburban postmodernism\").",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 41,
"text": "Watterson also lampooned the academic world. In one example, Calvin carefully crafts an \"artist's statement\", claiming that such essays convey more messages than artworks themselves ever do (Hobbes blandly notes, \"You misspelled Weltanschauung\"). He indulges in what Watterson calls \"pop psychobabble\" to justify his destructive rampages and shift blame to his parents, citing \"toxic codependency.\" In one instance, he pens a book report based on the theory that the purpose of academic writing is to \"inflate weak ideas, obscure poor reasoning and inhibit clarity,\" entitled The Dynamics of Interbeing and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes. Displaying his creation to Hobbes, he remarks, \"Academia, here I come!\" Watterson explains that he adapted this jargon (and similar examples from several other strips) from an actual book of art criticism.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 42,
"text": "Overall, Watterson's satirical essays serve to attack both sides, criticizing both the commercial mainstream and the artists who are supposed to be \"outside\" it. The strip on Sunday, June 21, 1992, criticized the naming of The Big Bang theory as not evocative of the wonders behind it and coined the term \"Horrendous Space Kablooie\", an alternative that achieved some informal popularity among scientists and was often shortened to \"the HSK.\" The term has also been referred to in newspapers, books and university courses.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 43,
"text": "Calvin imagines himself as many great creatures and other people, including dinosaurs, elephants, jungle-farers and superheroes. Three of his alter egos are well-defined and recurrent:",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 44,
"text": "",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 45,
"text": "Calvin also has several adventures involving corrugated cardboard boxes, which he adapts for many imaginative and elaborate uses. In one strip, when Calvin shows off his Transmogrifier, a device that transforms its user into any desired creature or item, Hobbes remarks, \"It's amazing what they do with corrugated cardboard these days.\" Calvin is able to change the function of the boxes by rewriting the label and flipping the box onto another side. In this way, a box can be used not only for its conventional purposes (a storage container for water balloons, for example), but also as a flying time machine, a duplicator, a transmogrifier or, with the attachment of a few wires and a colander, a \"Cerebral Enhance-o-tron.\"",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 46,
"text": "In the real world, Calvin's antics with his box have had varying effects. When he transmogrified into a tiger, he still appeared as a regular human child to his parents. However, in a story where he made several duplicates of himself, his parents are seen interacting with what does seem like multiple Calvins, including in a strip where two of him are seen in the same panel as his father. It is ultimately unknown what his parents do or do not see, as Calvin tries to hide most of his creations (or conceal their effects) so as not to traumatize them.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 47,
"text": "In addition, Calvin uses a cardboard box as a sidewalk kiosk to sell things. Often, Calvin offers merchandise no one would want, such as \"suicide drink\", \"a swift kick in the butt\" for one dollar or a \"frank appraisal of your looks\" for fifty cents. In one strip, he sells \"happiness\" for ten cents, hitting the customer in the face with a water balloon and explaining that he meant his own happiness. In another strip, he sold \"insurance\", firing a slingshot at those who refused to buy it. In some strips, he tried to sell \"great ideas\" and, in one earlier strip, he attempted to sell the family car to obtain money for a grenade launcher. In yet another strip, he sells \"life\" for five cents, where the customer receives nothing in return, which, in Calvin's opinion, is life.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 48,
"text": "The box has also functioned as an alternate secret meeting place for G.R.O.S.S., as the \"Box of Secrecy\".",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 49,
"text": "Other kids' games are all such a bore! They've gotta have rules and they gotta keep score! Calvinball is better by far! It's never the same! It's always bizarre! You don't need a team or a referee! You know that it's great, 'cause it's named after me!",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 50,
"text": "—Excerpt from the Calvinball theme song",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 51,
"text": "Calvinball is an improvisational sport/game introduced in a 1990 storyline that involved Calvin's negative experience of joining the school baseball team. Calvinball is a nomic or self-modifying game, a contest of wits, skill and creativity rather than stamina or athletic skill. The game is portrayed as a rebellion against conventional team sports and became a staple of the final five years of the comic. The only consistent rules of the game are that Calvinball may never be played with the same rules twice and that each participant must wear a mask.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 52,
"text": "When asked how to play, Watterson stated: \"It's pretty simple: you make up the rules as you go.\" In most appearances of the game, a comical array of conventional and non-conventional sporting equipment is involved, including a croquet set, a badminton set, assorted flags, bags, signs, a hobby horse, water buckets and balloons, with humorous allusions to unseen elements such as \"time-fracture wickets\". Scoring is portrayed as arbitrary and nonsensical (\"Q to 12\" and \"oogy to boogy\") and the lack of fixed rules leads to lengthy argument between the participants as to who scored, where the boundaries are, and when the game is finished. Usually, the contest results in Calvin being outsmarted by Hobbes. The game has been described in one academic work not as a new game based on fragments of an older one, but as the \"constant connecting and disconnecting of parts, the constant evasion of rules or guidelines based on collective creativity.\"",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 53,
"text": "Calvin often creates horrendous/dark humor scenes with his snowmen and other snow sculptures. He uses the snowman for social commentary, revenge or pure enjoyment. Examples include Snowman Calvin being yelled at by Snowman Dad to shovel the snow; one snowman eating snow cones scooped out of a second snowman, who is lying on the ground with an ice-cream scoop in his back; a \"snowman house of horror\"; and snowmen representing people he hates. \"The ones I really hate are small, so they'll melt faster,\" he says. There was even an occasion on which Calvin accidentally brought a snowman to life and it made itself and a small army into \"deranged mutant killer monster snow goons.\"",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 54,
"text": "Calvin's snow art is often used as a commentary on art in general. For example, Calvin has complained more than once about the lack of originality in other people's snow art and compared it with his own grotesque snow sculptures. In one of these instances, Calvin and Hobbes claim to be the sole guardians of high culture; in another, Hobbes admires Calvin's willingness to put artistic integrity above marketability, causing Calvin to reconsider and make an ordinary snowman.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 55,
"text": "Calvin and Hobbes frequently ride downhill in a wagon or sled (depending on the season), as a device to add some physical comedy to the strip and because, according to Watterson, \"it's a lot more interesting ... than talking heads.\" While the ride is sometimes the focus of the strip, it also frequently serves as a counterpoint or visual metaphor while Calvin ponders the meaning of life, death, God, philosophy or a variety of other weighty subjects. Many of their rides end in spectacular crashes which leave them battered, beaten up and broken, a fact which convinces Hobbes to sometimes hop off before a ride even begins. In the final strip, Calvin and Hobbes depart on their sled to go exploring. This theme is similar (perhaps even an homage) to scenes in Walt Kelly's Pogo. Calvin and Hobbes' sled has been described as the most famous sled in American arts since Citizen Kane.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 56,
"text": "G.R.O.S.S. (which is a backronym for Get Rid Of Slimy GirlS, \"otherwise it doesn't spell anything\") is a club in which Calvin and Hobbes are the only members. The club was founded in the garage of their house, but to clear space for its activities, Calvin and (purportedly) Hobbes push Calvin's parents' car, causing it to roll into a ditch (but not suffer damage); the incident prompts the duo to change the club's location to Calvin's treehouse. They hold meetings that involve finding ways to annoy and discomfort Susie Derkins, a girl and enemy of their club. Notable actions include planting a fake secret tape near her in attempt to draw her in to a trap, trapping her in a closet at their house and creating elaborate water balloon traps. Calvin gave himself and Hobbes important positions in the club, Calvin being \"Dictator-for-Life\" and Hobbes being \"President-and-First-Tiger\". They go into Calvin's treehouse for their club meetings and often get into fights during them. The password to get into the treehouse is intentionally long and difficult, which has on at least one occasion ruined Calvin's plans. As Hobbes is able to climb the tree without the rope, he is usually the one who comes up with the password, which often involves heaping praise upon tigers. An example of this can be seen in the comic strip where Calvin, rushing to get into the treehouse to throw things at a passing Susie Derkins, insults Hobbes, who is in the treehouse and thus has to let down the rope. Hobbes forces Calvin to say the password for insulting him. By the time Susie arrives, in time to hear Calvin saying some of the password, causing him to stumble, Calvin is on \"Verse Seven: Tigers are perfect!/The E-pit-o-me/of good looks and grace/and quiet..uh..um..dignity\". The opportunity to pelt Susie with something having passed, Calvin threatens to turn Hobbes into a rug.",
"title": "Recurring elements and themes"
},
{
"paragraph_id": 57,
"text": "There are 18 Calvin and Hobbes books, published from 1987 to 1997. These include 11 collections, which form a complete archive of the newspaper strips, except for a single daily strip from November 28, 1985. (The collections do contain a strip for this date, but it is not the same strip that appeared in some newspapers.) Treasuries usually combine the two preceding collections with bonus material and include color reprints of Sunday comics.",
"title": "Books"
},
{
"paragraph_id": 58,
"text": "Watterson included some new material in the treasuries. In The Essential Calvin and Hobbes, which includes cartoons from the collections Calvin and Hobbes and Something Under the Bed Is Drooling, the back cover features a scene of a giant Calvin rampaging through a town. The scene is based on Watterson's home town of Chagrin Falls, Ohio, and Calvin is holding the Chagrin Falls Popcorn Shop, an iconic candy and ice cream shop overlooking the town's namesake falls. Several of the treasuries incorporate additional poetry; The Indispensable Calvin and Hobbes book features a set of poems, ranging from just a few lines to an entire page, that cover topics such as Calvin's mother's \"hindsight\" and exploring the woods. In The Essential Calvin and Hobbes, Watterson presents a long poem explaining a night's battle against a monster from Calvin's perspective. The Authoritative Calvin and Hobbes includes a story based on Calvin's use of the Transmogrifier to finish his reading homework.",
"title": "Books"
},
{
"paragraph_id": 59,
"text": "A complete collection of Calvin and Hobbes strips, in three hardcover volumes totaling 1440 pages, was released on October 4, 2005, by Andrews McMeel Publishing. It includes color prints of the art used on paperback covers, the treasuries' extra illustrated stories and poems and a new introduction by Bill Watterson in which he talks about his inspirations and his story leading up to the publication of the strip. The alternate 1985 strip is still omitted, and three other strips (January 7 and November 24, 1987, and November 25, 1988) have altered dialogue. A four-volume paperback version was released November 13, 2012.",
"title": "Books"
},
{
"paragraph_id": 60,
"text": "To celebrate the release (which coincided with the strip's 20th anniversary and the tenth anniversary of its absence from newspapers), Bill Watterson answered 15 questions submitted by readers.",
"title": "Books"
},
{
"paragraph_id": 61,
"text": "Early books were printed in smaller format in black and white. These were later reproduced in twos in color in the \"Treasuries\" (Essential, Authoritative and Indispensable), except for the contents of Attack of the Deranged Mutant Killer Monster Snow Goons. Those Sunday strips were not reprinted in color until the Complete collection was finally published in 2005.",
"title": "Books"
},
{
"paragraph_id": 62,
"text": "Watterson claims he named the books the \"Essential, Authoritative and Indispensable\" because, as he says in The Calvin and Hobbes Tenth Anniversary Book, the books are \"obviously none of these things.\"",
"title": "Books"
},
{
"paragraph_id": 63,
"text": "An officially licensed children's textbook entitled Teaching with Calvin and Hobbes was published in a single print run in Fargo, North Dakota, in 1993. The book is composed of Calvin and Hobbes strips that form story arcs, including \"The Binoculars\" and \"The Bug Collection\", followed by lessons based on the stories.",
"title": "Books"
},
{
"paragraph_id": 64,
"text": "What do you think the principal meant when he said they had \"quite a file\" on Calvin? -Teaching with Calvin and Hobbes",
"title": "Books"
},
{
"paragraph_id": 65,
"text": "The book is rare and highly sought. It has been called the \"Holy Grail\" for Calvin and Hobbes collectors.",
"title": "Books"
},
{
"paragraph_id": 66,
"text": "Reviewing Calvin and Hobbes in 1990, Entertainment Weekly's Ken Tucker gave the strip an A+ rating, writing \"Watterson summons up the pain and confusion of childhood as much as he does its innocence and fun.\"",
"title": "Reception"
},
{
"paragraph_id": 67,
"text": "In 1993, paleontologist and paleoartist Gregory S. Paul praised Bill Watterson for the scientific accuracy of the dinosaurs appearing in Calvin and Hobbes.",
"title": "Reception"
},
{
"paragraph_id": 68,
"text": "In her 1994 book When Toys Come Alive, Lois Rostow Kuznets theorizes that Hobbes serves both as a figure of Calvin's childish fantasy life and as an outlet for the expression of libidinous desires more associated with adults. Kuznets also analyzes Calvin's other fantasies, suggesting that they are a second tier of fantasies utilized in places like school where transitional objects such as Hobbes would not be socially acceptable.",
"title": "Reception"
},
{
"paragraph_id": 69,
"text": "Political scientist James Q. Wilson, in a paean to Calvin and Hobbes upon Watterson's decision to end the strip in 1995, characterized it as \"our only popular explication of the moral philosophy of Aristotle.\"",
"title": "Reception"
},
{
"paragraph_id": 70,
"text": "Alisa White Coleman analyzed the strip's underlying messages concerning ethics and values in \"'Calvin and Hobbes': A Critique of Society's Values,\" published in the Journal of Mass Media Ethics in 2000.",
"title": "Reception"
},
{
"paragraph_id": 71,
"text": "A collection of original Sunday strips was exhibited at Ohio State University's Billy Ireland Cartoon Library & Museum in 2001. Watterson himself selected the strips and provided his own commentary for the exhibition catalog, which was later published by Andrews McMeel as Calvin and Hobbes: Sunday Pages 1985–1995.",
"title": "Reception"
},
{
"paragraph_id": 72,
"text": "Since the discontinuation of Calvin and Hobbes, individual strips have been licensed for reprint in schoolbooks, including the Christian homeschooling book The Fallacy Detective in 2002, and the university-level philosophy reader Open Questions: Readings for Critical Thinking and Writing in 2005; in the latter, the ethical views of Watterson and his characters Calvin and Hobbes are discussed in relation to the views of professional philosophers. Since 2009, Twitter users have indicated that Calvin and Hobbes strips have appeared in textbooks for subjects in the sciences, social sciences, mathematics, philosophy and foreign language.",
"title": "Reception"
},
{
"paragraph_id": 73,
"text": "In a 2009 evaluation of the entire body of Calvin and Hobbes strips using grounded theory methodology, Christijan D. Draper found that: \"Overall, Calvin and Hobbes suggests that meaningful time use is a key attribute of a life well lived,\" and that \"the strip suggests one way to assess the meaning associated with time use is through preemptive retrospection by which a person looks at current experiences through the lens of an anticipated future...\"",
"title": "Reception"
},
{
"paragraph_id": 74,
"text": "Jamey Heit's Imagination and Meaning in Calvin and Hobbes, a critical and academic analysis of the strip, was published in 2012.",
"title": "Reception"
},
{
"paragraph_id": 75,
"text": "Calvin and Hobbes strips were again exhibited at the Billy Ireland Cartoon Library & Museum at Ohio State University in 2014, in an exhibition entitled Exploring Calvin and Hobbes. An exhibition catalog by the same title, which also contained an interview with Watterson conducted by Jenny Robb, the curator of the museum, was published by Andrews McMeel in 2015.",
"title": "Reception"
},
{
"paragraph_id": 76,
"text": "\"Since its concluding panel in 1995, Calvin and Hobbes has remained one of the most influential and well-loved comic strips of our time.\"",
"title": "Legacy"
},
{
"paragraph_id": 77,
"text": "–The Atlantic, \"How Calvin and Hobbes Inspired a Generation,\" October 25, 2013",
"title": "Legacy"
},
{
"paragraph_id": 78,
"text": "Years after its original newspaper run, Calvin and Hobbes has continued to exert influence in entertainment, art, and fandom.",
"title": "Legacy"
},
{
"paragraph_id": 79,
"text": "In television, Calvin and Hobbes have been satirically depicted in stop motion animation in the 2006 Robot Chicken episode \"Lust for Puppets,\" and in traditional animation in the 2009 Family Guy episode \"Not All Dogs Go to Heaven.\" In the 2013 Community episode \"Paranormal Parentage,\" the characters Abed Nadir (Danny Pudi) and Troy Barnes (Donald Glover) dress as Calvin and Hobbes, respectively, for Halloween.",
"title": "Legacy"
},
{
"paragraph_id": 80,
"text": "British artists, merchandisers, booksellers, and philosophers were interviewed for a 2009 BBC Radio 4 half-hour programme about the abiding popularity of the comic strip, narrated by Phill Jupitus.",
"title": "Legacy"
},
{
"paragraph_id": 81,
"text": "The first book-length study of the strip, Looking for Calvin and Hobbes: The Unconventional Story of Bill Watterson and His Revolutionary Comic Strip by Nevin Martell, was first published in 2009; an expanded edition was published in 2010. The book chronicles Martell's quest to tell the story of Calvin and Hobbes and Watterson through research and interviews with people connected to the cartoonist and his work. The director of the later documentary Dear Mr. Watterson referenced Looking for Calvin and Hobbes in discussing the production of the movie, and Martell appears in the film.",
"title": "Legacy"
},
{
"paragraph_id": 82,
"text": "The American documentary film Dear Mr. Watterson, released in 2013, explores the impact and legacy of Calvin and Hobbes through interviews with authors, curators, historians, and numerous professional cartoonists.",
"title": "Legacy"
},
{
"paragraph_id": 83,
"text": "The enduring significance of Calvin and Hobbes to international cartooning was recognized by the jury of the Angoulême International Comics Festival in 2014 by the awarding of its Grand Prix to Watterson, only the fourth American to ever receive the honor (after Will Eisner, Robert Crumb, and Art Spiegelman).",
"title": "Legacy"
},
{
"paragraph_id": 84,
"text": "From 2016 to 2021, author Berkeley Breathed included Calvin and Hobbes in various Bloom County cartoons. He launched the first cartoon on April Fool's Day 2016 and jokingly issued a statement suggesting that he had acquired Calvin and Hobbes from Bill Watterson, who was \"out of the Arizona facility, continent and looking forward to some well-earned financial security.\" While bearing Watterson's signature and drawing style as well as featuring characters from both Calvin and Hobbes and Breathed's Bloom County, it is unclear whether Watterson had any input into these cartoons or not.",
"title": "Legacy"
},
{
"paragraph_id": 85,
"text": "Calvin and Hobbes remains the most viewed comic on GoComics, which cycles through old strips with an approximately 30-year delay.",
"title": "Legacy"
},
{
"paragraph_id": 86,
"text": "A number of artists and cartoonists have created unofficial works portraying Calvin as a teenager/adult; the concept has also inspired writers.",
"title": "Legacy"
},
{
"paragraph_id": 87,
"text": "In 2011, a comic strip appeared by cartoonists Dan and Tom Heyerman called Hobbes and Bacon. The strip depicts Calvin as an adult, married to Susie Derkins with a young daughter named after philosopher Francis Bacon, to whom Calvin gives Hobbes. Though consisting of only four strips originally, Hobbes and Bacon received considerable attention when it appeared and was continued by other cartoonists and artists.",
"title": "Legacy"
},
{
"paragraph_id": 88,
"text": "A novel titled Calvin by CLA Young Adult Book Award–winning author Martine Leavitt was published in 2015. The story tells of seventeen-year-old Calvin—who was born on the day that Calvin and Hobbes ended, and who has now been diagnosed with schizophrenia—and his hallucination of Hobbes, his childhood stuffed tiger. With his friend Susie, who might also be a hallucination, Calvin sets off to find Bill Watterson in the hope that the cartoonist can provide aid for Calvin's condition.",
"title": "Legacy"
},
{
"paragraph_id": 89,
"text": "The titular character of the comic strip Frazz has been noted for his similar appearance and personality to a grown-up Calvin. Creator Jef Mallett has stated that although Watterson is an inspiration to him, the similarities are unintentional.",
"title": "Legacy"
}
]
| Calvin and Hobbes is a daily American comic strip created by cartoonist Bill Watterson that was syndicated from November 18, 1985, to December 31, 1995. Commonly cited as "the last great newspaper comic", Calvin and Hobbes has enjoyed broad and enduring popularity, influence, and academic and philosophical interest. Calvin and Hobbes follows the humorous antics of the title characters: Calvin, a precocious, mischievous, and adventurous six-year-old boy; and Hobbes, his sardonic stuffed tiger. Set in the contemporary suburban United States of the 1980s and 1990s, the strip depicts Calvin's frequent flights of fancy and friendship with Hobbes. It also examines Calvin's relationships with his long-suffering parents and with his classmates, especially his neighbor Susie Derkins. Hobbes's dual nature is a defining motif for the strip: to Calvin, Hobbes is a living anthropomorphic tiger, while all the other characters seem to see Hobbes as an inanimate stuffed toy—though Watterson has not clarified exactly how Hobbes is perceived by others. Though the series does not frequently mention specific political figures or contemporary events, it does explore broad issues like environmentalism, public education, and philosophical quandaries. At the height of its popularity, Calvin and Hobbes was featured in over 2,400 newspapers worldwide. In 2010, reruns of the strip appeared in more than 50 countries, and nearly 45 million copies of the Calvin and Hobbes books had been sold worldwide. | 2001-08-31T16:29:14Z | 2023-12-27T18:43:14Z | [
"Template:Main",
"Template:Anchor",
"Template:Reflist",
"Template:Wikiquote",
"Template:Calvin and Hobbes",
"Template:Use mdy dates",
"Template:Spoken Wikipedia",
"Template:Unreferenced section",
"Template:Cite web",
"Template:Cite episode",
"Template:Portal bar",
"Template:Pp-move-vandalism",
"Template:See",
"Template:'s",
"Template:Cite magazine",
"Template:Cite comic",
"Template:Multiple image",
"Template:Citation needed",
"Template:See also",
"Template:Cite book",
"Template:Cite news",
"Template:Harvp",
"Template:Authority control",
"Template:Sfn",
"Template:Cite thesis",
"Template:Commons category",
"Template:UniversalPressSyndicate",
"Template:More citations needed section",
"Template:Infobox comic strip",
"Template:Quote box",
"Template:Quotation",
"Template:Cite journal",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Calvin_and_Hobbes |
6,060 | Campaign for Real Ale | The Campaign for Real Ale (CAMRA) is an independent voluntary consumer organisation headquartered in St Albans, England, which promotes real ale, cider and perry and traditional British pubs and clubs. With just over 150,000 members, it is the largest single-issue consumer group in the UK, and is a founding member of the European Beer Consumers Union (EBCU).
The organisation was founded on 16 March 1971 in Kruger's Bar, Dunquin, Kerry, Ireland, by Michael Hardman, Graham Lees, Jim Makin, and Bill Mellor, who were opposed to the growing mass production of beer and the homogenisation of the British brewing industry. The original name was the Campaign for the Revitalisation of Ale. Following the formation of the Campaign, the first annual general meeting took place in 1972, at the Rose Inn in Coton Road, Nuneaton.
Early membership consisted of the four founders and their friends. Interest in CAMRA and its objectives spread rapidly, with 5,000 members signed up by 1973. Other early influential members included Christopher Hutt, author of Death of the English Pub, who succeeded Hardman as chairman, Frank Baillie, author of The Beer Drinker's Companion, and later the many times Good Beer Guide editor, Roger Protz.
In 1991, CAMRA reached 30,000 members across the UK and abroad and, a year later, helped to launch the European Beer Consumers Union. CAMRA remains EBCU's largest contributor, despite the UK's exit from the European Union.
CAMRA published a history book on its 50th birthday, 16 March 2021: 50 Years of CAMRA, written by Laura Hadland.
CAMRA's stated aims are:
CAMRA's campaigns include promoting small brewing and pub businesses, reforming licensing laws, reducing tax on beer, and stopping continued consolidation among local British brewers. It also makes an effort to promote less common varieties of beer, including stout, porter, and mild, as well as traditional cider and perry.
CAMRA states that real ale should be served without the use of additional carbonation. This means that "any beer brand which is produced in both cask and keg versions" is not admitted to CAMRA festivals if the brewery's marketing is deemed to imply an equivalence of quality or character between the two versions.
CAMRA is organised on a federal basis, with over 200 local branches, each covering a particular geographical area of the UK, contributing to the central body of the organisation based in St Albans. It is governed by a National Executive, made up of 12 voluntary unpaid directors elected by the membership. The local branches are grouped into 16 regions across the UK, such as the West Midlands or Wessex.
CAMRA's membership reached 100,000 in 2009 and 150,000 in 2013.
CAMRA publishes the Good Beer Guide, an annually compiled directory of the best 4,500 real ale outlets and a listing of real ale brewers. CAMRA members received a monthly newspaper called What's Brewing until its April 2021 issue; there is also a quarterly colour magazine called Beer. CAMRA also maintains a National Inventory of Historic Pub Interiors to help bring greater recognition and protection to Britain's most historic pubs.
CAMRA supports and promotes beer and cider festivals around the country, which are organised by local CAMRA branches. Generally, each festival charges an entry fee which either covers entry only or also includes a commemorative glass showing the details of the festival. A festival programme is usually also provided, with a list and description of the drinks available. Members may get discounted entrance to CAMRA festivals.
The Campaign also organises the annual Great British Beer Festival in August. It is now held in the Great, National & West Halls at the Olympia Exhibition Centre in Kensington, London, having been held for a few years at Earl's Court as well as regionally in the past at venues such as Brighton and Leeds. This is the UK's largest beer festival, with over 900 beers, ciders and perries available over the week-long event.
For many years, CAMRA also organised the National Winter Ales Festival. However, in 2017 this was re-branded as the Great British Beer Festival Winter, at which the Champion Winter Beer of Britain is awarded. Unlike the Great British Beer Festival, the Winter event does not have a permanent venue and is rotated throughout the country every three years. Recent hosts have been Derby and Norwich, with the event currently held each February in Birmingham. In 2020 CAMRA also launched the Great Welsh Beer Festival, to be held in Cardiff in April.
CAMRA presents awards for beers and pubs, such as the National Pub of the Year. The competition begins in the preceding year with branches choosing their local pub of the year through either a ballot or a panel of judges. The branch winners are entered into 16 regional competitions, in which the pubs are visited by several individuals who agree on the best using a scoring system that looks at beer quality, aesthetics, and welcome. The four finalists are announced each year before a ceremony to crown the winner in the spring. There are also the Pub Design Awards, which are held in association with English Heritage and the Victorian Society. These comprise several categories, including new build, refurbished and converted pubs.
The best known CAMRA award is the Champion Beer of Britain, which is selected at the Great British Beer Festival. Other awards include the Champion Beer of Scotland and the Champion Beer of Wales.
CAMRA developed the National Beer Scoring Scheme (NBSS) as an easy to use scheme for judging beer quality in pubs, to assist CAMRA branches in selecting pubs for the Good Beer Guide. CAMRA members input their beer scores online via WhatPub or through the Good Beer Guide app.
The CAMRA Pub Heritage Group identifies, records and helps to protect pub interiors of historic and/or architectural importance, and seeks to get them listed.
The group maintains two inventories of Heritage pubs, the National Inventory (NI), which contains only those pubs that have been maintained in their original condition (or have been modified very little) for at least thirty years, but usually since at least World War II. The second, larger, inventory is the Regional Inventory (RI), which is broken down by county and contains both those pubs listed in the NI and other pubs that are not eligible for the NI, for reasons such as having been overly modified, but are still considered historically important, or have particular architectural value.
The LocAle scheme was launched in 2007 to promote locally brewed beers. The scheme is managed by each branch and functions slightly differently in each area, but the outline is the same: to be promoted as a LocAle, a beer must come from a brewery within a predetermined number of miles set by the CAMRA branch, generally around 20, although the North London branch has set it at 30 miles from brewery to pub; the distance applies even if the beer comes via a distribution centre further away. In addition, each participating pub must keep at least one LocAle for sale at all times.
CAMRA members may join the CAMRA Members' Investment Club which, since 1989, has invested in real ale breweries and pub chains. As of January 2021 the club had over 3,000 members and owned investments worth over £17 million. Although all investors must be CAMRA members, the CAMRA Members' Investment Club is not part of CAMRA Ltd.
51°45′06″N 0°18′51″W / 51.7518°N 0.3141°W / 51.7518; -0.3141 | [
{
"paragraph_id": 0,
"text": "The Campaign for Real Ale (CAMRA) is an independent voluntary consumer organisation headquartered in St Albans, England, which promotes real ale, cider and perry and traditional British pubs and clubs. With just over 150,000 members, it is the largest single-issue consumer group in the UK, and is a founding member of the European Beer Consumers Union (EBCU).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The organisation was founded on 16 March 1971 in Kruger's Bar, Dunquin, Kerry, Ireland, by Michael Hardman, Graham Lees, Jim Makin, and Bill Mellor, who were opposed to the growing mass production of beer and the homogenisation of the British brewing industry. The original name was the Campaign for the Revitalisation of Ale. Following the formation of the Campaign, the first annual general meeting took place in 1972, at the Rose Inn in Coton Road, Nuneaton.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Early membership consisted of the four founders and their friends. Interest in CAMRA and its objectives spread rapidly, with 5,000 members signed up by 1973. Other early influential members included Christopher Hutt, author of Death of the English Pub, who succeeded Hardman as chairman, Frank Baillie, author of The Beer Drinker's Companion, and later the many times Good Beer Guide editor, Roger Protz.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In 1991, CAMRA reached 30,000 members across the UK and abroad and, a year later, helped to launch the European Beer Consumers Union. CAMRA remains EBCU's largest contributor, despite the UK's exit from the European Union.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "CAMRA published a history book on its 50th birthday, 16 March 2021, written by Laura Hadland 50 Years of CAMRA.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "CAMRA's stated aims are:",
"title": "Aims"
},
{
"paragraph_id": 6,
"text": "CAMRA's campaigns include promoting small brewing and pub businesses, reforming licensing laws, reducing tax on beer, and stopping continued consolidation among local British brewers. It also makes an effort to promote less common varieties of beer, including stout, porter, and mild, as well as traditional cider and perry.",
"title": "Aims"
},
{
"paragraph_id": 7,
"text": "CAMRA's states that real ale should be served without the use of additional carbonation. This means that \"any beer brand which is produced in both cask and keg versions\" is not admitted to CAMRA festivals if the brewery's marketing is deemed to imply an equivalence of quality or character between the two versions.",
"title": "Aims"
},
{
"paragraph_id": 8,
"text": "CAMRA is organised on a federal basis, over 200 local branches, each covering a particular geographical area of the UK, that contribute to the central body of the organisation based in St Albans. It is governed by a National Executive, made up of 12 voluntary unpaid directors elected by the membership. The local branches are grouped into 16 regions across the UK, such as the West Midlands or Wessex.",
"title": "Organisation"
},
{
"paragraph_id": 9,
"text": "In 2009, CAMRA's membership reached 100,000, and 150,000 members in 2013.",
"title": "Organisation"
},
{
"paragraph_id": 10,
"text": "CAMRA publishes the Good Beer Guide, an annually compiled directory of the best 4,500 real ale outlets and listing of real ale brewers. CAMRA members received a monthly newspaper called What's Brewing until its April 2021 issue and there is a quarterly colour magazine called Beer. It also maintains a National Inventory of Historic Pub Interiors to help bring greater recognition and protection to Britain's most historic pubs.",
"title": "Publications and websites"
},
{
"paragraph_id": 11,
"text": "CAMRA supports and promotes beer and cider festivals around the country, which are organised by local CAMRA branches. Generally, each festival charges an entry fee which either covers entry only or also includes a commemorative glass showing the details of the festival. A festival programme is usually also provided, with a list and description of the drinks available. Members may get discounted entrance to CAMRA festivals.",
"title": "Festivals"
},
{
"paragraph_id": 12,
"text": "The Campaign also organises the annual Great British Beer Festival in August. It is now held in the Great, National & West Halls at the Olympia Exhibition Centre, in Kensington, London, having been held for a few years at Earl's Court as well as regionally in the past at venues such as Brighton and Leeds. This is the UK's largest beer festival, with over 900 beers, ciders and perries available over the week long event.",
"title": "Festivals"
},
{
"paragraph_id": 13,
"text": "For many years, CAMRA also organised the National Winter Ales Festival. However, in 2017 this was re-branded as the Great British Beer Festival Winter where they award the Champion Winter Beer of Britain. Unlike the Great British Beer Festival, the Winter event does not have a permanent venue and is rotated throughout the country every three years. Recent hosts have been Derby and Norwich, with the event currently held each February in Birmingham. In 2020 CAMRA also launched the Great Welsh Beer Festival, to be held in Cardiff in April.",
"title": "Festivals"
},
{
"paragraph_id": 14,
"text": "CAMRA presents awards for beers and pubs, such as the National Pub of the Year. The competition begins in the preceding year with branches choosing their local pub of the year through either a ballot or a panel of judges. The branch winners are entered into 16 regional competitions which are then visited by several individuals who agree the best using a scoring system that looks at beer quality, aesthetic, and welcome. The four finalists are announced each year before a ceremony to crown the winner in the spring. There are also the Pub Design Awards, which are held in association with English Heritage and the Victorian Society. These comprise several categories, including new build, refurbished and converted pubs.",
"title": "Awards"
},
{
"paragraph_id": 15,
"text": "The best known CAMRA award is the Champion Beer of Britain, which is selected at the Great British Beer Festival. Other awards include the Champion Beer of Scotland and the Champion Beer of Wales.",
"title": "Awards"
},
{
"paragraph_id": 16,
"text": "CAMRA developed the National Beer Scoring Scheme (NBSS) as an easy to use scheme for judging beer quality in pubs, to assist CAMRA branches in selecting pubs for the Good Beer Guide. CAMRA members input their beer scores online via WhatPub or through the Good Beer Guide app.",
"title": "National Beer Scoring Scheme"
},
{
"paragraph_id": 17,
"text": "The CAMRA Pub Heritage Group identifies, records and helps to protect pub interiors of historic and/or architectural importance, and seeks to get them listed.",
"title": "Pub heritage"
},
{
"paragraph_id": 18,
"text": "The group maintains two inventories of Heritage pubs, the National Inventory (NI), which contains only those pubs that have been maintained in their original condition (or have been modified very little) for at least thirty years, but usually since at least World War II. The second, larger, inventory is the Regional Inventory (RI), which is broken down by county and contains both those pubs listed in the NI and other pubs that are not eligible for the NI, for reasons such as having been overly modified, but are still considered historically important, or have particular architectural value.",
"title": "Pub heritage"
},
{
"paragraph_id": 19,
"text": "The LocAle scheme was launched in 2007 to promote locally brewed beers. The scheme functions slightly differently in each area, and is managed by each branch, but each is similar: if the beer is to be promoted as a LocAle it must come from a brewery within a predetermined number of miles set by each CAMRA branch, generally around 20, although the North London branch has set it at 30 miles from brewery to pub, even if it comes from a distribution centre further away; in addition, each participating pub must keep at least one LocAle for sale at all times.",
"title": "LocAle"
},
{
"paragraph_id": 20,
"text": "CAMRA members may join the CAMRA Members' Investment Club which, since 1989, has invested in real ale breweries and pub chains. As of January 2021 the club had over 3,000 members and owned investments worth over £17 million. Although all investors must be CAMRA members, the CAMRA Members' Investment Club is not part of CAMRA Ltd.",
"title": "Investment club"
},
{
"paragraph_id": 21,
"text": "51°45′06″N 0°18′51″W / 51.7518°N 0.3141°W / 51.7518; -0.3141",
"title": "External links"
}
]
| The Campaign for Real Ale (CAMRA) is an independent voluntary consumer organisation headquartered in St Albans, England, which promotes real ale, cider and perry and traditional British pubs and clubs. With just over 150,000 members, it is the largest single-issue consumer group in the UK, and is a founding member of the European Beer Consumers Union (EBCU). | 2001-09-05T14:49:53Z | 2023-12-08T15:15:57Z | [
"Template:Use dmy dates",
"Template:Cite news",
"Template:Curlie",
"Template:British beer",
"Template:Redirect",
"Template:Cite book",
"Template:Cite web",
"Template:Webarchive",
"Template:Coord",
"Template:Use British English",
"Template:Infobox organisation",
"Template:Reflist",
"Template:CAMRA",
"Template:Short description",
"Template:Portal",
"Template:Citation",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Campaign_for_Real_Ale |
6,061 | CNO cycle | The CNO cycle (for carbon–nitrogen–oxygen; sometimes called Bethe–Weizsäcker cycle after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p–p cycle), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun.
Unlike the proton-proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the CNO cycle, but re-generated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos.
There are various alternative paths and catalysts involved in the CNO cycles, but all these cycles have the same net result:
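4 ¹H → ⁴He + 2 e⁺ + 2 ν_e (plus about 26.7 MeV of released energy once the positrons annihilate)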
The positrons will almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star carrying away some energy. One nucleus goes on to become carbon, nitrogen, and oxygen isotopes through a number of transformations in a repeating cycle.
The proton–proton chain is more prominent in stars the mass of the Sun or less. This difference stems from the different temperature dependence of the two reactions: the pp-chain reaction starts at temperatures around 4×10⁶ K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10⁶ K, but its energy output rises much more rapidly with increasing temperature, so that it becomes the dominant source of energy at approximately 17×10⁶ K.
The Sun has a core temperature of around 15.7×10⁶ K, and only 1.7% of ⁴He nuclei produced in the Sun are born in the CNO cycle.
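To make the steepness concrete, the two rates near solar core conditions are often approximated by local power laws, roughly ε_pp ∝ T⁴ and ε_CNO ∝ T¹⁷. The short Python sketch below compares them under those assumed exponents, with the normalisation chosen only so that the curves cross near 17 million K; both the exponents and the normalisation are illustrative assumptions, not values from this article.

# Illustrative comparison of pp-chain and CNO energy generation rates.
# The exponents (4 and 17) and the crossover normalisation are assumptions.
T_CROSS = 17.0  # assumed crossover temperature, in millions of kelvin (MK)

def eps_pp(t_mk):
    # pp-chain rate: shallow temperature dependence (~T^4)
    return (t_mk / T_CROSS) ** 4

def eps_cno(t_mk):
    # CNO-cycle rate: very steep temperature dependence (~T^17)
    return (t_mk / T_CROSS) ** 17

for t in (10.0, 15.7, 17.0, 20.0, 25.0):  # 15.7 MK is roughly the Sun's core
    print(f"T = {t:4.1f} MK  CNO/pp ratio = {eps_cno(t) / eps_pp(t):.3g}")

With these assumptions the ratio is below 1 at the Sun's 15.7 MK core and climbs by orders of magnitude over just a few megakelvin, which is why the CNO cycle dominates in stars only modestly more massive than the Sun.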
The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s.
The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020 by the BOREXINO collaboration. This was also the first experimental confirmation that the Sun had a CNO cycle, that the proposed magnitude of the cycle was accurate, and that von Weizsäcker and Bethe were correct.
Under typical conditions found in stars, catalytic hydrogen burning by the CNO cycles is limited by proton captures. Specifically, the timescale for beta decay of the radioactive nuclei produced is shorter than the timescale for proton capture (fusion), so the capture reactions set the pace. Because of the long timescales involved, the cold CNO cycles convert hydrogen to helium slowly, allowing them to power stars in quiescent equilibrium for many years.
The first proposed catalytic cycle for the conversion of hydrogen into helium was initially called the carbon–nitrogen cycle (CN-cycle), also referred to as the Bethe–Weizsäcker cycle in honor of the independent work of Carl Friedrich von Weizsäcker in 1937–38 and Hans Bethe. Bethe's 1939 papers on the CN-cycle drew on three earlier papers written in collaboration with Robert Bacher and Milton Stanley Livingston, which came to be known informally as "Bethe's Bible". It was considered the standard work on nuclear physics for many years and was a significant factor in his being awarded the 1967 Nobel Prize in Physics. Bethe's original calculations suggested the CN-cycle was the Sun's primary source of energy. This conclusion arose from a belief, now known to be mistaken, that the abundance of nitrogen in the Sun is approximately 10%; it is actually less than half a percent. The CN-cycle, so named because it contains no stable isotope of oxygen, involves the following cycle of transformations:
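¹²C → ¹³N → ¹³C → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C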
This cycle is now understood as being the first part of a larger process, the CNO-cycle, and the main reactions in this part of the cycle (CNO-I) are:
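¹²C + ¹H → ¹³N + γ (1.95 MeV)
¹³N → ¹³C + e⁺ + ν_e (1.20 MeV, half-life about 10 minutes)
¹³C + ¹H → ¹⁴N + γ (7.54 MeV)
¹⁴N + ¹H → ¹⁵O + γ (7.35 MeV)
¹⁵O → ¹⁵N + e⁺ + ν_e (1.73 MeV, half-life about 2 minutes)
¹⁵N + ¹H → ¹²C + ⁴He (4.96 MeV)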
where the carbon-12 nucleus used in the first reaction is regenerated in the last reaction. After the two positrons emitted annihilate with two ambient electrons, producing an additional 2.04 MeV, the total energy released in one cycle is 26.73 MeV; some texts erroneously fold the positron annihilation energy into the beta-decay Q-value and then neglect the equal amount of energy released by annihilation, which can lead to confusion. All values are calculated with reference to the Atomic Mass Evaluation 2003.
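The 26.73 MeV figure can be cross-checked from the atomic mass difference between four hydrogen atoms and one helium-4 atom; using atomic rather than nuclear masses automatically includes the energy from the two positron annihilations. A minimal sketch in Python, with standard mass-table constants (quoted from memory of the atomic mass tables, not taken from this article):

# Total energy released per CNO loop, computed from the mass defect.
M_H1 = 1.00782503    # atomic mass of hydrogen-1, in atomic mass units (u)
M_HE4 = 4.00260325   # atomic mass of helium-4 (u)
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

mass_defect = 4 * M_H1 - M_HE4    # u; the mass that disappears per cycle
q_total = mass_defect * U_TO_MEV  # MeV, including positron annihilation
print(f"Q = {q_total:.2f} MeV")   # prints: Q = 26.73 MeV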
The limiting (slowest) reaction in the CNO-I cycle is the proton capture on ¹⁴N. In 2006 it was experimentally measured down to stellar energies, revising the calculated age of globular clusters by around 1 billion years.
The neutrinos emitted in beta decay will have a spectrum of energies, because although momentum is conserved, the momentum can be shared in any way between the positron and neutrino: either may be emitted at rest while the other takes away the full energy, or anything in between, so long as all the energy from the Q-value is used. The total momentum received by the positron and the neutrino is not great enough to cause a significant recoil of the much heavier daughter nucleus, and hence its contribution to the kinetic energy of the products, for the precision of values given here, can be neglected. Thus the neutrino emitted during the decay of nitrogen-13 can have an energy from zero up to 1.20 MeV, and the neutrino emitted during the decay of oxygen-15 can have an energy from zero up to 1.73 MeV. On average, about 1.7 MeV of the total energy output is taken away by neutrinos for each loop of the cycle, leaving about 25 MeV available for producing luminosity.
In a minor branch of the above cycle, occurring in the Sun's core 0.04% of the time, the final reaction involving nitrogen-15 shown above does not produce carbon-12 and an alpha particle, but instead produces oxygen-16 and a photon, and the cycle continues.
In detail:
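$$
\begin{aligned}
{}^{15}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{16}\mathrm{O} + \gamma \\
{}^{16}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{17}\mathrm{F} + \gamma \\
{}^{17}\mathrm{F} &\rightarrow {}^{17}\mathrm{O} + e^{+} + \nu_e \\
{}^{17}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{14}\mathrm{N} + {}^{4}\mathrm{He} \\
{}^{14}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{15}\mathrm{O} + \gamma \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_e
\end{aligned}
$$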
Like the carbon, nitrogen, and oxygen involved in the main branch, the fluorine produced in the minor branch is merely an intermediate product; at steady state, it does not accumulate in the star.
This subdominant branch (CNO-III) is significant only for massive stars. It begins when one of the reactions in CNO-II results in fluorine-18 and a photon instead of nitrogen-14 and an alpha particle, and the cycle continues.
In detail:
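$$
\begin{aligned}
{}^{17}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{18}\mathrm{F} + \gamma \\
{}^{18}\mathrm{F} &\rightarrow {}^{18}\mathrm{O} + e^{+} + \nu_e \\
{}^{18}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{15}\mathrm{N} + {}^{4}\mathrm{He} \\
{}^{15}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{16}\mathrm{O} + \gamma \\
{}^{16}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{17}\mathrm{F} + \gamma \\
{}^{17}\mathrm{F} &\rightarrow {}^{17}\mathrm{O} + e^{+} + \nu_e
\end{aligned}
$$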
Like CNO-III, this branch (CNO-IV) is significant only in massive stars. It begins when one of the reactions in CNO-III results in fluorine-19 and a photon instead of nitrogen-15 and an alpha particle, and the cycle continues.
In detail:
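$$
\begin{aligned}
{}^{18}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{19}\mathrm{F} + \gamma \\
{}^{19}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{16}\mathrm{O} + {}^{4}\mathrm{He} \\
{}^{16}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{17}\mathrm{F} + \gamma \\
{}^{17}\mathrm{F} &\rightarrow {}^{17}\mathrm{O} + e^{+} + \nu_e \\
{}^{17}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{18}\mathrm{F} + \gamma \\
{}^{18}\mathrm{F} &\rightarrow {}^{18}\mathrm{O} + e^{+} + \nu_e
\end{aligned}
$$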
In some instances fluorine-19 can combine with a helium nucleus to start a sodium–neon cycle.
Under conditions of higher temperature and pressure, such as those found in novae and X-ray bursts, the rate of proton captures exceeds the rate of beta decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.
The difference between the CNO-I cycle and the HCNO-I cycle is that nitrogen-13 captures a proton instead of decaying, leading to the total sequence below.
In detail:
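$$
\begin{aligned}
{}^{12}\mathrm{C} + {}^{1}\mathrm{H} &\rightarrow {}^{13}\mathrm{N} + \gamma \\
{}^{13}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{14}\mathrm{O} + \gamma \\
{}^{14}\mathrm{O} &\rightarrow {}^{14}\mathrm{N} + e^{+} + \nu_e \\
{}^{14}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{15}\mathrm{O} + \gamma \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_e \\
{}^{15}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}
\end{aligned}
$$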
The notable difference between the CNO-II cycle and the HCNO-II cycle is that fluorine-17 captures a proton instead of decaying, and neon is produced in a subsequent reaction on fluorine-18, leading to the total sequence below.
In detail:
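$$
\begin{aligned}
{}^{15}\mathrm{N} + {}^{1}\mathrm{H} &\rightarrow {}^{16}\mathrm{O} + \gamma \\
{}^{16}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{17}\mathrm{F} + \gamma \\
{}^{17}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{18}\mathrm{Ne} + \gamma \\
{}^{18}\mathrm{Ne} &\rightarrow {}^{18}\mathrm{F} + e^{+} + \nu_e \\
{}^{18}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{15}\mathrm{O} + {}^{4}\mathrm{He} \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_e
\end{aligned}
$$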
An alternative to the HCNO-II cycle is that fluorine-18 captures a proton, moving towards higher mass and using the same helium-production mechanism as the CNO-IV cycle, leading to the sequence below.
In detail:
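$$
\begin{aligned}
{}^{18}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{19}\mathrm{Ne} + \gamma \\
{}^{19}\mathrm{Ne} &\rightarrow {}^{19}\mathrm{F} + e^{+} + \nu_e \\
{}^{19}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{16}\mathrm{O} + {}^{4}\mathrm{He} \\
{}^{16}\mathrm{O} + {}^{1}\mathrm{H} &\rightarrow {}^{17}\mathrm{F} + \gamma \\
{}^{17}\mathrm{F} + {}^{1}\mathrm{H} &\rightarrow {}^{18}\mathrm{Ne} + \gamma \\
{}^{18}\mathrm{Ne} &\rightarrow {}^{18}\mathrm{F} + e^{+} + \nu_e
\end{aligned}
$$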
While the total number of "catalytic" nuclei is conserved in the cycle, in stellar evolution the relative proportions of the nuclei are altered. When the cycle is run to equilibrium, the ratio of carbon-12 to carbon-13 nuclei is driven to 3.5, and nitrogen-14 becomes the most numerous nucleus, regardless of initial composition. During a star's evolution, convective mixing episodes move material within which the CNO cycle has operated from the star's interior to the surface, altering the observed composition of the star. Red giant stars are observed to have lower carbon-12/carbon-13 and carbon-12/nitrogen-14 ratios than do main sequence stars, which is considered to be convincing evidence for the operation of the CNO cycle.
6,062 | Craps | Craps is a dice game in which players bet on the outcomes of the roll of a pair of dice. Players can wager money against each other (playing "street craps") or against a bank ("casino craps"). Because it requires little equipment, "street craps" can be played in informal settings. While shooting craps, players may use slang terminology to place bets and actions.
In 1788, "Krabs" (later spelled crabs) was an English variation on the dice game hazard (also spelled hasard).
Craps developed in the United States from a simplification of the western European game of hazard. The origins of hazard are obscure and may date to the Crusades. Hazard was brought from London to New Orleans in approximately 1805 by the returning Bernard Xavier Philippe de Marigny de Mandeville, the young gambler and scion of a family of wealthy landowners in colonial Louisiana. Although in hazard the dice shooter may choose any number from five to nine to be his main number, de Marigny simplified the game such that the main number is always seven, which is the mathematically optimal choice (choice with the lowest disadvantage for the shooter). Both hazard and its simpler derivative were unfamiliar to and rejected by Americans of his social class, leading de Marigny to introduce his novelty to the local underclass. Field hands taught their friends and deckhands, who carried the new game up the Mississippi River and its tributaries. Celebrating the popular success of his novelty, de Marigny gave the name Rue de Craps to a street in his new subdivision in New Orleans.
The central game, called pass from the French word pas (meaning "pace" or "step"), has been gradually supplemented over the decades by many companion games which can be played simultaneously with pass. Now applied to the entire collection of games, the name craps derives from an underclass Louisiana mispronunciation of the word crabs, which in aristocratic London had been the epithet for the numbers two and three. In hazard, both "crabs" are always instant-losing numbers for the first dice roll regardless of the shooter's selected main number. Also in hazard, if the main number is seven then the number twelve is added to the crabs as a losing number on the first dice roll. This structure is retained in the simplified game called pass. All three losing numbers on the first roll of pass are jointly called the craps numbers.
For a century after its invention, casinos used unfair dice. In approximately 1907, a dicemaker named John H. Winn in Philadelphia introduced a layout which featured bets on Don't Pass as well as Pass. Virtually all modern casinos use his innovation, which incentivizes casinos to use fair dice.
Craps exploded in popularity during World War II, which brought most young American men of every social class into the military. The street version of craps was popular among service members who often played it using a blanket as a shooting surface. Their military memories led to craps becoming the dominant casino game in postwar Las Vegas and the Caribbean. After 1960, a few casinos in Europe, Australia, and Macau began offering craps, and, after 2004, online casinos extended the game's spread globally.
Bank craps or casino craps is played by one or more players betting against the casino rather than each other. Both the players and the dealers stand around a large rectangular craps table. Sitting is discouraged by most casinos unless a player has medical reasons for requiring a seat.
Players use casino chips rather than cash to bet on the Craps "layout", a fabric surface which displays the various bets. The bets vary somewhat among casinos in availability, locations, and payouts. The tables roughly resemble bathtubs and come in various sizes. In some locations, chips may be called checks, tokens, or plaques.
Against one long side is the casino's table bank: as many as two thousand casino chips in stacks of 20. The opposite long side is usually a long mirror. The U-shaped ends of the table have duplicate layouts and standing room for approximately eight players. In the center of the layout is an additional group of bets which are used by players from both ends. The vertical walls at each end are usually covered with a rubberized target surface covered with small pyramid shapes to randomize the dice which strike them. The top edges of the table walls have one or two horizontal grooves in which players may store their reserve chips.
The table is run by up to four casino employees: a boxman seated (usually the only seated employee) behind the casino's bank, who manages the chips, supervises the dealers, and handles "coloring up" players (exchanging small chip denominations for larger denominations in order to preserve the chips at a table); two base dealers who stand to either side of the boxman and collect and pay bets to players around their half of the table; and a stickman who stands directly across the table from the boxman, takes and pays (or directs the base dealers to do so) the bets in the center of the table, announces the results of each roll (usually with a distinctive patter), and moves the dice across the layout with an elongated wooden stick.
Each employee also watches for mistakes by the others because of the sometimes large number of bets and frantic pace of the game. In smaller casinos or at quiet times of day, one or more of these positions may be left unstaffed, with the duties covered by another employee, or the table's player capacity may be reduced.
Some smaller casinos have introduced "mini-craps" tables which are operated with only two dealers; rather than being two essentially identical sides and the center area, a single set of major bets is presented, split by the center bets. Responsibility of the dealers is adjusted: while the stickman continues to handle the center bets, it is the base dealer who handles all other bets (as well as cash and chip exchanges).
By contrast, in "street craps", there is no marked table and often the game is played with no back-stop against which the dice are to hit. (Despite the name "street craps", this game is often played in houses, usually on an un-carpeted garage or kitchen floor.) The wagers are made in cash, never in chips, and are usually thrown down onto the ground or floor by the players. There are no attendants, and so the progress of the game, fairness of the throws, and the way that the payouts are made for winning bets are self-policed by the players.
Each casino may set which bets are offered and different payouts for them, though a core set of bets and payouts is typical. Players take turns rolling two dice and whoever is throwing the dice is called the "shooter". Players can bet on the various options by placing chips directly on the appropriately-marked sections of the layout, or asking the base dealer or stickman to do so, depending on which bet is being made.
While acting as the shooter, a player must have a bet on the "Pass" line and/or the "Don't Pass" line. "Pass" and "Don't Pass" are sometimes called "Win" and "Don't Win" or "Right" and "Wrong" bets. The game is played in rounds and these "Pass" and "Don't Pass" bets are betting on the outcome of a round. The shooter is presented with multiple dice (typically five) by the "stickman", and must choose two for the round. The remaining dice are returned to the stickman's bowl and are not used.
Each round has two phases: "come-out" and "point". Dice are passed to the left. To start a round, the shooter makes one or more "come-out" rolls. The shooter must shoot toward the farther back wall and is generally required to hit it with both dice. Casinos may allow a few warnings before enforcing that the dice hit the back wall, and are generally lenient if at least one die does. Both dice must be tossed in one throw. If only one die is thrown, the shot is invalid. A come-out roll of 2, 3, or 12 is called "craps" or "crapping out", and anyone betting the Pass line loses. On the other hand, anyone betting the Don't Pass line on come out wins with a roll of 2 or 3 and ties (pushes) if a 12 is rolled. Shooters may keep rolling after crapping out; the dice are only required to be passed if a shooter sevens out (rolls a seven after a point has been established). A come-out roll of 7 or 11 is a "natural"; the Pass line wins and Don't Pass loses. The other possible numbers are the point numbers: 4, 5, 6, 8, 9, and 10. If the shooter rolls one of these numbers on the come-out roll, this establishes the "point" – to "pass" or "win", the point number must be rolled again before a seven.
The dealer flips a button to the "On" side and moves it to the point number, signifying the second phase of the round. If the shooter "hits" the point value again (any value of the dice that sums to the point will do; the shooter doesn't have to repeat the exact combination of the come-out roll) before rolling a seven, the Pass line wins and a new round starts. If the shooter rolls any seven before repeating the point number (a "seven-out"), the Pass line loses, the Don't Pass line wins, and the dice pass clockwise to the next new shooter for the next round. Once a point has been established any multi-roll bet (including Pass and/or Don't Pass line bets and odds) are unaffected by the 2, 3, 11, or 12; the only numbers which affect the round are the established point, any specific bet on a number, or any 7. Any single roll bet is always affected (win or lose) by the outcome of any roll.
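The round structure just described is compact enough to model in a few lines of code. The following Python sketch (illustrative only; the helper names `roll` and `resolve_pass_line` are invented for this example) resolves a single Pass line bet:

```python
import random

def roll() -> int:
    """Roll two six-sided dice and return their total."""
    return random.randint(1, 6) + random.randint(1, 6)

def resolve_pass_line() -> bool:
    """Play one round of craps and return True if the Pass line wins."""
    come_out = roll()
    if come_out in (7, 11):        # "natural": Pass line wins
        return True
    if come_out in (2, 3, 12):     # "craps": Pass line loses
        return False
    point = come_out               # 4, 5, 6, 8, 9, or 10 establishes the point
    while True:
        r = roll()
        if r == point:             # point made before a 7: Pass line wins
            return True
        if r == 7:                 # seven-out: Pass line loses
            return False
```

Simulating many such rounds converges on the Pass line's win probability of 244/495, or about 49.29%.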
While the come-out roll may specifically refer to the first roll of a new shooter, any roll where no point is established may be referred to as a come-out. By this definition, the start of any new round, regardless of whether it is the shooter's first toss, can be referred to as a come-out roll.
Any player can make a bet on Pass or Don't Pass as long as a point has not been established, or Come or Don't Come as long as a point is established. All other bets, including an increase in odds behind the Pass and Don't Pass lines, may be made at any time. All bets other than Pass line and Come may be removed or reduced any time before the bet loses. This is known as "taking it down" in craps.
The maximum bets for Place, Buy, Lay, Pass, and Come bets are generally equal to the table maximum. Lay bet maximums are instead equal to the table maximum win, so players wishing to lay the 4 or 10 may bet twice the table maximum, since the win will then equal the table maximum. Odds behind Pass, Come, Don't Pass, and Don't Come may be larger than the table maximum in some casinos, up to whatever the offered odds multiple allows. Don't odds are capped at the maximum allowed win: some casinos allow the odds bet itself to be larger than the maximum bet allowed, as long as the win is capped at maximum odds. Single-roll bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum; the maximum allowed single-roll bet is based on the maximum allowed win from a single roll.
In all the above scenarios, whenever the Pass line wins, the Don't Pass line loses, and vice versa, with one exception: on the come-out roll, a roll of 12 will cause Pass Line bets to lose, but Don't Pass bets are pushed (or "barred"), neither winning nor losing. (The same applies to "Come" and "Don't Come" bets, discussed below.)
A player wishing to play craps without being the shooter should approach the craps table and first check whether the dealer's "On" button is sitting on any of the point numbers: if it is, a point has been established and the table is in the point round; if the button is "Off", the table is in the come-out round.
In either case, all single or multi-roll proposition bets may be placed in either of the two rounds.
Between dice rolls there is a period for dealers to make payouts and collect losing bets, after which players can place new bets. The stickman monitors the action at a table and decides when to give the shooter the dice, after which no more betting is allowed.
When joining the game, one should place money on the table rather than passing it directly to a dealer. The dealer's exaggerated movements during the process of "making change" or "change only" (converting currency to an equivalent in casino cheques) are required so that any disputes can be later reviewed against security camera footage.
The dealers will insist that the shooter roll with one hand and that the dice bounce off the far wall surrounding the table. These requirements are meant to keep the game fair (preventing switching the dice or making a "controlled shot"). If a die leaves the table, the shooter will usually be asked to select another die from the remaining three but can request permission to use the same die if it passes the boxman's inspection. This requirement exists to keep the game fair and reduce the chance of loaded dice.
There are many local variants of the calls made by the stickman for rolls during a craps game. These frequently incorporate a reminder to the dealers as to which bets to pay or collect.
Rolls of 4, 6, 8, and 10 are called "hard" or "easy" (e.g. "six the hard way", "easy eight", "hard ten") depending on whether they were rolled as a "double" or as any other combination of values, because of their significance in center table bets known as the "hard ways". Hard way rolls are so named because there is only one way to roll them (i.e., the value on each die is the same when the number is rolled). Consequently, it is more likely to roll the number in combinations (easy) rather than as a double (hard).
The shooter is required to make either a Pass line bet or a Don't Pass bet if he wants to shoot. On the come out roll each player may only make one bet on the Pass or Don't Pass, but may bet both if desired. The Pass Line and Don't Pass bet is optional for any player not shooting. In rare cases, some casinos require all players to make a minimum Pass Line or Don't Pass bet (if they want to make any other bet), whether they are currently shooting or not.
The fundamental bet in craps is the Pass line bet, which is a bet for the shooter to win. This bet must be at least the table minimum and at most the table maximum.
The Pass line bet pays even money.
The Pass line bet is a contract bet. Once a Pass line bet is made, it is always working and cannot be turned "Off", taken down, or reduced until a decision is reached – the point is made, or the shooter sevens out. A player may increase any corresponding odds (up to the table limit) behind the Pass line at any time after a point is established. Players may only bet the Pass line on the come out roll when no point has been established, unless the casino allows put betting where the player can bet Pass line or increase an existing Pass line bet whenever desired and may take odds immediately if the point is already on.
A Don't Pass bet is a bet for the shooter to lose ("seven out, line away") and is almost the opposite of the Pass line bet. Like the Pass bet, this bet must be at least the table minimum and at most the table maximum.
The Don't Pass bet pays even money.
The Don't Pass bet is a no-contract bet. After a point is established, a player may take down or reduce a Don't Pass bet and any corresponding odds at any time because the odds of rolling a 7 before the point are in the player's favor. Once taken down or reduced, however, the Don't Pass bet may not be restored or increased. Because the shooter must have a line bet, the shooter generally may not reduce a Don't Pass bet below the table minimum. In Las Vegas, a majority of casinos will allow the shooter to move the bet to the Pass line in lieu of taking it down; however, in other areas such as Pennsylvania and Atlantic City, this is not allowed. Even though players are allowed to remove the Don't Pass line bet after a point has been established, the bet cannot be turned "Off" without being removed. If a player chooses to remove the Don't Pass line bet, he or she can no longer lay odds behind the Don't Pass line. The player can, however, still make standard lay bets on any of the point numbers (4, 5, 6, 8, 9, 10).
There are two different ways to calculate the odds and house edge of this bet. Counting the game as ending in a push when a 12 is rolled on the come-out, rather than treating the bet as undetermined, the house edge on the Don't Pass is about 1.36%; counting only resolved bets, it is about 1.40%. Betting on Don't Pass is often called "playing the dark side", and it is considered by some players to be in poor taste, or even taboo, because it goes directly against conventional play, winning when most of the players lose.
If a 4, 5, 6, 8, 9, or 10 is thrown on the come-out roll (i.e., if a point is established), most casinos allow Pass line players to take odds by placing up to some predetermined multiple of the Pass line bet behind the Pass line. This additional bet wins if the point is rolled again before a 7 is rolled (the point is made) and pays at the true odds of 2-to-1 if 4 or 10 is the point, 3-to-2 if 5 or 9 is the point, or 6-to-5 if 6 or 8 is the point. Unlike the Pass line bet itself, the Pass line odds bet can be turned "Off" (not working), removed or reduced anytime before it loses. In Las Vegas, odds bets are generally required to be at least the table minimum. In Atlantic City and Pennsylvania, the combined odds and Pass bet must be table minimum, so players can bet the minimum single unit on odds depending on the point. If the point is a 4 or 10, players can bet as little as $1 on odds if the table minimum is low, such as $5, $10 or $15. If the player requests the Pass odds be not working ("Off") and the shooter sevens-out or hits the point, the Pass line bet will be lost or doubled and the Pass odds returned.
Individual casinos (and sometimes tables within a casino) vary greatly in the maximum odds they offer, from single or double odds (one or two times the Pass line bet) up to 100x or even unlimited odds. A variation often seen is "3-4-5X Odds", where the maximum allowed odds bet depends on the point: three times if the point is 4 or 10; four times on points of 5 or 9; or five times on points of 6 or 8. This rule simplifies the calculation of winnings: a maximum Pass odds bet on a 3–4–5× table will always be paid at six times the Pass line bet regardless of the point.
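The arithmetic behind this simplification is that each maximum multiple exactly offsets the true-odds payout for its point:

$$
3 \times 2 = 4 \times \tfrac{3}{2} = 5 \times \tfrac{6}{5} = 6,
$$

so a maximum odds bet always returns six times the flat Pass line bet.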
As odds bets are paid at true odds, in contrast with the Pass line which is always even money, taking odds on a minimum Pass line bet lessens the house advantage compared with betting the same total amount on the Pass line only. A maximum odds bet on a minimum Pass line bet often gives the lowest house edge available in any game in the casino. However, the odds bet cannot be made independently, so the house retains an edge on the Pass line bet itself.
If a player is playing Don't Pass instead of Pass, they may also lay odds by placing chips behind the Don't Pass line. If a 7 comes before the point is rolled, the odds pay at true odds of 1-to-2 if 4 or 10 is the point, 2-to-3 if 5 or 9 is the point, or 5-to-6 if 6 or 8 is the point. Typically the maximum lay bet will be expressed such that a player may win up to an amount equal to the maximum odds multiple at the table. If a player lays maximum odds with a point of 4 or 10 on a table offering five-times odds, he would be able to lay a maximum of ten times the amount of his Don't Pass bet. At a 5x odds table, the maximum amount the combined bet can win will always be 6x the amount of the Don't Pass bet. Players can bet table minimum odds if desired and win less than table minimum. Like the Don't Pass bet, the odds can be removed or reduced. Unlike the Don't Pass bet itself, the Don't Pass odds can be turned "Off" (not working). In Las Vegas, odds bets are generally required to be at least the table minimum. In Atlantic City and Pennsylvania, the combined lay odds and Don't Pass bet must be table minimum, so players may bet as little as two units on odds depending on the point. If the point is a 4 or 10, players can bet as little as $2 if the table minimum is low, such as at $5, $10 or $15 tables. If the player requests the Don't Pass odds be not working ("Off") and the shooter hits the point or sevens-out, the Don't Pass bet will be lost or doubled and the Don't Pass odds returned. Unlike a standard lay bet on a point, laying odds behind the Don't Pass line does not incur a commission (vig).
A Come bet can be visualized as starting an entirely new Pass line bet, unique to that player. Like the Pass line, each player may only make one Come bet per roll; this does not exclude a player from betting odds on an already established Come point. This bet must be at least the table minimum and at most the table maximum. Players may bet both the Come and Don't Come on the same roll if desired. Come bets can only be made after a point has been established since, on the come-out roll, a Come bet would be the same thing as a Pass line bet. A player making a Come bet will bet on the first point number that "comes" from the shooter's next roll, regardless of the table's round. If a 7 or 11 is rolled on the first round, it wins. If a 2, 3, or 12 is rolled, it loses. If instead the roll is 4, 5, 6, 8, 9, or 10, the Come bet will be moved by the base dealer onto a box representing the number the shooter threw. This number becomes the "come-bet point" and the player is allowed to take odds, just like a Pass line bet. Also like a Pass line bet, the Come bet is a contract bet and is always working, and cannot be turned "Off", removed or reduced until it wins or loses. However, the odds taken behind a Come bet can be turned "Off" (not working), removed or reduced anytime before the bet loses. In Las Vegas, odds bets are generally required to be at least the table minimum. In Atlantic City and Pennsylvania, the combined odds and Pass bet must be table minimum, so players can bet the minimum single unit depending on the point. If the point is a 4 or 10, players can bet as little as $1 if the table minimum is low, such as $5, $10, or $15 minimums. If the player requests the Come odds to be not working ("Off") and the shooter sevens-out or hits the Come bet point, the Come bet will be lost or doubled and the Come odds returned. If the casino allows put betting, a player may increase a Come bet after a point has been established and bet larger odds behind if desired. Put betting also allows a player to bet on a Come and take odds immediately on a point number without a Come bet point being established.
The dealer will place the odds on top of the come bet, but slightly off center in order to differentiate between the original bet and the odds. The second round wins if the shooter rolls the come bet point again before a seven. Winning come bets are paid the same as winning Pass line bets: even money for the original bet and true odds for the odds bet. If, instead, the seven is rolled before the come-bet point, the come bet (and any odds bet) loses.
Because of the come bet, if the shooter makes their point, a player can find themselves in the situation where they still have a come bet (possibly with odds on it) and the next roll is a come-out roll. In this situation, odds bets on the come wagers are usually presumed to be not working for the come-out roll. That means that if the shooter rolls a 7 on the come-out roll, any players with active come bets waiting for a come-bet point lose their initial wager but will have their odds bets returned to them.
If the come-bet point is rolled on the come-out roll, the odds do not win but the come bet does and the odds bet is returned (along with the come bet and its payoff). The player can tell the dealer that they want their odds working, such that if the shooter rolls a number that matches the come point, the odds bet will win along with the come bet, and if a seven is rolled, both lose.
Many players will use a come bet as "insurance" against sevening out: if the shooter rolls a seven, the come bet pays 1:1, offsetting the loss of the Pass line bet. The risk in this strategy is the situation where the shooter does not hit a seven for several rolls, leading to multiple come bets that will be lost if the shooter eventually sevens out.
In the same way that a come bet is similar to a Pass line bet, a Don't Come bet is similar to a Don't Pass bet. Like the come, the Don't Come can only be bet after a point has already been established, as it is the same as a Don't Pass line bet when no point is established. This bet must be at least the table minimum and at most the table maximum. A Don't Come bet is played in two rounds. If a 2 or 3 is rolled in the first round, it wins. If a 7 or 11 is rolled, it loses. If a 12 is rolled, it is a push (subject to the same 2/12 switch described above for the Don't Pass bet). If, instead, the roll is 4, 5, 6, 8, 9, or 10, the Don't Come bet will be moved by the base dealer onto a box representing the number the shooter threw. The second round wins if the shooter rolls a seven before the Don't Come point. Like the Don't Pass, each player may only make one Don't Come bet per roll; this does not exclude a player from laying odds on an already established Don't Come point. Players may bet both the Don't Come and Come on the same roll if desired.
The player may lay odds on a Don't Come bet, just like a Don't Pass bet; in this case, the dealer (not the player) places the odds bet on top of the bet in the box, because of limited space, slightly offset to signify that it is an odds bet and not part of the original Don't Come bet. Lay odds behind a Don't Come are subject to the same rules as Don't Pass lay odds. Unlike a standard lay bet on a point, lay odds behind a Don't Come point do not incur commission (vig) and give the player true odds. Like the Don't Pass line bet, Don't Come bets are no-contract, and can be removed or reduced after a Don't Come point has been established, but cannot be turned off ("not working") without being removed. A player may also call "No Action" when a point is established, and the bet will not be moved to its point. This play is not to the player's advantage. If the bet is removed, the player can no longer lay odds behind the Don't Come point and cannot restore or increase the same Don't Come bet. Players must wait until the next roll, as long as a Pass line point has been established (players cannot bet Don't Come on come-out rolls), before they can make a new Don't Come bet. Las Vegas casinos which allow put betting allow players to move the Don't Come directly to any Come point as a put; however, this is not allowed in Atlantic City or Pennsylvania. Unlike the Don't Come bet itself, the Don't Come odds can be turned "Off" (not working), removed, or reduced if desired. In Las Vegas, players generally must lay at least table minimum on odds if desired and win less than table minimum; in Atlantic City and Pennsylvania a player's combined bet must be at least table minimum, so depending on the point number players may lay as little as two minimum units (e.g. if the point is 4 or 10). If the player requests the Don't Come odds be not working ("Off") and the shooter hits the Don't Come point or sevens-out, the Don't Come bet will be lost or doubled and the Don't Come odds returned.
Winning Don't Come bets are paid the same as winning Don't Pass bets: even money for the original bet and true odds for the odds lay. Unlike come bets, the odds laid behind points established by Don't Come bets are always working including come out rolls unless the player specifies otherwise.
These are bets that may not be settled on the first roll and may need any number of subsequent rolls before an outcome is determined. Most multi-roll bets may fall into the situation where a point is made by the shooter before the outcome of the multi-roll bet is decided. These bets are often considered "not working" on the new come-out roll until the next point is established, unless the player calls the bet as "working."
Casino rules vary on this; some of these bets may not be callable, while others may be considered "working" during the come-out. Dealers will usually announce if bets are working unless otherwise called off. If a non-working point number placed, bought or laid becomes the new point as the result of a come-out, the bet is usually refunded, or can be moved to another number for free.
Players can bet any point number (4, 5, 6, 8, 9, 10) by placing their wager in the come area and telling the dealer how much and on what number(s), "30 on the 6", "5 on the 5", or "25 on the 10". These are typically "Place Bets to Win". These are bets that the number bet on will be rolled before a 7 is rolled. These bets are considered working bets, and will continue to be paid out each time a shooter rolls the number bet. On a come-out roll, a place bet is considered to be not in effect unless the player who made it specifies otherwise. This bet may be removed or reduced at any time until it loses; in the latter case, the player must abide by any table minimums.
Place bets to win pay out at slightly worse than the true odds: 9-to-5 on points 4 or 10, 7-to-5 on points 5 or 9, and 7-to-6 on points 6 or 8. The place bets on the outside numbers (4, 5, 9, 10) should be made in units of $5 (on a $5 minimum table) in order to receive the correct exact payout of $5 paying $7 or $5 paying $9. The place bets on the 6 and 8 should be made in units of $6 (on a $5 minimum table) in order to receive the correct exact payout of $6 paying $7. For the 4 and 10, it is to the player's advantage to 'buy' the bet (see below).
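These payouts imply the often-quoted place-bet house edges (6.67% on 4/10, 4.00% on 5/9, 1.52% on 6/8), which can be checked with a short Python sketch (illustrative only; the dictionary names are invented here):

```python
from fractions import Fraction

# Number of ways to roll each point with two dice; a 7 rolls 6 ways.
WAYS = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}

# Place-bet payout ratios, e.g. 9-to-5 on the 4 and 10.
PAYOUT = {4: (9, 5), 5: (7, 5), 6: (7, 6), 8: (7, 6), 9: (7, 5), 10: (9, 5)}

for point, ways in WAYS.items():
    num, den = PAYOUT[point]
    p_win = Fraction(ways, ways + 6)                # point rolls before a 7
    ev = p_win * Fraction(num, den) - (1 - p_win)   # expected value per unit bet
    print(f"Place {point}: house edge {-float(ev):.2%}")
```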
An alternative form, rarely offered by casinos, is the "place bet to lose." This bet is the opposite of the place bet to win and pays off if a 7 is rolled before the specific point number. The place bet to lose typically carries a lower house edge than a place bet to win. Payouts are 4–5 on points 6 or 8, 5–8 on 5 or 9, and 5–11 on 4 or 10.
Players can also buy a bet, which is paid at true odds, but a 5% commission is charged on the amount of the bet. A buy bet is a wager that a specific point number will be rolled before a 7. The buy bet must be at least table minimum excluding commission; however, some casinos require the minimum buy bet amount to be at least $20 to match the $1 charged on the 5% commission. Traditionally, the buy bet commission is paid no matter what, but in recent years a number of casinos have changed their policy to charge the commission only when the buy bet wins. Some casinos charge the commission as a one-time fee to buy the number; payouts are then always at true odds. Most casinos usually charge only $1 for a $25 green-chip bet (4% commission), or $2 for $50 (two green chips), reducing the house advantage a bit more. Players may remove or reduce this bet (the bet must remain at least table minimum excluding vig) anytime before it loses. Buy bets, like place bets, are not working when no point has been established unless the player specifies otherwise.
Where commission is charged only on wins, the commission is often deducted from the winning payoff—a winning $25 buy bet on the 10 would pay $49, for instance. The house edges stated in the table assume the commission is charged on all bets. They are reduced by at least a factor of two if commission is charged on winning bets only.
A lay bet is the opposite of a buy bet, where a player bets on a 7 to roll before the number that is laid. Players may only lay the 4, 5, 6, 8, 9, or 10 and may lay multiple numbers if desired. Just like buy bets, lay bets pay true odds, but because the lay bet is the opposite of the buy bet, the payout is reversed. Therefore, players get 1 to 2 for the numbers 4 and 10, 2 to 3 for the numbers 5 and 9, and 5 to 6 for the numbers 6 and 8. A 5% commission (vigorish, vig, juice) is charged up front on the possible winning amount. For example, a $40 lay bet on the 4 would pay $20 on a win, so the 5% vig would be $1, based on the $20 win (not $2 based on the $40 bet, as is the way buy bet commissions are figured). Like the buy bet, the commission is adjusted to suit the betting unit such that fraction-of-a-dollar payouts are not needed. Casinos may charge the vig up front, thereby requiring the player to pay a vig win or lose; other casinos may only take the vig if the bet wins. Taking vig only on wins lowers the house edge. Players may remove or reduce this bet (the bet must remain at least table minimum) anytime before it loses. Some casinos in Las Vegas allow players to lay table minimum plus vig if desired and win less than table minimum. Lay bet maximums are equal to the table maximum win, so if a player wishes to lay the 4 or 10, he or she may bet twice the table maximum for the win to be table maximum. Other casinos require the minimum bet to win at $20, even at the lowest minimum tables, in order to match the $1 vig; this requires a $40 bet. Similar to buy betting, some casinos only take commission on wins, reducing the house edge. Unlike place and buy bets, lay bets are always working even when no point has been established. The player must specify otherwise if he or she wishes to have the bet not working.
If a player is unsure of whether a bet is a single or multi-roll bet, it can be noted that all single-roll bets will be displayed on the playing surface in one color (usually red), while all multi-roll bets will be displayed in a different color (usually yellow).
A put bet is a bet which allows players to increase or make a Pass line bet after a point has been established (after the come-out roll). Players may make a put bet on the Pass line and take odds immediately, or increase odds behind if a player decides to add money to an already existing Pass line bet. Put betting also allows players to increase an existing Come bet for additional odds after a come point has been established, or to make a new Come bet and take odds immediately without a come point being established. Increased or added put bets on the Pass line and Come cannot be turned "Off", removed, or reduced, but the odds bet behind them can be. The odds bet is generally required to be at least the table minimum. Players cannot make put bets on the Don't Pass or Don't Come. Put betting may give a larger house edge over place betting unless the casino offers high odds.
Put bets are generally allowed in Las Vegas, but not allowed in Atlantic City and Pennsylvania.
Put bets are better than place bets (to win) when betting more than 5-times odds over the flat bet portion of the put bet. For example, a player wants a $30 bet on the six. Looking at two possible bets: 1) Place the six, or 2) Put the six with odds. A $30 place bet on the six pays $35 if it wins. A $30 put bet would be a $5 flat line bet plus $25 (5-times) in odds, and also would pay $35 if it wins. Now, with a $60 bet on the six, the place bet wins $70, where the put bet ($5 + $55 in odds) would pay $71. The player needs to be at a table which not only allows put bets, but also high-times odds, to take this advantage.
This bet can only be placed on the numbers 4, 6, 8, and 10. In order for this bet to win, the chosen number must be rolled the "hard way" (as doubles) before a 7 or any other non-double combination ("easy way") totaling that number is rolled. For example, a player who bets a hard 6 can only win by seeing a 3–3 roll come up before any 7 or any easy roll totaling 6 (4–2 or 5–1); otherwise, the player loses.
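With the commonly offered payouts of 9:1 on the hard 6 and 8 and 7:1 on the hard 4 and 10 (payouts vary by casino), the expected value follows directly from the combination counts. For the hard 6, one winning roll (3–3) competes with ten losing rolls (four "easy" sixes and six sevens):

$$
\mathrm{EV} = \frac{(1)(9) - (10)(1)}{11} = -\frac{1}{11} \approx -9.1\%.
$$

The same reasoning gives about −11.1% for the hard 4 and 10, where one winning roll competes with eight losing rolls.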
In Las Vegas casinos, this bet is generally working, including when no point has been established, unless the player specifies otherwise. In other casinos such as those in Atlantic City, hard ways are not working when the point is off unless the player requests to have it working on the come out roll.
Like single-roll bets, hard way bets can be lower than the table minimum; however, the maximum bet allowed is also lower than the table maximum. The minimum hard way bet is one unit. For example, lower-stakes tables with minimums of $5 or $10 generally allow minimum hard ways bets of $1. The maximum bet is based on the maximum allowed win from a single roll.
Easy way is not a specific bet offered in standard casinos, but a term used to define any number combination which has two ways to roll. For example, (6–4, 4–6) would be a "10 easy". The 4, 6, 8 or 10 can be made both hard and easy ways. Betting point numbers (which pays off on easy or hard rolls of that number) or single-roll ("hop") bets (e.g., "hop the 2–4" is a bet for the next roll to be an easy six rolled as a two and four) are methods of betting easy ways.
A player can bet on either the 6 or 8 being rolled before the shooter throws a seven. These wagers are usually avoided by experienced craps players since they pay even money (1:1) while a player can make place bets on the 6 or the 8, which pay more (7:6). Some casinos (especially all those in Atlantic City) do not even offer the Big 6 & 8. The bets are located in the corners behind the Pass line, and bets may be placed directly by players.
The only real advantage offered by the Big 6 & 8 is that they can be bet for the table minimum, whereas a place bet minimum may sometimes be greater than the table minimum (e.g. $6 place bet on a $3 minimum game.) In addition place bets are usually not working, except by agreement, when the shooter is "coming out" i.e. shooting for a point, and Big 6 and 8 bets always work. Some modern layouts no longer show the Big 6/Big 8 bet.
Single-roll (proposition) bets are resolved in one dice roll by the shooter. Most of these are called "service bets", and they are located at the center of most craps tables. Only the stickman or a dealer can place a service bet. Single-roll bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum. The maximum bet is based on the maximum allowed win from a single roll. The lowest single-roll bet can be a minimum one unit bet. For example, tables with minimums of $5 or $10 generally allow minimum single-roll bets of $1. Single bets are always working by default unless the player specifies otherwise. The bets include:
Fire Bet: Before the shooter begins, some casinos will allow a bet known as a fire bet to be placed. A fire bet is a bet of as little as $1, generally up to a maximum of $5 to $10 (sometimes higher, depending on the casino), made in the hope that the next shooter will have a hot streak of setting and hitting many points of different values. As different individual points are made by the shooter, they will be marked on the craps layout with a fire symbol.
The first three points will not pay out on the fire bet, but the fourth, fifth, and sixth will pay out at increasing odds. The fourth point pays at 24-to-1, the fifth point pays at 249-to-1, and the sixth point pays at 999-to-1. (The points must all be different numbers for them to count toward the fire bet.) For example, a shooter who successfully hits a point of 10 twice will only garner credit for the first one on the fire bet. Players must hit the established point in order for it to count toward the fire bet. The payout is determined by the number of points which have been established and hit after the shooter sevens out.
Bonus Craps: Prior to the initial "come out roll", players may place an optional wager (usually a $1 minimum to a maximum $25) on one or more of the three Bonus Craps wagers, "All Small", "All Tall", or "All or Nothing at All." For players to win the "All Small" wager, the shooter must hit all five small numbers (2, 3, 4, 5, 6) before a seven is rolled; similarly, "All Tall" wins if all five high numbers (8, 9, 10, 11, 12) are hit before a seven is rolled.
These bets pay 35-for-1, for a house advantage of 7.76%. "All or Nothing at All" wins if the shooter hits all 10 numbers before a seven is rolled. This pays 176-for-1, for a house edge of 7.46%. For all three wagers, the order in which the numbers are hit does not matter. Whenever a seven is hit, including on the come out roll, all bonus bets lose, the bonus board is reset, and new bonus bets may be placed.
A player may wish to make multiple different bets. For example, a player may wish to bet $1 on all hard ways and the horn. If one of the bets wins, the dealer may automatically replenish the losing bet with profits from the winning bet. In this example, if the shooter rolls a hard 8 (which pays 9:1), the horn loses. The dealer may return $5 to the player and place the other $4 on the horn bet which lost. If the player does not want the bet replenished, he or she should request any or all bets be taken down.
A working bet is a live bet. Bets may also be on the board, but not in play and therefore not working. Pass line and come bets are always working meaning the chips are in play and the player is therefore wagering live money. Other bets may be working or not working depending whether a point has been established or player's choice. Place and buy bets are working by default when a point is established and not working when the point is off unless the player specifies otherwise. Lay bets are always working even if a point has not been established unless the player requests otherwise. At any time, a player may wish to take any bet or bets out of play. The dealer will put an "Off" button on the player's specific bet or bets; this allows the player to keep his chips on the board without a live wager. For example, if a player decides not to wager a place bet mid-roll but wishes to keep the chips on the number, he or she may request the bet be "not working" or "Off". The chips remain on the table, but the player cannot win from or lose chips which are not working.
The opposite is also allowed. By default, place and buy bets are not working without an established point; a player may wish to wager chips before a point has been established. In this case, the player would request the bet be working, and the dealer will place an "On" button on the specified chips.
The probabilities of the dice combinations determine the odds of the payouts. The two and twelve are the hardest to roll, since only one combination of dice makes each. The game of craps is built around the dice roll of seven, since it is the most easily rolled dice combination: 2 and 12 can each be made one way; 3 and 11, two ways; 4 and 10, three ways; 5 and 9, four ways; 6 and 8, five ways; and 7, six ways out of the 36 possible outcomes. Viewed another way, a seven rolls on average once every six rolls, while a two or a twelve each roll on average once every 36 rolls.
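The combination counts above can be verified by enumerating all 36 equally likely ordered outcomes of two dice, as in this short sketch:

```python
from collections import Counter
from itertools import product

# Count the ways each total can be rolled with two six-sided dice.
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(f"{total:2d}: {ways[total]} way(s) out of 36")
```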
The expected value of all bets is usually negative, such that the average player will always lose money. This is because the house always sets the paid odds below the actual odds. The only exception is the "odds" bet that the player is allowed to make after a point is established on a Pass/Come or Don't Pass/Don't Come bet (the odds portion of the bet has a long-term expected value of 0). However, this "free odds" bet cannot be made independently, so the expected value of the entire bet, including odds, is still negative. Since there is no correlation between die rolls, there is normally no possible long-term winning strategy in craps.
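As an illustration, the Pass line's negative expectation can be derived from first principles. The sketch below computes the exact win probability (244/495) and the familiar 1.41% house edge:

```python
from fractions import Fraction

ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
p = {t: Fraction(w, 36) for t, w in ways.items()}

# Win on a come-out 7 or 11; otherwise a point must repeat before a 7,
# which happens with probability ways[point] / (ways[point] + 6).
win = p[7] + p[11] + sum(p[pt] * Fraction(ways[pt], ways[pt] + 6)
                         for pt in (4, 5, 6, 8, 9, 10))
print(win, float(2 * win - 1))   # 244/495, about -0.0141 per $1 bet
```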
There are occasional promotional variants that provide either no house edge or even a player edge. One example is a field bet that pays 3:1 on 12 and 2:1 on either 3 or 11. Overall, given the 5:4 true odds of this bet, and the weighted average paid odds of approximately 7:5, the player has a 5% advantage on this bet. This is sometimes seen at casinos running limited-time incentives, in jurisdictions or gaming houses that require the game to be fair, or in layouts for use in informal settings using play money. No casino currently runs a craps table with a bet that yields a player edge full-time.
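The arithmetic behind that promotional field bet can be checked directly. The sketch below assumes the field covers 2, 3, 4, 9, 10, 11, and 12, and that every winning number other than 12 (3:1) and 3 or 11 (2:1) pays even money; under those assumptions the player edge works out to 1/18, roughly the 5% figure quoted, and the weighted average paid odds are 22/16 = 1.375, approximately 7:5.

```python
from fractions import Fraction

ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
pays = {12: 3, 3: 2, 11: 2, 2: 1, 4: 1, 9: 1, 10: 1}   # promotional paytable

ev = sum(Fraction(w, 36) * pays.get(t, -1) for t, w in ways.items())
win_ways = sum(w for t, w in ways.items() if t in pays)              # 16 ways to win
avg_paid = sum(Fraction(w) * pays[t] for t, w in ways.items() if t in pays) / win_ways
print(float(ev), float(avg_paid))   # about +0.056 player edge; paid odds ~1.375 (7:5)
```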
Maximizing the size of the odds bet in relation to the line bet will reduce, but never eliminate the house edge, and will increase variance. Most casinos have a limit on how large the odds bet can be in relation to the line bet, with single, double, and five times odds common. Some casinos offer 3–4–5 odds, referring to the maximum multiple of the line bet a player can place in odds for the points of 4 and 10, 5 and 9, and 6 and 8, respectively. During promotional periods, a casino may even offer 100x odds bets, which reduces the house edge to almost nothing, but dramatically increases variance, as the player will be betting in large betting units.
Since several of the multiple-roll bets pay off at ratios involving fractions of a dollar, it is important that the player bets in multiples that allow a correct payoff in whole dollars. Normally, payoffs will be rounded down to the nearest dollar, resulting in a higher house advantage. These bets include all place bets, taking odds, and buying on numbers 6, 8, 5, and 9, as well as laying all numbers.
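The effect of betting in the wrong multiples is easy to see. The sketch below compares the exact 7:6 place-bet payoff on the six with the rounded-down amount actually paid, assuming whole-dollar payouts:

```python
import math

for bet in (5, 6, 10, 12):            # place bet on the 6, paid at 7:6
    exact = bet * 7 / 6
    paid = math.floor(exact)          # casinos round down to whole dollars
    print(f"${bet}: exact ${exact:.2f}, paid ${paid}")
# $6 and $12 are paid in full; $5 and $10 forfeit the fractional part.
```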
These variants depend on the casino and the table, and sometimes a casino will have different tables that use or omit these variants and others.
When craps is played in a casino, all bets have a house advantage. That is, it can be shown mathematically that a player will (with 100% probability) lose all his or her money to the casino in the long run, while in the short run the player is more likely to lose money than make money. There may be players who are lucky and get ahead for a period of time, but in the long run these winning streaks are eroded away. One can slow, but not eliminate, one's average losses by only placing bets with the smallest house advantage.
The Pass/Don't Pass line, Come/Don't Come line, place 6, place 8, buy 4 and buy 10 (only under the casino rules where commission is charged only on wins) have the lowest house edge in the casino, and all other bets will, on average, lose money between three and twelve times faster because of the difference in house edges.
The place bets and buy bets differ from the Pass line and come line in that place bets and buy bets can be removed at any time: while they are multi-roll bets, their odds of winning do not change from roll to roll, whereas Pass line bets and come line bets are a combination of different odds on their first roll and subsequent rolls. The first roll of a Pass line bet is a 2:1 advantage for the player (8 wins, 4 losses), but it is "paid for" by subsequent rolls that are at the same disadvantage to the player as the Don't Pass bets were at an advantage. As such, the casino cannot profitably let the player take down the bet after the first roll. Players can take or lay odds behind an established point, depending on whether it was a Pass/Come or Don't Pass/Don't Come bet, to lower the house edge by receiving true odds on the point. Casinos which allow put betting allow players to increase or make new Pass/Come bets after the come-out roll. This bet generally has a higher house edge than place betting, unless the casino offers high odds.
Conversely, a player can take back (pick up) a Don't Pass or Don't Come bet after the first roll, but this cannot be recommended, because they have already endured the disadvantaged part of the combination – the first roll. On that come-out roll, they win on just 3 of the 36 possible rolls (2 and 3), while losing on 8 of them (7 and 11) and pushing on one (12). On the other 24 rolls that become a point, their Don't Pass bet is to their advantage by 6:3 (4 and 10), 6:4 (5 and 9) and 6:5 (6 and 8). If a player chooses to remove the initial Don't Come and/or Don't Pass line bet, he or she can no longer lay odds behind the bet and cannot re-bet the same Don't Pass and/or Don't Come number (players must make a new Don't Pass or Don't Come bet if desired). However, players can still make standard lay bets on any of the point numbers (4, 5, 6, 8, 9, 10).
Among these bets, and the remaining numbers and possible wagers, there are myriad systems and progressions that can be used with many combinations of numbers.
An important alternative metric is house advantage per roll (rather than per bet), which may be expressed as loss per hour. The typical pace of rolls varies depending on the number of players, but 102 rolls per hour is a cited rate for a nearly full table. The same reference states that only 29.6% of total rolls are come-out rolls on average, so this metric factors in the extra rolls needed to resolve, for example, the Pass line bet. This number then permits calculation of the rate of loss per hour, or per trip (for example, a gambling trip of four days with five hours of play per day).
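A sketch of this calculation for an assumed flat $10 Pass line bettor: the expected number of rolls to resolve one Pass line bet is 557/165 ≈ 3.38 (which reproduces the 29.6% come-out figure), and at 102 rolls per hour the expected loss follows.

```python
from fractions import Fraction

point_ways = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}
# One come-out roll, plus (if a point is set) a geometric wait for point-or-7.
rolls_per_bet = 1 + sum(Fraction(w, 36) * Fraction(36, w + 6)
                        for w in point_ways.values())
print(rolls_per_bet, float(1 / rolls_per_bet))   # 557/165; come-outs ~29.6% of rolls

edge_per_bet = Fraction(7, 495)                   # 1.41% Pass line house edge
loss_per_hour = 10 * float(edge_per_bet / rolls_per_bet) * 102
print(f"${loss_per_hour:.2f} per hour")           # roughly $4.27 per hour
```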
Besides the rules of the game itself, a number of formal and informal rules are commonly applied in the table form of Craps, especially when played in a casino.
To reduce the potential opportunity for switching dice by sleight-of-hand, players are not supposed to handle the dice with more than one hand (such as shaking them in cupped hands before rolling) nor take the dice past the edge of the table. If a player wishes to change shooting hands, they may set the dice on the table, let go, then take them with the other hand.
When throwing the dice, the player is expected to hit the farthest wall at the opposite end of the table (these walls are typically augmented with pyramidal structures to ensure highly unpredictable bouncing after impact). Casinos will sometimes allow a roll that does not hit the opposite wall as long as the dice are thrown past the middle of the table; a very short roll will be nullified as a "no roll". The dice may not be slid across the table and must be tossed. These rules are intended to prevent dexterous players from physically influencing the outcome of the roll.
Players are generally asked not to throw the dice above a certain height (such as the eye level of the dealers). This is both for the safety of those around the table, and to eliminate the potential use of such a throw as a distraction device in order to cheat.
Dice are still considered "in play" if they land on players' bets on the table, the dealer's working stacks, on the marker puck, or with one die resting on top of the other. The roll is invalid if either or both dice land in the boxman's bank, the stickman's bowl (where the extra three dice are kept between rolls), or in the rails around the top of the table where players' chips are kept. If one or both dice hits a player or dealer and rolls back onto the table, the roll counts as long as the person being hit did not intentionally interfere with either of the dice, though some casinos will rule "no roll" for this situation. If one or both leave the table, it is also a "no roll", and the dice may either be replaced or examined by the boxman and returned to play.
Shooters may wish to "set" the dice to a particular starting configuration before throwing (such as showing a particular number or combination, stacking the dice, or spacing them to be picked up between different fingers), but if they do, they are often asked to be quick about it so as not to delay the game. Some casinos disallow such rituals to speed up the pace of the game. Some may also discourage or disallow unsanitary practices such as kissing or spitting on the dice.
In most casinos, players are not allowed to hand anything directly to dealers, and vice versa. Items such as cash, checks, and chips are exchanged by laying them down on the table; for example, when "buying in" (paying cash for chips), players are expected to place the cash on the layout: the dealer will take it and then place the chips in front of the player. This rule is enforced in order to allow the casino to easily monitor and record all transfers via overhead surveillance cameras, and to reduce the opportunity for cheating via sleight-of-hand.
Most casinos prohibit "call bets", and may have a warning such as "No Call Bets" printed on the layout to make this clear. This means a player may not call out a bet without also placing the corresponding chips on the table. Such a rule reduces the potential for misunderstanding in loud environments, as well as disputes over the amount that the player intended to bet after the outcome has been decided. Some casinos choose to allow call bets once players have bought in. When allowed, they are usually made when a player wishes to bet at the last second, immediately before the dice are thrown, to avoid the risk of obstructing the roll.
Craps is among the most social and most superstitious of all gambling games, which leads to an enormous variety of informal rules of etiquette that players may be expected to follow. An exhaustive list of these is beyond the scope of this article, but the guidelines below are most commonly given.
Tipping the dealers is universal and expected in Craps. As in most other casino games, a player may simply place (or toss) chips onto the table and say, "For the dealers", "For the crew", etc. In craps, it is also common to place a bet for the dealers. This is usually done one of three ways: by placing an ordinary bet and simply declaring it for the dealers, as a "two-way", or "on top". A "Two-Way" is a bet for both parties: for example, a player may toss in two chips and say "Two Way Hard Eight", which will be understood to mean one chip for the player and one chip for the dealers. Players may also place a stack of chips for a bet as usual, but leave the top chip off-center and announce "on top for the dealers". The dealer's portion is often called a "toke" bet, which comes from the practice of using $1 slot machine tokens to place dealer bets in some casinos.
In some cases, players may also tip each other, for example as a show of gratitude to the thrower for a roll on which they win a substantial bet.
Craps players routinely practice a wide range of superstitious behaviors, and may expect or demand these from other players as well.
Most prominently, it is universally considered bad luck to say the word "seven" (after the "come-out", a roll of 7 is a loss for "pass" bets). Dealers themselves often make significant efforts to avoid calling out the number. When necessary, participants may refer to seven with a "nickname" such as "Big Red" (or just "Red"), "the S-word", etc.
Although no wagering system can consistently beat casino games based on independent trials such as craps, that does not stop gamblers from believing in them. One of the best known systems is the Martingale System. In this strategy, the gambler doubles his bet after every loss. After a win, the bet is reset to the original bet. The theory is that the first win would recover all previous losses plus win a profit equal to the original stake.
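A short simulation illustrates why the system cannot overcome the house edge. The sketch below is a minimal model, with an assumed $1,000 bankroll, $5 base bet, and $500 table maximum, playing the Martingale on the Pass line (win probability 244/495 per decision); the average final bankroll comes out below the starting amount.

```python
import random

def martingale_session(bankroll=1000, base=5, table_max=500, decisions=500):
    bet = base
    for _ in range(decisions):
        if bet > bankroll or bet > table_max:
            break                        # the doubling sequence is broken
        if random.random() < 244 / 495:  # Pass line win probability
            bankroll += bet
            bet = base                   # win: reset to the original stake
        else:
            bankroll -= bet
            bet *= 2                     # loss: double the next bet
    return bankroll

results = [martingale_session() for _ in range(5_000)]
print(sum(results) / len(results))       # average final bankroll: under 1000
```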
Other systems depend on the gambler's fallacy, which in craps terms is the belief that past dice rolls influence the probabilities of future dice rolls. For example, the gambler's fallacy suggests that a craps player should bet on eleven either because it has not appeared in the last 20 rolls (and is therefore "due") or because it has appeared frequently (and is therefore "hot"). In practice this can be observed as players respond to a roll such as a Hard Six with an immediate wager on the Hard Six.
In reality, each roll of the dice is an independent event, so the probability of rolling eleven is exactly 1/18 on every roll, regardless of the number of times eleven has come up in the last x rolls. Even if the dice are actually biased toward particular results ("loaded"), each roll is still independent of all the previous ones. The common term to describe this is "dice have no memory".
Another approach is to "set" the dice in a particular orientation, and then throw them in such a manner that they do not tumble randomly. The theory is that given exactly the same throw from exactly the same starting configuration, the dice will tumble in the same way and therefore show the same or similar values every time.
Casinos take steps to prevent this. The dice are usually required to hit the back wall of the table, which is normally faced with a jagged angular texture such as pyramids, making controlled spins more difficult. There has been no independent evidence that such methods can be successfully applied in a real casino.
Bank craps is a variation of the original craps game and is sometimes known as Las Vegas Craps. This variant is quite popular in Nevada gambling houses, and its availability online has now made it a globally played game. Bank craps uses a special table layout and all bets must be made against the house. In Bank Craps, the dice are thrown over a wire or a string that is normally stretched a few inches from the table's surface. The lowest house edge (for the Pass/Don't Pass) in this variation is around 1.4%. Generally, if the word "craps" is used without any modifier, it can be inferred to mean this version of the game, to which most of this article refers.
Crapless craps, also known as bastard craps, is a simple version of the original craps game, and is normally played as an online private game. The biggest difference between crapless craps and original craps is that the shooter (person throwing the dice) is at a far greater disadvantage, with a house edge of 5.38%. Another difference is that this is one of the craps games in which a player can bet on rolling a 2, 3, 11 or 12 before a 7 is thrown. In crapless craps, 2 and 12 pay 11:2 and have a house edge of 7.143%, while 3 and 11 pay 11:4 with a house edge of 6.25%.
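Those quoted edges follow directly from the true odds of each number against a seven, as this sketch shows:

```python
from fractions import Fraction

def house_edge(ways_to_win, paid_odds):
    """Edge of betting a number against a 7 (6 ways) at the given paid odds."""
    p = Fraction(ways_to_win, ways_to_win + 6)
    return float(p * paid_odds - (1 - p))

print(house_edge(1, Fraction(11, 2)))   # 2 or 12 at 11:2 -> about -0.07143
print(house_edge(2, Fraction(11, 4)))   # 3 or 11 at 11:4 -> -0.0625
```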
New York Craps is a variation of craps played mostly on the East Coast of the US, true to its name. The game was also historically found in casinos in Yugoslavia, the UK, and the Bahamas. In this craps variant, the house edge is greater than in Las Vegas Craps or Bank craps. The table layout is also different, and is called a double-end-dealer table. This variation differs from the original craps game in several ways, but the primary difference is that New York craps does not allow Come or Don't Come bets. New York Craps players bet on box numbers like 4, 5, 6, 8, 9, or 10. The overall house edge in New York craps is 5%.
In order to get around California laws barring the payout of a game being directly related to the roll of dice, Indian reservations have adapted the game to substitute cards for dice.
To replicate the original dice odds exactly without dice or the possibility of card-counting, one scheme uses two shuffle machines, each holding a single deck of cards ranked Ace through 6. Each machine selects one of its six cards at random, and together the two cards form the roll. The selected cards are replaced and the decks are reshuffled for the next roll.
In one variation, two shoes are used, each containing some number of regular card decks that have been stripped down to just the Aces and deuces through sixes. The boxman simply deals one card from each shoe and that is the roll on which bets are settled. Since a card-counting scheme is easily devised to make use of the information of cards that have already been dealt, a relatively small portion (less than 50%) of each shoe is usually dealt in order to protect the house.
In a similar variation, cards representing dice are dealt directly from a continuous shuffling machine (CSM). Typically, the CSM will hold approximately 264 cards, or 44 sets of 1 through 6 spot cards. Two cards are dealt from the CSM for each roll. The game is played exactly as regular craps, but the roll distribution of the remaining cards in the CSM is slightly skewed from the normal symmetric distribution of dice.
Even if the dealer were to shuffle each roll back into the CSM, the effect of buffering a number of cards in the chute of the CSM provides information about the skew of the next roll. Analysis shows this type of game is biased towards the Don't Pass and Don't Come bets. A player betting Don't Pass and Don't Come every roll and laying 10x odds receives a 2% profit on the initial Don't Pass / Don't Come bet each roll. Using a counting system allows the player to attain a similar return at lower variance.
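The direction of the skew can be illustrated with a simplified model. Assuming two cards are dealt without replacement from a full 264-card shoe (44 of each rank, 1 through 6), rather than from a CSM with a buffered chute, pairs become slightly less likely and sevens slightly more likely than with fair dice:

```python
from fractions import Fraction

PER_RANK, TOTAL = 44, 264
dist = {}
for a in range(1, 7):
    for b in range(1, 7):
        second = PER_RANK - 1 if a == b else PER_RANK   # without replacement
        p = Fraction(PER_RANK, TOTAL) * Fraction(second, TOTAL - 1)
        dist[a + b] = dist.get(a + b, 0) + p

dice = {2: Fraction(1, 36), 7: Fraction(6, 36), 12: Fraction(1, 36)}
for total in (2, 7, 12):
    print(total, float(dist[total]), float(dice[total]))
# 2 and 12 come up slightly less often, and 7 slightly more often, than with dice.
```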
In this game variation, one red deck and one blue deck of six cards each (A through 6), and a red die and a blue die are used. Each deck is shuffled separately, usually by machine. Each card is then dealt onto the layout, into the 6 red and 6 blue numbered boxes. The shooter then shoots the dice. The red card in the red-numbered box corresponding to the red die, and the blue card in the blue-numbered box corresponding to the blue die are then turned over to form the roll on which bets are settled.
Another variation uses a red and a blue deck of 36 custom playing cards each. Each card has a picture of a two-die roll on it – from 1–1 to 6–6. The shooter shoots what looks like a red and a blue die, called "cubes". They are numbered such that they can never throw a pair, and that the blue one will show a higher value than the red one exactly half the time. One such scheme could be 222555 on the red die and 333444 on the blue die.
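Both properties of that example scheme are easy to verify by enumeration:

```python
from itertools import product

red, blue = (2, 2, 2, 5, 5, 5), (3, 3, 3, 4, 4, 4)
outcomes = list(product(red, blue))
print(any(r == b for r, b in outcomes))                 # False: a pair is impossible
print(sum(b > r for r, b in outcomes) / len(outcomes))  # 0.5: blue is higher half the time
```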
One card is dealt from the red deck and one is dealt from the blue deck. The shooter throws the "cubes" and the color of the cube that is higher selects the color of the card to be used to settle bets. On one such table, an additional one-roll prop bet was offered: If the card that was turned over for the "roll" was either 1–1 or 6–6, the other card was also turned over. If the other card was the "opposite" (6–6 or 1–1, respectively) of the first card, the bet paid 500:1 for this 647:1 proposition.
An additional variation uses a single set of six cards and regular dice. The roll of the dice maps each die to the card in the corresponding position, and if a pair is rolled, the mapped card is used twice, as a pair.
Recreational or informal playing of craps outside of a casino is referred to as street craps or private craps. The most notable difference between playing street craps and bank craps is that there is no bank or house to cover bets in street craps. Players must bet against each other by covering or fading each other's bets for the game to be played. If money is used instead of chips and depending on the laws of where it is being played, street craps can be an illegal form of gambling.
There are many variations of street craps. The simplest way is to either agree on or roll a number as the point, then roll the point again before rolling a seven. Unlike more complex proposition bets offered by casinos, street craps has more simplified betting options. The shooter is required to make either a Pass or a Don't Pass bet if he wants to roll the dice. Another player must choose to cover the shooter to create a stake for the game to continue.
If there are several players, the rotation of the player who must cover the shooter may change with the shooter (comparable to a blind in poker). The person covering the shooter will always bet against the shooter. For example, if the shooter made a "Pass" bet, the person covering the shooter would make a "Don't Pass" bet to win. Once the shooter is covered, other players may make Pass/Don't Pass bets, or any other proposition bets, as long as there is another player willing to cover.
Due to the random nature of the game, in popular culture a "crapshoot" is often used to describe an action with an unpredictable outcome.
The prayer or invocation "Baby needs a new pair of shoes!" is associated with shooting craps.
Floating craps is an illegal operation of craps. The term floating refers to the practice of the game's operators using portable tables and equipment to quickly move the game from location to location to stay ahead of the law enforcement authorities. The term may have originated in the 1930s when Benny Binion (later known for founding the downtown Las Vegas hotel Binion's) set up an illegal craps game utilizing tables created from portable crates for the Texas Centennial Exposition.
The 1950 Broadway musical Guys and Dolls features a major plot point revolving around a floating craps game.
In the 1950s and 1960s The Sands Hotel in Las Vegas had a craps table that floated in the swimming pool, as a joke reference to the notoriety of the term.
A Golden Arm is a craps player who rolls the dice for longer than one hour without losing. Likely the first known Golden Arm was Oahu native Stanley Fujitake, who rolled 118 times without sevening out in 3 hours and 6 minutes at the California Hotel and Casino on May 28, 1989.
The current record for length of a "hand" (successive rounds won by the same shooter) is 154 rolls, including 25 passes, by Patricia DeMauro of New Jersey, lasting 4 hours and 18 minutes, at the Borgata in Atlantic City, New Jersey, on May 23–24, 2009. She bested the record held for almost 20 years – that of Fujitake – by over an hour.
{
"paragraph_id": 0,
"text": "Craps is a dice game in which players bet on the outcomes of the roll of a pair of dice. Players can wager money against each other (playing \"street craps\") or against a bank (\"casino craps\"). Because it requires little equipment, \"street craps\" can be played in informal settings. While shooting craps, players may use slang terminology to place bets and actions.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 1788, \"Krabs\" (later spelled crabs) was an English variation on the dice game hazard (also spelled hasard).",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Craps developed in the United States from a simplification of the western European game of hazard. The origins of hazard are obscure and may date to the Crusades. Hazard was brought from London to New Orleans in approximately 1805 by the returning Bernard Xavier Philippe de Marigny de Mandeville, the young gambler and scion of a family of wealthy landowners in colonial Louisiana. Although in hazard the dice shooter may choose any number from five to nine to be his main number, de Marigny simplified the game such that the main number is always seven, which is the mathematically optimal choice (choice with the lowest disadvantage for the shooter). Both hazard and its simpler derivative were unfamiliar to and rejected by Americans of his social class, leading de Marigny to introduce his novelty to the local underclass. Field hands taught their friends and deckhands, who carried the new game up the Mississippi River and its tributaries. Celebrating the popular success of his novelty, de Marigny gave the name Rue de Craps to a street in his new subdivision in New Orleans.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The central game, called pass from the French word pas (meaning \"pace\" or \"step\"), has been gradually supplemented over the decades by many companion games which can be played simultaneously with pass. Now applied to the entire collection of games, the name craps derives from an underclass Louisiana mispronunciation of the word crabs, which in aristocratic London had been the epithet for the numbers two and three. In hazard, both \"crabs\" are always instant-losing numbers for the first dice roll regardless of the shooter's selected main number. Also in hazard, if the main number is seven then the number twelve is added to the crabs as a losing number on the first dice roll. This structure is retained in the simplified game called pass. All three losing numbers on the first roll of pass are jointly called the craps numbers.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "For a century after its invention, casinos used unfair dice. In approximately 1907, a dicemaker named John H. Winn in Philadelphia introduced a layout which featured bets on Don't Pass as well as Pass. Virtually all modern casinos use his innovation, which incentivizes casinos to use fair dice.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Craps exploded in popularity during World War II, which brought most young American men of every social class into the military. The street version of craps was popular among service members who often played it using a blanket as a shooting surface. Their military memories led to craps becoming the dominant casino game in postwar Las Vegas and the Caribbean. After 1960, a few casinos in Europe, Australia, and Macau began offering craps, and, after 2004, online casinos extended the game's spread globally.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Bank craps or casino craps is played by one or more players betting against the casino rather than each other. Both the players and the dealers stand around a large rectangular craps table. Sitting is discouraged by most casinos unless a player has medical reasons for requiring a seat.",
"title": "Bank craps"
},
{
"paragraph_id": 7,
"text": "Players use casino chips rather than cash to bet on the Craps \"layout\", a fabric surface which displays the various bets. The bets vary somewhat among casinos in availability, locations, and payouts. The tables roughly resemble bathtubs and come in various sizes. In some locations, chips may be called checks, tokens, or plaques.",
"title": "Bank craps"
},
{
"paragraph_id": 8,
"text": "Against one long side is the casino's table bank: as many as two thousand casino chips in stacks of 20. The opposite long side is usually a long mirror. The U-shaped ends of the table have duplicate layouts and standing room for approximately eight players. In the center of the layout is an additional group of bets which are used by players from both ends. The vertical walls at each end are usually covered with a rubberized target surface covered with small pyramid shapes to randomize the dice which strike them. The top edges of the table walls have one or two horizontal grooves in which players may store their reserve chips.",
"title": "Bank craps"
},
{
"paragraph_id": 9,
"text": "The table is run by up to four casino employees: a boxman seated (usually the only seated employee) behind the casino's bank, who manages the chips, supervises the dealers, and handles \"coloring up\" players (exchanging small chip denominations for larger denominations in order to preserve the chips at a table); two base dealers who stand to either side of the boxman and collect and pay bets to players around their half of the table; and a stickman who stands directly across the table from the boxman, takes and pays (or directs the base dealers to do so) the bets in the center of the table, announces the results of each roll (usually with a distinctive patter), and moves the dice across the layout with an elongated wooden stick.",
"title": "Bank craps"
},
{
"paragraph_id": 10,
"text": "Each employee also watches for mistakes by the others because of the sometimes large number of bets and frantic pace of the game. In smaller casinos or at quiet times of day, one or more of these employees may be missing, and have their job covered by another, or cause player capacity to be reduced.",
"title": "Bank craps"
},
{
"paragraph_id": 11,
"text": "Some smaller casinos have introduced \"mini-craps\" tables which are operated with only two dealers; rather than being two essentially identical sides and the center area, a single set of major bets is presented, split by the center bets. Responsibility of the dealers is adjusted: while the stickman continues to handle the center bets, it is the base dealer who handles all other bets (as well as cash and chip exchanges).",
"title": "Bank craps"
},
{
"paragraph_id": 12,
"text": "By contrast, in \"street craps\", there is no marked table and often the game is played with no back-stop against which the dice are to hit. (Despite the name \"street craps\", this game is often played in houses, usually on an un-carpeted garage or kitchen floor.) The wagers are made in cash, never in chips, and are usually thrown down onto the ground or floor by the players. There are no attendants, and so the progress of the game, fairness of the throws, and the way that the payouts are made for winning bets are self-policed by the players.",
"title": "Bank craps"
},
{
"paragraph_id": 13,
"text": "Each casino may set which bets are offered and different payouts for them, though a core set of bets and payouts is typical. Players take turns rolling two dice and whoever is throwing the dice is called the \"shooter\". Players can bet on the various options by placing chips directly on the appropriately-marked sections of the layout, or asking the base dealer or stickman to do so, depending on which bet is being made.",
"title": "Bank craps"
},
{
"paragraph_id": 14,
"text": "While acting as the shooter, a player must have a bet on the \"Pass\" line and/or the \"Don't Pass\" line. \"Pass\" and \"Don't Pass\" are sometimes called \"Win\" and \"Don't Win\" or \"Right\" and \"Wrong\" bets. The game is played in rounds and these \"Pass\" and \"Don't Pass\" bets are betting on the outcome of a round. The shooter is presented with multiple dice (typically five) by the \"stickman\", and must choose two for the round. The remaining dice are returned to the stickman's bowl and are not used.",
"title": "Bank craps"
},
{
"paragraph_id": 15,
"text": "Each round has two phases: \"come-out\" and \"point\". Dice are passed to the left. To start a round, the shooter makes one or more \"come-out\" rolls. The shooter must shoot toward the farther back wall and is generally required to hit the farther back wall with both dice. Casinos may allow a few warnings before enforcing the dice to hit the back wall and are generally lenient if at least one die hits the back wall. Both dice must be tossed in one throw. If only one die is thrown the shot is invalid. A come-out roll of 2, 3, or 12 is called \"craps\" or \"crapping out\", and anyone betting the Pass line loses. On the other hand, anyone betting the Don't Pass line on come out wins with a roll of 2 or 3 and ties (pushes) if a 12 is rolled. Shooters may keep rolling after crapping out; the dice are only required to be passed if a shooter sevens out (rolls a seven after a point has been established). A come-out roll of 7 or 11 is a \"natural\"; the Pass line wins and Don't Pass loses. The other possible numbers are the point numbers: 4, 5, 6, 8, 9, and 10. If the shooter rolls one of these numbers on the come-out roll, this establishes the \"point\" – to \"pass\" or \"win\", the point number must be rolled again before a seven.",
"title": "Bank craps"
},
{
"paragraph_id": 16,
"text": "The dealer flips a button to the \"On\" side and moves it to the point number signifying the second phase of the round. If the shooter \"hits\" the point value again (any value of the dice that sum to the point will do; the shooter doesn't have to exactly repeat the exact combination of the come-out roll) before rolling a seven, the Pass line wins and a new round starts. If the shooter rolls any seven before repeating the point number (a \"seven-out\"), the Pass line loses, the Don't Pass line wins, and the dice pass clockwise to the next new shooter for the next round. Once a point has been established any multi-roll bet (including Pass and/or Don't Pass line bets and odds) are unaffected by the 2, 3, 11, or 12; the only numbers which affect the round are the established point, any specific bet on a number, or any 7. Any single roll bet is always affected (win or lose) by the outcome of any roll.",
"title": "Bank craps"
},
{
"paragraph_id": 17,
"text": "While the come-out roll may specifically refer to the first roll of a new shooter, any roll where no point is established may be referred to as a come-out. By this definition the start of any new round regardless if it is the shooter's first toss can be referred to as a come-out roll.",
"title": "Bank craps"
},
{
"paragraph_id": 18,
"text": "Any player can make a bet on Pass or Don't Pass as long as a point has not been established, or Come or Don't Come as long as a point is established. All other bets, including an increase in odds behind the Pass and Don't Pass lines, may be made at any time. All bets other than Pass line and Come may be removed or reduced any time before the bet loses. This is known as \"taking it down\" in craps.",
"title": "Bank craps"
},
{
"paragraph_id": 19,
"text": "The maximum bet for Place, Buy, Lay, Pass, and Come bets are generally equal to table maximum. Lay bet maximum are equal to the table maximum win, so players wishing to lay the 4 or 10 may bet twice at amount of the table maximum for the win to be table maximum. Odds behind Pass, Come, Don't Pass, and Don't Come may be however larger than the odds offered allows and can be greater than the table maximum in some casinos. Don't odds are capped on the maximum allowed win some casino allow the odds bet itself to be larger than the maximum bet allowed as long as the win is capped at maximum odds. Single rolls bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum. The maximum allowed single roll bet is based on the maximum allowed win from a single roll.",
"title": "Bank craps"
},
{
"paragraph_id": 20,
"text": "In all the above scenarios, whenever the Pass line wins, the Don't Pass line loses, and vice versa, with one exception: on the come-out roll, a roll of 12 will cause Pass Line bets to lose, but Don't Pass bets are pushed (or \"barred\"), neither winning nor losing. (The same applies to \"Come\" and \"Don't Come\" bets, discussed below.)",
"title": "Bank craps"
},
{
"paragraph_id": 21,
"text": "A player wishing to play craps without being the shooter should approach the craps table and first check to see if the dealer's \"On\" button is on any of the point numbers.",
"title": "Bank craps"
},
{
"paragraph_id": 22,
"text": "In either case, all single or multi-roll proposition bets may be placed in either of the two rounds.",
"title": "Bank craps"
},
{
"paragraph_id": 23,
"text": "Between dice rolls there is a period for dealers to make payouts and collect losing bets, after which players can place new bets. The stickman monitors the action at a table and decides when to give the shooter the dice, after which no more betting is allowed.",
"title": "Bank craps"
},
{
"paragraph_id": 24,
"text": "When joining the game, one should place money on the table rather than passing it directly to a dealer. The dealer's exaggerated movements during the process of \"making change\" or \"change only\" (converting currency to an equivalent in casino cheques) are required so that any disputes can be later reviewed against security camera footage.",
"title": "Bank craps"
},
{
"paragraph_id": 25,
"text": "The dealers will insist that the shooter roll with one hand and that the dice bounce off the far wall surrounding the table. These requirements are meant to keep the game fair (preventing switching the dice or making a \"controlled shot\"). If a die leaves the table, the shooter will usually be asked to select another die from the remaining three but can request permission to use the same die if it passes the boxman's inspection. This requirement exists to keep the game fair and reduce the chance of loaded dice.",
"title": "Bank craps"
},
{
"paragraph_id": 26,
"text": "There are many local variants of the calls made by the stickman for rolls during a craps game. These frequently incorporate a reminder to the dealers as to which bets to pay or collect.",
"title": "Bank craps"
},
{
"paragraph_id": 27,
"text": "Rolls of 4, 6, 8, and 10 are called \"hard\" or \"easy\" (e.g. \"six the hard way\", \"easy eight\", \"hard ten\") depending on whether they were rolled as a \"double\" or as any other combination of values, because of their significance in center table bets known as the \"hard ways\". Hard way rolls are so named because there is only one way to roll them (i.e., the value on each die is the same when the number is rolled). Consequently, it is more likely to roll the number in combinations (easy) rather than as a double (hard).",
"title": "Bank craps"
},
{
"paragraph_id": 28,
"text": "The shooter is required to make either a Pass line bet or a Don't Pass bet if he wants to shoot. On the come out roll each player may only make one bet on the Pass or Don't Pass, but may bet both if desired. The Pass Line and Don't Pass bet is optional for any player not shooting. In rare cases, some casinos require all players to make a minimum Pass Line or Don't Pass bet (if they want to make any other bet), whether they are currently shooting or not.",
"title": "Types of wagers"
},
{
"paragraph_id": 29,
"text": "The fundamental bet in craps is the Pass line bet, which is a bet for the shooter to win. This bet must be at least the table minimum and at most the table maximum.",
"title": "Types of wagers"
},
{
"paragraph_id": 30,
"text": "The Pass line bet pays even money.",
"title": "Types of wagers"
},
{
"paragraph_id": 31,
"text": "The Pass line bet is a contract bet. Once a Pass line bet is made, it is always working and cannot be turned \"Off\", taken down, or reduced until a decision is reached – the point is made, or the shooter sevens out. A player may increase any corresponding odds (up to the table limit) behind the Pass line at any time after a point is established. Players may only bet the Pass line on the come out roll when no point has been established, unless the casino allows put betting where the player can bet Pass line or increase an existing Pass line bet whenever desired and may take odds immediately if the point is already on.",
"title": "Types of wagers"
},
{
"paragraph_id": 32,
"text": "A Don't Pass bet is a bet for the shooter to lose (\"seven out, line away\") and is almost the opposite of the Pass line bet. Like the Pass bet, this bet must be at least the table minimum and at most the table maximum.",
"title": "Types of wagers"
},
{
"paragraph_id": 33,
"text": "The Don't Pass bet pays even money.",
"title": "Types of wagers"
},
{
"paragraph_id": 34,
"text": "The Don't Pass bet is a no-contract bet. After a point is established, a player may take down or reduce a Don't Pass bet and any corresponding odds at any time because odds of rolling a 7 before the point is in the player's favor. Once taken down or reduced, however, the Don't Pass bet may not be restored or increased. Because the shooter must have a line bet the shooter generally may not reduce a Don't Pass bet below the table minimum. In Las Vegas, a majority of casinos will allow the shooter to move the bet to the Pass line in lieu of taking it down; however, in other areas such as Pennsylvania and Atlantic City, this is not allowed. Even though players are allowed to remove the Don't Pass line bet after a point has been established, the bet cannot be turned \"Off\" without being removed. If a player chooses to remove the Don't Pass line bet, he or she can no longer lay odds behind the Don't Pass line. The player can, however, still make standard lay bets on any of the point numbers (4, 5, 6, 8, 9, 10).",
"title": "Types of wagers"
},
{
"paragraph_id": 35,
"text": "There are two different ways to calculate the odds and house edge of this bet. The table below gives the numbers considering that the game ends in a push when a 12 is rolled, rather than being undetermined. Betting on Don't Pass is often called \"playing the dark side\", and it is considered by some players to be in poor taste, or even taboo, because it goes directly against conventional play, winning when most of the players lose.",
"title": "Types of wagers"
},
{
"paragraph_id": 36,
"text": "If a 4, 5, 6, 8, 9, or 10 is thrown on the come-out roll (i.e., if a point is established), most casinos allow Pass line players to take odds by placing up to some predetermined multiple of the Pass line bet, behind the Pass line. This additional bet wins if the point is rolled again before a 7 is rolled (the point is made) and pays at the true odds of 2-to-1 if 4 or 10 is the point, 3-to-2 if 5 or 9 is the point, or 6-to-5 if 6 or 8 is the point. Unlike the Pass line bet itself, the Pass line odds bet can be turned \"Off\" (not working), removed or reduced anytime before it loses. In Las Vegas, generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combine odds and Pass bet must be table minimum so players can bet the minimum single unit on odds depending on the point. If the point is a 4 or 10 players can bet as little as $1 on odds if the table minimum is low such as is $5, $10 or $15. If the player requests the Pass odds be not working (\"Off\") and the shooter sevens-out or hits the point, the Pass line bet will be lost or doubled and the Pass odds returned.",
"title": "Types of wagers"
},
{
"paragraph_id": 37,
"text": "Individual casinos (and sometimes tables within a casino) vary greatly in the maximum odds they offer, from single or double odds (one or two times the Pass line bet) up to 100x or even unlimited odds. A variation often seen is \"3-4-5X Odds\", where the maximum allowed odds bet depends on the point: three times if the point is 4 or 10; four times on points of 5 or 9; or five times on points of 6 or 8. This rule simplifies the calculation of winnings: a maximum Pass odds bet on a 3–4–5× table will always be paid at six times the Pass line bet regardless of the point.",
"title": "Types of wagers"
},
{
"paragraph_id": 38,
"text": "As odds bets are paid at true odds, in contrast with the Pass line which is always even money, taking odds on a minimum Pass line bet lessens the house advantage compared with betting the same total amount on the Pass line only. A maximum odds bet on a minimum Pass line bet often gives the lowest house edge available in any game in the casino. However, the odds bet cannot be made independently, so the house retains an edge on the Pass line bet itself.",
"title": "Types of wagers"
},
{
"paragraph_id": 39,
"text": "If a player is playing Don't Pass instead of pass, they may also lay odds by placing chips behind the Don't Pass line. If a 7 comes before the point is rolled, the odds pay at true odds of 1-to-2 if 4 or 10 is the point, 2-to-3 if 5 or 9 is the point, or 5-to-6 if 6 or 8 is the point. Typically the maximum lay bet will be expressed such that a player may win up to an amount equal to the maximum odds multiple at the table. If a player lays maximum odds with a point of 4 or 10 on a table offering five-times odds, he would be able to lay a maximum of ten times the amount of his Don't Pass bet. At 5x odds table, the maximum amount the combined bet can win will always be 6x the amount of the Don't Pass bet. Players can bet table minimum odds if desired and win less than table minimum. Like the Don't Pass bet the odds can be removed or reduced. Unlike the Don't Pass bet itself, the Don't Pass odds can be turned \"Off\" (not working). In Las Vegas generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combine lay odds and Don't Pass bet must be table minimum so players may bet as little as the minimum two units on odds depending on the point. If the point is a 4 or 10 players can bet as little as $2 if the table minimum is low such as $5, $10 or $15 tables. If the player requests the Don't Pass odds to be not working (\"Off\") and the shooter hits the point or sevens-out, the Don't Pass bet will be lost or doubled and the Don't Pass odds returned. Unlike a standard lay bet on a point, lay odds behind the Don't Pass line does not charge commission (vig).",
"title": "Types of wagers"
},
{
"paragraph_id": 40,
"text": "A Come bet can be visualized as starting an entirely new Pass line bet, unique to that player. Like the Pass Line each player may only make one Come bet per roll, this does not exclude a player from betting odds on an already established Come point. This bet must be at least the table minimum and at most the table maximum. Players may bet both the Come and Don't Come on the same roll if desired. Come bets can only be made after a point has been established since, on the come-out roll, a Come bet would be the same thing as a Pass line bet. A player making a Come bet will bet on the first point number that \"comes\" from the shooter's next roll, regardless of the table's round. If a 7 or 11 is rolled on the first round, it wins. If a 2, 3, or 12 is rolled, it loses. If instead the roll is 4, 5, 6, 8, 9, or 10, the Come bet will be moved by the base dealer onto a box representing the number the shooter threw. This number becomes the \"come-bet point\" and the player is allowed to take odds, just like a Pass line bet. Also like a Pass line bet, the come bet is a contract bet and is always working, and cannot be turned \"Off\", removed or reduced until it wins or loses. However, the odds taken behind a Come bet can be turned \"Off\" (not working), removed or reduced anytime before the bet loses. In Las Vegas generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combine odds and Pass bet must be table minimum so players can bet the minimum single unit depending on the point. If the point is a 4 or 10, players can bet as little as $1 if the table minimum is low such as $5, $10, or $15 minimums. If the player requests the Come odds to be not working (\"Off\") and the shooter sevens-out or hits the Come bet point, the Come bet will be lost or doubled and the Come odds returned. If the casino allows put betting a player may increase a Come bet after a point has been established and bet larger odds behind if desired. Put betting also allows a player to bet on a Come and take odds immediately on a point number without a Come bet point being established.",
"title": "Types of wagers"
},
{
"paragraph_id": 41,
"text": "The dealer will place the odds on top of the come bet, but slightly off center in order to differentiate between the original bet and the odds. The second round wins if the shooter rolls the come bet point again before a seven. Winning come bets are paid the same as winning Pass line bets: even money for the original bet and true odds for the odds bet. If, instead, the seven is rolled before the come-bet point, the come bet (and any odds bet) loses.",
"title": "Types of wagers"
},
{
"paragraph_id": 42,
"text": "Because of the come bet, if the shooter makes their point, a player can find themselves in the situation where they still have a come bet (possibly with odds on it) and the next roll is a come-out roll. In this situation, odds bets on the come wagers are usually presumed to be not working for the come-out roll. That means that if the shooter rolls a 7 on the come-out roll, any players with active come bets waiting for a come-bet point lose their initial wager but will have their odds bets returned to them.",
"title": "Types of wagers"
},
{
"paragraph_id": 43,
"text": "If the come-bet point is rolled on the come-out roll, the odds do not win but the come bet does and the odds bet is returned (along with the come bet and its payoff). The player can tell the dealer that they want their odds working, such that if the shooter rolls a number that matches the come point, the odds bet will win along with the come bet, and if a seven is rolled, both lose.",
"title": "Types of wagers"
},
{
"paragraph_id": 44,
"text": "Many players will use a come bet as \"insurance\" against sevening out: if the shooter rolls a seven, the come bet pays 1:1, offsetting the loss of the Pass line bet. The risk in this strategy is the situation where the shooter does not hit a seven for several rolls, leading to multiple come bets that will be lost if the shooter eventually sevens out.",
"title": "Types of wagers"
},
{
"paragraph_id": 45,
"text": "In the same way that a come bet is similar to a Pass line bet, a Don't Come bet is similar to a Don't Pass bet. Like the come, the Don't Come can only be bet after a point has already been established as it is the same as a Don't Pass line bet when no point is established. This bet must be at least the table minimum and at most the table maximum. A Don't Come bet is played in two rounds. If a 2 or 3 is rolled in the first round, it wins. If a 7 or 11 is rolled, it loses. If a 12 is rolled, it is a push (subject to the same 2/12 switch described above for the Don't Pass bet). If, instead, the roll is 4, 5, 6, 8, 9, or 10, the Don't Come bet will be moved by the base dealer onto a box representing the number the shooter threw. The second round wins if the shooter rolls a seven before the Don't Come point. Like the Don't Pass each player may only make one Don't Come bet per roll, this does not exclude a player from laying odds on an already established Don't Come points. Players may bet both the Don't Come and Come on the same roll if desired.",
"title": "Types of wagers"
},
{
"paragraph_id": 46,
"text": "The player may lay odds on a Don't Come bet, just like a Don't Pass bet; in this case, the dealer (not the player) places the odds bet on top of the bet in the box, because of limited space, slightly offset to signify that it is an odds bet and not part of the original Don't Come bet. Lay odds behind a Don't Come are subject to the same rules as Don't Pass lay odds. Unlike a standard lay bet on a point, lay odds behind a Don't Come point does not charge commission (vig) and gives the player true odds. Like the Don't Pass line bet, Don't Come bets are no-contract, and can be removed or reduced after a Don't Come point has been established, but cannot be turned off (\"not working\") without being removed. A player may also call, \"No Action\" when a point is established, and the bet will not be moved to its point. This play is not to the player's advantage. If the bet is removed, the player can no longer lay odds behind the Don't Come point and cannot restore or increase the same Don't Come bet. Players must wait until next roll as long as a Pass line point has been established (players cannot bet Don't Come on come out rolls) before they can make a new Don't Come bet. Las Vegas casinos which allow put betting allows players to move the Don't Come directly to any Come point as a put; however, this is not allowed in Atlantic City or Pennsylvania. Unlike the Don't Come bet itself, the Don't Come odds can be turned \"Off\" (not working), removed, or reduced if desired. In Las Vegas, players generally must lay at least table minimum on odds if desired and win less than table minimum; in Atlantic City and Pennsylvania a player's combined bet must be at least table minimum, so depending on the point number players may lay as little as 2 minimum units (e.g. if the point is 4 or 10). If the player requests the Don't Come odds be not working (\"Off\") and the shooter hits the Don't Come point or sevens-out, the Don't Come bet will be lost or doubled and the Don't Come odds returned.",
"title": "Types of wagers"
},
{
"paragraph_id": 47,
"text": "Winning Don't Come bets are paid the same as winning Don't Pass bets: even money for the original bet and true odds for the odds lay. Unlike come bets, the odds laid behind points established by Don't Come bets are always working including come out rolls unless the player specifies otherwise.",
"title": "Types of wagers"
},
{
"paragraph_id": 48,
"text": "These are bets that may not be settled on the first roll and may need any number of subsequent rolls before an outcome is determined. Most multi-roll bets may fall into the situation where a point is made by the shooter before the outcome of the multi-roll bet is decided. These bets are often considered \"not working\" on the new come-out roll until the next point is established, unless the player calls the bet as \"working.\"",
"title": "Types of wagers"
},
{
"paragraph_id": 49,
"text": "Casino rules vary on this; some of these bets may not be callable, while others may be considered \"working\" during the come-out. Dealers will usually announce if bets are working unless otherwise called off. If a non-working point number placed, bought or laid becomes the new point as the result of a come-out, the bet is usually refunded, or can be moved to another number for free.",
"title": "Types of wagers"
},
{
"paragraph_id": 50,
"text": "Players can bet any point number (4, 5, 6, 8, 9, 10) by placing their wager in the come area and telling the dealer how much and on what number(s), \"30 on the 6\", \"5 on the 5\", or \"25 on the 10\". These are typically \"Place Bets to Win\". These are bets that the number bet on will be rolled before a 7 is rolled. These bets are considered working bets, and will continue to be paid out each time a shooter rolls the number bet. On a come-out roll, a place bet is considered to be not in effect unless the player who made it specifies otherwise. This bet may be removed or reduced at any time until it loses; in the latter case, the player must abide by any table minimums.",
"title": "Types of wagers"
},
{
"paragraph_id": 51,
"text": "Place bets to win pay out at slightly worse than the true odds: 9-to-5 on points 4 or 10, 7-to-5 on points 5 or 9, and 7-to-6 on points 6 or 8. The place bets on the outside numbers (4,5,9,10) should be made in units of $5, (on a $5 minimum table), in order to receive the correct exact payout of $5 paying $7 or $5 paying $9. The place bets on the 6 & 8 should be made in units of $6, (on a $5 minimum table), in order to receive the correct exact payout of $6 paying $7. For the 4 and 10, it is to the player's advantage to 'buy' the bet (see below).",
"title": "Types of wagers"
},
{
"paragraph_id": 52,
"text": "An alternative form, rarely offered by casinos, is the \"place bet to lose.\" This bet is the opposite of the place bet to win and pays off if a 7 is rolled before the specific point number. The place bet to lose typically carries a lower house edge than a place bet to win. Payouts are 4–5 on points 6 or 8, 5–8 on 5 or 9, and 5–11 on 4 or 10.",
"title": "Types of wagers"
},
{
"paragraph_id": 53,
"text": "Players can also buy a bet which are paid at true odds, but a 5% commission is charged on the amount of the bet. Buy bets are placed with the shooter betting at a specific number will come out before a player sevens out. The buy bet must be at least table minimum excluding commission; however, some casinos require the minimum buy bet amount to be at least $20 to match the $1 charged on the 5% commission. Traditionally, the buy bet commission is paid no matter what, but in recent years a number of casinos have changed their policy to charge the commission only when the buy bet wins. Some casinos charge the commission as a one-time fee to buy the number; payouts are then always at true odds. Most casinos usually charge only $1 for a $25 green-chip bet (4% commission), or $2 for $50 (two green chips), reducing the house advantage a bit more. Players may remove or reduce this bet (bet must be at least table minimum excluding vig) anytime before it loses. Buy bets like place bets are not working when no point has been established unless the player specifies otherwise.",
"title": "Types of wagers"
},
{
"paragraph_id": 54,
"text": "Where commission is charged only on wins, the commission is often deducted from the winning payoff—a winning $25 buy bet on the 10 would pay $49, for instance. The house edges stated in the table assume the commission is charged on all bets. They are reduced by at least a factor of two if commission is charged on winning bets only.",
"title": "Types of wagers"
},
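A sketch of the buy-bet arithmetic under the two commission policies described above, using a $20 buy on the 4 with a $1 commission; the resulting 4.76% and 1.67% edges are consistent with the at-least-factor-of-two reduction noted:

```python
from fractions import Fraction

# Ways to roll each buyable number; a 7 can be rolled 6 ways.
WAYS = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}

def buy_edge(point, bet=20, on_win_only=False):
    """House edge of a buy bet: true-odds payout, 5% commission on the bet."""
    p = Fraction(WAYS[point], WAYS[point] + 6)   # P(point before 7)
    win = bet * Fraction(6, WAYS[point])         # true-odds winnings
    vig = bet * Fraction(1, 20)                  # 5% of the amount bought
    if on_win_only:
        ev, risked = p * (win - vig) - (1 - p) * bet, bet
    else:
        ev, risked = p * (win - vig) - (1 - p) * (bet + vig), bet + vig
    return float(-ev / risked)

print(f"{buy_edge(4):.2%}")                     # 4.76%: commission win or lose
print(f"{buy_edge(4, on_win_only=True):.2%}")   # 1.67%: commission on wins only
```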
{
"paragraph_id": 55,
"text": "A lay bet is the opposite of a buy bet, where a player bets on a 7 to roll before the number that is laid. Players may only lay the 4, 5, 6, 8, 9, or 10 and may lay multiple numbers if desired. Just like the buy bet lay bets pay true odds, but because the lay bet is the opposite of the buy bet, the payout is reversed. Therefore, players get 1 to 2 for the numbers 4 and 10, 2 to 3 for the numbers 5 and 9, and 5 to 6 for the numbers 6 and 8. A 5% commission (vigorish, vig, juice) is charged up front on the possible winning amount. For example: A $40 Lay Bet on the 4 would pay $20 on a win. The 5% vig would be $1 based on the $20 win. (not $2 based on the $40 bet as the way buy bet commissions are figured.) Like the buy bet the commission is adjusted to suit the betting unit such that fraction of a dollar payouts are not needed. Casinos may charge the vig up front thereby requiring the player to pay a vig win or lose, other casinos may only take the vig if the bet wins. Taking vig only on wins lowers house edge. Players may removed or reduce this bet (bet must be at least table minimum) anytime before it loses. Some casinos in Las Vegas allow players to lay table minimum plus vig if desired and win less than table minimum. Lay bet maximums are equal to the table maximum win, so if a player wishes to lay the 4 or 10, he or she may bet twice at amount of the table maximum for the win to be table maximum. Other casinos require the minimum bet to win at $20 even at the lowest minimum tables in order to match the $1 vig, this requires a $40 bet. Similar to buy betting, some casinos only take commission on win reducing house edge. Unlike place and buy bets, lay bets are always working even when no point has been established. The player must specify otherwise if he or she wishes to have the bet not working.",
"title": "Types of wagers"
},
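The same arithmetic applies to lay bets, with the vig charged on the potential win rather than the amount laid. A minimal sketch using the example above ($40 laid against the 4, $1 vig on the $20 potential win):

```python
from fractions import Fraction

WAYS = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}

def lay_edge(point, lay=40, on_win_only=False):
    """House edge of a lay bet: 7 before the point, true odds,
    with a 5% vig charged on the potential win (not on the amount laid)."""
    p = Fraction(6, WAYS[point] + 6)             # P(7 before the point)
    win = lay * Fraction(WAYS[point], 6)         # true-odds winnings
    vig = win * Fraction(1, 20)                  # e.g. $1 on a $20 win
    if on_win_only:
        ev, risked = p * (win - vig) - (1 - p) * lay, lay
    else:
        ev, risked = p * (win - vig) - (1 - p) * (lay + vig), lay + vig
    return float(-ev / risked)

print(f"{lay_edge(4):.2%}")                     # 2.44%: vig paid up front
print(f"{lay_edge(4, on_win_only=True):.2%}")   # 1.67%: vig on wins only
```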
{
"paragraph_id": 56,
"text": "If a player is unsure of whether a bet is a single or multi-roll bet, it can be noted that all single-roll bets will be displayed on the playing surface in one color (usually red), while all multi-roll bets will be displayed in a different color (usually yellow).",
"title": "Types of wagers"
},
{
"paragraph_id": 57,
"text": "A put bet is a bet which allows players to increase or make a Pass line bet after a point has been established (after come-out roll). Players may make a put bet on the Pass line and take odds immediately or increase odds behind if a player decides to add money to an already existing Pass line bet. Put betting also allows players to increase an existing come bet for additional odds after a come point has been established or make a new come bet and take odds immediately behind if desired without a come bet point being established. If increased or added put bets on the Pass line and Come cannot be turned \"Off\", removed or reduced, but odds bet behind can be turned \"Off\", removed or reduced. The odds bet is generally required to be the table minimum. Player cannot put bet the Don't Pass or Don't Come. Put betting may give a larger house edge over place betting unless the casino offers high odds.",
"title": "Types of wagers"
},
{
"paragraph_id": 58,
"text": "Put bets are generally allowed in Las Vegas, but not allowed in Atlantic City and Pennsylvania.",
"title": "Types of wagers"
},
{
"paragraph_id": 59,
"text": "Put bets are better than place bets (to win) when betting more than 5-times odds over the flat bet portion of the put bet. For example, a player wants a $30 bet on the six. Looking at two possible bets: 1) Place the six, or 2) Put the six with odds. A $30 place bet on the six pays $35 if it wins. A $30 put bet would be a $5 flat line bet plus $25 (5-times) in odds, and also would pay $35 if it wins. Now, with a $60 bet on the six, the place bet wins $70, where the put bet ($5 + $55 in odds) would pay $71. The player needs to be at a table which not only allows put bets, but also high-times odds, to take this advantage.",
"title": "Types of wagers"
},
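The break-even point quoted above is easy to verify. A small sketch reproducing the $30 and $60 comparisons (place 6 pays 7:6; the put bet's flat portion pays 1:1 and its odds pay true 6:5):

```python
from fractions import Fraction

def place_six(bet):
    """Winnings of a place bet on the 6, paid 7:6."""
    return bet * Fraction(7, 6)

def put_six(flat, odds_multiple):
    """Winnings of a put bet on the 6: flat pays 1:1, odds pay true 6:5."""
    return flat + flat * odds_multiple * Fraction(6, 5)

print(place_six(30), put_six(5, 5))    # 35 and 35: equal at 5x odds
print(place_six(60), put_six(5, 11))   # 70 and 71: the put pulls ahead above 5x
```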
{
"paragraph_id": 60,
"text": "This bet can only be placed on the numbers 4, 6, 8, and 10. In order for this bet to win, the chosen number must be rolled the \"hard way\" (as doubles) before a 7 or any other non-double combination (\"easy way\") totaling that number is rolled. For example, a player who bets a hard 6 can only win by seeing a 3–3 roll come up before any 7 or any easy roll totaling 6 (4–2 or 5–1); otherwise, the player loses.",
"title": "Types of wagers"
},
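The win probability of a hard way follows from counting combinations: one hard way against six 7s and the number's "easy" ways. A sketch of the resulting edges, assuming the common payouts of 9:1 on the hard 6 and 8 and 7:1 on the hard 4 and 10 (the text quotes 9:1 for a hard 8 further below; the 7:1 figure is an assumption):

```python
from fractions import Fraction

def hardway_edge(point, paid):
    """One hard combination against six 7s plus the point's easy ways."""
    easy = {4: 2, 6: 4, 8: 4, 10: 2}[point]
    p = Fraction(1, 1 + 6 + easy)
    return float(-(p * paid - (1 - p)))

print(f"{hardway_edge(6, 9):.2%}")   # hard 6 at 9:1 -> 9.09%
print(f"{hardway_edge(4, 7):.2%}")   # hard 4 at 7:1 -> 11.11%
```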
{
"paragraph_id": 61,
"text": "In Las Vegas casinos, this bet is generally working, including when no point has been established, unless the player specifies otherwise. In other casinos such as those in Atlantic City, hard ways are not working when the point is off unless the player requests to have it working on the come out roll.",
"title": "Types of wagers"
},
{
"paragraph_id": 62,
"text": "Like single-roll bets, hard way bets can be lower than the table minimum; however, the maximum bet allowed is also lower than the table maximum. The minimum hard way bet can be a minimum one unit. For example, lower stake table minimums of $5 or $10, generally allow minimum hard ways bets of $1. The maximum bet is based on the maximum allowed win from a single roll.",
"title": "Types of wagers"
},
{
"paragraph_id": 63,
"text": "Easy way is not a specific bet offered in standard casinos, but a term used to define any number combination which has two ways to roll. For example, (6–4, 4–6) would be a \"10 easy\". The 4, 6, 8 or 10 can be made both hard and easy ways. Betting point numbers (which pays off on easy or hard rolls of that number) or single-roll (\"hop\") bets (e.g., \"hop the 2–4\" is a bet for the next roll to be an easy six rolled as a two and four) are methods of betting easy ways.",
"title": "Types of wagers"
},
{
"paragraph_id": 64,
"text": "A player can choose either the 6 or 8 being rolled before the shooter throws a seven. These wagers are usually avoided by experienced craps players since they pay even money (1:1) while a player can make place bets on the 6 or the 8, which pay more (7:6). Some casinos (especially all those in Atlantic City) do not even offer the Big 6 & 8. The bets are located in the corners behind the Pass line, and bets may be placed directly by players.",
"title": "Types of wagers"
},
{
"paragraph_id": 65,
"text": "The only real advantage offered by the Big 6 & 8 is that they can be bet for the table minimum, whereas a place bet minimum may sometimes be greater than the table minimum (e.g. $6 place bet on a $3 minimum game.) In addition place bets are usually not working, except by agreement, when the shooter is \"coming out\" i.e. shooting for a point, and Big 6 and 8 bets always work. Some modern layouts no longer show the Big 6/Big 8 bet.",
"title": "Types of wagers"
},
{
"paragraph_id": 66,
"text": "Single-roll (proposition) bets are resolved in one dice roll by the shooter. Most of these are called \"service bets\", and they are located at the center of most craps tables. Only the stickman or a dealer can place a service bet. Single-roll bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum. The maximum bet is based on the maximum allowed win from a single roll. The lowest single-roll bet can be a minimum one unit bet. For example, tables with minimums of $5 or $10 generally allow minimum single-roll bets of $1. Single bets are always working by default unless the player specifies otherwise. The bets include:",
"title": "Types of wagers"
},
{
"paragraph_id": 67,
"text": "Fire Bet: Before the shooter begins, some casinos will allow a bet known as a fire bet to be placed. A fire bet is a bet of as little as $1 and generally up to a maximum of $5 to $10 sometimes higher, depending on casino, made in the hope that the next shooter will have a hot streak of setting and getting many points of different values. As different individual points are made by the shooter, they will be marked on the craps layout with a fire symbol.",
"title": "Types of wagers"
},
{
"paragraph_id": 68,
"text": "The first three points will not pay out on the fire bet, but the fourth, fifth, and sixth will pay out at increasing odds. The fourth point pays at 24-to-1, the fifth point pays at 249-to-1, and the 6th point pays at 999-to-1. (The points must all be different numbers for them to count toward the fire bet.) For example, a shooter who successfully hits a point of 10 twice will only garner credit for the first one on the fire bet. Players must hit the established point in order for it to count toward the fire bet. The payout is determine by the number of points which have been established and hit after the shooter sevens out.",
"title": "Types of wagers"
},
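Because the fire bet depends on a whole hand rather than a single roll, its odds are easiest to estimate by simulation. A Monte Carlo sketch under standard pass-line rules (come-out naturals and craps leave the shooter's hand intact; only a seven-out after a point is established ends it):

```python
import random
from collections import Counter

def points_made(rng):
    """Distinct points made by one shooter before sevening out."""
    made = set()
    while True:
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll in (2, 3, 7, 11, 12):
            continue              # come-out natural/craps: shooter keeps the dice
        point = roll
        while True:               # now rolling for the point
            roll = rng.randint(1, 6) + rng.randint(1, 6)
            if roll == point:
                made.add(point)
                break             # point made; back to a new come-out
            if roll == 7:
                return len(made)  # seven-out: the hand (and fire bet) ends

rng = random.Random(1)
N = 200_000
counts = Counter(points_made(rng) for _ in range(N))
for n in (4, 5, 6):
    print(n, f"{counts[n] / N:.4%}")   # how rarely 4, 5, or 6 points are made
```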
{
"paragraph_id": 69,
"text": "Bonus Craps: Prior to the initial \"come out roll\", players may place an optional wager (usually a $1 minimum to a maximum $25) on one or more of the three Bonus Craps wagers, \"All Small\", \"All Tall\", or \"All or Nothing at All.\" For players to win the \"All Small\" wager, the shooter must hit all five small numbers (2, 3, 4, 5, 6) before a seven is rolled; similarly, \"All Tall\" wins if all five high numbers (8, 9, 10, 11, 12) are hit before a seven is rolled.",
"title": "Types of wagers"
},
{
"paragraph_id": 70,
"text": "These bets pay 35-for-1, for a house advantage of 7.76%. \"All or Nothing at All\" wins if the shooter hits all 10 numbers before a seven is rolled. This pays 176-for-1, for a house edge of 7.46%. For all three wagers, the order in which the numbers are hit does not matter. Whenever a seven is hit, including on the come out roll, all bonus bets lose, the bonus board is reset, and new bonus bets may be placed.",
"title": "Types of wagers"
},
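These probabilities and edges can be estimated with a short simulation. Note the quoted prices are "for-1", meaning the payout includes the stake, so the house edge is 1 − (p × payout); that convention reproduces the 7.76% and 7.46% figures above:

```python
import random

SMALL, TALL = {2, 3, 4, 5, 6}, {8, 9, 10, 11, 12}

def totals_before_seven(rng):
    """The set of totals rolled before the first 7 ends the bonus cycle."""
    seen = set()
    while True:
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll == 7:
            return seen
        seen.add(roll)

rng, n = random.Random(1), 200_000
wins = {"All Small": 0, "All Tall": 0, "All or Nothing": 0}
for _ in range(n):
    seen = totals_before_seven(rng)
    wins["All Small"] += SMALL <= seen
    wins["All Tall"] += TALL <= seen
    wins["All or Nothing"] += (SMALL | TALL) <= seen

for name, w in wins.items():
    pay = 176 if name == "All or Nothing" else 35
    p = w / n
    # "for-1" prices include the stake, so the edge is 1 - p * pay
    print(f"{name}: P(win) ~ {p:.4f}, edge ~ {1 - p * pay:.2%}")
```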
{
"paragraph_id": 71,
"text": "A player may wish to make multiple different bets. For example, a player may be wish to bet $1 on all hard ways and the horn. If one of the bets win the dealer may automatically replenish the losing bet with profits from the winning bet. In this example, if the shooter rolls a hard 8 (pays 9:1), the horn loses. The dealer may return $5 to the player and place the other $4 on the horn bet which lost. If the player does not want the bet replenished, he or she should request any or all bets be taken down.",
"title": "Types of wagers"
},
{
"paragraph_id": 72,
"text": "A working bet is a live bet. Bets may also be on the board, but not in play and therefore not working. Pass line and come bets are always working meaning the chips are in play and the player is therefore wagering live money. Other bets may be working or not working depending whether a point has been established or player's choice. Place and buy bets are working by default when a point is established and not working when the point is off unless the player specifies otherwise. Lay bets are always working even if a point has not been established unless the player requests otherwise. At any time, a player may wish to take any bet or bets out of play. The dealer will put an \"Off\" button on the player's specific bet or bets; this allows the player to keep his chips on the board without a live wager. For example, if a player decides not to wager a place bet mid-roll but wishes to keep the chips on the number, he or she may request the bet be \"not working\" or \"Off\". The chips remain on the table, but the player cannot win from or lose chips which are not working.",
"title": "Types of wagers"
},
{
"paragraph_id": 73,
"text": "The opposite is also allowed. By default place and buy bets are not working without an established point; a player may wish to wager chips before a point has been established. In this case, the player would request the bet be working in which the dealer will place an \"On\" button on the specified chips.",
"title": "Types of wagers"
},
{
"paragraph_id": 74,
"text": "The probability of dice combinations determine the odds of the payout. The following chart shows the dice combinations needed to roll each number. The two and twelve are the hardest to roll since only one combination of dice is possible. The game of craps is built around the dice roll of seven, since it is the most easily rolled dice combination.",
"title": "Bet odds and summary"
},
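The chart's contents can be generated directly by enumerating all 36 equally likely outcomes of two dice:

```python
from collections import Counter

combos = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in range(2, 13):
    ways = combos[total]
    print(f"{total:2d}: {ways} way(s), P = {ways}/36 = {ways / 36:.3f}")
```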
{
"paragraph_id": 75,
"text": "Viewed another way:",
"title": "Bet odds and summary"
},
{
"paragraph_id": 76,
"text": "The expected value of all bets is usually negative, such that the average player will always lose money. This is because the house always sets the paid odds to below the actual odds. The only exception is the \"odds\" bet that the player is allowed to make after a point is established on a pass/come Don't Pass/Don't Come bet (the odds portion of the bet has a long-term expected value of 0). However, this \"free odds\" bet cannot be made independently, so the expected value of the entire bet, including odds, is still negative. Since there is no correlation between die rolls, there is normally no possible long-term winning strategy in craps.",
"title": "Bet odds and summary"
},
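As a concrete instance, the pass line's expected value can be computed exactly by conditioning on the come-out roll:

```python
from fractions import Fraction

ways = {n: 6 - abs(n - 7) for n in range(2, 13)}   # combinations per total

p_win = Fraction(ways[7] + ways[11], 36)            # natural on the come-out
for point in (4, 5, 6, 8, 9, 10):
    # point established, then the point must roll before a 7
    p_win += Fraction(ways[point], 36) * Fraction(ways[point],
                                                  ways[point] + ways[7])

print(p_win)                  # 244/495
print(float(2 * p_win - 1))   # EV per unit: about -0.0141 (1.41% house edge)
```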
{
"paragraph_id": 77,
"text": "There are occasional promotional variants that provide either no house edge or even a player edge. One example is a field bet that pays 3:1 on 12 and 2:1 on either 3 or 11. Overall, given the 5:4 true odds of this bet, and the weighted average paid odds of approximately 7:5, the player has a 5% advantage on this bet. This is sometimes seen at casinos running limited-time incentives, in jurisdictions or gaming houses that require the game to be fair, or in layouts for use in informal settings using play money. No casino currently runs a craps table with a bet that yields a player edge full-time.",
"title": "Bet odds and summary"
},
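A small EV calculator makes such promotions easy to evaluate. The standard schedule below (2:1 on both the 2 and 12) is the common layout; the promotional schedule is one assumed reading of the text, since the exact schedule is not fully specified, and the computed player edge depends on that assumption:

```python
from fractions import Fraction

ways = {n: 6 - abs(n - 7) for n in range(2, 13)}

def field_ev(pays):
    """EV per unit bet; `pays` maps each winning total to its to-1 payout.
    Totals absent from `pays` (5, 6, 7, 8) lose the bet."""
    return float(sum(Fraction(n, 36) * pays.get(t, -1) for t, n in ways.items()))

standard = {2: 2, 3: 1, 4: 1, 9: 1, 10: 1, 11: 1, 12: 2}
print(f"{field_ev(standard):+.2%}")          # -5.56% on the usual layout
promo = {**standard, 3: 2, 11: 2, 12: 3}     # assumed promotional schedule
print(f"{field_ev(promo):+.2%}")             # positive for the player here
```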
{
"paragraph_id": 78,
"text": "Maximizing the size of the odds bet in relation to the line bet will reduce, but never eliminate the house edge, and will increase variance. Most casinos have a limit on how large the odds bet can be in relation to the line bet, with single, double, and five times odds common. Some casinos offer 3–4–5 odds, referring to the maximum multiple of the line bet a player can place in odds for the points of 4 and 10, 5 and 9, and 6 and 8, respectively. During promotional periods, a casino may even offer 100x odds bets, which reduces the house edge to almost nothing, but dramatically increases variance, as the player will be betting in large betting units.",
"title": "Bet odds and summary"
},
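Because the odds portion has zero expected value, taking more odds dilutes the flat bet's fixed expected loss over a larger total wager. A sketch of the combined edge per unit wagered:

```python
from fractions import Fraction

ways = {n: 6 - abs(n - 7) for n in range(2, 13)}
PASS_EDGE = Fraction(7, 495)            # expected loss per unit of flat bet

def combined_edge(multiples):
    """Edge per total unit wagered; `multiples` maps point -> odds multiple."""
    avg_wager = 1 + sum(Fraction(ways[p], 36) * m for p, m in multiples.items())
    return float(PASS_EDGE / avg_wager)

odds_345 = {4: 3, 10: 3, 5: 4, 9: 4, 6: 5, 8: 5}
print(f"{combined_edge({p: 0 for p in odds_345}):.3%}")    # no odds: 1.414%
print(f"{combined_edge(odds_345):.3%}")                    # 3-4-5x: ~0.374%
print(f"{combined_edge({p: 100 for p in odds_345}):.4%}")  # 100x: ~0.021%
```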
{
"paragraph_id": 79,
"text": "Since several of the multiple roll bets pay off in ratios of fractions on the dollar, it is important that the player bets in multiples that will allow a correct payoff in complete dollars. Normally, payoffs will be rounded down to the nearest dollar, resulting in a higher house advantage. These bets include all place bets, taking odds, and buying on numbers 6, 8, 5, and 9, as well as laying all numbers.",
"title": "Bet odds and summary"
},
{
"paragraph_id": 80,
"text": "These variants depend on the casino and the table, and sometimes a casino will have different tables that use or omit these variants and others.",
"title": "Betting variants"
},
{
"paragraph_id": 81,
"text": "When craps is played in a casino, all bets have a house advantage. That is, it can be shown mathematically that a player will (with 100% probability) lose all his or her money to the casino in the long run, while in the short run the player is more likely to lose money than make money. There may be players who are lucky and get ahead for a period of time, but in the long run these winning streaks are eroded away. One can slow, but not eliminate, one's average losses by only placing bets with the smallest house advantage.",
"title": "Optimal betting"
},
{
"paragraph_id": 82,
"text": "The Pass/Don't Pass line, Come/Don't Come line, place 6, place 8, buy 4 and buy 10 (only under the casino rules where commission is charged only on wins) have the lowest house edge in the casino, and all other bets will, on average, lose money between three and twelve times faster because of the difference in house edges.",
"title": "Optimal betting"
},
{
"paragraph_id": 83,
"text": "The place bets and buy bets differ from the Pass line and come line, in that place bets and buy bets can be removed at any time, since, while they are multi-roll bets, their odds of winning do not change from roll to roll, whereas Pass line bets and come line bets are a combination of different odds on their first roll and subsequent rolls. The first roll of a Pass line bet is 2:1 advantage for the player (8 wins, 4 losses), but it's \"paid for\" by subsequent rolls that are at the same disadvantage to the player as the Don't Pass bets were at an advantage. As such, they cannot profitably let the player take down the bet after the first roll. Players can bet or lay odds behind an established point depending on whether it was a Pass/Come or Don't Pass/Don't Come to lower house edge by receiving true odds on the point. Casinos which allow put betting allows players to increase or make new pass/come bets after the come-out roll. This bet generally has a higher house edge than place betting, unless the casino offers high odds.",
"title": "Optimal betting"
},
{
"paragraph_id": 84,
"text": "Conversely, a player can take back (pick up) a Don't Pass or Don't Come bet after the first roll, but this cannot be recommended, because they already endured the disadvantaged part of the combination – the first roll. On that come-out roll, they win just 3 times (2 and 3), while losing 8 of them (7 and 11) and pushing one (12) out of the 36 possible rolls. On the other 24 rolls that become a point, their Don't Pass bet is now to their advantage by 6:3 (4 and 10), 6:4 (5 and 9) and 6:5 (6 and 8). If a player chooses to remove the initial Don't Come and/or Don't Pass line bet, he or she can no longer lay odds behind the bet and cannot re-bet the same Don't Pass and/or Don't Come number (players must make a new Don't Pass or come bets if desired). However, players can still make standard lay bets odds on any of the point numbers (4,5,6,8,9,10).",
"title": "Optimal betting"
},
{
"paragraph_id": 85,
"text": "Among these, and the remaining numbers and possible bets, there are a myriad of systems and progressions that can be used with many combinations of numbers.",
"title": "Optimal betting"
},
{
"paragraph_id": 86,
"text": "An important alternative metric is house advantage per roll (rather than per bet), which may be expressed in loss per hour. The typical pace of rolls varies depending on the number of players, but 102 rolls per hour is a cited rate for a nearly full table. This same reference states that only \"29.6% of total rolls are come out rolls, on average\", so for this alternative metric, needing extra rolls to resolve the Pass line bet, for example, is factored. This number then permits calculation of rate of loss per hour, and per the 4 day/5 hour per day gambling trip:",
"title": "Optimal betting"
},
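A back-of-envelope sketch with the quoted figures (102 rolls per hour, 29.6% come-outs); the $10 flat bet is an assumption for illustration only:

```python
# Each come-out starts one pass-line bet, so each roll resolves 0.296 bets
# on average; the 1.41% edge per bet is therefore spread over ~3.38 rolls.
rolls_per_hour = 102
come_out_share = 0.296
edge_per_bet = 7 / 495            # pass line: about 1.41% per bet resolved
bet = 10                          # assumed $10 flat bet (illustration only)

edge_per_roll = edge_per_bet * come_out_share          # ~0.42% of the bet
loss_per_hour = bet * edge_per_roll * rolls_per_hour   # ~$4.27
print(f"${loss_per_hour:.2f} per hour, "
      f"${loss_per_hour * 5 * 4:.2f} over a 4-day, 5-hour/day trip")
```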
{
"paragraph_id": 87,
"text": "Besides the rules of the game itself, a number of formal and informal rules are commonly applied in the table form of Craps, especially when played in a casino.",
"title": "Table rules"
},
{
"paragraph_id": 88,
"text": "To reduce the potential opportunity for switching dice by sleight-of-hand, players are not supposed to handle the dice with more than one hand (such as shaking them in cupped hands before rolling) nor take the dice past the edge of the table. If a player wishes to change shooting hands, they may set the dice on the table, let go, then take them with the other hand.",
"title": "Table rules"
},
{
"paragraph_id": 89,
"text": "When throwing the dice, the player is expected to hit the farthest wall at the opposite end of the table (these walls are typically augmented with pyramidal structures to ensure highly unpredictable bouncing after impact). Casinos will sometimes allow a roll that does not hit the opposite wall as long as the dice are thrown past the middle of the table; a very short roll will be nullified as a \"no roll\". The dice may not be slid across the table and must be tossed. These rules are intended to prevent dexterous players from physically influencing the outcome of the roll.",
"title": "Table rules"
},
{
"paragraph_id": 90,
"text": "Players are generally asked not to throw the dice above a certain height (such as the eye level of the dealers). This is both for the safety of those around the table, and to eliminate the potential use of such a throw as a distraction device in order to cheat.",
"title": "Table rules"
},
{
"paragraph_id": 91,
"text": "Dice are still considered \"in play\" if they land on players' bets on the table, the dealer's working stacks, on the marker puck, or with one die resting on top of the other. The roll is invalid if either or both dice land in the boxman's bank, the stickman's bowl (where the extra three dice are kept between rolls), or in the rails around the top of the table where players chips are kept. If one or both dice hits a player or dealer and rolls back onto the table, the roll counts as long as the person being hit did not intentionally interfere with either of the dice, though some casinos will rule \"no roll\" for this situation. If one or both leave the table, it is also a \"no roll\", and the dice may either be replaced or examined by the boxman and returned to play.",
"title": "Table rules"
},
{
"paragraph_id": 92,
"text": "Shooters may wish to \"set\" the dice to a particular starting configuration before throwing (such as showing a particular number or combination, stacking the dice, or spacing them to be picked up between different fingers), but if they do, they are often asked to be quick about it so as not to delay the game. Some casinos disallow such rituals to speed up the pace of the game. Some may also discourage or disallow unsanitary practices such as kissing or spitting on the dice.",
"title": "Table rules"
},
{
"paragraph_id": 93,
"text": "In most casinos, players are not allowed to hand anything directly to dealers, and vice versa. Items such as cash, checks, and chips are exchanged by laying them down on the table; for example, when \"buying in\" (paying cash for chips), players are expected to place the cash on the layout: the dealer will take it and then place the chips in front of the player. This rule is enforced in order to allow the casino to easily monitor and record all transfers via overhead surveillance cameras, and to reduce the opportunity for cheating via sleight-of-hand.",
"title": "Table rules"
},
{
"paragraph_id": 94,
"text": "Most casinos prohibit \"call bets\", and may have a warning such as \"No Call Bets\" printed on the layout to make this clear. This means a player may not call out a bet without also placing the corresponding chips on the table. Such a rule reduces the potential for misunderstanding in loud environments, as well as disputes over the amount that the player intended to bet after the outcome has been decided. Some casinos choose to allow call bets once players have bought-in. When allowed, they are usually made when a player wishes to bet at the last second, immediately before the dice are thrown, to avoid the risk of obstructing the roll.",
"title": "Table rules"
},
{
"paragraph_id": 95,
"text": "Craps is among the most social and most superstitious of all gambling games, which leads to an enormous variety of informal rules of etiquette that players may be expected to follow. An exhaustive list of these is beyond the scope of this article, but the guidelines below are most commonly given.",
"title": "Etiquette"
},
{
"paragraph_id": 96,
"text": "Tipping the dealers is universal and expected in Craps. As in most other casino games, a player may simply place (or toss) chips onto the table and say, \"For the dealers\", \"For the crew\", etc. In craps, it is also common to place a bet for the dealers. This is usually done one of three ways: by placing an ordinary bet and simply declaring it for the dealers, as a \"two-way\", or \"on top\". A \"Two-Way\" is a bet for both parties: for example, a player may toss in two chips and say \"Two Way Hard Eight\", which will be understood to mean one chip for the player and one chip for the dealers. Players may also place a stack of chips for a bet as usual, but leave the top chip off-center and announce \"on top for the dealers\". The dealer's portion is often called a \"toke\" bet, which comes from the practice of using $1 slot machine tokens to place dealer bets in some casinos.",
"title": "Etiquette"
},
{
"paragraph_id": 97,
"text": "In some cases, players may also tip each other, for example as a show of gratitude to the thrower for a roll on which they win a substantial bet.",
"title": "Etiquette"
},
{
"paragraph_id": 98,
"text": "Craps players routinely practice a wide range of superstitious behaviors, and may expect or demand these from other players as well.",
"title": "Etiquette"
},
{
"paragraph_id": 99,
"text": "Most prominently, it is universally considered bad luck to say the word \"seven\" (after the \"come-out\", a roll of 7 is a loss for \"pass\" bets). Dealers themselves often make significant efforts to avoid calling out the number. When necessary, participants may refer to seven with a \"nickname\" such as \"Big Red\" (or just \"Red\"), \"the S-word\", etc.",
"title": "Etiquette"
},
{
"paragraph_id": 100,
"text": "Although no wagering system can consistently beat casino games based on independent trials such as craps, that does not stop gamblers from believing in them. One of the best known systems is the Martingale System. In this strategy, the gambler doubles his bet after every loss. After a win, the bet is reset to the original bet. The theory is that the first win would recover all previous losses plus win a profit equal to the original stake.",
"title": "Systems"
},
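A short simulation shows why the Martingale fails in practice: a finite bankroll (or table limit) eventually cannot cover the next doubled bet, while the negative expectation per bet is unchanged. A sketch using the pass line's win probability of 244/495:

```python
import random

P_PASS = 244 / 495   # pass-line win probability

def martingale_session(rng, bankroll=1000, base=5, max_bets=500):
    """Double after each loss, reset to the base bet after each win."""
    bet = base
    for _ in range(max_bets):
        if bet > bankroll:
            break                 # bankroll can no longer cover the doubled bet
        if rng.random() < P_PASS:
            bankroll += bet
            bet = base
        else:
            bankroll -= bet
            bet *= 2
    return bankroll

rng = random.Random(1)
n = 20_000
mean = sum(martingale_session(rng) for _ in range(n)) / n
print(f"average ending bankroll: {mean:.2f} (started with 1000)")
```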
{
"paragraph_id": 101,
"text": "Other systems depend on the gambler's fallacy, which in craps terms is the belief that past dice rolls influence the probabilities of future dice rolls. For example, the gambler's fallacy indicates that a craps player should bet on eleven if an eleven has not appeared or has appeared too often in the last 20 rolls. In practice this can be observed as players respond to a roll such as a Hard Six with an immediate wager on the Hard Six.",
"title": "Systems"
},
{
"paragraph_id": 102,
"text": "In reality, each roll of the dice is an independent event, so the probability of rolling eleven is exactly 1/18 on every roll, regardless of the number of times eleven has come up in the last x rolls. Even if the dice are actually biased toward particular results (\"loaded\"), each roll is still independent of all the previous ones. The common term to describe this is \"dice have no memory\".",
"title": "Systems"
},
{
"paragraph_id": 103,
"text": "Another approach is to \"set\" the dice in a particular orientation, and then throw them in such a manner that they do not tumble randomly. The theory is that given exactly the same throw from exactly the same starting configuration, the dice will tumble in the same way and therefore show the same or similar values every time.",
"title": "Systems"
},
{
"paragraph_id": 104,
"text": "Casinos take steps to prevent this. The dice are usually required to hit the back wall of the table, which is normally faced with a jagged angular texture such as pyramids, making controlled spins more difficult. There has been no independent evidence that such methods can be successfully applied in a real casino.",
"title": "Systems"
},
{
"paragraph_id": 105,
"text": "Bank craps is a variation of the original craps game and is sometimes known as Las Vegas Craps. This variant is quite popular in Nevada gambling houses, and its availability online has now made it a globally played game. Bank craps uses a special table layout and all bets must be made against the house. In Bank Craps, the dice are thrown over a wire or a string that is normally stretched a few inches from the table's surface. The lowest house edge (for the Pass/Don't Pass) in this variation is around 1.4%. Generally, if the word \"craps\" is used without any modifier, it can be inferred to mean this version of the game, to which most of this article refers.",
"title": "Variants"
},
{
"paragraph_id": 106,
"text": "Crapless craps, also known as bastard craps, is a simple version of the original craps game, and is normally played as an online private game. The biggest difference between crapless craps and original craps is that the shooter (person throwing the dice) is at a far greater disadvantage and has a house edge of 5.38%. Another difference is that this is one of the craps games in which a player can bet on rolling a 2, 3, 11 or 12 before a 7 is thrown. In crapless craps, 2 and 12 have odds of 11:2 and have a house edge of 7.143% while 3 and 11 have odds of 11:4 with a house edge of 6.25%.",
"title": "Variants"
},
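The quoted house edges follow directly from the paid odds and the combination counts; a minimal check:

```python
from fractions import Fraction

def edge(ways_to_roll, paid):
    """House edge betting a number rolls before a 7, at the paid odds."""
    p = Fraction(ways_to_roll, ways_to_roll + 6)
    return float(-(p * paid - (1 - p)))

print(f"{edge(1, Fraction(11, 2)):.3%}")   # 2 or 12 at 11:2 -> 7.143%
print(f"{edge(2, Fraction(11, 4)):.3%}")   # 3 or 11 at 11:4 -> 6.250%
```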
{
"paragraph_id": 107,
"text": "New York Craps is one of the variations of craps played mostly in the Eastern coast of the US, true to its name. History states that this game was actually found and played in casinos in Yugoslavia, the UK and the Bahamas. In this craps variant, the house edge is greater than Las Vegas Craps or Bank craps. The table layout is also different, and is called a double-end-dealer table. This variation is different from the original craps game in several ways, but the primary difference is that New York craps doesn't allow Come or Don't Come bets. New York Craps Players bet on box numbers like 4, 5, 6, 8, 9, or 10. The overall house edge in New York craps is 5%.",
"title": "Variants"
},
{
"paragraph_id": 108,
"text": "In order to get around California laws barring the payout of a game being directly related to the roll of dice, Indian reservations have adapted the game to substitute cards for dice.",
"title": "Card-based variations"
},
{
"paragraph_id": 109,
"text": "To replicate the original dice odds exactly without dice or possibility of card-counting, one scheme uses two shuffle machines each with just one deck of Ace through 6 each. Each machine selects one of the 6 cards at random and this is the roll. The selected cards are replaced and the decks are reshuffled for the next roll.",
"title": "Card-based variations"
},
{
"paragraph_id": 110,
"text": "In one variation, two shoes are used, each containing some number of regular card decks that have been stripped down to just the Aces and deuces through sixes. The boxman simply deals one card from each shoe and that is the roll on which bets are settled. Since a card-counting scheme is easily devised to make use of the information of cards that have already been dealt, a relatively small portion (less than 50%) of each shoe is usually dealt in order to protect the house.",
"title": "Card-based variations"
},
{
"paragraph_id": 111,
"text": "In a similar variation, cards representing dice are dealt directly from a continuous shuffling machine (CSM). Typically, the CSM will hold approximately 264 cards, or 44 sets of 1 through 6 spot cards. Two cards are dealt from the CSM for each roll. The game is played exactly as regular craps, but the roll distribution of the remaining cards in the CSM is slightly skewed from the normal symmetric distribution of dice.",
"title": "Card-based variations"
},
{
"paragraph_id": 112,
"text": "Even if the dealer were to shuffle each roll back into the CSM, the effect of buffering a number of cards in the chute of the CSM provides information about the skew of the next roll. Analysis shows this type of game is biased towards the Don't Pass and Don't Come bets. A player betting Don't Pass and Don't Come every roll and laying 10x odds receives a 2% profit on the initial Don't Pass / Don't Come bet each roll. Using a counting system allows the player to attain a similar return at lower variance.",
"title": "Card-based variations"
},
{
"paragraph_id": 114,
"text": "In this game variation, one red deck and one blue deck of six cards each (A through 6), and a red die and a blue die are used. Each deck is shuffled separately, usually by machine. Each card is then dealt onto the layout, into the 6 red and 6 blue numbered boxes. The shooter then shoots the dice. The red card in the red-numbered box corresponding to the red die, and the blue card in the blue-numbered box corresponding to the blue die are then turned over to form the roll on which bets are settled.",
"title": "Card-based variations"
},
{
"paragraph_id": 115,
"text": "Another variation uses a red and a blue deck of 36 custom playing cards each. Each card has a picture of a two-die roll on it – from 1–1 to 6–6. The shooter shoots what looks like a red and a blue die, called \"cubes\". They are numbered such that they can never throw a pair, and that the blue one will show a higher value than the red one exactly half the time. One such scheme could be 222555 on the red die and 333444 on the blue die.",
"title": "Card-based variations"
},
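The claimed properties of the 222555/333444 numbering can be verified by enumerating all 36 outcomes:

```python
from itertools import product

RED, BLUE = (2, 2, 2, 5, 5, 5), (3, 3, 3, 4, 4, 4)

outcomes = list(product(RED, BLUE))
assert not any(r == b for r, b in outcomes)        # a pair is impossible
blue_higher = sum(b > r for r, b in outcomes)
print(blue_higher, "of", len(outcomes))            # 18 of 36: exactly half
```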
{
"paragraph_id": 116,
"text": "One card is dealt from the red deck and one is dealt from the blue deck. The shooter throws the \"cubes\" and the color of the cube that is higher selects the color of the card to be used to settle bets. On one such table, an additional one-roll prop bet was offered: If the card that was turned over for the \"roll\" was either 1–1 or 6–6, the other card was also turned over. If the other card was the \"opposite\" (6–6 or 1–1, respectively) of the first card, the bet paid 500:1 for this 647:1 proposition.",
"title": "Card-based variations"
},
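The 647:1 figure follows from the two independent 36-card draws:

```python
from fractions import Fraction

# P(the turned-over card is 1-1 or 6-6) * P(the other deck shows the opposite)
p = Fraction(2, 36) * Fraction(1, 36)
print(p, f"-> {float(1 / p - 1):.0f} to 1 against")   # 1/648 -> 647 to 1
```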
{
"paragraph_id": 117,
"text": "And additional variation uses a single set of 6 cards, and regular dice. The roll of the dice maps to the card in that position, and if a pair is rolled, then the mapped card is used twice, as a pair.",
"title": "Card-based variations"
},
{
"paragraph_id": 118,
"text": "Recreational or informal playing of craps outside of a casino is referred to as street craps or private craps. The most notable difference between playing street craps and bank craps is that there is no bank or house to cover bets in street craps. Players must bet against each other by covering or fading each other's bets for the game to be played. If money is used instead of chips and depending on the laws of where it is being played, street craps can be an illegal form of gambling.",
"title": "Rules of play against other players (\"Street Craps\")"
},
{
"paragraph_id": 119,
"text": "There are many variations of street craps. The simplest way is to either agree on or roll a number as the point, then roll the point again before rolling a seven. Unlike more complex proposition bets offered by casinos, street craps has more simplified betting options. The shooter is required to make either a Pass or a Don't Pass bet if he wants to roll the dice. Another player must choose to cover the shooter to create a stake for the game to continue.",
"title": "Rules of play against other players (\"Street Craps\")"
},
{
"paragraph_id": 120,
"text": "If there are several players, the rotation of the player who must cover the shooter may change with the shooter (comparable to a blind in poker). The person covering the shooter will always bet against the shooter. For example, if the shooter made a \"Pass\" bet, the person covering the shooter would make a \"Don't Pass\" bet to win. Once the shooter is covered, other players may make Pass/Don't Pass bets, or any other proposition bets, as long as there is another player willing to cover.",
"title": "Rules of play against other players (\"Street Craps\")"
},
{
"paragraph_id": 121,
"text": "Due to the random nature of the game, in popular culture a \"crapshoot\" is often used to describe an action with an unpredictable outcome.",
"title": "In popular culture"
},
{
"paragraph_id": 122,
"text": "The prayer or invocation \"Baby needs a new pair of shoes!\" is associated with shooting craps.",
"title": "In popular culture"
},
{
"paragraph_id": 123,
"text": "Floating craps is an illegal operation of craps. The term floating refers to the practice of the game's operators using portable tables and equipment to quickly move the game from location to location to stay ahead of the law enforcement authorities. The term may have originated in the 1930s when Benny Binion (later known for founding the downtown Las Vegas hotel Binion's) set up an illegal craps game utilizing tables created from portable crates for the Texas Centennial Exposition.",
"title": "In popular culture"
},
{
"paragraph_id": 124,
"text": "The 1950 Broadway musical Guys and Dolls features a major plot point revolving around a floating craps game.",
"title": "In popular culture"
},
{
"paragraph_id": 125,
"text": "In the 1950s and 1960s The Sands Hotel in Las Vegas had a craps table that floated in the swimming pool, as a joke reference to the notoriety of the term.",
"title": "In popular culture"
},
{
"paragraph_id": 126,
"text": "A Golden Arm is a craps player who rolls the dice for longer than one hour without losing. Likely the first known Golden Arm was Oahu native Stanley Fujitake, who rolled 118 times without sevening out in 3 hours and 6 minutes at the California Hotel and Casino on May 28, 1989.",
"title": "In popular culture"
},
{
"paragraph_id": 127,
"text": "The current record for length of a \"hand\" (successive rounds won by the same shooter) is 154 rolls including 25 passes by Patricia DeMauro of New Jersey, lasting 4 hours and 18 minutes, at the Borgata in Atlantic City, New Jersey, on May 23–24, 2009. She bested by over an hour the record held for almost 20 years – that of Fujitake.",
"title": "In popular culture"
}
]
| Craps is a dice game in which players bet on the outcomes of the roll of a pair of dice. Players can wager money against each other or against a bank. Because it requires little equipment, "street craps" can be played in informal settings. While shooting craps, players may use slang terminology to place bets and actions. | 2001-08-10T00:13:02Z | 2023-12-18T22:58:43Z | [
"Template:Overly detailed",
"Template:Columns-list",
"Template:More citations needed",
"Template:Anchor",
"Template:Die",
"Template:Reflist",
"Template:Main",
"Template:Wikt",
"Template:Mathworld",
"Template:About",
"Template:MOS",
"Template:Cite book",
"Template:Gambling",
"Template:Infobox game",
"Template:Citation needed",
"Template:ISBN",
"Template:Prone to spam",
"Template:Short description",
"Template:More footnotes needed",
"Template:Cite web",
"Template:Unreferenced section",
"Template:Commons category",
"Template:Curlie",
"Template:Dice games",
"Template:Cite journal",
"Template:Cite news",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Craps |
6,066 | Carl von Clausewitz | Carl Philipp Gottfried (or Gottlieb) von Clausewitz (German pronunciation: [ˌkaʁl fɔn ˈklaʊ̯zəvɪt͡s]; 1 June 1780 – 16 November 1831) was a Prussian general and military theorist who stressed the "moral" (in modern terms meaning psychological) and political aspects of waging war. His most notable work, Vom Kriege ("On War"), though unfinished at his death, is considered a seminal treatise on military strategy and science.
Clausewitz was a realist in many different senses, including realpolitik, and while in some respects a romantic, he also drew heavily on the rationalist ideas of the European Enlightenment.
Clausewitz stressed the dialectical interaction of diverse factors, noting how unexpected developments unfolding under the "fog of war" (i.e., in the face of incomplete, dubious, and often erroneous information and great fear, doubt, and excitement) call for rapid decisions by alert commanders. He saw history as a vital check on erudite abstractions that did not accord with experience. In contrast to the early work of Antoine-Henri Jomini, he argued that war could not be quantified or reduced to mapwork, geometry, and graphs. Clausewitz had many aphorisms, of which the most famous is "War is the continuation of policy with other means" (often misquoted as "... by other means").
Clausewitz's Christian names are sometimes given in non-German sources as "Karl", "Carl Philipp Gottlieb", or "Carl Maria". He spelled his own given name with a "C" in order to identify with the classical Western tradition; writers who use "Karl" are often seeking to emphasize their German (rather than European) identity. "Carl Philipp Gottfried" appears on Clausewitz's tombstone. Nonetheless, sources such as military historian Peter Paret and Encyclopædia Britannica continue to use Gottlieb instead of Gottfried.
Clausewitz was born on 1 June 1780 in Burg bei Magdeburg in the Prussian Duchy of Magdeburg as the fourth and youngest son of a family that made claims to a noble status which Carl accepted. Clausewitz's family claimed descent from the Barons of Clausewitz in Upper Silesia, though scholars question the connection. His grandfather, the son of a Lutheran pastor, had been a professor of theology. Clausewitz's father, once a lieutenant in the army of Frederick the Great, King of Prussia, held a minor post in the Prussian internal-revenue service. Clausewitz entered the Prussian military service at the age of twelve as a lance corporal, eventually attaining the rank of major general.
Clausewitz served in the Rhine campaigns (1793–1794) including the siege of Mainz, when the Prussian Army invaded France during the French Revolution, and fought in the Napoleonic Wars from 1806 to 1815. He entered the Kriegsakademie (also cited as "The German War School", the "Military Academy in Berlin", and the "Prussian Military Academy," later the "War College") in Berlin in 1801 (aged 21), probably studied the writings of the philosophers Immanuel Kant and/or Johann Gottlieb Fichte and Friedrich Schleiermacher and won the regard of General Gerhard von Scharnhorst, the future first chief-of-staff of the newly reformed Prussian Army (appointed 1809). Clausewitz, Hermann von Boyen (1771–1848) and Karl von Grolman (1777–1843) were among Scharnhorst's primary allies in his efforts to reform the Prussian army between 1807 and 1814.
Clausewitz served during the Jena Campaign as aide-de-camp to Prince August. At the Battle of Jena-Auerstedt on 14 October 1806—when Napoleon invaded Prussia and defeated the Prussian-Saxon army commanded by Karl Wilhelm Ferdinand, Duke of Brunswick—he was captured, one of the 25,000 prisoners taken that day as the Prussian army disintegrated. He was 26. Clausewitz was held prisoner with his prince in France from 1807 to 1808. Returning to Prussia, he assisted in the reform of the Prussian army and state. In June 1807, Johann Gottlieb Fichte published On Machiavelli, as an Author, and Passages from His Writings ("Über Machiavell, als Schriftsteller, und Stellen aus seinen Schriften"), and in 1809 Clausewitz wrote an anonymous letter to Fichte about his book on Machiavelli. The letter was published in Fichte's Verstreute kleine Schriften, pp. 157–166; for an English translation, see Carl von Clausewitz, Historical and Political Writings, edited by Peter Paret and Daniel Moran (1992).
On 10 December 1810, he married the socially prominent Countess Marie von Brühl, whom he had first met in 1803. She was a member of the noble German Brühl family originating in Thuringia. The couple moved in the highest circles, socialising with Berlin's political, literary, and intellectual élite. Marie was well-educated and politically well-connected—she played an important role in her husband's career progress and intellectual evolution. She also edited, published, and introduced his collected works.
Opposed to Prussia's enforced alliance with Napoleon, Clausewitz left the Prussian army and served in the Imperial Russian Army from 1812 to 1813 during the Russian campaign, taking part in the Battle of Borodino (1812). Like many Prussian officers serving in Russia, he joined the Russian–German Legion in 1813. In the service of the Russian Empire, Clausewitz helped negotiate the Convention of Tauroggen (1812), which prepared the way for the coalition of Prussia, Russia, and the United Kingdom that ultimately defeated Napoleon and his allies.
In 1815 the Russian-German Legion became integrated into the Prussian Army and Clausewitz re-entered Prussian service as a colonel. He was soon appointed chief-of-staff of Johann von Thielmann's III Corps. In that capacity he served at the Battle of Ligny and the Battle of Wavre during the Waterloo campaign in 1815. An army led personally by Napoleon defeated the Prussians at Ligny (south of Mont-Saint-Jean and the village of Waterloo) on 16 June 1815, but they withdrew in good order. Napoleon's failure to destroy the Prussian forces led to his defeat a few days later at the Battle of Waterloo (18 June 1815), when the Prussian forces arrived on his right flank late in the afternoon to support the Anglo-Dutch-Belgian forces pressing his front. (Napoleon had convinced his troops that the field-grey uniforms of the approaching Prussians were those of Marshal Grouchy's grenadiers.) Clausewitz's unit fought heavily outnumbered at Wavre (18–19 June 1815), preventing large reinforcements from reaching Napoleon at Waterloo. After the war, Clausewitz served as director of the Kriegsakademie until 1830, when he returned to active duty with the army. Soon afterward, the outbreak of several revolutions around Europe and a crisis in Poland appeared to presage another major European war. Clausewitz was appointed chief of staff of the only army Prussia was able to mobilise in this emergency, which was sent to the Polish border. Its commander, Gneisenau, died of cholera (August 1831), and Clausewitz took command of the Prussian army's efforts to construct a cordon sanitaire to contain the great cholera outbreak (the first time cholera had appeared in the modern European heartland, causing a continent-wide panic). Clausewitz himself died of the same disease shortly afterwards, on 16 November 1831.
His widow edited, published, and wrote the introduction to his magnum opus on the philosophy of war in 1832. (He had started working on the text in 1816 but had not completed it.) She wrote the preface for On War and had published most of his collected works by 1835. She died in January 1836.
Clausewitz was a professional combat soldier who was involved in numerous military campaigns, but he is famous primarily as a military theorist interested in the examination of war, utilising the campaigns of Frederick the Great and Napoleon as frames of reference for his work. He wrote a careful, systematic, philosophical examination of war in all its aspects. The result was his principal book, On War, a major work on the philosophy of war. It was unfinished when Clausewitz died and contains material written at different stages in his intellectual evolution, producing some significant contradictions between different sections. The sequence and precise character of that evolution is a source of much debate as to the exact meaning behind some seemingly contradictory observations in discussions pertinent to the tactical, operational and strategic levels of war, for example (though many of these apparent contradictions are simply the result of his dialectical method). Clausewitz constantly sought to revise the text, particularly between 1827 and his departure on his last field assignments, to include more material on "people's war" and forms of war other than high-intensity warfare between states, but relatively little of this material was included in the book. Soldiers before this time had written treatises on various military subjects, but none had undertaken a great philosophical examination of war on the scale of those written by Clausewitz and Leo Tolstoy, both of whom were inspired by the events of the Napoleonic Era.
Clausewitz's work is still studied today, demonstrating its continued relevance. More than sixteen major English-language books that focused specifically on his work were published between 2005 and 2014, whereas his 19th-century rival Jomini has faded from influence. The historian Lynn Montross said that this outcome "may be explained by the fact that Jomini produced a system of war, Clausewitz a philosophy. The one has been outdated by new weapons, the other still influences the strategy behind those weapons." Jomini did not attempt to define war but Clausewitz did, providing (and dialectically comparing) a number of definitions. The first is his dialectical thesis: "War is thus an act of force to compel our enemy to do our will." The second, often treated as Clausewitz's 'bottom line,' is in fact merely his dialectical antithesis: "War is merely the continuation of policy with other means." The synthesis of his dialectical examination of the nature of war is his famous "trinity," saying that war is "a fascinating trinity—composed of primordial violence, hatred, and enmity, which are to be regarded as a blind natural force; the play of chance and probability, within which the creative spirit is free to roam; and its element of subordination, as an instrument of policy, which makes it subject to pure reason." Christopher Bassford says the best shorthand for Clausewitz's trinity should be something like "violent emotion/chance/rational calculation." However, it is frequently presented as "people/army/government," a misunderstanding based on a later paragraph in the same section. This misrepresentation was popularised by U.S. Army Colonel Harry Summers' Vietnam-era interpretation, facilitated by weaknesses in the 1976 Howard/Paret translation.
The degree to which Clausewitz managed to revise his manuscript to reflect that synthesis is the subject of much debate. His final reference to war and Politik, however, goes beyond his widely quoted antithesis: "War is simply the continuation of political intercourse with the addition of other means. We deliberately use the phrase 'with the addition of other means' because we also want to make it clear that war in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs. The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace."
A prince or general who knows exactly how to organise his war according to his object and means, who does neither too little nor too much, gives by that the greatest proof of his genius. But the effects of this talent are exhibited not so much by the invention of new modes of action, which might strike the eye immediately, as in the successful final result of the whole. It is the exact fulfilment of silent suppositions, it is the noiseless harmony of the whole action which we should admire, and which only makes itself known in the total result.
Clausewitz introduced systematic philosophical contemplation into Western military thinking, with powerful implications not only for historical and analytical writing but also for practical policy, military instruction, and operational planning. He relied on his own experiences, contemporary writings about Napoleon, and on deep historical research. His historiographical approach is evident in his first extended study, written when he was 25, of the Thirty Years' War. He rejects the Enlightenment's view of the war as a chaotic muddle and instead explains its drawn-out operations by the economy and technology of the age, the social characteristics of the troops, and the commanders' politics and psychology. In On War, Clausewitz sees all wars as the sum of decisions, actions, and reactions in an uncertain and dangerous context, and also as a socio-political phenomenon. He also stressed the complex nature of war, which encompasses both the socio-political and the operational and stresses the primacy of state policy. (One should be careful not to limit his observations on war to war between states, however, as he certainly discusses other kinds of protagonists).
The word "strategy" had only recently come into usage in modern Europe, and Clausewitz's definition is quite narrow: "the use of engagements for the object of war" (which many today would call "the operational level" of war). Clausewitz conceived of war as a political, social, and military phenomenon which might—depending on circumstances—involve the entire population of a political entity at war. In any case, Clausewitz saw military force as an instrument that states and other political actors use to pursue the ends of their policy, in a dialectic between opposing wills, each with the aim of imposing his policies and will upon his enemy.
Clausewitz's emphasis on the inherent superiority of the defense suggests that habitual aggressors are likely to end up as failures. The inherent superiority of the defense obviously does not mean that the defender will always win, however: there are other asymmetries to be considered. He was interested in co-operation between the regular army and militia or partisan forces, or citizen soldiers, as one possible—sometimes the only—method of defense. In the circumstances of the Wars of the French Revolution and those with Napoleon, which were energised by a rising spirit of nationalism, he emphasised the need for states to involve their entire populations in the conduct of war. This point is especially important, as these wars demonstrated that such energies could be of decisive importance and for a time led to a democratisation of the armed forces much as universal suffrage democratised politics.
While Clausewitz was intensely aware of the value of intelligence at all levels, he was also very skeptical of the accuracy of much military intelligence: "Many intelligence reports in war are contradictory; even more are false, and most are uncertain.... In short, most intelligence is false." This circumstance is generally described as part of the fog of war. Such skeptical comments apply only to intelligence at the tactical and operational levels; at the strategic and political levels he constantly stressed the requirement for the best possible understanding of what today would be called strategic and political intelligence. His conclusions were influenced by his experiences in the Prussian Army, which was often in an intelligence fog due partly to the superior abilities of Napoleon's system but even more simply to the nature of war. Clausewitz acknowledges that friction creates enormous difficulties for the realization of any plan, and the fog of war hinders commanders from knowing what is happening. It is precisely in the context of this challenge that he develops the concept of military genius, whose capabilities are seen above all in the execution of operations. 'Military genius' is not simply a matter of intellect, but a combination of qualities of intellect, experience, personality, and temperament (and there are many possible such combinations) that create a very highly developed mental aptitude for the waging of war.
Key ideas discussed in On War include:
Clausewitz used a dialectical method to construct his argument, leading to frequent misinterpretation of his ideas. British military theorist B. H. Liddell Hart contends that the enthusiastic acceptance by the Prussian military establishment—especially Moltke the Elder, a former student of Clausewitz—of what they believed to be Clausewitz's ideas, and the subsequent widespread adoption of the Prussian military system worldwide, had a deleterious effect on military theory and practice, due to their egregious misinterpretation of his ideas:
As so often happens, Clausewitz's disciples carried his teaching to an extreme which their master had not intended.... [Clausewitz's] theory of war was expounded in a way too abstract and involved for ordinary soldier-minds, essentially concrete, to follow the course of his argument—which often turned back from the direction in which it was apparently leading. Impressed yet befogged, they grasped at his vivid leading phrases, seeing only their surface meaning, and missing the deeper current of his thought.
As described by Christopher Bassford, then-professor of strategy at the National War College of the United States:
One of the main sources of confusion about Clausewitz's approach lies in his dialectical method of presentation. For example, Clausewitz's famous line that "War is the continuation of policy with other means," ("Der Krieg ist eine bloße Fortsetzung der Politik mit anderen Mitteln") while accurate as far as it goes, was not intended as a statement of fact. It is the antithesis in a dialectical argument whose thesis is the point—made earlier in the analysis—that "war is nothing but a duel [or wrestling match, the extended metaphor in which that discussion was embedded] on a larger scale." His synthesis, which resolves the deficiencies of these two bold statements, says that war is neither "nothing but" an act of brute force nor "merely" a rational act of politics or policy. This synthesis lies in his "fascinating trinity" [wunderliche Dreifaltigkeit]: a dynamic, inherently unstable interaction of the forces of violent emotion, chance, and rational calculation.
Another example of this confusion is the idea that Clausewitz was a proponent of total war as used in the Third Reich's propaganda in the 1940s. In fact, Clausewitz never used the term "total war": rather, he discussed "absolute war," a concept which evolved into the much more abstract notion of "ideal war" discussed at the very beginning of Vom Kriege—the purely logical result of the forces underlying a "pure," Platonic "ideal" of war. In what he called a "logical fantasy," war cannot be waged in a limited way: the rules of competition will force participants to use all means at their disposal to achieve victory. But in the real world, he said, such rigid logic is unrealistic and dangerous. As a practical matter, the military objectives in real war that support political objectives generally fall into two broad types: limited aims or the effective "disarming" of the enemy "to render [him] politically helpless or militarily impotent." Thus, the complete defeat of the enemy may not be necessary, desirable, or even possible.
In modern times the reconstruction of Clausewitzian theory has been a matter of much dispute. One analysis was that of Panagiotis Kondylis, a Greek writer and philosopher, who opposed the interpretations of Raymond Aron in Penser la Guerre, Clausewitz, and other liberal writers. According to Aron, Clausewitz was one of the first writers to condemn the militarism of the Prussian general staff and its war-proneness, based on Clausewitz's argument that "war is a continuation of policy by other means." In Theory of War, Kondylis claims that this is inconsistent with Clausewitzian thought. He claims that Clausewitz was morally indifferent to war (though this probably reflects a lack of familiarity with personal letters from Clausewitz, which demonstrate an acute awareness of war's tragic aspects) and that his advice regarding politics' dominance over the conduct of war has nothing to do with pacifist ideas.
Other notable writers who have studied Clausewitz's texts and translated them into English are historians Peter Paret of the Institute for Advanced Study and Sir Michael Howard. Howard and Paret edited the most widely used edition of On War (Princeton University Press, 1976/1984) and have produced comparative studies of Clausewitz and other theorists, such as Tolstoy. Bernard Brodie's A Guide to the Reading of "On War," in the 1976 Princeton translation, expressed his interpretations of the Prussian's theories and provided students with an influential synopsis of this vital work. The 1873 translation by Colonel James John Graham was heavily—and controversially—edited by the philosopher, musician, and game theorist Anatol Rapoport.
The British military historian John Keegan attacked Clausewitz's theory in his book A History of Warfare. Keegan argued that Clausewitz assumed the existence of states, yet 'war antedates the state, diplomacy and strategy by many millennia.'
Clausewitz died without completing Vom Kriege, but despite this his ideas have been widely influential in military theory and have had a strong influence on German military thought specifically. Later Prussian and German generals, such as Helmuth Graf von Moltke, were clearly influenced by Clausewitz: Moltke's widely quoted statement that "No operational plan extends with high certainty beyond the first encounter with the main enemy force" is a classic reflection of Clausewitz's insistence on the roles of chance, friction, "fog," uncertainty, and interactivity in war.
Clausewitz's influence spread to British thinking as well, though at first more as a historian and analyst than as a theorist. See, for example, Wellington's extended essay discussing Clausewitz's study of the Campaign of 1815—Wellington's only serious written discussion of the battle, which was widely discussed in 19th-century Britain. Clausewitz's broader thinking came to the fore following Britain's military embarrassments in the Boer War (1899–1902). One example of a heavy Clausewitzian influence in that era is Spenser Wilkinson, a journalist and the first Chichele Professor of Military History at Oxford University, and perhaps the most prominent military analyst in Britain from c. 1885 until well into the interwar period. Another is naval historian Julian Corbett (1854–1922), whose work reflected a deep if idiosyncratic adherence to Clausewitz's concepts, with frequent emphasis on Clausewitz's ideas about 'limited objectives' and the inherent strengths of the defensive form of war. Corbett's practical strategic views were often in prominent public conflict with Wilkinson's—see, for example, Wilkinson's article "Strategy at Sea", The Morning Post, 12 February 1912. Following the First World War, however, the influential British military commentator B. H. Liddell Hart in the 1920s erroneously attributed to Clausewitz the doctrine of "total war" that during the First World War had been embraced by many European general staffs and emulated by the British. More recent scholars typically see that war as so confused in terms of political rationale that it in fact contradicts much of On War. That view assumes, however, a set of values as to what constitutes "rational" political objectives—in this case, values not shaped by the fervid Social Darwinism that was rife in 1914 Europe. One of the most influential British Clausewitzians today is Colin S. Gray; historian Hew Strachan (like Wilkinson also the Chichele Professor of Military History at Oxford University, since 2001) has been an energetic proponent of the study of Clausewitz, but his own views on Clausewitz's ideas are somewhat ambivalent.
With some interesting exceptions (e.g., John McAuley Palmer, Robert M. Johnston, Hoffman Nickerson), Clausewitz had little influence on American military thought before 1945 other than via British writers, though Generals Eisenhower and Patton were avid readers of English translations. He did influence Karl Marx, Friedrich Engels, Vladimir Lenin, Leon Trotsky, Võ Nguyên Giáp, Ferdinand Foch, and Mao Zedong, and thus the Communist Soviet and Chinese traditions, as Lenin emphasized the inevitability of wars among capitalist states in the age of imperialism and presented the armed struggle of the working class as the only path toward the eventual elimination of war. Because Lenin was an admirer of Clausewitz and called him "one of the great military writers," his influence on the Red Army was immense. The Russian historian A.N. Mertsalov commented that "It was an irony of fate that the view in the USSR was that it was Lenin who shaped the attitude towards Clausewitz, and that Lenin's dictum that war is a continuation of politics is taken from the work of this [allegedly] anti-humanist anti-revolutionary." The American mathematician Anatol Rapoport wrote in 1968 that Clausewitz as interpreted by Lenin formed the basis of all Soviet military thinking since 1917, and quoted the remarks by Marshal V.D. Sokolovsky:
In describing the essence of war, Marxism-Leninism takes as its point of departure the premise that war is not an aim in itself, but rather a tool of politics. In his remarks on Clausewitz's On War, Lenin stressed that "Politics is the reason, and war is only the tool, not the other way around. Consequently, it remains only to subordinate the military point of view to the political."
Henry A. Kissinger, however, described Lenin's approach as being that politics is a continuation of war by other means, thus turning Clausewitz's argument "on its head."
Rapoport argued that:
As for Lenin's approval of Clausewitz, it probably stems from his obsession with the struggle for power. The whole Marxist conception of history is that of successive struggles for power, primarily between social classes. This was constantly applied by Lenin in a variety of contexts. Thus the entire history of philosophy appears in Lenin's writings as a vast struggle between "idealism" and "materialism." The fate of the socialist movement was to be decided by a struggle between the revolutionists and the reformers. Clausewitz's acceptance of the struggle for power as the essence of international politics must have impressed Lenin as starkly realistic.
Clausewitz directly influenced Mao Zedong, who read On War in 1938 and organised a seminar on Clausewitz for the Party leadership in Yan'an. Thus the "Clausewitzian" content in many of Mao's writings is not merely a regurgitation of Lenin but reflects Mao's own study. The idea that war involves inherent "friction" that distorts, to a greater or lesser degree, all prior arrangements, has become common currency in fields such as business strategy and sport. The phrase fog of war derives from Clausewitz's stress on how confused warfare can seem while one is immersed within it. The term center of gravity, used in a military context, derives from Clausewitz's usage, which he took from Newtonian mechanics. In U.S. military doctrine, "center of gravity" refers to the basis of an opponent's power at the operational, strategic, or political level, though this is only one aspect of Clausewitz's use of the term.
The deterrence strategy of the United States in the 1950s was closely inspired by President Dwight Eisenhower's reading of Clausewitz as a young officer in the 1920s. Eisenhower was greatly impressed by Clausewitz's example of a theoretical, idealized "absolute war" in Vom Kriege as a way of demonstrating how absurd it would be to attempt such a strategy in practice. For Eisenhower, the age of nuclear weapons had made what was for Clausewitz in the early 19th century only a theoretical vision an all too real possibility in the mid-20th century. From Eisenhower's viewpoint, the best deterrent to war was to show the world just how appalling and horrific a nuclear "absolute war" would be if it should ever occur, hence a series of much publicized nuclear tests in the Pacific, giving first priority in the defense budget to nuclear weapons and delivery systems over conventional weapons, and making repeated statements in public that the United States was able and willing at all times to use nuclear weapons. In this way, through the massive retaliation doctrine and the closely related foreign policy concept of brinkmanship, Eisenhower hoped to hold out a credible vision of Clausewitzian nuclear "absolute war" in order to deter the Soviet Union and/or China from ever risking a war or even conditions that might lead to a war with the United States.
...Philanthropists may easily imagine there is a skillful method of disarming and overcoming an enemy without causing great bloodshed, and that this is the proper tendency of the art of War. However plausible this may appear, still it is an error which must be extirpated; for in such dangerous things as war, the errors which proceed from a spirit of benevolence are just the worst. As the use of physical power to the utmost extent by no means excludes the co-operation of the intelligence, it follows that he who uses force unsparingly, without reference to the quantity of bloodshed, must obtain a superiority if his adversary does not act likewise. By such means the former dictates the law to the latter, and both proceed to extremities, to which the only limitations are those imposed by the amount of counteracting force on each side.
After 1970, some theorists claimed that nuclear proliferation made Clausewitzian concepts obsolete after the 20th-century period in which they dominated the world. John E. Sheppard, Jr., argues that by developing nuclear weapons, state-based conventional armies simultaneously perfected their original purpose, to destroy a mirror image of themselves, and made themselves obsolete. No two powers have used nuclear weapons against each other, instead using diplomacy, conventional means, or proxy wars to settle disputes. If such a conflict did occur, presumably both combatants would be annihilated. Heavily influenced by the war in Vietnam and by antipathy to American strategist Henry Kissinger, the American biologist, musician, and game-theorist Anatol Rapoport argued in 1968 that a Clausewitzian view of war was not only obsolete in the age of nuclear weapons, but also highly dangerous as it promoted a "zero-sum paradigm" in international relations and a "dissolution of rationality" amongst decision-makers.
The end of the 20th century and the beginning of the 21st century have seen many instances of state armies attempting to suppress insurgencies, terrorism, and other forms of asymmetrical warfare. Clausewitz did not focus solely on wars between countries with well-defined armies. The era of the French Revolution and Napoleon was full of revolutions, rebellions, and violence by "non-state actors," such as the wars in the French Vendée and in Spain. Clausewitz wrote a series of "Lectures on Small War" and studied the rebellion in the Vendée (1793–1796) and the Tyrolean uprising of 1809. In his famous "Bekenntnisdenkschrift" of 1812, he called for a "Spanish war in Germany" and laid out a comprehensive guerrilla strategy to be waged against Napoleon. In On War he included a famous chapter on "The People in Arms."
One prominent critic of Clausewitz is the Israeli military historian Martin van Creveld. In his book The Transformation of War, Creveld argued that Clausewitz's famous "Trinity" of people, army, and government was an obsolete socio-political construct based on the state, which was rapidly passing from the scene as the key player in war, and that he (Creveld) had constructed a new "non-trinitarian" model for modern warfare. Creveld's work has had great influence. Daniel Moran replied, "The most egregious misrepresentation of Clausewitz's famous metaphor must be that of Martin van Creveld, who has declared Clausewitz to be an apostle of Trinitarian War, by which he means, incomprehensibly, a war of 'state against state and army against army,' from which the influence of the people is entirely excluded." Christopher Bassford went further, noting that one need only read the paragraph in which Clausewitz defined his Trinity to see "that the words 'people,' 'army,' and 'government' appear nowhere at all in the list of the Trinity's components.... Creveld's and Keegan's assault on Clausewitz's Trinity is not only a classic 'blow into the air,' i.e., an assault on a position Clausewitz doesn't occupy. It is also a pointless attack on a concept that is quite useful in its own right. In any case, their failure to read the actual wording of the theory they so vociferously attack, and to grasp its deep relevance to the phenomena they describe, is hard to credit."
Some have gone further and suggested that Clausewitz's best-known aphorism, that war is a continuation of policy with other means, is not only irrelevant today but also inapplicable historically. For an opposing view see the sixteen essays presented in Clausewitz in the Twenty-First Century edited by Hew Strachan and Andreas Herberg-Rothe.
In military academies, schools, and universities worldwide, Clausewitz's Vom Kriege is often (usually in translation) mandatory reading.
August Otto Rühle von Lilienstern – Prussian officer from whom Clausewitz allegedly took, without acknowledgement, several important ideas (including the idea of war as the pursuit of political aims) made famous in On War. However, a substantial basis exists for assuming common influences, most prominently Scharnhorst, who was Clausewitz's "second father" and professional mentor; this invites skepticism toward the claim that the ideas were plagiarized from Lilienstern.
Informational notes
Citations | [
{
"paragraph_id": 0,
"text": "Carl Philipp Gottfried (or Gottlieb) von Clausewitz (German pronunciation: [ˌkaʁl fɔn ˈklaʊ̯zəvɪt͡s] ; 1 June 1780 – 16 November 1831) was a Prussian general and military theorist who stressed the \"moral\" (in modern terms meaning psychological) and political aspects of waging war. His most notable work, Vom Kriege (\"On War\"), though unfinished at his death, is considered a seminal treatise on military strategy and science.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Clausewitz was a realist in many different senses, including realpolitik, and while in some respects a romantic, he also drew heavily on the rationalist ideas of the European Enlightenment.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Clausewitz stressed the dialectical interaction of diverse factors, noting how unexpected developments unfolding under the \"fog of war\" (i.e., in the face of incomplete, dubious, and often erroneous information and great fear, doubt, and excitement) call for rapid decisions by alert commanders. He saw history as a vital check on erudite abstractions that did not accord with experience. In contrast to the early work of Antoine-Henri Jomini, he argued that war could not be quantified or reduced to mapwork, geometry, and graphs. Clausewitz had many aphorisms, of which the most famous is \"War is the continuation of policy with other means.\" (often misquoted as \"... by other means\").",
"title": ""
},
{
"paragraph_id": 3,
"text": "Clausewitz's Christian names are sometimes given in non-German sources as \"Karl\", \"Carl Philipp Gottlieb\", or \"Carl Maria\". He spelled his own given name with a \"C\" in order to identify with the classical Western tradition; writers who use \"Karl\" are often seeking to emphasize their German (rather than European) identity. \"Carl Philipp Gottfried\" appears on Clausewitz's tombstone. Nonetheless, sources such as military historian Peter Paret and Encyclopædia Britannica continue to use Gottlieb instead of Gottfried.",
"title": "Name"
},
{
"paragraph_id": 4,
"text": "Clausewitz was born on 1 June 1780 in Burg bei Magdeburg in the Prussian Duchy of Magdeburg as the fourth and youngest son of a family that made claims to a noble status which Carl accepted. Clausewitz's family claimed descent from the Barons of Clausewitz in Upper Silesia, though scholars question the connection. His grandfather, the son of a Lutheran pastor, had been a professor of theology. Clausewitz's father, once a lieutenant in the army of Frederick the Great, King of Prussia, held a minor post in the Prussian internal-revenue service. Clausewitz entered the Prussian military service at the age of twelve as a lance corporal, eventually attaining the rank of major general.",
"title": "Life and military career"
},
{
"paragraph_id": 5,
"text": "Clausewitz served in the Rhine campaigns (1793–1794) including the siege of Mainz, when the Prussian Army invaded France during the French Revolution, and fought in the Napoleonic Wars from 1806 to 1815. He entered the Kriegsakademie (also cited as \"The German War School\", the \"Military Academy in Berlin\", and the \"Prussian Military Academy,\" later the \"War College\") in Berlin in 1801 (aged 21), probably studied the writings of the philosophers Immanuel Kant and/or Johann Gottlieb Fichte and Friedrich Schleiermacher and won the regard of General Gerhard von Scharnhorst, the future first chief-of-staff of the newly reformed Prussian Army (appointed 1809). Clausewitz, Hermann von Boyen (1771–1848) and Karl von Grolman (1777–1843) were among Scharnhorst's primary allies in his efforts to reform the Prussian army between 1807 and 1814.",
"title": "Life and military career"
},
{
"paragraph_id": 6,
"text": "Clausewitz served during the Jena Campaign as aide-de-camp to Prince August. At the Battle of Jena-Auerstedt on 14 October 1806—when Napoleon invaded Prussia and defeated the Prussian-Saxon army commanded by Karl Wilhelm Ferdinand, Duke of Brunswick—he was captured, one of the 25,000 prisoners taken that day as the Prussian army disintegrated. He was 26. Clausewitz was held prisoner with his prince in France from 1807 to 1808. Returning to Prussia, he assisted in the reform of the Prussian army and state. Johann Gottlieb Fichte wrote On Machiavelli, as an Author, and Passages from His Writings in June 1807. (\"Über Machiavell, als Schriftsteller, und Stellen aus seinen Schriften\" ). Carl Clausewitz wrote an interesting and anonymous Letter to Fichte (1809) about his book on Machiavelli. The letter was published in Fichte's Verstreute kleine Schriften 157–166. For an English translation of the letter see Carl von Clausewitz Historical and Political Writings Edited by: Peter Paret and D. Moran (1992).",
"title": "Life and military career"
},
{
"paragraph_id": 7,
"text": "On 10 December 1810, he married the socially prominent Countess Marie von Brühl, whom he had first met in 1803. She was a member of the noble German Brühl family originating in Thuringia. The couple moved in the highest circles, socialising with Berlin's political, literary, and intellectual élite. Marie was well-educated and politically well-connected—she played an important role in her husband's career progress and intellectual evolution. She also edited, published, and introduced his collected works.",
"title": "Life and military career"
},
{
"paragraph_id": 8,
"text": "Opposed to Prussia's enforced alliance with Napoleon, Clausewitz left the Prussian army and served in the Imperial Russian Army from 1812 to 1813 during the Russian campaign, taking part in the Battle of Borodino (1812). Like many Prussian officers serving in Russia, he joined the Russian–German Legion in 1813. In the service of the Russian Empire, Clausewitz helped negotiate the Convention of Tauroggen (1812), which prepared the way for the coalition of Prussia, Russia, and the United Kingdom that ultimately defeated Napoleon and his allies.",
"title": "Life and military career"
},
{
"paragraph_id": 9,
"text": "In 1815 the Russian-German Legion became integrated into the Prussian Army and Clausewitz re-entered Prussian service as a colonel. He was soon appointed chief-of-staff of Johann von Thielmann's III Corps. In that capacity he served at the Battle of Ligny and the Battle of Wavre during the Waterloo campaign in 1815. An army led personally by Napoleon defeated the Prussians at Ligny (south of Mont-Saint-Jean and the village of Waterloo) on 16 June 1815, but they withdrew in good order. Napoleon's failure to destroy the Prussian forces led to his defeat a few days later at the Battle of Waterloo (18 June 1815), when the Prussian forces arrived on his right flank late in the afternoon to support the Anglo-Dutch-Belgian forces pressing his front. Napoleon had convinced his troops that the field grey uniforms were those of Marshal Grouchy's grenadiers. Clausewitz's unit fought heavily outnumbered at Wavre (18–19 June 1815), preventing large reinforcements from reaching Napoleon at Waterloo. After the war, Clausewitz served as the director of the Kriegsakademie, where he served until 1830. In that year he returned to active duty with the army. Soon afterward, the outbreak of several revolutions around Europe and a crisis in Poland appeared to presage another major European war. Clausewitz was appointed chief of staff of the only army Prussia was able to mobilise in this emergency, which was sent to the Polish border. Its commander, Gneisenau, died of cholera (August 1831), and Clausewitz took command of the Prussian army's efforts to construct a cordon sanitaire to contain the great cholera outbreak (the first time cholera had appeared in modern heartland Europe, causing a continent-wide panic). Clausewitz himself died of the same disease shortly afterwards, on 16 November 1831.",
"title": "Life and military career"
},
{
"paragraph_id": 10,
"text": "His widow edited, published, and wrote the introduction to his magnum opus on the philosophy of war in 1832. (He had started working on the text in 1816 but had not completed it.) She wrote the preface for On War and had published most of his collected works by 1835. She died in January 1836.",
"title": "Life and military career"
},
{
"paragraph_id": 11,
"text": "Clausewitz was a professional combat soldier who was involved in numerous military campaigns, but he is famous primarily as a military theorist interested in the examination of war, utilising the campaigns of Frederick the Great and Napoleon as frames of reference for his work. He wrote a careful, systematic, philosophical examination of war in all its aspects. The result was his principal book, On War, a major work on the philosophy of war. It was unfinished when Clausewitz died and contains material written at different stages in his intellectual evolution, producing some significant contradictions between different sections. The sequence and precise character of that evolution is a source of much debate as to the exact meaning behind some seemingly contradictory observations in discussions pertinent to the tactical, operational and strategic levels of war, for example (though many of these apparent contradictions are simply the result of his dialectical method). Clausewitz constantly sought to revise the text, particularly between 1827 and his departure on his last field assignments, to include more material on \"people's war\" and forms of war other than high-intensity warfare between states, but relatively little of this material was included in the book. Soldiers before this time had written treatises on various military subjects, but none had undertaken a great philosophical examination of war on the scale of those written by Clausewitz and Leo Tolstoy, both of whom were inspired by the events of the Napoleonic Era.",
"title": "Theory of war"
},
{
"paragraph_id": 12,
"text": "Clausewitz's work is still studied today, demonstrating its continued relevance. More than sixteen major English-language books that focused specifically on his work were published between 2005 and 2014, whereas his 19th-century rival Jomini has faded from influence. The historian Lynn Montross said that this outcome \"may be explained by the fact that Jomini produced a system of war, Clausewitz a philosophy. The one has been outdated by new weapons, the other still influences the strategy behind those weapons.\" Jomini did not attempt to define war but Clausewitz did, providing (and dialectically comparing) a number of definitions. The first is his dialectical thesis: \"War is thus an act of force to compel our enemy to do our will.\" The second, often treated as Clausewitz's 'bottom line,' is in fact merely his dialectical antithesis: \"War is merely the continuation of policy with other means.\" The synthesis of his dialectical examination of the nature of war is his famous \"trinity,\" saying that war is \"a fascinating trinity—composed of primordial violence, hatred, and enmity, which are to be regarded as a blind natural force; the play of chance and probability, within which the creative spirit is free to roam; and its element of subordination, as an instrument of policy, which makes it subject to pure reason.\" Christopher Bassford says the best shorthand for Clausewitz's trinity should be something like \"violent emotion/chance/rational calculation.\" However, it is frequently presented as \"people/army/government,\" a misunderstanding based on a later paragraph in the same section. This misrepresentation was popularised by U.S. Army Colonel Harry Summers' Vietnam-era interpretation, facilitated by weaknesses in the 1976 Howard/Paret translation.",
"title": "Theory of war"
},
{
"paragraph_id": 13,
"text": "The degree to which Clausewitz managed to revise his manuscript to reflect that synthesis is the subject of much debate. His final reference to war and Politik, however, goes beyond his widely quoted antithesis: \"War is simply the continuation of political intercourse with the addition of other means. We deliberately use the phrase 'with the addition of other means' because we also want to make it clear that war in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs. The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace.\"",
"title": "Theory of war"
},
{
"paragraph_id": 14,
"text": "A prince or general who knows exactly how to organise his war according to his object and means, who does neither too little nor too much, gives by that the greatest proof of his genius. But the effects of this talent are exhibited not so much by the invention of new modes of action, which might strike the eye immediately, as in the successful final result of the whole. It is the exact fulfilment of silent suppositions, it is the noiseless harmony of the whole action which we should admire, and which only makes itself known in the total result.",
"title": "Theory of war"
},
{
"paragraph_id": 15,
"text": "Clausewitz introduced systematic philosophical contemplation into Western military thinking, with powerful implications not only for historical and analytical writing but also for practical policy, military instruction, and operational planning. He relied on his own experiences, contemporary writings about Napoleon, and on deep historical research. His historiographical approach is evident in his first extended study, written when he was 25, of the Thirty Years' War. He rejects the Enlightenment's view of the war as a chaotic muddle and instead explains its drawn-out operations by the economy and technology of the age, the social characteristics of the troops, and the commanders' politics and psychology. In On War, Clausewitz sees all wars as the sum of decisions, actions, and reactions in an uncertain and dangerous context, and also as a socio-political phenomenon. He also stressed the complex nature of war, which encompasses both the socio-political and the operational and stresses the primacy of state policy. (One should be careful not to limit his observations on war to war between states, however, as he certainly discusses other kinds of protagonists).",
"title": "Theory of war"
},
{
"paragraph_id": 16,
"text": "The word \"strategy\" had only recently come into usage in modern Europe, and Clausewitz's definition is quite narrow: \"the use of engagements for the object of war\" (which many today would call \"the operational level\" of war). Clausewitz conceived of war as a political, social, and military phenomenon which might—depending on circumstances—involve the entire population of a political entity at war. In any case, Clausewitz saw military force as an instrument that states and other political actors use to pursue the ends of their policy, in a dialectic between opposing wills, each with the aim of imposing his policies and will upon his enemy.",
"title": "Theory of war"
},
{
"paragraph_id": 17,
"text": "Clausewitz's emphasis on the inherent superiority of the defense suggests that habitual aggressors are likely to end up as failures. The inherent superiority of the defense obviously does not mean that the defender will always win, however: there are other asymmetries to be considered. He was interested in co-operation between the regular army and militia or partisan forces, or citizen soldiers, as one possible—sometimes the only—method of defense. In the circumstances of the Wars of the French Revolution and those with Napoleon, which were energised by a rising spirit of nationalism, he emphasised the need for states to involve their entire populations in the conduct of war. This point is especially important, as these wars demonstrated that such energies could be of decisive importance and for a time led to a democratisation of the armed forces much as universal suffrage democratised politics.",
"title": "Theory of war"
},
{
"paragraph_id": 18,
"text": "While Clausewitz was intensely aware of the value of intelligence at all levels, he was also very skeptical of the accuracy of much military intelligence: \"Many intelligence reports in war are contradictory; even more are false, and most are uncertain.... In short, most intelligence is false.\" This circumstance is generally described as part of the fog of war. Such skeptical comments apply only to intelligence at the tactical and operational levels; at the strategic and political levels he constantly stressed the requirement for the best possible understanding of what today would be called strategic and political intelligence. His conclusions were influenced by his experiences in the Prussian Army, which was often in an intelligence fog due partly to the superior abilities of Napoleon's system but even more simply to the nature of war. Clausewitz acknowledges that friction creates enormous difficulties for the realization of any plan, and the fog of war hinders commanders from knowing what is happening. It is precisely in the context of this challenge that he develops the concept of military genius, whose capabilities are seen above all in the execution of operations. 'Military genius' is not simply a matter of intellect, but a combination of qualities of intellect, experience, personality, and temperament (and there are many possible such combinations) that create a very highly developed mental aptitude for the waging of war.",
"title": "Theory of war"
},
{
"paragraph_id": 19,
"text": "Key ideas discussed in On War include:",
"title": "Theory of war"
},
{
"paragraph_id": 20,
"text": "Clausewitz used a dialectical method to construct his argument, leading to frequent misinterpretation of his ideas. British military theorist B. H. Liddell Hart contends that the enthusiastic acceptance by the Prussian military establishment—especially Moltke the Elder, a former student of Clausewitz —of what they believed to be Clausewitz's ideas, and the subsequent widespread adoption of the Prussian military system worldwide, had a deleterious effect on military theory and practice, due to their egregious misinterpretation of his ideas:",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 21,
"text": "As so often happens, Clausewitz's disciples carried his teaching to an extreme which their master had not intended.... [Clausewitz's] theory of war was expounded in a way too abstract and involved for ordinary soldier-minds, essentially concrete, to follow the course of his argument—which often turned back from the direction in which it was apparently leading. Impressed yet befogged, they grasped at his vivid leading phrases, seeing only their surface meaning, and missing the deeper current of his thought.",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 22,
"text": "As described by Christopher Bassford, then-professor of strategy at the National War College of the United States:",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 23,
"text": "One of the main sources of confusion about Clausewitz's approach lies in his dialectical method of presentation. For example, Clausewitz's famous line that \"War is the continuation of policy with other means,\" (\"Der Krieg ist eine bloße Fortsetzung der Politik mit anderen Mitteln\") while accurate as far as it goes, was not intended as a statement of fact. It is the antithesis in a dialectical argument whose thesis is the point—made earlier in the analysis—that \"war is nothing but a duel [or wrestling match, the extended metaphor in which that discussion was embedded] on a larger scale.\" His synthesis, which resolves the deficiencies of these two bold statements, says that war is neither \"nothing but\" an act of brute force nor \"merely\" a rational act of politics or policy. This synthesis lies in his \"fascinating trinity\" [wunderliche Dreifaltigkeit]: a dynamic, inherently unstable interaction of the forces of violent emotion, chance, and rational calculation.",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 24,
"text": "Another example of this confusion is the idea that Clausewitz was a proponent of total war as used in the Third Reich's propaganda in the 1940s. In fact, Clausewitz never used the term \"total war\": rather, he discussed \"absolute war,\" a concept which evolved into the much more abstract notion of \"ideal war\" discussed at the very beginning of Vom Kriege—the purely logical result of the forces underlying a \"pure,\" Platonic \"ideal\" of war. In what he called a \"logical fantasy,\" war cannot be waged in a limited way: the rules of competition will force participants to use all means at their disposal to achieve victory. But in the real world, he said, such rigid logic is unrealistic and dangerous. As a practical matter, the military objectives in real war that support political objectives generally fall into two broad types: limited aims or the effective \"disarming\" of the enemy \"to render [him] politically helpless or militarily impotent. Thus, the complete defeat of the enemy may not be necessary, desirable, or even possible.",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 25,
"text": "In modern times the reconstruction of Clausewitzian theory has been a matter of much dispute. One analysis was that of Panagiotis Kondylis, a Greek writer and philosopher, who opposed the interpretations of Raymond Aron in Penser la Guerre, Clausewitz, and other liberal writers. According to Aron, Clausewitz was one of the first writers to condemn the militarism of the Prussian general staff and its war-proneness, based on Clausewitz's argument that \"war is a continuation of policy by other means.\" In Theory of War, Kondylis claims that this is inconsistent with Clausewitzian thought. He claims that Clausewitz was morally indifferent to war (though this probably reflects a lack of familiarity with personal letters from Clausewitz, which demonstrate an acute awareness of war's tragic aspects) and that his advice regarding politics' dominance over the conduct of war has nothing to do with pacifist ideas.",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 26,
"text": "Other notable writers who have studied Clausewitz's texts and translated them into English are historians Peter Paret of the Institute for Advanced Study and Sir Michael Howard. Howard and Paret edited the most widely used edition of On War (Princeton University Press, 1976/1984) and have produced comparative studies of Clausewitz and other theorists, such as Tolstoy. Bernard Brodie's A Guide to the Reading of \"On War,\" in the 1976 Princeton translation, expressed his interpretations of the Prussian's theories and provided students with an influential synopsis of this vital work. The 1873 translation by Colonel James John Graham was heavily—and controversially—edited by the philosopher, musician, and game theorist Anatol Rapoport.",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 27,
"text": "The British military historian John Keegan attacked Clausewitz's theory in his book A History of Warfare. Keegan argued that Clausewitz assumed the existence of states, yet 'war antedates the state, diplomacy and strategy by many millennia.'",
"title": "Interpretation and misinterpretation"
},
{
"paragraph_id": 28,
"text": "Clausewitz died without completing Vom Kriege, but despite this his ideas have been widely influential in military theory and have had a strong influence on German military thought specifically. Later Prussian and German generals, such as Helmuth Graf von Moltke, were clearly influenced by Clausewitz: Moltke's widely quoted statement that \"No operational plan extends with high certainty beyond the first encounter with the main enemy force\" is a classic reflection of Clausewitz's insistence on the roles of chance, friction, \"fog,\" uncertainty, and interactivity in war.",
"title": "Influence"
},
{
"paragraph_id": 29,
"text": "Clausewitz's influence spread to British thinking as well, though at first more as a historian and analyst than as a theorist. See for example Wellington's extended essay discussing Clausewitz's study of the Campaign of 1815—Wellington's only serious written discussion of the battle, which was widely discussed in 19th-century Britain. Clausewitz's broader thinking came to the fore following Britain's military embarrassments in the Boer War (1899–1902). One example of a heavy Clausewitzian influence in that era is Spenser Wilkinson, journalist, the first Chichele Professor of Military History at Oxford University, and perhaps the most prominent military analyst in Britain from c. 1885 until well into the interwar period. Another is naval historian Julian Corbett (1854–1922), whose work reflected a deep if idiosyncratic adherence to Clausewitz's concepts and frequently an emphasis on Clausewitz's ideas about 'limited objectives' and the inherent strengths of the defensive form of war. Corbett's practical strategic views were often in prominent public conflict with Wilkinson's—see, for example, Wilkinson's article \"Strategy at Sea\", The Morning Post, 12 February 1912. Following the First World War, however, the influential British military commentator B. H. Liddell Hart in the 1920s erroneously attributed to him the doctrine of \"total war\" that during the First World War had been embraced by many European general staffs and emulated by the British. More recent scholars typically see that war as so confused in terms of political rationale that it in fact contradicts much of On War. That view assumes, however, a set of values as to what constitutes \"rational\" political objectives—in this case, values not shaped by the fervid Social Darwinism that was rife in 1914 Europe. One of the most influential British Clausewitzians today is Colin S. Gray; historian Hew Strachan (like Wilkinson also the Chichele Professor of Military History at Oxford University, since 2001) has been an energetic proponent of the study of Clausewitz, but his own views on Clausewitz's ideas are somewhat ambivalent.",
"title": "Influence"
},
{
"paragraph_id": 30,
"text": "With some interesting exceptions (e.g., John McAuley Palmer, Robert M. Johnston, Hoffman Nickerson), Clausewitz had little influence on American military thought before 1945 other than via British writers, though Generals Eisenhower and Patton were avid readers of English translations. He did influence Karl Marx, Friedrich Engels, Vladimir Lenin, Leon Trotsky, Võ Nguyên Giáp, Ferdinand Foch, and Mao Zedong, and thus the Communist Soviet and Chinese traditions, as Lenin emphasized the inevitability of wars among capitalist states in the age of imperialism and presented the armed struggle of the working class as the only path toward the eventual elimination of war. Because Lenin was an admirer of Clausewitz and called him \"one of the great military writers,\" his influence on the Red Army was immense. The Russian historian A.N. Mertsalov commented that \"It was an irony of fate that the view in the USSR was that it was Lenin who shaped the attitude towards Clausewitz, and that Lenin's dictum that war is a continuation of politics is taken from the work of this [allegedly] anti-humanist anti-revolutionary.\" The American mathematician Anatol Rapoport wrote in 1968 that Clausewitz as interpreted by Lenin formed the basis of all Soviet military thinking since 1917, and quoted the remarks by Marshal V.D. Sokolovsky:",
"title": "Influence"
},
{
"paragraph_id": 31,
"text": "In describing the essence of war, Marxism-Leninism takes as its point of departure the premise that war is not an aim in itself, but rather a tool of politics. In his remarks on Clausewitz's On War, Lenin stressed that \"Politics is the reason, and war is only the tool, not the other way around. Consequently, it remains only to subordinate the military point of view to the political.\"",
"title": "Influence"
},
{
"paragraph_id": 32,
"text": "Henry A. Kissinger, however, described Lenin's approach as being that politics is a continuation of war by other means, thus turning Clausewitz's argument \"on its head.\"",
"title": "Influence"
},
{
"paragraph_id": 33,
"text": "Rapoport argued that:",
"title": "Influence"
},
{
"paragraph_id": 34,
"text": "As for Lenin's approval of Clausewitz, it probably stems from his obsession with the struggle for power. The whole Marxist conception of history is that of successive struggles for power, primarily between social classes. This was constantly applied by Lenin in a variety of contexts. Thus the entire history of philosophy appears in Lenin's writings as a vast struggle between \"idealism\" and \"materialism.\" The fate of the socialist movement was to be decided by a struggle between the revolutionists and the reformers. Clausewitz's acceptance of the struggle for power as the essence of international politics must have impressed Lenin as starkly realistic.",
"title": "Influence"
},
{
"paragraph_id": 35,
"text": "Clausewitz directly influenced Mao Zedong, who read On War in 1938 and organised a seminar on Clausewitz for the Party leadership in Yan'an. Thus the \"Clausewitzian\" content in many of Mao's writings is not merely a regurgitation of Lenin but reflects Mao's own study. The idea that war involves inherent \"friction\" that distorts, to a greater or lesser degree, all prior arrangements, has become common currency in fields such as business strategy and sport. The phrase fog of war derives from Clausewitz's stress on how confused warfare can seem while one is immersed within it. The term center of gravity, used in a military context derives from Clausewitz's usage, which he took from Newtonian mechanics. In U.S. military doctrine, \"center of gravity\" refers to the basis of an opponent's power at the operational, strategic, or political level, though this is only one aspect of Clausewitz's use of the term.",
"title": "Influence"
},
{
"paragraph_id": 36,
"text": "The deterrence strategy of the United States in the 1950s was closely inspired by President Dwight Eisenhower's reading of Clausewitz as a young officer in the 1920s. Eisenhower was greatly impressed by Clausewitz's example of a theoretical, idealized \"absolute war\" in Vom Kriege as a way of demonstrating how absurd it would be to attempt such a strategy in practice. For Eisenhower, the age of nuclear weapons had made what was for Clausewitz in the early 19th century only a theoretical vision an all too real possibility in the mid-20th century. From Eisenhower's viewpoint, the best deterrent to war was to show the world just how appalling and horrific a nuclear \"absolute war\" would be if it should ever occur, hence a series of much publicized nuclear tests in the Pacific, giving first priority in the defense budget to nuclear weapons and delivery systems over conventional weapons, and making repeated statements in public that the United States was able and willing at all times to use nuclear weapons. In this way, through the massive retaliation doctrine and the closely related foreign policy concept of brinkmanship, Eisenhower hoped to hold out a credible vision of Clausewitzian nuclear \"absolute war\" in order to deter the Soviet Union and/or China from ever risking a war or even conditions that might lead to a war with the United States.",
"title": "Influence"
},
{
"paragraph_id": 37,
"text": "...Philanthropists may easily imagine there is a skillful method of disarming and overcoming an enemy without causing great bloodshed, and that this is the proper tendency of the art of War. However plausible this may appear, still it is an error which must be extirpated; for in such dangerous things as war, the errors which proceed from a spirit of benevolence are just the worst. As the use of physical power to the utmost extent by no means excludes the co-operation of the intelligence, it follows that he who uses force unsparingly, without reference to the quantity of bloodshed, must obtain a superiority if his adversary does not act likewise. By such means the former dictates the law to the latter, and both proceed to extremities, to which the only limitations are those imposed by the amount of counteracting force on each side.",
"title": "Influence"
},
{
"paragraph_id": 38,
"text": "After 1970, some theorists claimed that nuclear proliferation made Clausewitzian concepts obsolete after the 20th-century period in which they dominated the world. John E. Sheppard, Jr., argues that by developing nuclear weapons, state-based conventional armies simultaneously both perfected their original purpose, to destroy a mirror image of themselves, and made themselves obsolete. No two powers have used nuclear weapons against each other, instead using diplomacy, conventional means, or proxy wars to settle disputes. If such a conflict did occur, presumably both combatants would be annihilated. Heavily influenced by the war in Vietnam and by antipathy to American strategist Henry Kissinger, the American biologist, musician, and game-theorist Anatol Rapoport argued in 1968 that a Clausewitzian view of war was not only obsolete in the age of nuclear weapons, but also highly dangerous as it promoted a \"zero-sum paradigm\" to international relations and a \"dissolution of rationality\" amongst decision-makers.",
"title": "Influence"
},
{
"paragraph_id": 39,
"text": "The end of the 20th century and the beginning of the 21st century have seen many instances of state armies attempting to suppress insurgencies, terrorism, and other forms of asymmetrical warfare. Clausewitz did not focus solely on wars between countries with well-defined armies. The era of the French Revolution and Napoleon was full of revolutions, rebellions, and violence by \"non-state actors,\" such as the wars in the French Vendée and in Spain. Clausewitz wrote a series of \"Lectures on Small War\" and studied the rebellion in the Vendée (1793–1796) and the Tyrolean uprising of 1809. In his famous \"Bekenntnisdenkschrift\" of 1812, he called for a \"Spanish war in Germany\" and laid out a comprehensive guerrilla strategy to be waged against Napoleon. In On War he included a famous chapter on \"The People in Arms.\"",
"title": "Influence"
},
{
"paragraph_id": 40,
"text": "One prominent critic of Clausewitz is the Israeli military historian Martin van Creveld. In his book The Transformation of War, Creveld argued that Clausewitz's famous \"Trinity\" of people, army, and government was an obsolete socio-political construct based on the state, which was rapidly passing from the scene as the key player in war, and that he (Creveld) had constructed a new \"non-trinitarian\" model for modern warfare. Creveld's work has had great influence. Daniel Moran replied, 'The most egregious misrepresentation of Clausewitz's famous metaphor must be that of Martin van Creveld, who has declared Clausewitz to be an apostle of Trinitarian War, by which he means, incomprehensibly, a war of 'state against state and army against army,' from which the influence of the people is entirely excluded.\" Christopher Bassford went further, noting that one need only read the paragraph in which Clausewitz defined his Trinity to see \"that the words 'people,' 'army,' and 'government' appear nowhere at all in the list of the Trinity's components.... Creveld's and Keegan's assault on Clausewitz's Trinity is not only a classic 'blow into the air,' i.e., an assault on a position Clausewitz doesn't occupy. It is also a pointless attack on a concept that is quite useful in its own right. In any case, their failure to read the actual wording of the theory they so vociferously attack, and to grasp its deep relevance to the phenomena they describe, is hard to credit.\"",
"title": "Influence"
},
{
"paragraph_id": 41,
"text": "Some have gone further and suggested that Clausewitz's best-known aphorism, that war is a continuation of policy with other means, is not only irrelevant today but also inapplicable historically. For an opposing view see the sixteen essays presented in Clausewitz in the Twenty-First Century edited by Hew Strachan and Andreas Herberg-Rothe.",
"title": "Influence"
},
{
"paragraph_id": 42,
"text": "In military academies, schools, and universities worldwide, Clausewitz's Vom Kriege is often (usually in translation) mandatory reading.",
"title": "Influence"
},
{
"paragraph_id": 43,
"text": "August Otto Rühle von Lilienstern – Prussian officer from whom Clausewitz allegedly took, without acknowledgement, several important ideas (including that about war as pursuing political aims) made famous in On War. However, substantial basis for assuming common influences exist, most prominently Scharnhorst, who was Clausewitz's \"second father\" and professional mentor. This provokes skepticism of the claim the ideas were plagiarized from Lilienstern.",
"title": "See also"
},
{
"paragraph_id": 44,
"text": "Informational notes",
"title": "References"
},
{
"paragraph_id": 45,
"text": "Citations",
"title": "References"
}
]
| Carl Philipp Gottfried von Clausewitz was a Prussian general and military theorist who stressed the "moral" and political aspects of waging war. His most notable work, Vom Kriege, though unfinished at his death, is considered a seminal treatise on military strategy and science. Clausewitz was a realist in many different senses, including realpolitik, and while in some respects a romantic, he also drew heavily on the rationalist ideas of the European Enlightenment. Clausewitz stressed the dialectical interaction of diverse factors, noting how unexpected developments unfolding under the "fog of war" call for rapid decisions by alert commanders. He saw history as a vital check on erudite abstractions that did not accord with experience. In contrast to the early work of Antoine-Henri Jomini, he argued that war could not be quantified or reduced to mapwork, geometry, and graphs. Clausewitz had many aphorisms, of which the most famous is "War is the continuation of policy with other means." | 2001-08-10T10:25:53Z | 2023-12-24T16:03:15Z | [
"Template:Infobox military person",
"Template:Lang",
"Template:Circa",
"Template:Webarchive",
"Template:Internet Archive author",
"Template:Short description",
"Template:Redirect",
"Template:Rp",
"Template:Blockquote",
"Template:Cite EB1911",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:OCLC",
"Template:Sister project links",
"Template:Synthesis inline",
"Template:Reflist",
"Template:ISBN",
"Template:Refend",
"Template:Authority control",
"Template:Refn",
"Template:Dubious",
"Template:Cite book",
"Template:Refbegin",
"Template:Gutenberg author",
"Template:Use British English",
"Template:IPA-de",
"Template:Div col",
"Template:Div col end",
"Template:Citation needed",
"Template:Cite web",
"Template:ISSN",
"Template:Librivox author"
]
| https://en.wikipedia.org/wiki/Carl_von_Clausewitz |
6,068 | Common Lisp | Common Lisp (CL) is a dialect of the Lisp programming language, published in American National Standards Institute (ANSI) standard document ANSI INCITS 226-1994 (S2018) (formerly X3.226-1994 (R1999)). The Common Lisp HyperSpec, a hyperlinked HTML version, has been derived from the ANSI Common Lisp standard.
The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to Maclisp: Lisp Machine Lisp (aka ZetaLisp), Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardize, and extend the features of these Maclisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products. Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application.
It also supports optional type annotation and casting, which can be added as necessary at the later profiling and optimization stages, to permit the compiler to generate more efficient code. For instance, fixnum can hold an unboxed integer in a range supported by the hardware and implementation, permitting more efficient arithmetic than on big integers or arbitrary precision types. Similarly, the compiler can be told on a per-module or per-function basis which type of safety level is wanted, using optimize declarations.
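A minimal sketch of such declarations (the function name and the particular optimize settings are illustrative choices, not requirements of the standard):

  (defun sum-squares (a b)
    ;; Promise the compiler that both arguments are fixnums and
    ;; that speed matters more here than run-time safety checks.
    (declare (type fixnum a b)
             (optimize (speed 3) (safety 1)))
    (+ (* a a) (* b b)))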
Common Lisp includes CLOS, an object system that supports multimethods and method combinations. It is often implemented with a Metaobject Protocol.
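As a brief sketch of a multimethod (the class and function names are invented for illustration), methods are selected by the classes of all arguments, not just the first:

  (defclass ship () ())
  (defclass asteroid () ())

  (defgeneric collide (a b))

  ;; Two methods on the same generic function, distinguished by
  ;; the classes of both arguments.
  (defmethod collide ((a ship) (b asteroid))
    "ship crashes into asteroid")
  (defmethod collide ((a asteroid) (b ship))
    "asteroid hits ship")

  (collide (make-instance 'ship) (make-instance 'asteroid))
  ;; => "ship crashes into asteroid"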
Common Lisp is extensible through standard features such as Lisp macros (code transformations) and reader macros (input parsers for characters).
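For example, a macro transforms code before compilation, and a reader macro changes how characters are parsed (MY-UNLESS and the ! syntax are invented here for illustration; the language already provides UNLESS):

  ;; A macro: the backquoted template is the code the form expands into.
  (defmacro my-unless (test &body body)
    `(if (not ,test)
         (progn ,@body)))

  (my-unless (> 1 2)
    (print "reached, because the test is false"))

  ;; A reader macro: after this call, !form is read as (NOT form).
  (set-macro-character #\!
    (lambda (stream char)
      (declare (ignore char))
      (list 'not (read stream t nil t))))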
Common Lisp provides partial backwards compatibility with Maclisp and John McCarthy's original Lisp. This allows older Lisp software to be ported to Common Lisp.
Work on Common Lisp started in 1981 after an initiative by ARPA manager Bob Engelmore to develop a single community standard Lisp dialect. Much of the initial language design was done via electronic mail. In 1982, Guy L. Steele Jr. gave the first overview of Common Lisp at the 1982 ACM Symposium on LISP and Functional Programming.
The first language documentation was published in 1984 as Common Lisp the Language (known as CLtL1), first edition. A second edition (known as CLtL2), published in 1990, incorporated many changes to the language, made during the ANSI Common Lisp standardization process: extended LOOP syntax, the Common Lisp Object System, the Condition System for error handling, an interface to the pretty printer and much more. But CLtL2 does not describe the final ANSI Common Lisp standard and thus is not a documentation of ANSI Common Lisp. The final ANSI Common Lisp standard then was published in 1994. Since then no update to the standard has been published. Various extensions and improvements to Common Lisp (examples are Unicode, Concurrency, CLOS-based IO) have been provided by implementations and libraries.
Common Lisp is a dialect of Lisp. It uses S-expressions to denote both code and data structure. Function calls, macro forms and special forms are written as lists, with the name of the operator first, as in these examples:
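A few minimal forms of each kind (illustrative, not reproduced from the original):

    (+ 2 2)             ; a function call; yields 4
    (defvar *x*)        ; DEFVAR is a macro; defines a special variable
    (setf *x* 42.1)     ; SETF is a macro; stores 42.1 into *x*
    (if (> *x* 3)       ; IF is a special form
        "big"
        "small")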
Common Lisp has many data types.
Number types include integers, ratios, floating-point numbers, and complex numbers. Common Lisp uses bignums to represent numerical values of arbitrary size and precision. The ratio type represents fractions exactly, a facility not available in many languages. Common Lisp automatically coerces numeric values among these types as appropriate.
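For instance, ratios and bignums behave as exact quantities, and coercion to floats happens only when a float enters the computation (illustrative forms):

    (/ 1 3)        ; => 1/3  (an exact ratio, not 0.333...)
    (+ 1/3 2/3)    ; => 1
    (* 1/3 3.0)    ; => 1.0  (coerced to a float)
    (expt 2 100)   ; => 1267650600228229401496703205376  (a bignum)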
The Common Lisp character type is not limited to ASCII characters. Most modern implementations allow Unicode characters.
The symbol type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object with several parts: name, value, function, property list, and package. Of these, value cell and function cell are the most important. Symbols in Lisp are often used similarly to identifiers in other languages: to hold the value of a variable; however there are many other uses. Normally, when a symbol is evaluated, its value is returned. Some symbols evaluate to themselves, for example, all symbols in the keyword package are self-evaluating. Boolean values in Common Lisp are represented by the self-evaluating symbols T and NIL. Common Lisp has namespaces for symbols, called 'packages'.
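A few illustrative forms:

    'foo                ; => FOO   (quoting yields the symbol itself)
    :bar                ; => :BAR  (keyword symbols are self-evaluating)
    t                   ; => T
    (defvar *count* 0)  ; the symbol *COUNT* now names a variable
    *count*             ; => 0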
A number of functions are available for rounding scalar numeric values in various ways. The function round rounds the argument to the nearest integer, with halfway cases rounded to the even integer. The functions truncate, floor, and ceiling round towards zero, down, or up respectively. All these functions return the discarded fractional part as a secondary value. For example, (floor -2.5) yields −3, 0.5; (ceiling -2.5) yields −2, −0.5; (round 2.5) yields 2, 0.5; and (round 3.5) yields 4, −0.5.
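The same examples, written as forms:

    (floor   -2.5)    ; => -3,  0.5
    (ceiling -2.5)    ; => -2, -0.5
    (round    2.5)    ; =>  2,  0.5
    (round    3.5)    ; =>  4, -0.5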
Sequence types in Common Lisp include lists, vectors, bit-vectors, and strings. There are many operations that can work on any sequence type.
As in almost all other Lisp dialects, lists in Common Lisp are composed of conses, sometimes called cons cells or pairs. A cons is a data structure with two slots, called its car and cdr. A list is a linked chain of conses or the empty list. Each cons's car refers to a member of the list (possibly another list). Each cons's cdr refers to the next cons—except for the last cons in a list, whose cdr refers to the nil value. Conses can also easily be used to implement trees and other complex data structures; though it is usually advised to use structure or class instances instead. It is also possible to create circular data structures with conses.
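A few illustrative forms:

    (cons 1 2)               ; => (1 . 2)  a single cons
    (cons 1 (cons 2 nil))    ; => (1 2)    a two-element list
    (list 1 2 3)             ; => (1 2 3)  shorthand for a chain of conses
    (car '(1 2 3))           ; => 1
    (cdr '(1 2 3))           ; => (2 3)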
Common Lisp supports multidimensional arrays, and can dynamically resize adjustable arrays if required. Multidimensional arrays can be used for matrix mathematics. A vector is a one-dimensional array. Arrays can carry any type as members (even mixed types in the same array) or can be specialized to contain a specific type of members, as in a vector of bits. Usually, only a few types are supported. Many implementations can optimize array functions when the array used is type-specialized. Two type-specialized array types are standard: a string is a vector of characters, while a bit-vector is a vector of bits.
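A brief sketch:

    (defvar *a* (make-array '(2 3) :initial-element 0))   ; a 2x3 array
    (setf (aref *a* 0 1) 5)
    (aref *a* 0 1)                                        ; => 5

    ;; a type-specialized, adjustable vector of bits
    (make-array 8 :element-type 'bit :adjustable t :initial-element 0)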
Hash tables store associations between data objects. Any object may be used as key or value. Hash tables are automatically resized as needed.
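A brief sketch:

    (defvar *h* (make-hash-table :test #'equal))  ; EQUAL so strings can be keys
    (setf (gethash "one" *h*) 1)
    (gethash "one" *h*)                           ; => 1, T  (value, found-flag)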
Packages are collections of symbols, used chiefly to separate the parts of a program into namespaces. A package may export some symbols, marking them as part of a public interface. Packages can use other packages.
Structures, similar in use to C structs and Pascal records, represent arbitrary complex data structures with any number and type of fields (called slots). Structures allow single-inheritance.
Classes are similar to structures, but offer more dynamic features and multiple-inheritance (see CLOS). Classes were added to Common Lisp late in its history, and there is some conceptual overlap with structures. Objects created from classes are called instances. A special case is generic functions, which are both functions and instances.
Common Lisp supports first-class functions. For instance, it is possible to write functions that take other functions as arguments or return functions as well. This makes it possible to describe very general operations.
The Common Lisp library relies heavily on such higher-order functions. For example, the sort function takes a relational operator as an argument and a key function as an optional keyword argument. This can be used not only to sort any type of data, but also to sort data structures according to a key.
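For example (sort is destructive, so fresh lists are passed here):

    (sort (list 5 2 6 3 1 4) #'>)                        ; => (6 5 4 3 2 1)
    (sort (list '(9 a) '(3 b) '(4 c)) #'> :key #'first)  ; => ((9 A) (4 C) (3 B))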
The evaluation model for functions is very simple. When the evaluator encounters a form (f a1 a2 ...), it presumes that the symbol named f is one of the following: a special operator, a macro name, or the name of a function.
If f is the name of a function, then the arguments a1, a2, ..., an are evaluated in left-to-right order, and the function is found and invoked with those values supplied as parameters.
The macro defun defines functions where a function definition gives the name of the function, the names of any arguments, and a function body:
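A minimal sketch (the name square is illustrative):

    (defun square (x)
      (* x x))

    (square 3)    ; => 9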
Function definitions may include compiler directives, known as declarations, which provide hints to the compiler about optimization settings or the data types of arguments. They may also include documentation strings (docstrings), which the Lisp system may use to provide interactive documentation:
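A sketch combining a docstring and declarations (the name is illustrative):

    (defun add-integers (a b)
      "Return the sum of the integers A and B."
      (declare (type integer a b)
               (optimize (speed 3) (safety 1)))
      (+ a b))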
Anonymous functions (function literals) are defined using lambda expressions, e.g. (lambda (x) (* x x)) for a function that squares its argument. Lisp programming style frequently uses higher-order functions for which it is useful to provide anonymous functions as arguments.
Local functions can be defined with flet and labels.
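flet defines local, non-recursive helpers; labels additionally lets the local functions refer to themselves (illustrative forms):

    (flet ((double (x) (* 2 x)))
      (double 21))                  ; => 42

    (labels ((fact (n)
               (if (<= n 1)
                   1
                   (* n (fact (1- n))))))
      (fact 5))                     ; => 120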
There are several other operators related to the definition and manipulation of functions. For instance, a function may be compiled with the compile operator. (Some Lisp systems run functions using an interpreter by default unless instructed to compile; others compile every function).
The macro defgeneric defines generic functions. Generic functions are a collection of methods. The macro defmethod defines methods.
Methods can specialize their parameters over CLOS standard classes, system classes, structure classes or individual objects. For many types, there are corresponding system classes.
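A sketch of a generic function whose methods specialize on the system classes number and string (the name describe-thing is illustrative):

    (defgeneric describe-thing (thing))

    (defmethod describe-thing ((thing number))
      (format nil "a number: ~a" thing))

    (defmethod describe-thing ((thing string))
      (format nil "a string of length ~a" (length thing)))

    (describe-thing 42)    ; => "a number: 42"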
When a generic function is called, multiple-dispatch will determine the effective method to use.
Generic functions are also a first-class data type. There are many more features to generic functions and methods than described above.
The namespace for function names is separate from the namespace for data variables. This is a key difference between Common Lisp and Scheme. For Common Lisp, operators that define names in the function namespace include defun, flet, labels, defmethod and defgeneric.
To pass a function by name as an argument to another function, one must use the function special operator, commonly abbreviated as #'. The first sort example above refers to the function named by the symbol > in the function namespace, with the code #'>. Conversely, to call a function passed in such a way, one would use the funcall operator on the argument.
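A sketch (apply-twice is an illustrative name):

    (sort (list 5 2 6) #'>)            ; pass the function > by name

    (defun apply-twice (f x)
      (funcall f (funcall f x)))       ; call the passed function object

    (apply-twice #'1+ 40)              ; => 42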
Scheme's evaluation model is simpler: there is only one namespace, and all positions in the form are evaluated (in any order) – not just the arguments. Code written in one dialect is therefore sometimes confusing to programmers more experienced in the other. For instance, many Common Lisp programmers like to use descriptive variable names such as list or string which could cause problems in Scheme, as they would locally shadow function names.
Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the Lisp-1 vs. Lisp-2 debate. Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.
Common Lisp supports the concept of multiple values, where any expression always has a single primary value, but it might also have any number of secondary values, which might be received and inspected by interested callers. This concept is distinct from returning a list value, as the secondary values are fully optional, and passed via a dedicated side channel. This means that callers may remain entirely unaware of the secondary values being there if they have no need for them, and it makes it convenient to use the mechanism for communicating information that is sometimes useful, but not always necessary. For example,
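floor returns the remainder of a division as a fully optional secondary value; a caller interested only in the quotient never notices it:

    (floor 10 3)    ; => 3, 1  (primary value 3, secondary value 1)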
Multiple values are supported by a handful of standard forms, most common of which are the MULTIPLE-VALUE-BIND special form for accessing secondary values and VALUES for returning multiple values:
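An illustrative pair of forms (the function name is hypothetical):

    (defun magic-eight-ball ()
      "Return an outlook prediction, with the probability as a secondary value."
      (values "Outlook good" (random 1.0)))

    (multiple-value-bind (prediction probability)
        (magic-eight-ball)
      (format nil "~a (~,2f)" prediction probability))
    ; => "Outlook good (0.32)", for example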
Other data types in Common Lisp include pathnames, streams, random states, conditions, and readtables.
Like programs in many other programming languages, Common Lisp programs make use of names to refer to variables, functions, and many other kinds of entities. Named references are subject to scope.
The association between a name and the entity which the name refers to is called a binding.
Scope refers to the set of circumstances in which a name is determined to have a particular binding.
The circumstances which determine scope in Common Lisp include:
To understand what a symbol refers to, the Common Lisp programmer must know what kind of reference is being expressed, what kind of scope it uses if it is a variable reference (dynamic versus lexical scope), and also the run-time situation: in what environment is the reference resolved, where was the binding introduced into the environment, et cetera.
Some environments in Lisp are globally pervasive. For instance, if a new type is defined, it is known everywhere thereafter. References to that type look it up in this global environment.
One type of environment in Common Lisp is the dynamic environment. Bindings established in this environment have dynamic extent, which means that a binding is established at the start of the execution of some construct, such as a let block, and disappears when that construct finishes executing: its lifetime is tied to the dynamic activation and deactivation of a block. However, a dynamic binding is not just visible within that block; it is also visible to all functions invoked from that block. This type of visibility is known as indefinite scope. Bindings which exhibit dynamic extent (lifetime tied to the activation and deactivation of a block) and indefinite scope (visible to all functions which are called from that block) are said to have dynamic scope.
Common Lisp has support for dynamically scoped variables, which are also called special variables. Certain other kinds of bindings are necessarily dynamically scoped also, such as restarts and catch tags. Function bindings cannot be dynamically scoped using flet (which only provides lexically scoped function bindings), but function objects (a first-level object in Common Lisp) can be assigned to dynamically scoped variables, bound using let in dynamic scope, then called using funcall or APPLY.
Dynamic scope is extremely useful because it adds referential clarity and discipline to global variables. Global variables are frowned upon in computer science as potential sources of error, because they can give rise to ad-hoc, covert channels of communication among modules that lead to unwanted, surprising interactions.
In Common Lisp, a special variable which has only a top-level binding behaves just like a global variable in other programming languages. A new value can be stored into it, and that value simply replaces what is in the top-level binding. Careless replacement of the value of a global variable is at the heart of bugs caused by the use of global variables. However, another way to work with a special variable is to give it a new, local binding within an expression. This is sometimes referred to as "rebinding" the variable. Binding a dynamically scoped variable temporarily creates a new memory location for that variable, and associates the name with that location. While that binding is in effect, all references to that variable refer to the new binding; the previous binding is hidden. When execution of the binding expression terminates, the temporary memory location is gone, and the old binding is revealed, with the original value intact. Of course, multiple dynamic bindings for the same variable can be nested.
In Common Lisp implementations which support multithreading, dynamic scopes are specific to each thread of execution. Thus special variables serve as an abstraction for thread local storage. If one thread rebinds a special variable, this rebinding has no effect on that variable in other threads. The value stored in a binding can only be retrieved by the thread which created that binding. If each thread binds some special variable *x*, then *x* behaves like thread-local storage. Among threads which do not rebind *x*, it behaves like an ordinary global: all of these threads refer to the same top-level binding of *x*.
Dynamic variables can be used to extend the execution context with additional context information which is implicitly passed from function to function without having to appear as an extra function parameter. This is especially useful when the control transfer has to pass through layers of unrelated code, which simply cannot be extended with extra parameters to pass the additional data. A situation like this usually calls for a global variable. That global variable must be saved and restored, so that the scheme doesn't break under recursion: dynamic variable rebinding takes care of this. And that variable must be made thread-local (or else a big mutex must be used) so the scheme doesn't break under threads: dynamic scope implementations can take care of this also.
In the Common Lisp library, there are many standard special variables. For instance, all standard I/O streams are stored in the top-level bindings of well-known special variables. The standard output stream is stored in *standard-output*.
Suppose a function foo writes to standard output:
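A minimal sketch of such a function:

    (defun foo ()
      (format t "Hello, World"))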
To capture its output in a character string, *standard-output* can be bound to a string stream and called:
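One way to do this is with-output-to-string, which dynamically rebinds the named variable (here the special variable *standard-output*) to a fresh string stream:

    (with-output-to-string (*standard-output*)
      (foo))
    ; => "Hello, World"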
Common Lisp supports lexical environments. Formally, the bindings in a lexical environment have lexical scope and may have either an indefinite extent or dynamic extent, depending on the type of namespace. Lexical scope means that visibility is physically restricted to the block in which the binding is established. References which are not textually (i.e. lexically) embedded in that block simply do not see that binding.
The tags in a TAGBODY have lexical scope. The expression (GO X) is erroneous if it is not embedded in a TAGBODY which contains a label X. However, the label bindings disappear when the TAGBODY terminates its execution, because they have dynamic extent. If that block of code is re-entered by the invocation of a lexical closure, it is invalid for the body of that closure to try to transfer control to a tag via GO:
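A sketch of the code the next paragraph describes; *stashed* is assumed to be a special variable that receives the closure:

    (defvar *stashed*)

    (tagbody
      (setf *stashed* (lambda () (go some-label)))
      (go end-label)        ; skips the (print "Hello")
     some-label
      (print "Hello")
     end-label)             ; yields NIL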
When the TAGBODY is executed, it first evaluates the setf form which stores a function in the special variable *stashed*. Then the (go end-label) transfers control to end-label, skipping the code (print "Hello"). Since end-label is at the end of the tagbody, the tagbody terminates, yielding NIL. Suppose that the previously remembered function is now called:
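    (funcall *stashed*)    ; error: the tagbody's dynamic extent has ended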
This situation is erroneous. One implementation's response is an error condition containing the message, "GO: tagbody for tag SOME-LABEL has already been left". The function tried to evaluate (go some-label), which is lexically embedded in the tagbody, and resolves to the label. However, the tagbody isn't executing (its extent has ended), and so the control transfer cannot take place.
Local function bindings in Lisp have lexical scope, and variable bindings also have lexical scope by default. By contrast with GO labels, both of these have indefinite extent. When a lexical function or variable binding is established, that binding continues to exist for as long as references to it are possible, even after the construct which established that binding has terminated. References to lexical variables and functions after the termination of their establishing construct are possible thanks to lexical closures.
Lexical binding is the default binding mode for Common Lisp variables. For an individual symbol, it can be switched to dynamic scope, either by a local declaration or by a global declaration. The latter may occur implicitly through the use of a construct like DEFVAR or DEFPARAMETER. It is an important convention in Common Lisp programming that special (i.e. dynamically scoped) variables have names which begin and end with an asterisk sigil * in what is called the "earmuff convention". If adhered to, this convention effectively creates a separate namespace for special variables, so that variables intended to be lexical are not accidentally made special.
Lexical scope is useful for several reasons.
Firstly, references to variables and functions can be compiled to efficient machine code, because the run-time environment structure is relatively simple. In many cases it can be optimized to stack storage, so opening and closing lexical scopes has minimal overhead. Even in cases where full closures must be generated, access to the closure's environment is still efficient; typically each variable becomes an offset into a vector of bindings, and so a variable reference becomes a simple load or store instruction with a base-plus-offset addressing mode.
Secondly, lexical scope (combined with indefinite extent) gives rise to the lexical closure, which in turn creates a whole paradigm of programming centered around the use of functions being first-class objects, which is at the root of functional programming.
Thirdly, perhaps most importantly, even if lexical closures are not exploited, the use of lexical scope isolates program modules from unwanted interactions. Due to their restricted visibility, lexical variables are private. If one module A binds a lexical variable X, and calls another module B, references to X in B will not accidentally resolve to the X bound in A. B simply has no access to X. For situations in which disciplined interactions through a variable are desirable, Common Lisp provides special variables. Special variables allow for a module A to set up a binding for a variable X which is visible to another module B, called from A. Being able to do this is an advantage, and being able to prevent it from happening is also an advantage; consequently, Common Lisp supports both lexical and dynamic scope.
A macro in Lisp superficially resembles a function in usage. However, rather than representing an expression which is evaluated, it represents a transformation of the program source code. The macro gets the source it surrounds as arguments, binds them to its parameters and computes a new source form. This new form can also use a macro. The macro expansion is repeated until the new source form does not use a macro. The final computed form is the source code executed at runtime.
Typical uses of macros in Lisp include introducing new control structures, defining top-level forms (such as forms that define functions or data structures), and embedding domain-specific languages.
Various standard Common Lisp features also need to be implemented as macros, such as the setf abstraction, defining forms like defun, defstruct and defclass, and resource-handling forms like with-open-file.
Macros are defined by the defmacro macro. The special operator macrolet allows the definition of local (lexically scoped) macros. It is also possible to define macros for symbols using define-symbol-macro and symbol-macrolet.
Paul Graham's book On Lisp describes the use of macros in Common Lisp in detail. Doug Hoyte's book Let Over Lambda extends the discussion on macros, claiming "Macros are the single greatest advantage that lisp has as a programming language and the single greatest advantage of any programming language." Hoyte provides several examples of iterative development of macros.
Macros allow Lisp programmers to create new syntactic forms in the language. One typical use is to create new control structures. The example macro provides an until looping construct. The syntax is:
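    ;; a plausible rendering, matching the definition and example below;
    ;; TEST is evaluated before each iteration
    (until test form*)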
The macro definition for until:
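A sketch of the definition, using gensym for the loop labels as described below:

    (defmacro until (test &body body)
      (let ((start-tag (gensym "START"))
            (end-tag   (gensym "END")))
        `(tagbody ,start-tag
                  (when ,test (go ,end-tag))
                  (progn ,@body)
                  (go ,start-tag)
                  ,end-tag)))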
tagbody is a primitive Common Lisp special operator which provides the ability to name tags and use the go form to jump to those tags. The backquote ` introduces a notation for code templates, where the values of forms preceded with a comma are filled in. Forms preceded with comma and at-sign are spliced in. The tagbody form tests the end condition. If the condition is true, it jumps to the end tag. Otherwise, the provided body code is executed and then it jumps to the start tag.
An example of using the above until macro:
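    ;; prints "Hello" repeatedly until (= (random 10) 0) is true
    (until (= (random 10) 0)
      (write-line "Hello"))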
The code can be expanded using the function macroexpand-1. The expansion for the above example looks like this:
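    ;; the gensym-generated label names (#:START1136, #:END1137 here)
    ;; vary between implementations and runs
    (TAGBODY
     #:START1136
     (WHEN (= (RANDOM 10) 0)
       (GO #:END1137))
     (PROGN (WRITE-LINE "Hello"))
     (GO #:START1136)
     #:END1137)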
During macro expansion the value of the variable test is (= (random 10) 0) and the value of the variable body is ((write-line "Hello")). The body is a list of forms.
Symbols are usually automatically upcased. The expansion uses the TAGBODY with two labels. The symbols for these labels are computed by GENSYM and are not interned in any package. Two go forms jump to these tags. Since tagbody is a primitive operator in Common Lisp (and not a macro), it will not be expanded into something else. The expanded form uses the when macro, which also will be expanded. Fully expanding a source form is called code walking.
In the fully expanded (walked) form, the when form is replaced by the primitive if:
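    ;; one possible fully expanded form
    (TAGBODY
     #:START1136
     (IF (= (RANDOM 10) 0)
         (PROGN (GO #:END1137)))
     (PROGN (WRITE-LINE "Hello"))
     (GO #:START1136)
     #:END1137)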
All macros must be expanded before the source code containing them can be evaluated or compiled normally. Macros can be considered functions that accept and return S-expressions – similar to abstract syntax trees, but not limited to those. These functions are invoked before the evaluator or compiler to produce the final source code. Macros are written in normal Common Lisp, and may use any Common Lisp (or third-party) operator available.
Common Lisp macros are capable of what is commonly called variable capture, where symbols in the macro-expansion body coincide with those in the calling context, allowing the programmer to create macros wherein various symbols have special meaning. The term variable capture is somewhat misleading, because all namespaces are vulnerable to unwanted capture, including the operator and function namespace, the tagbody label namespace, catch tag, condition handler and restart namespaces.
Variable capture can introduce software defects. This happens in one of the following two ways:
The Scheme dialect of Lisp provides a macro-writing system which provides the referential transparency that eliminates both types of capture problem. This type of macro system is sometimes called "hygienic", in particular by its proponents (who regard macro systems which do not automatically solve this problem as unhygienic).
In Common Lisp, macro hygiene is ensured in one of two ways.
One approach is to use gensyms: guaranteed-unique symbols which can be used in a macro-expansion without threat of capture. The use of gensyms in a macro definition is a manual chore, but macros can be written which simplify the instantiation and use of gensyms. Gensyms solve type 2 capture easily, but they are not applicable to type 1 capture in the same way, because the macro expansion cannot rename the interfering symbols in the surrounding code which capture its references. Gensyms could be used to provide stable aliases for the global symbols which the macro expansion needs. The macro expansion would use these secret aliases rather than the well-known names, so redefinition of the well-known names would have no ill effect on the macro.
Another approach is to use packages. A macro defined in its own package can simply use internal symbols in that package in its expansion. The use of packages deals with type 1 and type 2 capture.
However, packages don't solve the type 1 capture of references to standard Common Lisp functions and operators. The reason is that the use of packages to solve capture problems revolves around the use of private symbols (symbols in one package, which are not imported into, or otherwise made visible in, other packages), whereas the Common Lisp library symbols are external, and frequently imported into or made visible in user-defined packages.
The following is an example of unwanted capture in the operator namespace, occurring in the expansion of a macro:
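A sketch (the macrolet body is a hypothetical redefinition):

    ;; this version of UNTIL expands into a form built on the standard DO macro
    (defmacro until (test &body body)
      `(do () (,test) ,@body))

    ;; the caller locally rebinds DO with MACROLET, capturing the DO
    ;; in UNTIL's expansion
    (macrolet ((do (&rest forms)
                 (declare (ignore forms))
                 '(error "DO means something else here")))
      (until (= (random 10) 0)
        (write-line "Hello")))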
The until macro will expand into a form which calls do, which is intended to refer to the standard Common Lisp macro do. However, in this context, do may have a completely different meaning, so until may not work properly.
Common Lisp solves the problem of the shadowing of standard operators and functions by forbidding their redefinition. Because it redefines the standard operator do, the preceding is actually a fragment of non-conforming Common Lisp, which allows implementations to diagnose and reject it.
The condition system is responsible for exception handling in Common Lisp. It provides conditions, handlers and restarts. Conditions are objects describing an exceptional situation (for example an error). If a condition is signaled, the Common Lisp system searches for a handler for this condition type and calls the handler. The handler can now search for restarts and use one of these restarts to automatically repair the current problem, using information such as the condition type and any relevant information provided as part of the condition object, and call the appropriate restart function.
These restarts, if unhandled by code, can be presented to users (as part of a user interface, that of a debugger for example), so that the user can select and invoke one of the available restarts. Since the condition handler is called in the context of the error (without unwinding the stack), full error recovery is possible in many cases, where other exception handling systems would have already terminated the current routine. The debugger itself can also be customized or replaced using the *debugger-hook* dynamic variable. Code found within unwind-protect forms such as finalizers will also be executed as appropriate despite the exception.
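A minimal sketch of the mechanism (all names are illustrative; this is not the Genera session described next):

    (defun careful-divide (a b)
      (restart-case
          (if (zerop b)
              (error "Division by zero")
              (/ a b))
        (return-zero ()    ; a restart the handler or user may invoke
          0)))

    (handler-bind ((error (lambda (condition)
                            (declare (ignore condition))
                            (invoke-restart 'return-zero))))
      (careful-divide 3 0))    ; => 0, handled without unwinding first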
In the following example (using Symbolics Genera) the user tries to open a file in a Lisp function test called from the Read-Eval-Print-LOOP (REPL), when the file does not exist. The Lisp system presents four restarts. The user selects the Retry OPEN using a different pathname restart and enters a different pathname (lispm-init.lisp instead of lispm-int.lisp). The user code does not contain any error handling code. The whole error handling and restart code is provided by the Lisp system, which can handle and repair the error without terminating the user code.
Common Lisp includes a toolkit for object-oriented programming, the Common Lisp Object System or CLOS. Peter Norvig explains how many Design Patterns are simpler to implement in a dynamic language with the features of CLOS (Multiple Inheritance, Mixins, Multimethods, Metaclasses, Method combinations, etc.). Several extensions to Common Lisp for object-oriented programming have been proposed to be included into the ANSI Common Lisp standard, but eventually CLOS was adopted as the standard object-system for Common Lisp. CLOS is a dynamic object system with multiple dispatch and multiple inheritance, and differs radically from the OOP facilities found in static languages such as C++ or Java. As a dynamic object system, CLOS allows changes at runtime to generic functions and classes. Methods can be added and removed, classes can be added and redefined, objects can be updated for class changes and the class of objects can be changed.
CLOS has been integrated into ANSI Common Lisp. Generic functions can be used like normal functions and are a first-class data type. Every CLOS class is integrated into the Common Lisp type system. Many Common Lisp types have a corresponding class. There are further potential uses of CLOS in Common Lisp. The specification does not say whether conditions are implemented with CLOS. Pathnames and streams could be implemented with CLOS. These further usage possibilities of CLOS for ANSI Common Lisp are not part of the standard. Actual Common Lisp implementations use CLOS for pathnames, streams, input–output, conditions, the implementation of CLOS itself and more.
A Lisp interpreter directly executes Lisp source code provided as Lisp objects (lists, symbols, numbers, ...) read from s-expressions. A Lisp compiler generates bytecode or machine code from Lisp source code. Common Lisp allows both individual Lisp functions to be compiled in memory and the compilation of whole files to externally stored compiled code (fasl files).
Several implementations of earlier Lisp dialects provided both an interpreter and a compiler. Unfortunately, the semantics often differed between the two. These earlier Lisps implemented lexical scoping in the compiler and dynamic scoping in the interpreter. Common Lisp requires that both the interpreter and compiler use lexical scoping by default. The Common Lisp standard describes both the semantics of the interpreter and a compiler. The compiler can be called using the function compile for individual functions and using the function compile-file for files. Common Lisp allows type declarations and provides ways to influence the compiler code generation policy. For the latter various optimization qualities can be given values between 0 (not important) and 3 (most important): speed, space, safety, debug and compilation-speed.
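A brief sketch of these facilities (square is assumed to be defined as above; the file name is illustrative):

    ;; global code-generation policy via OPTIMIZE qualities (0-3)
    (declaim (optimize (speed 3) (safety 1) (debug 0)))

    (compile 'square)               ; compile an individual function
    ;; (compile-file "matrix.lisp") ; compile a whole file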
There is also a function to evaluate Lisp code: eval. eval takes code as pre-parsed s-expressions and not, like in some other languages, as text strings. This way code can be constructed with the usual Lisp functions for constructing lists and symbols and then this code can be evaluated with the function eval. Several Common Lisp implementations (like Clozure CL and SBCL) are implementing eval using their compiler. This way code is compiled, even though it is evaluated using the function eval.
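For example, code can be built up as a list and handed to eval:

    (eval (list '+ 1 2 3))    ; => 6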
The file compiler is invoked using the function compile-file. The generated file with compiled code is called a fasl (from fast load) file. These fasl files and also source code files can be loaded with the function load into a running Common Lisp system. Depending on the implementation, the file compiler generates byte-code (for example for the Java Virtual Machine), C language code (which then is compiled with a C compiler) or, directly, native code.
Common Lisp implementations can be used interactively, even though the code gets fully compiled. The notion of an interpreted language thus does not apply to interactive Common Lisp.
The language makes a distinction between read-time, compile-time, load-time, and run-time, and allows user code to also make this distinction to perform the wanted type of processing at the wanted step.
Some special operators are provided to especially suit interactive development; for instance, defvar will only assign a value to its provided variable if it wasn't already bound, while defparameter will always perform the assignment. This distinction is useful when interactively evaluating, compiling and loading code in a live image.
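A sketch of the difference:

    (defvar *level* 1)             ; binds *LEVEL* only if it is unbound
    (defvar *level* 99)            ; no effect; *LEVEL* is still 1

    (defparameter *mode* :fast)    ; always assigns
    (defparameter *mode* :slow)    ; *MODE* is now :SLOW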
Some features are also provided to help writing compilers and interpreters. Symbols are first-class objects and are directly manipulable by user code. The progv special operator makes it possible to establish dynamic bindings programmatically, while packages are also manipulable. The Lisp compiler is available at runtime to compile files or individual functions. These make it easy to use Lisp as an intermediate compiler or interpreter for another language.
The following program calculates the smallest number of people in a room for whom the probability of unique birthdays is less than 50% (the birthday paradox, where for 1 person the probability is obviously 100%, for 2 it is 364/365, etc.). The answer is 23.
In Common Lisp, by convention, constants are enclosed with + characters.
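A sketch of such a program; +year-size+ follows the constant-naming convention just mentioned, and leap years are ignored:

    (defconstant +year-size+ 365)

    (defun birthday-paradox (probability number-of-people)
      (let ((new-probability (* (/ (- +year-size+ number-of-people)
                                   +year-size+)
                                probability)))
        (if (< new-probability 0.5)
            (1+ number-of-people)
            (birthday-paradox new-probability
                              (1+ number-of-people)))))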
Calling the example function using the REPL (Read Eval Print Loop):
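With the sketch above (the prompt style varies by implementation):

    CL-USER > (birthday-paradox 1.0 1)
    23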
We define a class person and a method for displaying the name and age of a person. Next we define a group of persons as a list of person objects. Then we iterate over the sorted list.
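A sketch (names and ages are illustrative):

    (defclass person ()
      ((name :initarg :name :accessor person-name)
       (age  :initarg :age  :accessor person-age)))

    (defmethod display ((object person) stream)
      (format stream "~a (~a)~%"
              (person-name object) (person-age object)))

    (defparameter *group*
      (list (make-instance 'person :name "Bob"   :age 33)
            (make-instance 'person :name "Chris" :age 16)
            (make-instance 'person :name "Ash"   :age 23)))

    (dolist (person (sort (copy-list *group*) #'> :key #'person-age))
      (display person *standard-output*))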
It prints the three names with descending age.
Use of the LOOP macro is demonstrated:
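A sketch of an exponentiation function written with loop (the name power is illustrative):

    (defun power (x n)
      (loop with result = 1
            repeat n
            do (setf result (* result x))
            finally (return result)))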
Example use:
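    (power 2 200)
    ; => 1606938044258990275541962092341162602522202993782792835301376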
Compare with the built-in exponentiation:
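    (expt 2 200)
    ; => 1606938044258990275541962092341162602522202993782792835301376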
WITH-OPEN-FILE is a macro that opens a file and provides a stream. When the form returns, the file is automatically closed. FUNCALL calls a function object. The LOOP collects, for each line that satisfies the predicate, the predicate's (non-NIL) result.
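A sketch of the function the next paragraph names LIST-MATCHING-LINES; LOOP's it collects the value of the when test, i.e. the predicate's result:

    (defun list-matching-lines (file predicate)
      "Return a list of the non-NIL results of PREDICATE over the lines of FILE."
      (with-open-file (stream file)
        (loop for line = (read-line stream nil nil)
              while line
              when (funcall predicate line)
                collect it)))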
The function AVAILABLE-SHELLS calls the above function LIST-MATCHING-LINES with a pathname and an anonymous function as the predicate. The predicate returns the pathname of a shell or NIL (if the string is not the filename of a shell).
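A matching sketch:

    (defun available-shells (&optional (file #p"/etc/shells"))
      (list-matching-lines
       file
       (lambda (line)
         (and (plusp (length line))
              (char= (char line 0) #\/)
              (pathname (string-trim '(#\space #\tab) line))))))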
Example results (on Mac OS X 10.6):
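Output of the kind such a call might produce; the actual entries depend on the system's /etc/shells:

    (#P"/bin/bash" #P"/bin/csh" #P"/bin/ksh" #P"/bin/sh"
     #P"/bin/tcsh" #P"/bin/zsh")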
Common Lisp is most frequently compared with, and contrasted to, Scheme—if only because they are the two most popular Lisp dialects. Scheme predates CL, and comes not only from the same Lisp tradition but from some of the same engineers—Guy Steele, with whom Gerald Jay Sussman designed Scheme, chaired the standards committee for Common Lisp.
Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP which are extension languages embedded in particular products (GNU Emacs and AutoCAD, respectively). Unlike many earlier Lisps, Common Lisp (like Scheme) uses lexical variable scope by default for both interpreted and compiled code.
Most of the Lisp systems whose designs contributed to Common Lisp—such as ZetaLisp and Franz Lisp—used dynamically scoped variables in their interpreters and lexically scoped variables in their compilers. Scheme introduced the sole use of lexically scoped variables to Lisp, an inspiration it took from ALGOL 68. CL supports dynamically scoped variables as well, but they must be explicitly declared as "special". There are no differences in scoping between ANSI CL interpreters and compilers.
Common Lisp is sometimes termed a Lisp-2 and Scheme a Lisp-1, referring to CL's use of separate namespaces for functions and variables. (In fact, CL has many namespaces, such as those for go tags, block names, and loop keywords). There is a long-standing controversy between CL and Scheme advocates over the tradeoffs involved in multiple namespaces. In Scheme, it is (broadly) necessary to avoid giving variables names which clash with functions; Scheme functions frequently have arguments named lis, lst, or lyst so as not to conflict with the system function list. However, in CL it is necessary to explicitly refer to the function namespace when passing a function as an argument—which is also a common occurrence, as in the sort example above.
CL also differs from Scheme in its handling of boolean values. Scheme uses the special values #t and #f to represent truth and falsity. CL follows the older Lisp convention of using the symbols T and NIL, with NIL standing also for the empty list. In CL, any non-NIL value is treated as true by conditionals, such as if, whereas in Scheme all non-#f values are treated as true. These conventions allow some operators in both languages to serve both as predicates (answering a boolean-valued question) and as returning a useful value for further computation, but in Scheme the value '() which is equivalent to NIL in Common Lisp evaluates to true in a boolean expression.
Lastly, the Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive. Nonetheless, common CL coding style does not favor the ubiquitous use of recursion that Scheme style prefers—what a Scheme programmer would express with tail recursion, a CL user would usually express with an iterative expression in do, dolist, loop, or (more recently) with the iterate package.
See the Category Common Lisp implementations.
Common Lisp is defined by a specification (like Ada and C) rather than by one implementation (like Perl). There are many implementations, and the standard details areas in which they may validly differ.
In addition, implementations tend to come with extensions, which provide functionality not covered in the standard:
Free and open-source software libraries have been created to support extensions to Common Lisp in a portable way, and are most notably found in the repositories of the Common-Lisp.net and CLOCC (Common Lisp Open Code Collection) projects.
Common Lisp implementations may use any mix of native code compilation, byte code compilation or interpretation. Common Lisp has been designed to support incremental compilers, file compilers and block compilers. Standard declarations to optimize compilation (such as function inlining or type specialization) are proposed in the language specification. Most Common Lisp implementations compile source code to native machine code. Some implementations can create (optimized) stand-alone applications. Others compile to interpreted bytecode, which is less efficient than native code, but eases binary-code portability. Some compilers compile Common Lisp code to C code. The misconception that Lisp is a purely interpreted language is most likely because Lisp environments provide an interactive prompt and because code is compiled incrementally, one form at a time. With Common Lisp incremental compilation is widely used.
Some Unix-based implementations (CLISP, SBCL) can be used as a scripting language; that is, invoked by the system transparently in the way that a Perl or Unix shell interpreter is.
Common Lisp is used to develop research applications (often in Artificial Intelligence), for rapid development of prototypes or for deployed applications.
Common Lisp is used in many commercial applications, including the Yahoo! Store web-commerce site, which originally involved Paul Graham and was later rewritten in C++ and Perl. Other notable examples include:
There also exist open-source applications written in Common Lisp, such as:
A chronological list of books published (or about to be published) about Common Lisp (the language) or about programming with Common Lisp (especially AI programming).
{
"paragraph_id": 0,
"text": "Common Lisp (CL) is a dialect of the Lisp programming language, published in American National Standards Institute (ANSI) standard document ANSI INCITS 226-1994 (S20018) (formerly X3.226-1994 (R1999)). The Common Lisp HyperSpec, a hyperlinked HTML version, has been derived from the ANSI Common Lisp standard.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to MacLisp: Lisp Machine Lisp (aka ZetaLisp), Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardise, and extend the features of these MacLisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products. Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application.",
"title": ""
},
{
"paragraph_id": 2,
"text": "It also supports optional type annotation and casting, which can be added as necessary at the later profiling and optimization stages, to permit the compiler to generate more efficient code. For instance, fixnum can hold an unboxed integer in a range supported by the hardware and implementation, permitting more efficient arithmetic than on big integers or arbitrary precision types. Similarly, the compiler can be told on a per-module or per-function basis which type of safety level is wanted, using optimize declarations.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Common Lisp includes CLOS, an object system that supports multimethods and method combinations. It is often implemented with a Metaobject Protocol.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Common Lisp is extensible through standard features such as Lisp macros (code transformations) and reader macros (input parsers for characters).",
"title": ""
},
{
"paragraph_id": 5,
"text": "Common Lisp provides partial backwards compatibility with Maclisp and John McCarthy's original Lisp. This allows older Lisp software to be ported to Common Lisp.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Work on Common Lisp started in 1981 after an initiative by ARPA manager Bob Engelmore to develop a single community standard Lisp dialect. Much of the initial language design was done via electronic mail. In 1982, Guy L. Steele Jr. gave the first overview of Common Lisp at the 1982 ACM Symposium on LISP and functional programming.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The first language documentation was published in 1984 as Common Lisp the Language (known as CLtL1), first edition. A second edition (known as CLtL2), published in 1990, incorporated many changes to the language, made during the ANSI Common Lisp standardization process: extended LOOP syntax, the Common Lisp Object System, the Condition System for error handling, an interface to the pretty printer and much more. But CLtL2 does not describe the final ANSI Common Lisp standard and thus is not a documentation of ANSI Common Lisp. The final ANSI Common Lisp standard then was published in 1994. Since then no update to the standard has been published. Various extensions and improvements to Common Lisp (examples are Unicode, Concurrency, CLOS-based IO) have been provided by implementations and libraries.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Common Lisp is a dialect of Lisp. It uses S-expressions to denote both code and data structure. Function calls, macro forms and special forms are written as lists, with the name of the operator first, as in these examples:",
"title": "Syntax"
},
{
"paragraph_id": 9,
"text": "Common Lisp has many data types.",
"title": "Data types"
},
{
"paragraph_id": 10,
"text": "Number types include integers, ratios, floating-point numbers, and complex numbers. Common Lisp uses bignums to represent numerical values of arbitrary size and precision. The ratio type represents fractions exactly, a facility not available in many languages. Common Lisp automatically coerces numeric values among these types as appropriate.",
"title": "Data types"
},
{
"paragraph_id": 11,
"text": "The Common Lisp character type is not limited to ASCII characters. Most modern implementations allow Unicode characters.",
"title": "Data types"
},
{
"paragraph_id": 12,
"text": "The symbol type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object with several parts: name, value, function, property list, and package. Of these, value cell and function cell are the most important. Symbols in Lisp are often used similarly to identifiers in other languages: to hold the value of a variable; however there are many other uses. Normally, when a symbol is evaluated, its value is returned. Some symbols evaluate to themselves, for example, all symbols in the keyword package are self-evaluating. Boolean values in Common Lisp are represented by the self-evaluating symbols T and NIL. Common Lisp has namespaces for symbols, called 'packages'.",
"title": "Data types"
},
{
"paragraph_id": 13,
"text": "A number of functions are available for rounding scalar numeric values in various ways. The function round rounds the argument to the nearest integer, with halfway cases rounded to the even integer. The functions truncate, floor, and ceiling round towards zero, down, or up respectively. All these functions return the discarded fractional part as a secondary value. For example, (floor -2.5) yields −3, 0.5; (ceiling -2.5) yields −2, −0.5; (round 2.5) yields 2, 0.5; and (round 3.5) yields 4, −0.5.",
"title": "Data types"
},
{
"paragraph_id": 14,
"text": "Sequence types in Common Lisp include lists, vectors, bit-vectors, and strings. There are many operations that can work on any sequence type.",
"title": "Data types"
},
{
"paragraph_id": 15,
"text": "As in almost all other Lisp dialects, lists in Common Lisp are composed of conses, sometimes called cons cells or pairs. A cons is a data structure with two slots, called its car and cdr. A list is a linked chain of conses or the empty list. Each cons's car refers to a member of the list (possibly another list). Each cons's cdr refers to the next cons—except for the last cons in a list, whose cdr refers to the nil value. Conses can also easily be used to implement trees and other complex data structures; though it is usually advised to use structure or class instances instead. It is also possible to create circular data structures with conses.",
"title": "Data types"
},
{
"paragraph_id": 16,
"text": "Common Lisp supports multidimensional arrays, and can dynamically resize adjustable arrays if required. Multidimensional arrays can be used for matrix mathematics. A vector is a one-dimensional array. Arrays can carry any type as members (even mixed types in the same array) or can be specialized to contain a specific type of members, as in a vector of bits. Usually, only a few types are supported. Many implementations can optimize array functions when the array used is type-specialized. Two type-specialized array types are standard: a string is a vector of characters, while a bit-vector is a vector of bits.",
"title": "Data types"
},
{
"paragraph_id": 17,
"text": "Hash tables store associations between data objects. Any object may be used as key or value. Hash tables are automatically resized as needed.",
"title": "Data types"
},
{
"paragraph_id": 18,
"text": "Packages are collections of symbols, used chiefly to separate the parts of a program into namespaces. A package may export some symbols, marking them as part of a public interface. Packages can use other packages.",
"title": "Data types"
},
{
"paragraph_id": 19,
"text": "Structures, similar in use to C structs and Pascal records, represent arbitrary complex data structures with any number and type of fields (called slots). Structures allow single-inheritance.",
"title": "Data types"
},
{
"paragraph_id": 20,
"text": "Classes are similar to structures, but offer more dynamic features and multiple-inheritance. (See CLOS). Classes have been added late to Common Lisp and there is some conceptual overlap with structures. Objects created of classes are called Instances. A special case is Generic Functions. Generic Functions are both functions and instances.",
"title": "Data types"
},
{
"paragraph_id": 21,
"text": "Common Lisp supports first-class functions. For instance, it is possible to write functions that take other functions as arguments or return functions as well. This makes it possible to describe very general operations.",
"title": "Data types"
},
{
"paragraph_id": 22,
"text": "The Common Lisp library relies heavily on such higher-order functions. For example, the sort function takes a relational operator as an argument and key function as an optional keyword argument. This can be used not only to sort any type of data, but also to sort data structures according to a key.",
"title": "Data types"
},
{
"paragraph_id": 23,
"text": "The evaluation model for functions is very simple. When the evaluator encounters a form (f a1 a2...) then it presumes that the symbol named f is one of the following:",
"title": "Data types"
},
{
"paragraph_id": 24,
"text": "If f is the name of a function, then the arguments a1, a2, ..., an are evaluated in left-to-right order, and the function is found and invoked with those values supplied as parameters.",
"title": "Data types"
},
{
"paragraph_id": 25,
"text": "The macro defun defines functions where a function definition gives the name of the function, the names of any arguments, and a function body:",
"title": "Data types"
},
{
"paragraph_id": 26,
"text": "Function definitions may include compiler directives, known as declarations, which provide hints to the compiler about optimization settings or the data types of arguments. They may also include documentation strings (docstrings), which the Lisp system may use to provide interactive documentation:",
"title": "Data types"
},
{
"paragraph_id": 27,
"text": "Anonymous functions (function literals) are defined using lambda expressions, e.g. (lambda (x) (* x x)) for a function that squares its argument. Lisp programming style frequently uses higher-order functions for which it is useful to provide anonymous functions as arguments.",
"title": "Data types"
},
{
"paragraph_id": 28,
"text": "Local functions can be defined with flet and labels.",
"title": "Data types"
},
{
"paragraph_id": 29,
"text": "There are several other operators related to the definition and manipulation of functions. For instance, a function may be compiled with the compile operator. (Some Lisp systems run functions using an interpreter by default unless instructed to compile; others compile every function).",
"title": "Data types"
},
{
"paragraph_id": 30,
"text": "The macro defgeneric defines generic functions. Generic functions are a collection of methods. The macro defmethod defines methods.",
"title": "Data types"
},
{
"paragraph_id": 31,
"text": "Methods can specialize their parameters over CLOS standard classes, system classes, structure classes or individual objects. For many types, there are corresponding system classes.",
"title": "Data types"
},
{
"paragraph_id": 32,
"text": "When a generic function is called, multiple-dispatch will determine the effective method to use.",
"title": "Data types"
},
{
"paragraph_id": 33,
"text": "Generic Functions are also a first class data type. There are many more features to Generic Functions and Methods than described above.",
"title": "Data types"
},
{
"paragraph_id": 34,
"text": "The namespace for function names is separate from the namespace for data variables. This is a key difference between Common Lisp and Scheme. For Common Lisp, operators that define names in the function namespace include defun, flet, labels, defmethod and defgeneric.",
"title": "Data types"
},
{
"paragraph_id": 35,
"text": "To pass a function by name as an argument to another function, one must use the function special operator, commonly abbreviated as #'. The first sort example above refers to the function named by the symbol > in the function namespace, with the code #'>. Conversely, to call a function passed in such a way, one would use the funcall operator on the argument.",
"title": "Data types"
},
{
"paragraph_id": 36,
"text": "Scheme's evaluation model is simpler: there is only one namespace, and all positions in the form are evaluated (in any order) – not just the arguments. Code written in one dialect is therefore sometimes confusing to programmers more experienced in the other. For instance, many Common Lisp programmers like to use descriptive variable names such as list or string which could cause problems in Scheme, as they would locally shadow function names.",
"title": "Data types"
},
{
"paragraph_id": 37,
"text": "Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the Lisp-1 vs. Lisp-2 debate. Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.",
"title": "Data types"
},
{
"paragraph_id": 38,
"text": "Common Lisp supports the concept of multiple values, where any expression always has a single primary value, but it might also have any number of secondary values, which might be received and inspected by interested callers. This concept is distinct from returning a list value, as the secondary values are fully optional, and passed via a dedicated side channel. This means that callers may remain entirely unaware of the secondary values being there if they have no need for them, and it makes it convenient to use the mechanism for communicating information that is sometimes useful, but not always necessary. For example,",
"title": "Data types"
},
{
"paragraph_id": 39,
"text": "Multiple values are supported by a handful of standard forms, most common of which are the MULTIPLE-VALUE-BIND special form for accessing secondary values and VALUES for returning multiple values:",
"title": "Data types"
},
{
"paragraph_id": 40,
"text": "Other data types in Common Lisp include:",
"title": "Data types"
},
{
"paragraph_id": 41,
"text": "Like programs in many other programming languages, Common Lisp programs make use of names to refer to variables, functions, and many other kinds of entities. Named references are subject to scope.",
"title": "Scope"
},
{
"paragraph_id": 42,
"text": "The association between a name and the entity which the name refers to is called a binding.",
"title": "Scope"
},
{
"paragraph_id": 43,
"text": "Scope refers to the set of circumstances in which a name is determined to have a particular binding.",
"title": "Scope"
},
{
"paragraph_id": 44,
"text": "The circumstances which determine scope in Common Lisp include:",
"title": "Scope"
},
{
"paragraph_id": 45,
"text": "To understand what a symbol refers to, the Common Lisp programmer must know what kind of reference is being expressed, what kind of scope it uses if it is a variable reference (dynamic versus lexical scope), and also the run-time situation: in what environment is the reference resolved, where was the binding introduced into the environment, et cetera.",
"title": "Scope"
},
{
"paragraph_id": 46,
"text": "Some environments in Lisp are globally pervasive. For instance, if a new type is defined, it is known everywhere thereafter. References to that type look it up in this global environment.",
"title": "Scope"
},
{
"paragraph_id": 47,
"text": "One type of environment in Common Lisp is the dynamic environment. Bindings established in this environment have dynamic extent, which means that a binding is established at the start of the execution of some construct, such as a let block, and disappears when that construct finishes executing: its lifetime is tied to the dynamic activation and deactivation of a block. However, a dynamic binding is not just visible within that block; it is also visible to all functions invoked from that block. This type of visibility is known as indefinite scope. Bindings which exhibit dynamic extent (lifetime tied to the activation and deactivation of a block) and indefinite scope (visible to all functions which are called from that block) are said to have dynamic scope.",
"title": "Scope"
},
{
"paragraph_id": 48,
"text": "Common Lisp has support for dynamically scoped variables, which are also called special variables. Certain other kinds of bindings are necessarily dynamically scoped also, such as restarts and catch tags. Function bindings cannot be dynamically scoped using flet (which only provides lexically scoped function bindings), but function objects (a first-level object in Common Lisp) can be assigned to dynamically scoped variables, bound using let in dynamic scope, then called using funcall or APPLY.",
"title": "Scope"
},
{
"paragraph_id": 49,
"text": "Dynamic scope is extremely useful because it adds referential clarity and discipline to global variables. Global variables are frowned upon in computer science as potential sources of error, because they can give rise to ad-hoc, covert channels of communication among modules that lead to unwanted, surprising interactions.",
"title": "Scope"
},
{
"paragraph_id": 50,
"text": "In Common Lisp, a special variable which has only a top-level binding behaves just like a global variable in other programming languages. A new value can be stored into it, and that value simply replaces what is in the top-level binding. Careless replacement of the value of a global variable is at the heart of bugs caused by the use of global variables. However, another way to work with a special variable is to give it a new, local binding within an expression. This is sometimes referred to as \"rebinding\" the variable. Binding a dynamically scoped variable temporarily creates a new memory location for that variable, and associates the name with that location. While that binding is in effect, all references to that variable refer to the new binding; the previous binding is hidden. When execution of the binding expression terminates, the temporary memory location is gone, and the old binding is revealed, with the original value intact. Of course, multiple dynamic bindings for the same variable can be nested.",
"title": "Scope"
},
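A minimal sketch of the rebinding behaviour just described; the variable *mode* and the function describe-mode are hypothetical names chosen for illustration:

```lisp
(defvar *mode* :normal)          ; special variable with a top-level binding

(defun describe-mode ()
  (format t "mode is ~a~%" *mode*))

(describe-mode)                  ; prints: mode is NORMAL
(let ((*mode* :verbose))         ; new dynamic binding hides the old one
  (describe-mode))               ; prints: mode is VERBOSE
(describe-mode)                  ; prints: mode is NORMAL again
```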
{
"paragraph_id": 51,
"text": "In Common Lisp implementations which support multithreading, dynamic scopes are specific to each thread of execution. Thus special variables serve as an abstraction for thread local storage. If one thread rebinds a special variable, this rebinding has no effect on that variable in other threads. The value stored in a binding can only be retrieved by the thread which created that binding. If each thread binds some special variable *x*, then *x* behaves like thread-local storage. Among threads which do not rebind *x*, it behaves like an ordinary global: all of these threads refer to the same top-level binding of *x*.",
"title": "Scope"
},
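A sketch of the thread-local behaviour, assuming the de-facto-standard third-party bordeaux-threads library (the ANSI standard itself does not specify threads):

```lisp
(defvar *x* :global)

(let ((thread (bt:make-thread
               (lambda ()
                 (let ((*x* :thread-local))  ; rebinding visible only in this thread
                   (print *x*))))))          ; prints :THREAD-LOCAL
  (bt:join-thread thread))

(print *x*)                      ; the top-level binding is unaffected: :GLOBAL
```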
{
"paragraph_id": 52,
"text": "Dynamic variables can be used to extend the execution context with additional context information which is implicitly passed from function to function without having to appear as an extra function parameter. This is especially useful when the control transfer has to pass through layers of unrelated code, which simply cannot be extended with extra parameters to pass the additional data. A situation like this usually calls for a global variable. That global variable must be saved and restored, so that the scheme doesn't break under recursion: dynamic variable rebinding takes care of this. And that variable must be made thread-local (or else a big mutex must be used) so the scheme doesn't break under threads: dynamic scope implementations can take care of this also.",
"title": "Scope"
},
{
"paragraph_id": 53,
"text": "In the Common Lisp library, there are many standard special variables. For instance, all standard I/O streams are stored in the top-level bindings of well-known special variables. The standard output stream is stored in *standard-output*.",
"title": "Scope"
},
{
"paragraph_id": 54,
"text": "Suppose a function foo writes to standard output:",
"title": "Scope"
},
{
"paragraph_id": 55,
"text": "To capture its output in a character string, *standard-output* can be bound to a string stream and called:",
"title": "Scope"
},
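The two stripped examples presumably resembled the following sketch; the greeting text is an assumption:

```lisp
(defun foo ()
  (format t "Hello, world"))     ; T designates *standard-output*

;; Rebind *standard-output* to a string stream while FOO runs:
(with-output-to-string (*standard-output*)
  (foo))
;; => "Hello, world"
```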
{
"paragraph_id": 56,
"text": "Common Lisp supports lexical environments. Formally, the bindings in a lexical environment have lexical scope and may have either an indefinite extent or dynamic extent, depending on the type of namespace. Lexical scope means that visibility is physically restricted to the block in which the binding is established. References which are not textually (i.e. lexically) embedded in that block simply do not see that binding.",
"title": "Scope"
},
{
"paragraph_id": 57,
"text": "The tags in a TAGBODY have lexical scope. The expression (GO X) is erroneous if it is not embedded in a TAGBODY which contains a label X. However, the label bindings disappear when the TAGBODY terminates its execution, because they have dynamic extent. If that block of code is re-entered by the invocation of a lexical closure, it is invalid for the body of that closure to try to transfer control to a tag via GO:",
"title": "Scope"
},
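The code example is absent from this extract; the following sketch is reconstructed from the description in the next paragraph:

```lisp
(defvar *stashed*)               ; will hold a closure

(tagbody
  (setf *stashed* (lambda () (go some-label)))
  (go end-label)                 ; skips (print "Hello")
 some-label
  (print "Hello")
 end-label)
;; => NIL
```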
{
"paragraph_id": 58,
"text": "When the TAGBODY is executed, it first evaluates the setf form which stores a function in the special variable *stashed*. Then the (go end-label) transfers control to end-label, skipping the code (print \"Hello\"). Since end-label is at the end of the tagbody, the tagbody terminates, yielding NIL. Suppose that the previously remembered function is now called:",
"title": "Scope"
},
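The stripped call presumably looked like:

```lisp
(funcall *stashed*)  ; erroneous: the TAGBODY's dynamic extent has ended
```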
{
"paragraph_id": 59,
"text": "This situation is erroneous. One implementation's response is an error condition containing the message, \"GO: tagbody for tag SOME-LABEL has already been left\". The function tried to evaluate (go some-label), which is lexically embedded in the tagbody, and resolves to the label. However, the tagbody isn't executing (its extent has ended), and so the control transfer cannot take place.",
"title": "Scope"
},
{
"paragraph_id": 60,
"text": "Local function bindings in Lisp have lexical scope, and variable bindings also have lexical scope by default. By contrast with GO labels, both of these have indefinite extent. When a lexical function or variable binding is established, that binding continues to exist for as long as references to it are possible, even after the construct which established that binding has terminated. References to lexical variables and functions after the termination of their establishing construct are possible thanks to lexical closures.",
"title": "Scope"
},
{
"paragraph_id": 61,
"text": "Lexical binding is the default binding mode for Common Lisp variables. For an individual symbol, it can be switched to dynamic scope, either by a local declaration, by a global declaration. The latter may occur implicitly through the use of a construct like DEFVAR or DEFPARAMETER. It is an important convention in Common Lisp programming that special (i.e. dynamically scoped) variables have names which begin and end with an asterisk sigil * in what is called the \"earmuff convention\". If adhered to, this convention effectively creates a separate namespace for special variables, so that variables intended to be lexical are not accidentally made special.",
"title": "Scope"
},
{
"paragraph_id": 62,
"text": "Lexical scope is useful for several reasons.",
"title": "Scope"
},
{
"paragraph_id": 63,
"text": "Firstly, references to variables and functions can be compiled to efficient machine code, because the run-time environment structure is relatively simple. In many cases it can be optimized to stack storage, so opening and closing lexical scopes has minimal overhead. Even in cases where full closures must be generated, access to the closure's environment is still efficient; typically each variable becomes an offset into a vector of bindings, and so a variable reference becomes a simple load or store instruction with a base-plus-offset addressing mode.",
"title": "Scope"
},
{
"paragraph_id": 64,
"text": "Secondly, lexical scope (combined with indefinite extent) gives rise to the lexical closure, which in turn creates a whole paradigm of programming centered around the use of functions being first-class objects, which is at the root of functional programming.",
"title": "Scope"
},
{
"paragraph_id": 65,
"text": "Thirdly, perhaps most importantly, even if lexical closures are not exploited, the use of lexical scope isolates program modules from unwanted interactions. Due to their restricted visibility, lexical variables are private. If one module A binds a lexical variable X, and calls another module B, references to X in B will not accidentally resolve to the X bound in A. B simply has no access to X. For situations in which disciplined interactions through a variable are desirable, Common Lisp provides special variables. Special variables allow for a module A to set up a binding for a variable X which is visible to another module B, called from A. Being able to do this is an advantage, and being able to prevent it from happening is also an advantage; consequently, Common Lisp supports both lexical and dynamic scope.",
"title": "Scope"
},
{
"paragraph_id": 66,
"text": "A macro in Lisp superficially resembles a function in usage. However, rather than representing an expression which is evaluated, it represents a transformation of the program source code. The macro gets the source it surrounds as arguments, binds them to its parameters and computes a new source form. This new form can also use a macro. The macro expansion is repeated until the new source form does not use a macro. The final computed form is the source code executed at runtime.",
"title": "Macros"
},
{
"paragraph_id": 67,
"text": "Typical uses of macros in Lisp:",
"title": "Macros"
},
{
"paragraph_id": 68,
"text": "Various standard Common Lisp features also need to be implemented as macros, such as:",
"title": "Macros"
},
{
"paragraph_id": 69,
"text": "Macros are defined by the defmacro macro. The special operator macrolet allows the definition of local (lexically scoped) macros. It is also possible to define macros for symbols using define-symbol-macro and symbol-macrolet.",
"title": "Macros"
},
{
"paragraph_id": 70,
"text": "Paul Graham's book On Lisp describes the use of macros in Common Lisp in detail. Doug Hoyte's book Let Over Lambda extends the discussion on macros, claiming \"Macros are the single greatest advantage that lisp has as a programming language and the single greatest advantage of any programming language.\" Hoyte provides several examples of iterative development of macros.",
"title": "Macros"
},
{
"paragraph_id": 71,
"text": "Macros allow Lisp programmers to create new syntactic forms in the language. One typical use is to create new control structures. The example macro provides an until looping construct. The syntax is:",
"title": "Macros"
},
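The syntax line is missing from this extract; it presumably read something like:

```lisp
(until test form*)
```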
{
"paragraph_id": 72,
"text": "The macro definition for until:",
"title": "Macros"
},
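The definition itself is absent here; a sketch consistent with the description in the following paragraphs (gensym'd start and end tags, a when test, backquote with splicing):

```lisp
(defmacro until (test &body body)
  (let ((start-tag (gensym "START"))
        (end-tag   (gensym "END")))
    `(tagbody ,start-tag
              (when ,test (go ,end-tag))   ; end condition reached: exit
              (progn ,@body)               ; otherwise run the body...
              (go ,start-tag)              ; ...and loop
              ,end-tag)))
```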
{
"paragraph_id": 73,
"text": "tagbody is a primitive Common Lisp special operator which provides the ability to name tags and use the go form to jump to those tags. The backquote ` provides a notation that provides code templates, where the value of forms preceded with a comma are filled in. Forms preceded with comma and at-sign are spliced in. The tagbody form tests the end condition. If the condition is true, it jumps to the end tag. Otherwise, the provided body code is executed and then it jumps to the start tag.",
"title": "Macros"
},
{
"paragraph_id": 74,
"text": "An example of using the above until macro:",
"title": "Macros"
},
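Reconstructed from the values quoted two paragraphs below:

```lisp
(until (= (random 10) 0)
  (write-line "Hello"))
```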
{
"paragraph_id": 75,
"text": "The code can be expanded using the function macroexpand-1. The expansion for the above example looks like this:",
"title": "Macros"
},
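A sketch of the expansion; the exact gensym label names (here #:START1 and #:END2) vary by run and implementation:

```lisp
(TAGBODY
 #:START1
  (WHEN (= (RANDOM 10) 0)
    (GO #:END2))
  (PROGN (WRITE-LINE "Hello"))
  (GO #:START1)
 #:END2)
```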
{
"paragraph_id": 76,
"text": "During macro expansion the value of the variable test is (= (random 10) 0) and the value of the variable body is ((write-line \"Hello\")). The body is a list of forms.",
"title": "Macros"
},
{
"paragraph_id": 77,
"text": "Symbols are usually automatically upcased. The expansion uses the TAGBODY with two labels. The symbols for these labels are computed by GENSYM and are not interned in any package. Two go forms use these tags to jump to. Since tagbody is a primitive operator in Common Lisp (and not a macro), it will not be expanded into something else. The expanded form uses the when macro, which also will be expanded. Fully expanding a source form is called code walking.",
"title": "Macros"
},
{
"paragraph_id": 78,
"text": "In the fully expanded (walked) form, the when form is replaced by the primitive if:",
"title": "Macros"
},
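A sketch of the fully walked form; (when test forms) conventionally expands to (if test (progn forms) nil):

```lisp
(TAGBODY
 #:START1
  (IF (= (RANDOM 10) 0)
      (GO #:END2)
      NIL)
  (PROGN (WRITE-LINE "Hello"))
  (GO #:START1)
 #:END2)
```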
{
"paragraph_id": 79,
"text": "All macros must be expanded before the source code containing them can be evaluated or compiled normally. Macros can be considered functions that accept and return S-expressions – similar to abstract syntax trees, but not limited to those. These functions are invoked before the evaluator or compiler to produce the final source code. Macros are written in normal Common Lisp, and may use any Common Lisp (or third-party) operator available.",
"title": "Macros"
},
{
"paragraph_id": 80,
"text": "Common Lisp macros are capable of what is commonly called variable capture, where symbols in the macro-expansion body coincide with those in the calling context, allowing the programmer to create macros wherein various symbols have special meaning. The term variable capture is somewhat misleading, because all namespaces are vulnerable to unwanted capture, including the operator and function namespace, the tagbody label namespace, catch tag, condition handler and restart namespaces.",
"title": "Macros"
},
{
"paragraph_id": 81,
"text": "Variable capture can introduce software defects. This happens in one of the following two ways:",
"title": "Macros"
},
{
"paragraph_id": 82,
"text": "The Scheme dialect of Lisp provides a macro-writing system which provides the referential transparency that eliminates both types of capture problem. This type of macro system is sometimes called \"hygienic\", in particular by its proponents (who regard macro systems which do not automatically solve this problem as unhygienic).",
"title": "Macros"
},
{
"paragraph_id": 83,
"text": "In Common Lisp, macro hygiene is ensured one of two different ways.",
"title": "Macros"
},
{
"paragraph_id": 84,
"text": "One approach is to use gensyms: guaranteed-unique symbols which can be used in a macro-expansion without threat of capture. The use of gensyms in a macro definition is a manual chore, but macros can be written which simplify the instantiation and use of gensyms. Gensyms solve type 2 capture easily, but they are not applicable to type 1 capture in the same way, because the macro expansion cannot rename the interfering symbols in the surrounding code which capture its references. Gensyms could be used to provide stable aliases for the global symbols which the macro expansion needs. The macro expansion would use these secret aliases rather than the well-known names, so redefinition of the well-known names would have no ill effect on the macro.",
"title": "Macros"
},
{
"paragraph_id": 85,
"text": "Another approach is to use packages. A macro defined in its own package can simply use internal symbols in that package in its expansion. The use of packages deals with type 1 and type 2 capture.",
"title": "Macros"
},
{
"paragraph_id": 86,
"text": "However, packages don't solve the type 1 capture of references to standard Common Lisp functions and operators. The reason is that the use of packages to solve capture problems revolves around the use of private symbols (symbols in one package, which are not imported into, or otherwise made visible in other packages). Whereas the Common Lisp library symbols are external, and frequently imported into or made visible in user-defined packages.",
"title": "Macros"
},
{
"paragraph_id": 87,
"text": "The following is an example of unwanted capture in the operator namespace, occurring in the expansion of a macro:",
"title": "Macros"
},
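The stripped example presumably paired an until whose expansion uses the standard do macro with a macrolet that locally rebinds do; a sketch:

```lisp
;; UNTIL's expansion makes use of the standard DO macro:
(defmacro until (expression &body body)
  `(do () (,expression) ,@body))

;; A lexical operator binding for DO captures that reference
;; (and, as noted below, redefining DO is non-conforming):
(macrolet ((do (&rest forms)
             (declare (ignore forms))
             ''something-else))
  (until (= (random 10) 0)
    (write-line "Hello")))
```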
{
"paragraph_id": 88,
"text": "The until macro will expand into a form which calls do which is intended to refer to the standard Common Lisp macro do. However, in this context, do may have a completely different meaning, so until may not work properly.",
"title": "Macros"
},
{
"paragraph_id": 89,
"text": "Common Lisp solves the problem of the shadowing of standard operators and functions by forbidding their redefinition. Because it redefines the standard operator do, the preceding is actually a fragment of non-conforming Common Lisp, which allows implementations to diagnose and reject it.",
"title": "Macros"
},
{
"paragraph_id": 90,
"text": "The condition system is responsible for exception handling in Common Lisp. It provides conditions, handlers and restarts. Conditions are objects describing an exceptional situation (for example an error). If a condition is signaled, the Common Lisp system searches for a handler for this condition type and calls the handler. The handler can now search for restarts and use one of these restarts to automatically repair the current problem, using information such as the condition type and any relevant information provided as part of the condition object, and call the appropriate restart function.",
"title": "Condition system"
},
{
"paragraph_id": 91,
"text": "These restarts, if unhandled by code, can be presented to users (as part of a user interface, that of a debugger for example), so that the user can select and invoke one of the available restarts. Since the condition handler is called in the context of the error (without unwinding the stack), full error recovery is possible in many cases, where other exception handling systems would have already terminated the current routine. The debugger itself can also be customized or replaced using the *debugger-hook* dynamic variable. Code found within unwind-protect forms such as finalizers will also be executed as appropriate despite the exception.",
"title": "Condition system"
},
{
"paragraph_id": 92,
"text": "In the following example (using Symbolics Genera) the user tries to open a file in a Lisp function test called from the Read-Eval-Print-LOOP (REPL), when the file does not exist. The Lisp system presents four restarts. The user selects the Retry OPEN using a different pathname restart and enters a different pathname (lispm-init.lisp instead of lispm-int.lisp). The user code does not contain any error handling code. The whole error handling and restart code is provided by the Lisp system, which can handle and repair the error without terminating the user code.",
"title": "Condition system"
},
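The Genera transcript described above is not reproduced here. Instead, a minimal portable sketch of conditions, handlers and restarts; the function parse-entry and the restart use-zero are illustrative inventions:

```lisp
(defun parse-entry (string)
  (restart-case (parse-integer string)   ; signals PARSE-ERROR on junk
    (use-zero ()
      :report "Return 0 instead."
      0)))

(handler-bind ((parse-error
                 (lambda (condition)
                   (declare (ignore condition))
                   ;; Repair the problem by invoking a restart; the stack
                   ;; below PARSE-ENTRY is not unwound first.
                   (invoke-restart 'use-zero))))
  (parse-entry "seven"))
;; => 0
```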
{
"paragraph_id": 93,
"text": "Common Lisp includes a toolkit for object-oriented programming, the Common Lisp Object System or CLOS. Peter Norvig explains how many Design Patterns are simpler to implement in a dynamic language with the features of CLOS (Multiple Inheritance, Mixins, Multimethods, Metaclasses, Method combinations, etc.). Several extensions to Common Lisp for object-oriented programming have been proposed to be included into the ANSI Common Lisp standard, but eventually CLOS was adopted as the standard object-system for Common Lisp. CLOS is a dynamic object system with multiple dispatch and multiple inheritance, and differs radically from the OOP facilities found in static languages such as C++ or Java. As a dynamic object system, CLOS allows changes at runtime to generic functions and classes. Methods can be added and removed, classes can be added and redefined, objects can be updated for class changes and the class of objects can be changed.",
"title": "Common Lisp Object System (CLOS)"
},
{
"paragraph_id": 94,
"text": "CLOS has been integrated into ANSI Common Lisp. Generic functions can be used like normal functions and are a first-class data type. Every CLOS class is integrated into the Common Lisp type system. Many Common Lisp types have a corresponding class. There is more potential use of CLOS for Common Lisp. The specification does not say whether conditions are implemented with CLOS. Pathnames and streams could be implemented with CLOS. These further usage possibilities of CLOS for ANSI Common Lisp are not part of the standard. Actual Common Lisp implementations use CLOS for pathnames, streams, input–output, conditions, the implementation of CLOS itself and more.",
"title": "Common Lisp Object System (CLOS)"
},
{
"paragraph_id": 95,
"text": "A Lisp interpreter directly executes Lisp source code provided as Lisp objects (lists, symbols, numbers, ...) read from s-expressions. A Lisp compiler generates bytecode or machine code from Lisp source code. Common Lisp allows both individual Lisp functions to be compiled in memory and the compilation of whole files to externally stored compiled code (fasl files).",
"title": "Compiler and interpreter"
},
{
"paragraph_id": 96,
"text": "Several implementations of earlier Lisp dialects provided both an interpreter and a compiler. Unfortunately often the semantics were different. These earlier Lisps implemented lexical scoping in the compiler and dynamic scoping in the interpreter. Common Lisp requires that both the interpreter and compiler use lexical scoping by default. The Common Lisp standard describes both the semantics of the interpreter and a compiler. The compiler can be called using the function compile for individual functions and using the function compile-file for files. Common Lisp allows type declarations and provides ways to influence the compiler code generation policy. For the latter various optimization qualities can be given values between 0 (not important) and 3 (most important): speed, space, safety, debug and compilation-speed.",
"title": "Compiler and interpreter"
},
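A sketch of type declarations and optimization qualities as just described:

```lisp
(defun sum-doubles (a)
  (declare (type (simple-array double-float (*)) a)
           (optimize (speed 3) (safety 1) (debug 0)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (incf sum (aref a i)))))
```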
{
"paragraph_id": 97,
"text": "There is also a function to evaluate Lisp code: eval. eval takes code as pre-parsed s-expressions and not, like in some other languages, as text strings. This way code can be constructed with the usual Lisp functions for constructing lists and symbols and then this code can be evaluated with the function eval. Several Common Lisp implementations (like Clozure CL and SBCL) are implementing eval using their compiler. This way code is compiled, even though it is evaluated using the function eval.",
"title": "Compiler and interpreter"
},
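A minimal illustration of constructing code as a list and evaluating it:

```lisp
(eval (list '+ 1 2 3))   ; => 6
```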
{
"paragraph_id": 98,
"text": "The file compiler is invoked using the function compile-file. The generated file with compiled code is called a fasl (from fast load) file. These fasl files and also source code files can be loaded with the function load into a running Common Lisp system. Depending on the implementation, the file compiler generates byte-code (for example for the Java Virtual Machine), C language code (which then is compiled with a C compiler) or, directly, native code.",
"title": "Compiler and interpreter"
},
{
"paragraph_id": 99,
"text": "Common Lisp implementations can be used interactively, even though the code gets fully compiled. The idea of an Interpreted language thus does not apply for interactive Common Lisp.",
"title": "Compiler and interpreter"
},
{
"paragraph_id": 100,
"text": "The language makes a distinction between read-time, compile-time, load-time, and run-time, and allows user code to also make this distinction to perform the wanted type of processing at the wanted step.",
"title": "Compiler and interpreter"
},
{
"paragraph_id": 101,
"text": "Some special operators are provided to especially suit interactive development; for instance, defvar will only assign a value to its provided variable if it wasn't already bound, while defparameter will always perform the assignment. This distinction is useful when interactively evaluating, compiling and loading code in a live image.",
"title": "Compiler and interpreter"
},
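A short illustration of the difference:

```lisp
(defvar *cache* (make-hash-table))   ; assigned only if *CACHE* is unbound;
                                     ; re-loading the file keeps its contents
(defparameter *debug-level* 2)       ; re-assigned on every load
```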
{
"paragraph_id": 102,
"text": "Some features are also provided to help writing compilers and interpreters. Symbols consist of first-level objects and are directly manipulable by user code. The progv special operator allows to create lexical bindings programmatically, while packages are also manipulable. The Lisp compiler is available at runtime to compile files or individual functions. These make it easy to use Lisp as an intermediate compiler or interpreter for another language.",
"title": "Compiler and interpreter"
},
{
"paragraph_id": 103,
"text": "The following program calculates the smallest number of people in a room for whom the probability of unique birthdays is less than 50% (the birthday paradox, where for 1 person the probability is obviously 100%, for 2 it is 364/365, etc.). The answer is 23.",
"title": "Code examples"
},
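The program itself is absent from this extract; a sketch consistent with the description (and with the constant-naming convention noted in the next paragraph):

```lisp
(defconstant +year-size+ 365)

;; PROBABILITY is the probability that NUMBER-OF-PEOPLE people
;; all have distinct birthdays.
(defun birthday-paradox (probability number-of-people)
  (let ((new-probability (* (/ (- +year-size+ number-of-people)
                               +year-size+)
                            probability)))
    (if (< new-probability 0.5)
        (1+ number-of-people)
        (birthday-paradox new-probability (1+ number-of-people)))))
```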
{
"paragraph_id": 104,
"text": "In Common Lisp, by convention, constants are enclosed with + characters.",
"title": "Code examples"
},
{
"paragraph_id": 105,
"text": "Calling the example function using the REPL (Read Eval Print Loop):",
"title": "Code examples"
},
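A sketch of the REPL interaction (the CL-USER prompt style varies by implementation):

```lisp
CL-USER > (birthday-paradox 1.0 1)
23
```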
{
"paragraph_id": 106,
"text": "We define a class person and a method for displaying the name and age of a person. Next we define a group of persons as a list of person objects. Then we iterate over the sorted list.",
"title": "Code examples"
},
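The class and method definitions are absent here; a sketch, with the names and ages invented for illustration (note the function-namespace references #'> and #'person-age, which the "Comparison with other Lisps" section later refers to):

```lisp
(defclass person ()
  ((name :initarg :name :reader person-name)
   (age  :initarg :age  :reader person-age)))

(defmethod display ((object person) stream)
  (format stream "~a (~a)~%" (person-name object) (person-age object)))

;; a group of persons as a list of PERSON objects
(defparameter *group*
  (list (make-instance 'person :name "Bob"   :age 33)
        (make-instance 'person :name "Chris" :age 16)
        (make-instance 'person :name "Ash"   :age 23)))

;; iterate over the group sorted by descending age
;; (SORT is destructive, hence the COPY-LIST)
(dolist (person (sort (copy-list *group*) #'> :key #'person-age))
  (display person *standard-output*))
```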
{
"paragraph_id": 107,
"text": "It prints the three names with descending age.",
"title": "Code examples"
},
{
"paragraph_id": 108,
"text": "Use of the LOOP macro is demonstrated:",
"title": "Code examples"
},
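The demonstration is missing from this extract; a sketch of a LOOP-based integer power function:

```lisp
(defun power (x n)
  (loop with result = 1
        repeat n
        do (setf result (* result x))
        finally (return result)))
```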
{
"paragraph_id": 109,
"text": "Example use:",
"title": "Code examples"
},
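Presumably something like:

```lisp
CL-USER > (power 2 10)
1024
```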
{
"paragraph_id": 110,
"text": "Compare with the built in exponentiation:",
"title": "Code examples"
},
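The built-in EXPT gives the same result:

```lisp
CL-USER > (expt 2 10)
1024
```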
{
"paragraph_id": 111,
"text": "WITH-OPEN-FILE is a macro that opens a file and provides a stream. When the form is returning, the file is automatically closed. FUNCALL calls a function object. The LOOP collects all lines that match the predicate.",
"title": "Code examples"
},
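The definition is absent here; a sketch consistent with the description (note that the LOOP collects it, the predicate's return value, which the next paragraph relies on):

```lisp
(defun list-matching-lines (file predicate)
  "Return a list of the values of PREDICATE for the lines of FILE
on which PREDICATE returns true."
  (with-open-file (stream file)
    (loop for line = (read-line stream nil nil)
          while line
          when (funcall predicate line)
            collect it)))
```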
{
"paragraph_id": 112,
"text": "The function AVAILABLE-SHELLS calls the above function LIST-MATCHING-LINES with a pathname and an anonymous function as the predicate. The predicate returns the pathname of a shell or NIL (if the string is not the filename of a shell).",
"title": "Code examples"
},
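A sketch; the default pathname #p"/etc/shells" and the whitespace-trimming details are assumptions:

```lisp
(defun available-shells (&optional (file #p"/etc/shells"))
  (list-matching-lines
   file
   (lambda (line)
     (and (plusp (length line))
          (char= (char line 0) #\/)           ; skip comments and blanks
          (pathname
           (string-right-trim '(#\space #\tab) line))))))
```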
{
"paragraph_id": 113,
"text": "Example results (on Mac OS X 10.6):",
"title": "Code examples"
},
{
"paragraph_id": 114,
"text": "Common Lisp is most frequently compared with, and contrasted to, Scheme—if only because they are the two most popular Lisp dialects. Scheme predates CL, and comes not only from the same Lisp tradition but from some of the same engineers—Guy Steele, with whom Gerald Jay Sussman designed Scheme, chaired the standards committee for Common Lisp.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 115,
"text": "Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP which are extension languages embedded in particular products (GNU Emacs and AutoCAD, respectively). Unlike many earlier Lisps, Common Lisp (like Scheme) uses lexical variable scope by default for both interpreted and compiled code.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 116,
"text": "Most of the Lisp systems whose designs contributed to Common Lisp—such as ZetaLisp and Franz Lisp—used dynamically scoped variables in their interpreters and lexically scoped variables in their compilers. Scheme introduced the sole use of lexically scoped variables to Lisp; an inspiration from ALGOL 68. CL supports dynamically scoped variables as well, but they must be explicitly declared as \"special\". There are no differences in scoping between ANSI CL interpreters and compilers.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 117,
"text": "Common Lisp is sometimes termed a Lisp-2 and Scheme a Lisp-1, referring to CL's use of separate namespaces for functions and variables. (In fact, CL has many namespaces, such as those for go tags, block names, and loop keywords). There is a long-standing controversy between CL and Scheme advocates over the tradeoffs involved in multiple namespaces. In Scheme, it is (broadly) necessary to avoid giving variables names which clash with functions; Scheme functions frequently have arguments named lis, lst, or lyst so as not to conflict with the system function list. However, in CL it is necessary to explicitly refer to the function namespace when passing a function as an argument—which is also a common occurrence, as in the sort example above.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 118,
"text": "CL also differs from Scheme in its handling of boolean values. Scheme uses the special values #t and #f to represent truth and falsity. CL follows the older Lisp convention of using the symbols T and NIL, with NIL standing also for the empty list. In CL, any non-NIL value is treated as true by conditionals, such as if, whereas in Scheme all non-#f values are treated as true. These conventions allow some operators in both languages to serve both as predicates (answering a boolean-valued question) and as returning a useful value for further computation, but in Scheme the value '() which is equivalent to NIL in Common Lisp evaluates to true in a boolean expression.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 119,
"text": "Lastly, the Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive. Nonetheless, common CL coding style does not favor the ubiquitous use of recursion that Scheme style prefers—what a Scheme programmer would express with tail recursion, a CL user would usually express with an iterative expression in do, dolist, loop, or (more recently) with the iterate package.",
"title": "Comparison with other Lisps"
},
{
"paragraph_id": 120,
"text": "See the Category Common Lisp implementations.",
"title": "Implementations"
},
{
"paragraph_id": 121,
"text": "Common Lisp is defined by a specification (like Ada and C) rather than by one implementation (like Perl). There are many implementations, and the standard details areas in which they may validly differ.",
"title": "Implementations"
},
{
"paragraph_id": 122,
"text": "In addition, implementations tend to come with extensions, which provide functionality not covered in the standard:",
"title": "Implementations"
},
{
"paragraph_id": 123,
"text": "Free and open-source software libraries have been created to support extensions to Common Lisp in a portable way, and are most notably found in the repositories of the Common-Lisp.net and CLOCC (Common Lisp Open Code Collection) projects.",
"title": "Implementations"
},
{
"paragraph_id": 124,
"text": "Common Lisp implementations may use any mix of native code compilation, byte code compilation or interpretation. Common Lisp has been designed to support incremental compilers, file compilers and block compilers. Standard declarations to optimize compilation (such as function inlining or type specialization) are proposed in the language specification. Most Common Lisp implementations compile source code to native machine code. Some implementations can create (optimized) stand-alone applications. Others compile to interpreted bytecode, which is less efficient than native code, but eases binary-code portability. Some compilers compile Common Lisp code to C code. The misconception that Lisp is a purely interpreted language is most likely because Lisp environments provide an interactive prompt and that code is compiled one-by-one, in an incremental way. With Common Lisp incremental compilation is widely used.",
"title": "Implementations"
},
{
"paragraph_id": 125,
"text": "Some Unix-based implementations (CLISP, SBCL) can be used as a scripting language; that is, invoked by the system transparently in the way that a Perl or Unix shell interpreter is.",
"title": "Implementations"
},
{
"paragraph_id": 126,
"text": "Common Lisp is used to develop research applications (often in Artificial Intelligence), for rapid development of prototypes or for deployed applications.",
"title": "Applications"
},
{
"paragraph_id": 127,
"text": "Common Lisp is used in many commercial applications, including the Yahoo! Store web-commerce site, which originally involved Paul Graham and was later rewritten in C++ and Perl. Other notable examples include:",
"title": "Applications"
},
{
"paragraph_id": 128,
"text": "There also exist open-source applications written in Common Lisp, such as:",
"title": "Applications"
},
{
"paragraph_id": 129,
"text": "A chronological list of books published (or about to be published) about Common Lisp (the language) or about programming with Common Lisp (especially AI programming).",
"title": "Bibliography"
}
]
| Common Lisp (CL) is a dialect of the Lisp programming language, published in American National Standards Institute (ANSI) standard document ANSI INCITS 226-1994 (S20018). The Common Lisp HyperSpec, a hyperlinked HTML version, has been derived from the ANSI Common Lisp standard. The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to MacLisp: Lisp Machine Lisp, Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardise, and extend the features of these MacLisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products.
Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application. It also supports optional type annotation and casting, which can be added as necessary at the later profiling and optimization stages, to permit the compiler to generate more efficient code. For instance, fixnum can hold an unboxed integer in a range supported by the hardware and implementation, permitting more efficient arithmetic than on big integers or arbitrary precision types. Similarly, the compiler can be told on a per-module or per-function basis which type of safety level is wanted, using optimize declarations. Common Lisp includes CLOS, an object system that supports multimethods and method combinations. It is often implemented with a Metaobject Protocol. Common Lisp is extensible through standard features such as Lisp macros and reader macros. Common Lisp provides partial backwards compatibility with Maclisp and John McCarthy's original Lisp. This allows older Lisp software to be ported to Common Lisp. | 2001-10-26T18:06:01Z | 2023-12-18T20:41:06Z | [
"Template:Clear",
"Template:Cite journal",
"Template:ISBN",
"Template:Citation needed",
"Template:More citations needed section",
"Template:Category see also",
"Template:Cite book",
"Template:Wikibooks",
"Template:Common Lisp",
"Template:Authority control",
"Template:Infobox programming language",
"Template:Main",
"Template:Reflist",
"Template:Refend",
"Template:Portal",
"Template:Cite web",
"Template:Webarchive",
"Template:Refbegin",
"Template:Lisp programming language",
"Template:Short description",
"Template:Use mdy dates"
]
| https://en.wikipedia.org/wiki/Common_Lisp |
6,069 | Color code | A color code is a system for displaying information by using different colors.
The earliest examples of color codes in use are for long-distance communication by use of flags, as in semaphore communication. The United Kingdom adopted a color code scheme for such communication wherein red signified danger and white signified safety, with other colors having similar assignments of meaning.
As chemistry and other technologies advanced, it became expedient to use coloration as a signal for telling apart things that would otherwise be confusingly similar, such as wiring in electrical and electronic devices, and pharmaceutical pills.
The use of color codes has been extended to abstractions, such as the Homeland Security Advisory System color code in the United States. Similarly, hospital emergency codes often incorporate colors (such as the widely used "Code Blue" indicating a cardiac arrest), although they may also include numbers, and may not conform to a uniform standard.
Color codes do present some potential problems. On forms and signage, the use of color can distract from black and white text. They are often difficult for color blind and blind people to interpret, and even for those with normal color vision, use of many colors to code many variables can lead to use of confusingly similar colors.
Systems incorporating color-coding include: | [
{
"paragraph_id": 0,
"text": "A color code is a system for displaying information by using different colors.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The earliest examples of color codes in use are for long-distance communication by use of flags, as in semaphore communication. The United Kingdom adopted a color code scheme for such communication wherein red signified danger and white signified safety, with other colors having similar assignments of meaning.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As chemistry and other technologies advanced, it became expedient to use coloration as a signal for telling apart things that would otherwise be confusingly similar, such as wiring in electrical and electronic devices, and pharmaceutical pills.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The use of color codes has been extended to abstractions, such as the Homeland Security Advisory System color code in the United States. Similarly, hospital emergency codes often incorporate colors (such as the widely used \"Code Blue\" indicating a cardiac arrest), although they may also include numbers, and may not conform to a uniform standard.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Color codes do present some potential problems. On forms and signage, the use of color can distract from black and white text. They are often difficult for color blind and blind people to interpret, and even for those with normal color vision, use of many colors to code many variables can lead to use of confusingly similar colors.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Systems incorporating color-coding include:",
"title": "Examples"
}
]
| A color code is a system for displaying information by using different colors. The earliest examples of color codes in use are for long-distance communication by use of flags, as in semaphore communication. The United Kingdom adopted a color code scheme for such communication wherein red signified danger and white signified safety, with other colors having similar assignments of meaning. As chemistry and other technologies advanced, it became expedient to use coloration as a signal for telling apart things that would otherwise be confusingly similar, such as wiring in electrical and electronic devices, and pharmaceutical pills. The use of color codes has been extended to abstractions, such as the Homeland Security Advisory System color code in the United States. Similarly, hospital emergency codes often incorporate colors, although they may also include numbers, and may not conform to a uniform standard. Color codes do present some potential problems. On forms and signage, the use of color can distract from black and white text. They are often difficult for color blind and blind people to interpret, and even for those with normal color vision, use of many colors to code many variables can lead to use of confusingly similar colors. | 2001-08-12T13:06:11Z | 2023-12-25T21:18:42Z | [
"Template:For-multi",
"Template:Reflist",
"Template:Authority control",
"Template:Use American English",
"Template:Excessive examples",
"Template:Cite web",
"Template:Commons-inline",
"Template:Color topics",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Color_code |
6,080 | CGI | CGI may refer to: | [
{
"paragraph_id": 0,
"text": "CGI may refer to:",
"title": ""
}
]
| CGI may refer to: | 2023-05-10T21:06:02Z | [
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right",
"Template:Lookfrom",
"Template:Intitle"
]
| https://en.wikipedia.org/wiki/CGI |
|
6,082 | Cortex | Cortex or cortical may refer to: | [
{
"paragraph_id": 0,
"text": "Cortex or cortical may refer to:",
"title": ""
}
]
| Cortex or cortical may refer to: | 2002-02-21T19:30:25Z | 2023-12-21T05:36:03Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:In title",
"Template:Look from",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Cortex |
6,084 | Collection | Collection or Collections may refer to:
Collection may also refer to: | [
{
"paragraph_id": 0,
"text": "Collection or Collections may refer to:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Collection may also refer to:",
"title": ""
}
]
| Collection or Collections may refer to: Cash collection, the function of an accounts receivable department
Collection (church), money donated by the congregation during a church service
Collection agency, agency to collect cash
Collections management (museum)
Collection (museum), objects in a particular field forms the core basis for the museum
Fonds in archives
Private collection, sometimes just called "collection"
Collection, a beginning-of-term exam or Principal's Collections
Collection (horse), a horse carrying more weight on his hindquarters than his forehand
Collection (racehorse), an Irish-bred, Hong Kong based Thoroughbred racehorse
Collection (publishing), a gathering of books under the same title at the same publisher
Scientific collection, any systematic collection of objects for scientific study Collection may also refer to: | 2023-07-05T18:51:07Z | [
"Template:Div col end",
"Template:Disambiguation",
"Template:For",
"Template:Technical reasons",
"Template:Wiktionary",
"Template:TOC right",
"Template:Col div",
"Template:Col div end"
]
| https://en.wikipedia.org/wiki/Collection |
|
6,085 | Cauchy sequence | In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences.
It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers,

$a_n = \sqrt{n},$

the consecutive terms become arbitrarily close to each other: their differences

$a_{n+1} - a_n = \sqrt{n+1} - \sqrt{n} = \frac{1}{\sqrt{n+1} + \sqrt{n}} < \frac{1}{2\sqrt{n}}$

tend to zero as the index $n$ grows. However, with growing values of $n$, the terms $a_n$ become arbitrarily large. So, for any index $n$ and distance $d$, there exists an index $m$ big enough such that $a_m - a_n > d$ (in fact, any $m > (\sqrt{n} + d)^2$ suffices). As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy.
The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination.
Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets.
A sequence

$x_1, x_2, x_3, \ldots$

of real numbers is called a Cauchy sequence if for every positive real number $\varepsilon$, there is a positive integer $N$ such that for all natural numbers $m, n > N$,

$|x_m - x_n| < \varepsilon,$

where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring $x_m - x_n$ to be infinitesimal for every pair of infinite $m$, $n$.
For any real number $r$, the sequence of truncated decimal expansions of $r$ forms a Cauchy sequence. For example, when $r = \pi$, this sequence is (3, 3.1, 3.14, 3.141, ...). The $m$th and $n$th terms differ by at most $10^{1-m}$ when $m < n$, and as $m$ grows this becomes smaller than any fixed positive number $\varepsilon$.
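A worked instance of the bound just stated (an illustrative check, not part of the original article):

```latex
% For r = \pi, m = 3 and n = 5:  x_3 = 3.14 and x_5 = 3.1415, so
\left|x_5 - x_3\right| = 0.0015 \le 10^{1-3} = 0.01 .
```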
If $(x_1, x_2, x_3, \ldots)$ is a sequence in the set $X$, then a modulus of Cauchy convergence for the sequence is a function $\alpha$ from the set of natural numbers to itself, such that for all natural numbers $k$ and natural numbers $m, n > \alpha(k)$, $|x_m - x_n| < 1/k$.
Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let $\alpha(k)$ be the smallest possible $N$ in the definition of Cauchy sequence, taking $\varepsilon$ to be $1/k$). The existence of a modulus also follows from the principle of countable choice. Regular Cauchy sequences are sequences with a given modulus of Cauchy convergence (usually $\alpha(k) = k$ or $\alpha(k) = 2^k$). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice.
Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences were used by Bishop (2012) and by Bridges (1997) in constructive mathematics textbooks.
Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space $X$. To do so, the absolute value $|x_m - x_n|$ is replaced by the distance $d(x_m, x_n)$ (where $d$ denotes a metric) between $x_m$ and $x_n$.
Formally, given a metric space $(X, d)$, a sequence

$x_1, x_2, x_3, \ldots$

is Cauchy, if for every positive real number $\varepsilon > 0$ there is a positive integer $N$ such that for all positive integers $m, n > N$, the distance

$d(x_m, x_n) < \varepsilon.$
Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in X. Nonetheless, such a limit does not always exist within X: the property of a space that every Cauchy sequence converges in the space is called completeness, and is detailed below.
A metric space (X, d) in which every Cauchy sequence converges to an element of X is called complete.
The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another—is a real number.
A rather different type of example is afforded by a metric space X which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.
The rational numbers $\mathbb{Q}$ are not complete (for the usual distance): There are sequences of rationals that converge (in $\mathbb{R}$) to irrational numbers; these are Cauchy sequences having no limit in $\mathbb{Q}$. In fact, if a real number $x$ is irrational, then the sequence $(x_n)$, whose $n$-th term is the truncation to $n$ decimal places of the decimal expansion of $x$, gives a Cauchy sequence of rational numbers with irrational limit $x$. Irrational numbers certainly exist in $\mathbb{R}$; for example, the sequence defined by $x_0 = 1$ and $x_{n+1} = \frac{x_n + 2/x_n}{2}$ (the Babylonian method of computing square roots) consists of rational numbers, but converges to the irrational number $\sqrt{2}$.
The open interval $X = (0, 2)$ in the set of real numbers with an ordinary distance in $\mathbb{R}$ is not a complete space: there is a sequence $x_n = 1/n$ in it, which is Cauchy (for an arbitrarily small distance bound $d > 0$, all terms $x_n$ with $n > 1/d$ fit in the $(0, d)$ interval); however, it does not converge in $X$: its 'limit', the number 0, does not belong to the space $X$.
These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of constructing the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.
One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers (or, more generally, of elements of any complete normed linear space, or Banach space). Such a series $\sum_{n=1}^{\infty} x_n$ is considered to be convergent if and only if the sequence of partial sums $(s_m)$ is convergent, where $s_m = \sum_{n=1}^{m} x_n$. It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers $p > q$,

$s_p - s_q = \sum_{n=q+1}^{p} x_n.$
If $f : M \to N$ is a uniformly continuous map between the metric spaces $M$ and $N$ and $(x_n)$ is a Cauchy sequence in $M$, then $(f(x_n))$ is a Cauchy sequence in $N$. If $(x_n)$ and $(y_n)$ are two Cauchy sequences in the rational, real or complex numbers, then the sum $(x_n + y_n)$ and the product $(x_n y_n)$ are also Cauchy sequences.
There is also a concept of Cauchy sequence for a topological vector space $X$: Pick a local base $B$ for $X$ about 0; then $(x_k)$ is a Cauchy sequence if for each member $V \in B$, there is some number $N$ such that whenever $m, n > N$, $x_n - x_m$ is an element of $V$. If the topology of $X$ is compatible with a translation-invariant metric $d$, the two definitions agree.
Since the topological vector space definition of Cauchy sequence requires only that there be a continuous "subtraction" operation, it can just as well be stated in the context of a topological group: A sequence $(x_k)$ in a topological group $G$ is a Cauchy sequence if for every open neighbourhood $U$ of the identity in $G$ there exists some number $N$ such that whenever $m, n > N$ it follows that $x_n x_m^{-1} \in U$. As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in $G$.
As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in $G$ that $(x_k)$ and $(y_k)$ are equivalent if for every open neighbourhood $U$ of the identity in $G$ there exists some number $N$ such that whenever $m, n > N$ it follows that $x_n y_m^{-1} \in U$. This relation is an equivalence relation: It is reflexive since the sequences are Cauchy sequences. It is symmetric since $y_n x_m^{-1} = (x_m y_n^{-1})^{-1} \in U^{-1}$, which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since $x_n z_l^{-1} = x_n y_m^{-1} y_m z_l^{-1} \in U' U''$, where $U'$ and $U''$ are open neighbourhoods of the identity such that $U' U'' \subseteq U$; such pairs exist by the continuity of the group operation.
There is also a concept of Cauchy sequence in a group $G$: Let $H = (H_r)$ be a decreasing sequence of normal subgroups of $G$ of finite index. Then a sequence $(x_n)$ in $G$ is said to be Cauchy (with respect to $H$) if and only if for any $r$ there is $N$ such that for all $m, n > N$, $x_n x_m^{-1} \in H_r$.
Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on $G$, namely that for which $H$ is a local base.
The set $C$ of such Cauchy sequences forms a group (for the componentwise product), and the set $C_0$ of null sequences (sequences such that $\forall r, \exists N, \forall n > N, x_n \in H_r$) is a normal subgroup of $C$. The factor group $C / C_0$ is called the completion of $G$ with respect to $H$.
One can then show that this completion is isomorphic to the inverse limit of the sequence $(G / H_r)$.
An example of this construction familiar in number theory and algebraic geometry is the construction of the $p$-adic completion of the integers with respect to a prime $p$. In this case, $G$ is the integers under addition, and $H_r$ is the additive subgroup consisting of integer multiples of $p^r$.
If $H$ is a cofinal sequence (that is, any normal subgroup of finite index contains some $H_r$), then this completion is canonical in the sense that it is isomorphic to the inverse limit of $(G/H)_H$, where $H$ varies over all normal subgroups of finite index. For further details, see Ch. I.10 in Lang's "Algebra".
A real sequence $\langle u_n : n \in \mathbb{N} \rangle$ has a natural hyperreal extension, defined for hypernatural values $H$ of the index $n$ in addition to the usual natural $n$. The sequence is Cauchy if and only if for every infinite $H$ and $K$, the values $u_H$ and $u_K$ are infinitely close, or adequal, that is,

$\mathrm{st}(u_H - u_K) = 0,$

where "st" is the standard part function.
Krause (2020) introduced a notion of Cauchy completion of a category. Applied to $\mathbb{Q}$ (the category whose objects are rational numbers, and there is a morphism from $x$ to $y$ if and only if $x \leq y$), this Cauchy completion yields $\mathbb{R} \cup \{\infty\}$ (again interpreted as a category using its natural ordering).
{
"paragraph_id": 0,
"text": "In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences.",
"title": ""
},
{
"paragraph_id": 1,
"text": "It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers:",
"title": ""
},
{
"paragraph_id": 2,
"text": "the consecutive terms become arbitrarily close to each other – their differences",
"title": ""
},
{
"paragraph_id": 3,
"text": "tend to zero as the index n grows. However, with growing values of n, the terms a n {\\displaystyle a_{n}} become arbitrarily large. So, for any index n and distance d, there exists an index m big enough such that a m − a n > d . {\\displaystyle a_{m}-a_{n}>d.} As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets.",
"title": ""
},
{
"paragraph_id": 6,
"text": "A sequence",
"title": "In real numbers"
},
{
"paragraph_id": 7,
"text": "of real numbers is called a Cauchy sequence if for every positive real number ε , {\\displaystyle \\varepsilon ,} there is a positive integer N such that for all natural numbers m , n > N , {\\displaystyle m,n>N,}",
"title": "In real numbers"
},
{
"paragraph_id": 8,
"text": "where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring x m − x n {\\displaystyle x_{m}-x_{n}} to be infinitesimal for every pair of infinite m, n.",
"title": "In real numbers"
},
{
"paragraph_id": 9,
"text": "For any real number r, the sequence of truncated decimal expansions of r forms a Cauchy sequence. For example, when r = π , {\\displaystyle r=\\pi ,} this sequence is (3, 3.1, 3.14, 3.141, ...). The mth and nth terms differ by at most 10 1 − m {\\displaystyle 10^{1-m}} when m < n, and as m grows this becomes smaller than any fixed positive number ε . {\\displaystyle \\varepsilon .}",
"title": "In real numbers"
},
{
"paragraph_id": 10,
"text": "If ( x 1 , x 2 , x 3 , . . . ) {\\displaystyle (x_{1},x_{2},x_{3},...)} is a sequence in the set X , {\\displaystyle X,} then a modulus of Cauchy convergence for the sequence is a function α {\\displaystyle \\alpha } from the set of natural numbers to itself, such that for all natural numbers k {\\displaystyle k} and natural numbers m , n > α ( k ) , {\\displaystyle m,n>\\alpha (k),} | x m − x n | < 1 / k . {\\displaystyle |x_{m}-x_{n}|<1/k.}",
"title": "In real numbers"
},
{
"paragraph_id": 11,
"text": "Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let α ( k ) {\\displaystyle \\alpha (k)} be the smallest possible N {\\displaystyle N} in the definition of Cauchy sequence, taking ε {\\displaystyle \\varepsilon } to be 1 / k {\\displaystyle 1/k} ). The existence of a modulus also follows from the principle of countable choice. Regular Cauchy sequences are sequences with a given modulus of Cauchy convergence (usually α ( k ) = k {\\displaystyle \\alpha (k)=k} or α ( k ) = 2 k {\\displaystyle \\alpha (k)=2^{k}} ). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice.",
"title": "In real numbers"
},
{
"paragraph_id": 12,
"text": "Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences were used by Bishop (2012) and by Bridges (1997) in constructive mathematics textbooks.",
"title": "In real numbers"
},
{
"paragraph_id": 13,
"text": "Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space X. To do so, the absolute value | x m − x n | {\\displaystyle \\left|x_{m}-x_{n}\\right|} is replaced by the distance d ( x m , x n ) {\\displaystyle d\\left(x_{m},x_{n}\\right)} (where d denotes a metric) between x m {\\displaystyle x_{m}} and x n . {\\displaystyle x_{n}.}",
"title": "In a metric space"
},
{
"paragraph_id": 14,
"text": "Formally, given a metric space ( X , d ) , {\\displaystyle (X,d),} a sequence",
"title": "In a metric space"
},
{
"paragraph_id": 15,
"text": "is Cauchy, if for every positive real number ε > 0 {\\displaystyle \\varepsilon >0} there is a positive integer N {\\displaystyle N} such that for all positive integers m , n > N , {\\displaystyle m,n>N,} the distance",
"title": "In a metric space"
},
{
"paragraph_id": 16,
"text": "Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in X. Nonetheless, such a limit does not always exist within X: the property of a space that every Cauchy sequence converges in the space is called completeness, and is detailed below.",
"title": "In a metric space"
},
{
"paragraph_id": 17,
"text": "A metric space (X, d) in which every Cauchy sequence converges to an element of X is called complete.",
"title": "Completeness"
},
{
"paragraph_id": 18,
"text": "The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another— is a real number.",
"title": "Completeness"
},
{
"paragraph_id": 19,
"text": "A rather different type of example is afforded by a metric space X which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.",
"title": "Completeness"
},
{
"paragraph_id": 20,
"text": "The rational numbers Q {\\displaystyle \\mathbb {Q} } are not complete (for the usual distance): There are sequences of rationals that converge (in R {\\displaystyle \\mathbb {R} } ) to irrational numbers; these are Cauchy sequences having no limit in Q . {\\displaystyle \\mathbb {Q} .} In fact, if a real number x is irrational, then the sequence (xn), whose n-th term is the truncation to n decimal places of the decimal expansion of x, gives a Cauchy sequence of rational numbers with irrational limit x. Irrational numbers certainly exist in R , {\\displaystyle \\mathbb {R} ,} for example:",
"title": "Completeness"
},
{
"paragraph_id": 21,
"text": "The open interval X = ( 0 , 2 ) {\\displaystyle X=(0,2)} in the set of real numbers with an ordinary distance in R {\\displaystyle \\mathbb {R} } is not a complete space: there is a sequence x n = 1 / n {\\displaystyle x_{n}=1/n} in it, which is Cauchy (for arbitrarily small distance bound d > 0 {\\displaystyle d>0} all terms x n {\\displaystyle x_{n}} of n > 1 / d {\\displaystyle n>1/d} fit in the ( 0 , d ) {\\displaystyle (0,d)} interval), however does not converge in X {\\displaystyle X} — its 'limit', number 0, does not belong to the space X . {\\displaystyle X.}",
"title": "Completeness"
},
{
"paragraph_id": 22,
"text": "These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of constructing the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.",
"title": "Completeness"
},
{
"paragraph_id": 23,
"text": "One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers (or, more generally, of elements of any complete normed linear space, or Banach space). Such a series ∑ n = 1 ∞ x n {\\textstyle \\sum _{n=1}^{\\infty }x_{n}} is considered to be convergent if and only if the sequence of partial sums ( s m ) {\\displaystyle (s_{m})} is convergent, where s m = ∑ n = 1 m x n . {\\textstyle s_{m}=\\sum _{n=1}^{m}x_{n}.} It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers p > q , {\\displaystyle p>q,}",
"title": "Completeness"
},
{
"paragraph_id": 24,
"text": "If f : M → N {\\displaystyle f:M\\to N} is a uniformly continuous map between the metric spaces M and N and (xn) is a Cauchy sequence in M, then ( f ( x n ) ) {\\displaystyle (f(x_{n}))} is a Cauchy sequence in N. If ( x n ) {\\displaystyle (x_{n})} and ( y n ) {\\displaystyle (y_{n})} are two Cauchy sequences in the rational, real or complex numbers, then the sum ( x n + y n ) {\\displaystyle (x_{n}+y_{n})} and the product ( x n y n ) {\\displaystyle (x_{n}y_{n})} are also Cauchy sequences.",
"title": "Completeness"
},
{
"paragraph_id": 25,
"text": "There is also a concept of Cauchy sequence for a topological vector space X {\\displaystyle X} : Pick a local base B {\\displaystyle B} for X {\\displaystyle X} about 0; then ( x k {\\displaystyle x_{k}} ) is a Cauchy sequence if for each member V ∈ B , {\\displaystyle V\\in B,} there is some number N {\\displaystyle N} such that whenever n , m > N , x n − x m {\\displaystyle n,m>N,x_{n}-x_{m}} is an element of V . {\\displaystyle V.} If the topology of X {\\displaystyle X} is compatible with a translation-invariant metric d , {\\displaystyle d,} the two definitions agree.",
"title": "Generalizations"
},
{
"paragraph_id": 26,
"text": "Since the topological vector space definition of Cauchy sequence requires only that there be a continuous \"subtraction\" operation, it can just as well be stated in the context of a topological group: A sequence ( x k ) {\\displaystyle (x_{k})} in a topological group G {\\displaystyle G} is a Cauchy sequence if for every open neighbourhood U {\\displaystyle U} of the identity in G {\\displaystyle G} there exists some number N {\\displaystyle N} such that whenever m , n > N {\\displaystyle m,n>N} it follows that x n x m − 1 ∈ U . {\\displaystyle x_{n}x_{m}^{-1}\\in U.} As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in G . {\\displaystyle G.}",
"title": "Generalizations"
},
{
"paragraph_id": 27,
"text": "As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in G {\\displaystyle G} that ( x k ) {\\displaystyle (x_{k})} and ( y k ) {\\displaystyle (y_{k})} are equivalent if for every open neighbourhood U {\\displaystyle U} of the identity in G {\\displaystyle G} there exists some number N {\\displaystyle N} such that whenever m , n > N {\\displaystyle m,n>N} it follows that x n y m − 1 ∈ U . {\\displaystyle x_{n}y_{m}^{-1}\\in U.} This relation is an equivalence relation: It is reflexive since the sequences are Cauchy sequences. It is symmetric since y n x m − 1 = ( x m y n − 1 ) − 1 ∈ U − 1 {\\displaystyle y_{n}x_{m}^{-1}=(x_{m}y_{n}^{-1})^{-1}\\in U^{-1}} which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since x n z l − 1 = x n y m − 1 y m z l − 1 ∈ U ′ U ″ {\\displaystyle x_{n}z_{l}^{-1}=x_{n}y_{m}^{-1}y_{m}z_{l}^{-1}\\in U'U''} where U ′ {\\displaystyle U'} and U ″ {\\displaystyle U''} are open neighbourhoods of the identity such that U ′ U ″ ⊆ U {\\displaystyle U'U''\\subseteq U} ; such pairs exist by the continuity of the group operation.",
"title": "Generalizations"
},
{
"paragraph_id": 28,
"text": "There is also a concept of Cauchy sequence in a group G {\\displaystyle G} : Let H = ( H r ) {\\displaystyle H=(H_{r})} be a decreasing sequence of normal subgroups of G {\\displaystyle G} of finite index. Then a sequence ( x n ) {\\displaystyle (x_{n})} in G {\\displaystyle G} is said to be Cauchy (with respect to H {\\displaystyle H} ) if and only if for any r {\\displaystyle r} there is N {\\displaystyle N} such that for all m , n > N , x n x m − 1 ∈ H r . {\\displaystyle m,n>N,x_{n}x_{m}^{-1}\\in H_{r}.}",
"title": "Generalizations"
},
{
"paragraph_id": 29,
"text": "Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on G , {\\displaystyle G,} namely that for which H {\\displaystyle H} is a local base.",
"title": "Generalizations"
},
{
"paragraph_id": 30,
"text": "The set C {\\displaystyle C} of such Cauchy sequences forms a group (for the componentwise product), and the set C 0 {\\displaystyle C_{0}} of null sequences (sequences such that ∀ r , ∃ N , ∀ n > N , x n ∈ H r {\\displaystyle \\forall r,\\exists N,\\forall n>N,x_{n}\\in H_{r}} ) is a normal subgroup of C . {\\displaystyle C.} The factor group C / C 0 {\\displaystyle C/C_{0}} is called the completion of G {\\displaystyle G} with respect to H . {\\displaystyle H.}",
"title": "Generalizations"
},
{
"paragraph_id": 31,
"text": "One can then show that this completion is isomorphic to the inverse limit of the sequence ( G / H r ) . {\\displaystyle (G/H_{r}).}",
"title": "Generalizations"
},
{
"paragraph_id": 32,
"text": "An example of this construction familiar in number theory and algebraic geometry is the construction of the p {\\displaystyle p} -adic completion of the integers with respect to a prime p . {\\displaystyle p.} In this case, G {\\displaystyle G} is the integers under addition, and H r {\\displaystyle H_{r}} is the additive subgroup consisting of integer multiples of p r . {\\displaystyle p^{r}.}",
"title": "Generalizations"
},
{
"paragraph_id": 33,
"text": "If H {\\displaystyle H} is a cofinal sequence (that is, any normal subgroup of finite index contains some H r {\\displaystyle H_{r}} ), then this completion is canonical in the sense that it is isomorphic to the inverse limit of ( G / H ) H , {\\displaystyle (G/H)_{H},} where H {\\displaystyle H} varies over all normal subgroups of finite index. For further details, see Ch. I.10 in Lang's \"Algebra\".",
"title": "Generalizations"
},
{
"paragraph_id": 34,
"text": "A real sequence ⟨ u n : n ∈ N ⟩ {\\displaystyle \\langle u_{n}:n\\in \\mathbb {N} \\rangle } has a natural hyperreal extension, defined for hypernatural values H of the index n in addition to the usual natural n. The sequence is Cauchy if and only if for every infinite H and K, the values u H {\\displaystyle u_{H}} and u K {\\displaystyle u_{K}} are infinitely close, or adequal, that is,",
"title": "Generalizations"
},
{
"paragraph_id": 35,
"text": "where \"st\" is the standard part function.",
"title": "Generalizations"
},
{
"paragraph_id": 36,
"text": "Krause (2020) introduced a notion of Cauchy completion of a category. Applied to Q {\\displaystyle \\mathbb {Q} } (the category whose objects are rational numbers, and there is a morphism from x to y if and only if x ≤ y {\\displaystyle x\\leq y} ), this Cauchy completion yields R ∪ { ∞ } {\\displaystyle \\mathbb {R} \\cup \\left\\{\\infty \\right\\}} (again interpreted as a category using its natural ordering).",
"title": "Generalizations"
}
]
| In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences. It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers, a_n = √n, the consecutive terms become arbitrarily close to each other – their differences tend to zero as the index n grows. However, with growing values of n, the terms a_n become arbitrarily large. So, for any index n and distance d, there exists an index m big enough such that a_m − a_n > d. As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy. The utility of Cauchy sequences lies in the fact that in a complete metric space, the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination. Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets. | 2001-09-25T04:21:42Z | 2023-12-07T20:33:52Z | [
"Template:Harvtxt",
"Template:Annotated link",
"Template:Springer",
"Template:Short description",
"Template:Multiple image",
"Template:Em",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Series (mathematics)",
"Template:Use shortened footnotes",
"Template:Sfn",
"Template:Mvar"
]
| https://en.wikipedia.org/wiki/Cauchy_sequence |
6,088 | Common Era | Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar (and its predecessor, the Julian calendar), the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: "2023 CE" and "AD 2023" each describe the current year; "400 BCE" and "400 BC" are the same year.
The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as the Latin annus aerae nostrae vulgaris (year of our common era), and to 1635 in English as "Vulgar Era". The term "Common Era" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the later 20th century, BCE and CE have become popular in academic and scientific publications because BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms "Christ" and Dominus ("Lord") utilized by the other abbreviations.
The idea of numbering years beginning from the date believed to be that of the birth of Jesus was conceived around the year 525 by the Christian monk Dionysius Exiguus. He devised it to replace the then-dominant Era of Martyrs system, because he did not wish to continue the memory of a tyrant who persecuted Christians. He numbered years from an initial reference date ("epoch"), an event he referred to as the Incarnation of Jesus. Dionysius labeled the column of the table in which he introduced the new era as "Anni Domini Nostri Jesu Christi" [Years of our Lord Jesus Christ].
This way of numbering years became more widespread in Europe with its use by Bede in England in 731. Bede also introduced the practice of dating years before what he supposed was the year of birth of Jesus, and the practice of not using a year zero. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius.
The term "Common Era" is traced back in English to its appearance as "Vulgar Era" to distinguish dates on the Gregorian calendar in popular use from dates of the regnal year (the year of the reign of a sovereign) typically used in national law. (The word 'vulgar' originally meant 'of the ordinary people', with no derogatory associations.)
The first use of the Latin term anno aerae nostrae vulgaris may be that in a 1615 book by Johannes Kepler. Kepler uses it again, as ab Anno vulgaris aerae, in a 1616 table of ephemerides, and again, as ab anno vulgaris aerae, in 1617. A 1635 English edition of that book has a title page in English that may be the earliest-found use of Vulgar Era in English. A 1701 book edited by John Le Clerc includes the phrase "Before Christ according to the Vulgar Æra, 6".
The Merriam-Webster Dictionary gives 1716 as the date of first use of the term "vulgar era" (to mean Christian era).
The first published use of "Christian Era" may be the Latin phrase annus aerae christianae on the title page of a 1584 theology book, De Eucharistica controuersia. In 1649, the Latin phrase annus æræ Christianæ appeared in the title of an English almanac. A 1652 ephemeris may be the first instance found so far of the English use of "Christian Era".
The English phrase "Common Era" appears at least as early as 1708, and in a 1715 book on astronomy it is used interchangeably with "Christian Era" and "Vulgar Era". A 1759 history book uses common æra in a generic sense, to refer to "the common era of the Jews". The first use of the phrase "before the common era" may be that in a 1770 work that also uses common era and vulgar era as synonyms, in a translation of a book originally written in German. The 1797 edition of the Encyclopædia Britannica uses the terms vulgar era and common era synonymously. In 1835, in his book Living Oracles, Alexander Campbell wrote: "The vulgar Era, or Anno Domini; the fourth year of Jesus Christ, the first of which was but eight days", and also refers to the common era as a synonym for vulgar era with "the fact that our Lord was born on the 4th year before the vulgar era, called Anno Domini, thus making (for example) the 42d year from his birth to correspond with the 38th of the common era". The Catholic Encyclopedia (1909) in at least one article reports all three terms (Christian, Vulgar, Common Era) being commonly understood by the early 20th century.
The phrase "common era", in lower case, also appeared in the 19th century in a 'generic' sense, not necessarily to refer to the Christian Era, but to any system of dates in common use throughout a civilization. Thus, "the common era of the Jews", "the common era of the Mahometans", "common era of the world", "the common era of the foundation of Rome". When it did refer to the Christian Era, it was sometimes qualified, e.g., "common era of the Incarnation", "common era of the Nativity", or "common era of the birth of Christ".
An adapted translation of Common Era into Latin as Era Vulgaris was adopted in the 20th century by some followers of Aleister Crowley, and thus the abbreviation "e.v." or "EV" may sometimes be seen as a replacement for AD.
Although Jews have their own Hebrew calendar, they often use the Gregorian calendar without the AD prefix. As early as 1825, the abbreviation VE (for Vulgar Era) was in use among Jews to denote years in the Western calendar. As of 2005, Common Era notation has also been in use for Hebrew lessons for more than a century. Jews have also used the term Current Era.
Some academics in the fields of theology, education, archaeology and history have adopted CE and BCE notation despite some disagreement. Several style guides now prefer or mandate its use. A study conducted in 2014 found that the BCE/CE notation is not growing at the expense of BC and AD notation in the scholarly literature, and that both notations are used in a relatively stable fashion.
In 2002, an advisory panel for the religious education syllabus for England and Wales recommended introducing BCE/CE dates to schools, and by 2018 some local education authorities were using them. In 2018, the National Trust said it would continue to use BC/AD as its house style. English Heritage explains its era policy thus: "It might seem strange to use a Christian calendar system when referring to British prehistory, but the BC/AD labels are widely used and understood." Some parts of the BBC use BCE/CE, but some presenters have said they will not. As of October 2019, the BBC News style guide has entries for AD and BC, but not for CE or BCE.
The style guide for The Guardian says, under the entry for CE/BCE: "some people prefer CE (common era, current era, or Christian era) and BCE (before common era, etc.) to AD and BC, which, however, remain our style".
In the United States, the use of the BCE/CE notation in textbooks was reported in 2005 to be growing. Some publications have transitioned to using it exclusively. For example, the 2007 World Almanac was the first edition to switch to BCE/CE, ending a period of 138 years in which the traditional BC/AD dating notation was used. BCE/CE is used by the College Board in its history tests, and by the Norton Anthology of English Literature. Others have taken a different approach. The US-based History Channel uses BCE/CE notation in articles on non-Christian religious topics such as Jerusalem and Judaism. The 2006 style guide for the Episcopal Diocese of Maryland's Church News says that BCE and CE should be used.
In June 2006, in the United States, the Kentucky State School Board reversed its decision to use BCE and CE in the state's new Program of Studies, leaving education of students about these concepts a matter of local discretion.
In 2011, media reports suggested that the BC/AD notation in Australian school textbooks would be replaced by BCE/CE notation. The change drew opposition from some politicians and church leaders. Weeks after the story broke, the Australian Curriculum, Assessment and Reporting Authority denied the rumours and stated that the BC/AD notation would remain, with CE and BCE as an optional suggested learning activity.
In 2013, the Canadian Museum of Civilization (now the Canadian Museum of History) in Gatineau (opposite Ottawa), which had previously switched to BCE/CE, decided to change back to BC/AD in material intended for the public while retaining BCE/CE in academic content.
The use of CE in Jewish scholarship was historically motivated by the desire to avoid the implicit "Our Lord" in the abbreviation AD. Although other aspects of dating systems are based in Christian origins, AD is a direct reference to Jesus as Lord.
Proponents of the Common Era notation assert that the use of BCE/CE shows sensitivity to those who use the same year numbering system as the one that originated with and is currently used by Christians, but who are not themselves Christian.
Former United Nations Secretary-General Kofi Annan has argued:
[T]he Christian calendar no longer belongs exclusively to Christians. People of all faiths have taken to using it simply as a matter of convenience. There is so much interaction between people of different faiths and cultures – different civilizations, if you like – that some shared way of reckoning time is a necessity. And so the Christian Era has become the Common Era.
Adena K. Berkowitz, in her application to argue before the United States Supreme Court, opted to use BCE and CE because "Given the multicultural society that we live in, the traditional Jewish designations – B.C.E. and C.E. – cast a wider net of inclusion".
"Non-Christian scholars, especially, embraced [CE and BCE] because they could now communicate more easily with the Christian community. Jewish, Islamic, Hindu and Buddhist scholars could retain their [own] calendar but refer to events using the Gregorian Calendar as BCE and CE without compromising their own beliefs about the divinity of Jesus of Nazareth".
Critics note that there is no difference in the epoch of the two systems – a moment about four to seven years after the date of birth of Jesus of Nazareth. BCE and CE are still aligned with BC and AD, which denote the periods before and after Jesus was born.
Some Christians are offended by the removal of the reference to Jesus in the Common Era notation. The Southern Baptist Convention supports retaining the BC/AD abbreviations.
Roman Catholic priest and writer on interfaith issues Raimon Panikkar argued that the BCE/CE usage is the less inclusive option, since it still uses the Christian calendar's numbering, which is thereby imposed on other nations. In 1993, the English-language expert Kenneth G. Wilson speculated in his style guide about a slippery-slope scenario: "if we do end by casting aside the AD/BC convention, almost certainly some will argue that we ought to cast aside as well the conventional numbering system [that is, the method of numbering years] itself, given its Christian basis."
The abbreviation BCE, just as with BC, always follows the year number. Unlike AD, which still often precedes the year number, CE always follows the year number (if context requires that it be written at all). Thus, the current year is written as 2023 in both notations (or, if further clarity is needed, as 2023 CE, or as AD 2023), and the year that Socrates died is represented as 399 BCE (the same year that is represented by 399 BC in the BC/AD notation). The abbreviations are sometimes written with small capital letters, or with periods (e.g., "B.C.E." or "C.E."). The US-based Society of Biblical Literature style guide for academic texts on religion prefers BCE/CE to BC/AD. | [
{
"paragraph_id": 0,
"text": "Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar (and its predecessor, the Julian calendar), the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: \"2023 CE\" and \"AD 2023\" each describe the current year; \"400 BCE\" and \"400 BC\" are the same year.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as the Latin: annus aerae nostrae vulgaris (year of our common era), and to 1635 in English as \"Vulgar Era\". The term \"Common Era\" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the later 20th century, BCE and CE have become popular in academic and scientific publications because BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms \"Christ\" and Dominus (\"Lord\") utilized by the other abbreviations.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The idea of numbering years beginning from the date he believed to be the date of birth of Jesus, was conceived around the year 525 by the Christian monk Dionysius Exiguus. He did this to replace the then dominant Era of Martyrs system, because he did not wish to continue the memory of a tyrant who persecuted Christians. He numbered years from an initial reference date (\"epoch\"), an event he referred to as the Incarnation of Jesus. Dionysius labeled the column of the table in which he introduced the new era as \"Anni Domini Nostri Jesu Christi\" [Year of our Lord Jesus Christ].",
"title": "History"
},
{
"paragraph_id": 3,
"text": "This way of numbering years became more widespread in Europe with its use by Bede in England in 731. Bede also introduced the practice of dating years before what he supposed was the year of birth of Jesus, and the practice of not using a year zero. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The term \"Common Era\" is traced back in English to its appearance as \"Vulgar Era\" to distinguish dates on the Gregorian calendar in popular use from dates of the regnal year (the year of the reign of a sovereign) typically used in national law. (The word 'vulgar' originally meant 'of the ordinary people', with no derogatory associations.)",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The first use of the Latin term anno aerae nostrae vulgaris may be that in a 1615 book by Johannes Kepler. Kepler uses it again, as ab Anno vulgaris aerae, in a 1616 table of ephemerides, and again, as ab anno vulgaris aerae, in 1617. A 1635 English edition of that book has the title page in English that may be the earliest-found use of Vulgar Era in English. A 1701 book edited by John Le Clerc includes the phrase \"Before Christ according to the Vulgar Æra, 6\".",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Merriam Webster Dictionary gives 1716 as being the date of first use of the term \"vulgar era\" (to mean Christian era).",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The first published use of \"Christian Era\" may be the Latin phrase annus aerae christianae on the title page of a 1584 theology book, De Eucharistica controuersia. In 1649, the Latin phrase annus æræ Christianæ appeared in the title of an English almanac. A 1652 ephemeris may be the first instance found so far of the English use of \"Christian Era\".",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The English phrase \"Common Era\" appears at least as early as 1708, and in a 1715 book on astronomy it is used interchangeably with \"Christian Era\" and \"Vulgar Era\". A 1759 history book uses common æra in a generic sense, to refer to \"the common era of the Jews\". The first use of the phrase \"before the common era\" may be that in a 1770 work that also uses common era and vulgar era as synonyms, in a translation of a book originally written in German. The 1797 edition of the Encyclopædia Britannica uses the terms vulgar era and common era synonymously. In 1835, in his book Living Oracles, Alexander Campbell, wrote: \"The vulgar Era, or Anno Domini; the fourth year of Jesus Christ, the first of which was but eight days\", and also refers to the common era as a synonym for vulgar era with \"the fact that our Lord was born on the 4th year before the vulgar era, called Anno Domini, thus making (for example) the 42d year from his birth to correspond with the 38th of the common era\". The Catholic Encyclopedia (1909) in at least one article reports all three terms (Christian, Vulgar, Common Era) being commonly understood by the early 20th century.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The phrase \"common era\", in lower case, also appeared in the 19th century in a 'generic' sense, not necessarily to refer to the Christian Era, but to any system of dates in common use throughout a civilization. Thus, \"the common era of the Jews\", \"the common era of the Mahometans\", \"common era of the world\", \"the common era of the foundation of Rome\". When it did refer to the Christian Era, it was sometimes qualified, e.g., \"common era of the Incarnation\", \"common era of the Nativity\", or \"common era of the birth of Christ\".",
"title": "History"
},
{
"paragraph_id": 10,
"text": "An adapted translation of Common Era into Latin as Era Vulgaris was adopted in the 20th century by some followers of Aleister Crowley, and thus the abbreviation \"e.v.\" or \"EV\" may sometimes be seen as a replacement for AD.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Although Jews have their own Hebrew calendar, they often use the Gregorian calendar without the AD prefix. As early as 1825, the abbreviation VE (for Vulgar Era) was in use among Jews to denote years in the Western calendar. As of 2005, Common Era notation has also been in use for Hebrew lessons for more than a century. Jews have also used the term Current Era.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Some academics in the fields of theology, education, archaeology and history have adopted CE and BCE notation despite some disagreement. Several style guides now prefer or mandate its use. A study conducted in 2014 found that the BCE/CE notation is not growing at the expense of BC and AD notation in the scholarly literature, and that both notations are used in a relatively stable fashion.",
"title": "Contemporary usage"
},
{
"paragraph_id": 13,
"text": "In 2002, an advisory panel for the religious education syllabus for England and Wales recommended introducing BCE/CE dates to schools, and by 2018 some local education authorities were using them. In 2018, the National Trust said it would continue to use BC/AD as its house style. English Heritage explains its era policy thus: \"It might seem strange to use a Christian calendar system when referring to British prehistory, but the BC/AD labels are widely used and understood.\" Some parts of the BBC use BCE/CE, but some presenters have said they will not. As of October 2019, the BBC News style guide has entries for AD and BC, but not for CE or BCE.",
"title": "Contemporary usage"
},
{
"paragraph_id": 14,
"text": "The style guide for The Guardian says, under the entry for CE/BCE: \"some people prefer CE (common era, current era, or Christian era) and BCE (before common era, etc.) to AD and BC, which, however, remain our style\".",
"title": "Contemporary usage"
},
{
"paragraph_id": 15,
"text": "In the United States, the use of the BCE/CE notation in textbooks was reported in 2005 to be growing. Some publications have transitioned to using it exclusively. For example, the 2007 World Almanac was the first edition to switch to BCE/CE, ending a period of 138 years in which the traditional BC/AD dating notation was used. BCE/CE is used by the College Board in its history tests, and by the Norton Anthology of English Literature. Others have taken a different approach. The US-based History Channel uses BCE/CE notation in articles on non-Christian religious topics such as Jerusalem and Judaism. The 2006 style guide for the Episcopal Diocese Maryland Church News says that BCE and CE should be used.",
"title": "Contemporary usage"
},
{
"paragraph_id": 16,
"text": "In June 2006, in the United States, the Kentucky State School Board reversed its decision to use BCE and CE in the state's new Program of Studies, leaving education of students about these concepts a matter of local discretion.",
"title": "Contemporary usage"
},
{
"paragraph_id": 17,
"text": "In 2011, media reports suggested that the BC/AD notation in Australian school textbooks would be replaced by BCE/CE notation. The change drew opposition from some politicians and church leaders. Weeks after the story broke, the Australian Curriculum, Assessment and Reporting Authority denied the rumours and stated that the BC/AD notation would remain, with CE and BCE as an optional suggested learning activity.",
"title": "Contemporary usage"
},
{
"paragraph_id": 18,
"text": "In 2013, the Canadian Museum of Civilization (now the Canadian Museum of History) in Gatineau (opposite Ottawa), which had previously switched to BCE/CE, decided to change back to BC/AD in material intended for the public while retaining BCE/CE in academic content.",
"title": "Contemporary usage"
},
{
"paragraph_id": 19,
"text": "The use of CE in Jewish scholarship was historically motivated by the desire to avoid the implicit \"Our Lord\" in the abbreviation AD. Although other aspects of dating systems are based in Christian origins, AD is a direct reference to Jesus as Lord.",
"title": "Rationales"
},
{
"paragraph_id": 20,
"text": "Proponents of the Common Era notation assert that the use of BCE/CE shows sensitivity to those who use the same year numbering system as the one that originated with and is currently used by Christians, but who are not themselves Christian.",
"title": "Rationales"
},
{
"paragraph_id": 21,
"text": "Former United Nations Secretary-General Kofi Annan has argued:",
"title": "Rationales"
},
{
"paragraph_id": 22,
"text": "[T]he Christian calendar no longer belongs exclusively to Christians. People of all faiths have taken to using it simply as a matter of convenience. There is so much interaction between people of different faiths and cultures – different civilizations, if you like – that some shared way of reckoning time is a necessity. And so the Christian Era has become the Common Era.",
"title": "Rationales"
},
{
"paragraph_id": 23,
"text": "Adena K. Berkowitz, in her application to argue before the United States Supreme Court, opted to use BCE and CE because \"Given the multicultural society that we live in, the traditional Jewish designations – B.C.E. and C.E. – cast a wider net of inclusion\".",
"title": "Rationales"
},
{
"paragraph_id": 24,
"text": "\"Non-Christian scholars, especially, embraced [CE and BCE] because they could now communicate more easily with the Christian community. Jewish, Islamic, Hindu and Buddhist scholars could retain their [own] calendar but refer to events using the Gregorian Calendar as BCE and CE without compromising their own beliefs about the divinity of Jesus of Nazareth\".",
"title": "Rationales"
},
{
"paragraph_id": 25,
"text": "Some critics often note the fact that there is no difference in the epoch of the two systems, a moment about four to seven years after the date of birth of Jesus of Nazareth. BCE and CE are still aligned with BC and AD, which denote the periods before and after Jesus was born.",
"title": "Rationales"
},
{
"paragraph_id": 26,
"text": "Some Christians are offended by the removal of the reference to Jesus in the Common Era notation. The Southern Baptist Convention supports retaining the BC/AD abbreviations.",
"title": "Rationales"
},
{
"paragraph_id": 27,
"text": "Roman Catholic priest and writer on interfaith issues Raimon Panikkar argued that the BCE/CE usage is the less inclusive option, since they are still using the Christian calendar numbers, forcing it on other nations. In 1993, the English-language expert Kenneth G. Wilson speculated a slippery slope scenario in his style guide that \"if we do end by casting aside the AD/BC convention, almost certainly some will argue that we ought to cast aside as well the conventional numbering system [that is, the method of numbering years] itself, given its Christian basis.\"",
"title": "Rationales"
},
{
"paragraph_id": 28,
"text": "The abbreviation BCE, just as with BC, always follows the year number. Unlike AD, which still often precedes the year number, CE always follows the year number (if context requires that it be written at all). Thus, the current year is written as 2023 in both notations (or, if further clarity is needed, as 2023 CE, or as AD 2023), and the year that Socrates died is represented as 399 BCE (the same year that is represented by 399 BC in the BC/AD notation). The abbreviations are sometimes written with small capital letters, or with periods (e.g., \"B.C.E.\" or \"C.E.\"). The US-based Society of Biblical Literature style guide for academic texts on religion prefers BCE/CE to BC/AD.",
"title": "Conventions in style guides"
}
]
| Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar, the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: "2023 CE" and "AD 2023" each describe the current year; "400 BCE" and "400 BC" are the same year. The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as the Latin: annus aerae nostrae vulgaris, and to 1635 in English as "Vulgar Era". The term "Common Era" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the later 20th century, BCE and CE have become popular in academic and scientific publications because BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms "Christ" and Dominus ("Lord") utilized by the other abbreviations. | 2001-10-24T07:24:43Z | 2023-12-30T16:29:03Z | [
"Template:Blockquote",
"Template:Currentyear",
"Template:Reflist",
"Template:Efn",
"Template:See also",
"Template:Lang-es",
"Template:Cite news",
"Template:Time Topics",
"Template:Short description",
"Template:Lang",
"Template:Snd",
"Template:Better source needed",
"Template:Chronology",
"Template:Wikt",
"Template:Ndash",
"Template:Cite encyclopedia",
"Template:Use dmy dates",
"Template:As of",
"Template:Notelist",
"Template:Cite web",
"Template:Cite book",
"Template:Aut",
"Template:Portal bar",
"Template:Redirect",
"Template:Lang-zh",
"Template:Wiktionary-inline",
"Template:Lang-la",
"Template:Rp",
"Template:Nbsp",
"Template:Cn",
"Template:Cite journal",
"Template:Transliteration",
"Template:Cite dictionary",
"Template:Cite Encyclopedia",
"Template:Cite magazine",
"Template:Calendars"
]
| https://en.wikipedia.org/wiki/Common_Era |
6,091 | Charles Robert Malden | Charles Robert Malden (9 August 1797 – 23 May 1855) was a nineteenth-century British naval officer, surveyor and educator. He is the discoverer of Malden Island in the central Pacific, which is named in his honour. He also founded Windlesham House School at Brighton, England.
Malden was born in Putney, Surrey, son of Jonas Malden, a surgeon. He entered British naval service at the age of 11 on 22 June 1809. He served nine years as a volunteer 1st class, midshipman, and shipmate, including one year in the English Channel and Bay of Biscay (1809), four years at the Cape of Good Hope and in the East Indies (1809–14), two and a half years on the North American and West Indian stations (1814–16), and a year and a half in the Mediterranean (1817–18). He was present at the capture of Mauritius and Java, and at the battles of Baltimore and New Orleans.
He passed the examination in the elements of mathematics and the theory of navigation at the Royal Naval Academy on 2–4 September 1816, and became a 1st Lieutenant on 1 September 1818. In eight years of active service as an officer, he served two and a half years in a surveying ship in the Mediterranean (1818–21), one and a half years in a surveying sloop in the English Channel and off the coast of Ireland (1823–24), and one and a half years as Surveyor of the frigate HMS Blonde during a voyage (1824–26) to and from the Hawaiian Islands (then known as the "Sandwich islands"). In Hawaii he surveyed harbours which, he noted, were "said not to exist by Captains Cook and Vancouver." On the return voyage he discovered and explored uninhabited Malden Island in the central Pacific on 30 July 1825. After his return he left active service but remained at half pay. He served for several years as hydrographer to King William IV.
He married Frances Cole, daughter of Rev. William Hodgson Cole, rector of West Clandon and Vicar of Wonersh, near Guildford, Surrey, on 8 April 1828. Malden became the father of seven sons and a daughter.
From 1830 to 1836 he took pupils for the Royal Navy at Ryde, Isle of Wight. He purchased the school of Henry Worsley at Newport, Isle of Wight, in December 1836, reopened it as a preparatory school on 20 February 1837, and moved it to Montpelier Road in Brighton in December 1837. He built the Windlesham House School at Brighton in 1844, and conducted the school until his death there in 1855. He was succeeded as headmaster by his son Henry Charles Malden. | [
{
"paragraph_id": 0,
"text": "Charles Robert Malden (9 August 1797 – 23 May 1855), was a nineteenth-century British naval officer, surveyor and educator. He is the discoverer of Malden Island in the central Pacific, which is named in his honour. He also founded Windlesham House School at Brighton, England.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Malden was born in Putney, Surrey, son of Jonas Malden, a surgeon. He entered British naval service at the age of 11 on 22 June 1809. He served nine years as a volunteer 1st class, midshipman, and shipmate, including one year in the English Channel and Bay of Biscay (1809), four years at the Cape of Good Hope and in the East Indies (1809–14), two and a half years on the North American and West Indian stations (1814–16), and a year and a half in the Mediterranean (1817–18). He was present at the capture of Mauritius and Java, and at the battles of Baltimore and New Orleans.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "He passed the examination in the elements of mathematics and the theory of navigation at the Royal Naval Academy on 2–4 September 1816, and became a 1st Lieutenant on 1 September 1818. In eight years of active service as an officer, he served two and a half years in a surveying ship in the Mediterranean (1818–21), one and a half years in a surveying sloop in the English Channel and off the coast of Ireland (1823–24), and one and a half years as Surveyor of the frigate HMS Blonde during a voyage (1824–26) to and from the Hawaiian Islands (then known as the \"Sandwich islands\"). In Hawaii he surveyed harbours which, he noted, were \"said not to exist by Captains Cook and Vancouver.\" On the return voyage he discovered and explored uninhabited Malden Island in the central Pacific on 30 July 1825. After his return he left active service but remained at half pay. He served for several years as hydrographer to King William IV.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "He married Frances Cole, daughter of Rev. William Hodgson Cole, rector of West Clandon and Vicar of Wonersh, near Guildford, Surrey, on 8 April 1828. Malden became the father of seven sons and a daughter.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "From 1830 to 1836 he took pupils for the Royal Navy at Ryde, Isle of Wight. He purchased the school of Henry Worsley at Newport, Isle of Wight, in December 1836, reopened it as a preparatory school on 20 February 1837, and moved it to Montpelier Road in Brighton in December 1837. He built the Windlesham House School at Brighton in 1844, and conducted the school until his death there in 1855. He was succeeded as headmaster by his son Henry Charles Malden.",
"title": "Biography"
}
]
| Charles Robert Malden was a nineteenth-century British naval officer, surveyor and educator. He is the discoverer of Malden Island in the central Pacific, which is named in his honour. He also founded Windlesham House School at Brighton, England. | 2022-10-28T12:45:30Z | [
"Template:HMS",
"Template:Reflist",
"Template:Cite book",
"Template:Cite wikisource",
"Template:Use British English",
"Template:Use dmy dates",
"Template:More citations needed",
"Template:Infobox military person"
]
| https://en.wikipedia.org/wiki/Charles_Robert_Malden |
|
6,094 | CPD | CPD may refer to: | [
{
"paragraph_id": 0,
"text": "CPD may refer to:",
"title": ""
}
]
| CPD may refer to: | 2022-04-04T14:54:42Z | [
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/CPD |
|
6,095 | Chechnya | Chechnya (/ˈtʃɛtʃniə/ CHECH-nee-ə; Russian: Чечня́, IPA: [tɕɪtɕˈnʲa]; Chechen: Нохчийчоь, romanized: Noxçiyçö), officially the Chechen Republic, is a republic of Russia. It is situated in the North Caucasus of Eastern Europe, close to the Caspian Sea. The republic forms a part of the North Caucasian Federal District, and shares land borders with the country of Georgia to its south; with the Russian republics of Dagestan, Ingushetia, and North Ossetia-Alania to its east, north, and west; and with Stavropol Krai to its northwest.
After the dissolution of the Soviet Union in 1991, the Checheno-Ingush ASSR split into two parts: the Republic of Ingushetia and the Chechen Republic. The latter proclaimed the Chechen Republic of Ichkeria, which sought independence, while the former sided with Russia. Following the First Chechen War of 1994–1996 with Russia, Chechnya gained de facto independence as the Chechen Republic of Ichkeria, although de jure it remained a part of Russia. Russian federal control was restored in the Second Chechen War of 1999–2009, with Chechen politics since dominated by the former rebel Akhmad Kadyrov, and later his son Ramzan Kadyrov.
The republic covers an area of 17,300 square kilometres (6,700 square miles), with a population of over 1.5 million residents as of 2021. It is home to the indigenous Chechens, part of the Nakh peoples, and of primarily Muslim faith. Grozny is the capital and largest city.
According to Leonti Mroveli, the 11th-century Georgian chronicler, the word Caucasian is derived from the Nakh ancestor Kavkas. According to George Anchabadze of Ilia State University:
The Vainakhs are the ancient natives of the Caucasus. It is noteworthy, that according to the genealogical table drawn up by Leonti Mroveli, the legendary forefather of the Vainakhs was "Kavkas", hence the name Kavkasians, one of the ethnicons met in the ancient Georgian written sources, signifying the ancestors of the Chechens and Ingush. As appears from the above, the Vainakhs, at least by name, are presented as the most "Caucasian" people of all the Caucasians (Caucasus – Kavkas – Kavkasians) in the Georgian historical tradition.
American linguist Johanna Nichols "has used language to connect the modern people of the Caucasus region to the ancient farmers of the Fertile Crescent" and her research suggests that "farmers of the region were proto-Nakh-Daghestanians". Nichols stated: "The Nakh–Dagestanian languages are the closest thing we have to a direct continuation of the cultural and linguistic community that gave rise to Western civilisation."
Traces of human settlement dating back to 40,000 BCE were found near Lake Kezanoi. Cave paintings, artifacts, and other archaeological evidence indicate continuous habitation for some 8,000 years. People living in these settlements used tools, fire, and clothing made of animal skins.
The Caucasian Epipaleolithic and early Caucasian Neolithic era saw the introduction of agriculture, irrigation, and the domestication of animals in the region. Settlements near Ali-Yurt and Magas, discovered in modern times, revealed tools made of stone – stone axes, polished stones, stone knives, and stones with holes drilled in them – as well as clay dishes. Settlements made of clay bricks were discovered in the plains. In the mountains there were settlements made from stone and surrounded by walls; some of them dated back to 8000 BCE. This period also saw the appearance of the wheel (3000 BCE), horseback riding, metalwork (copper, gold, silver, iron), dishes, armor, daggers, knives and arrow tips in the region. The artifacts were found near Nasare-Cort, Muzhichi, Ja-E-Bortz (alternatively known as Surkha-khi), and Abbey-Gove (also known as Nazran or Nasare).
The German scientist Peter Simon Pallas believed that the Vainakh people (Chechens and Ingush) were the direct descendants of the Alans. In 1239, the Alania capital of Maghas and the Alan confederacy of the Northern Caucasian highlanders, nations, and tribes were destroyed by Batu Khan (a Mongol leader and a grandson of Genghis Khan).
According to the missionary Pian de Carpine, a part of the Alans had successfully resisted a Mongol siege on a mountain for 12 years:
When they (the Mongols) begin to besiege a fortress, they besiege it for many years, as it happens today with one mountain in the land of the Alans. We believe they have been besieging it for twelve years and they (the Alans) put up courageous resistance and killed many Tatars, including many noble ones.
This twelve-year siege is not found in any other report; however, the Russian historian A. I. Krasnov connected this battle with two Chechen folktales he recorded in 1967, which spoke of an old hunter named Idig who, with his companions, defended Mount Dakuoh for 12 years against the Tatar-Mongols. He also reported finding several arrowheads and spears from the 13th century near the very mountain at which the battle took place:
The next year, with the onset of summer, the enemy hordes came again to destroy the highlanders. But even this year they failed to capture the mountain, on which the brave Chechens settled down. The battle lasted twelve years. The main wealth of the Chechens – livestock – was stolen by the enemies. Tired of the long years of hard struggle, the Chechens, believing the assurances of mercy by the enemy, descended from the mountain, but the Mongol-Tatars treacherously killed the majority, and the rest were taken into slavery. This fate was escaped only by Idig and a few of his companions who did not trust the nomads and remained on the mountain. They managed to escape and leave Mount Dakuoh after 12 years of siege.
In the 14th and 15th centuries, there was frequent warfare between the Chechens, Tamerlane, and Tokhtamysh, culminating in the Battle of the Terek River (see Tokhtamysh–Timur war). The Chechen tribes built fortresses, castles, and defensive walls, protecting the mountains from the invaders. Some of the lowland tribes were occupied by the Mongols. During the mid-14th century, however, a strong Chechen princedom called Simsim emerged under Khour II, a Chechen king who led Chechen politics and wars. He commanded an army of Chechens against the rogue warlord Mamai and defeated him in the Battle of Tatar-tup in 1362. The kingdom of Simsim was almost destroyed during the Timurid invasion of the Caucasus, when Khour II allied himself with the Golden Horde Khan Tokhtamysh in the Battle of the Terek River. Timur sought to punish the highlanders for their allegiance to Tokhtamysh and as a consequence invaded Simsim in 1395.
The 16th century saw the first Russian involvement in the Caucasus. In 1558, Temryuk of Kabarda sent his emissaries to Moscow requesting help from Ivan the Terrible against the Vainakh tribes, and Ivan the Terrible married Temryuk's daughter Maria Temryukovna. An alliance was formed to gain ground in the central Caucasus for the expanding Tsardom of Russia against stubborn Vainakh defenders. Chechnya, a nation in the Northern Caucasus, fought against foreign rule continually from the 15th century onward. Several Chechen leaders, such as the 17th-century Mehk-Da Aldaman Gheza, led Chechen politics and fought off encroachments by foreign powers; Gheza defended the borders of Chechnya from invasions by Kabardinians and Avars at the Battle of Khachara in 1667. Over the following centuries the Chechens converted to Sunni Islam, as Islam was associated with resistance to Russian encroachment.
Peter the Great first sought to increase Russia's political influence in the Caucasus and the Caspian Sea at the expense of Safavid Persia when he launched the Russo-Persian War of 1722–1723. Russian forces succeeded in taking much of the Caucasian territories from Iran for several years.
As the Russians took control of the Caspian corridor and moved into Persian-ruled Dagestan, Peter's forces ran into mountain tribes. Peter sent a cavalry force to subdue them, but the Chechens routed it. In 1732, after Russia had already ceded most of the Caucasus back to Persia, now led by Nader Shah, under the Treaty of Resht, Russian troops clashed again with Chechens at a village called Chechen-aul along the Argun River. The Russians were defeated again and withdrew, but this battle gave rise to the apocryphal story of how the Nokchi came to be known as "Chechens" – ostensibly named after the place where the battle took place. The name Chechen, however, had already been in use since as early as 1692.
In 1783, the eastern Georgian kingdom of Kartl-Kakheti, under intermittent Persian rule since 1555 and led by Erekle II, signed the Treaty of Georgievsk with Russia. Under this treaty, Kartl-Kakheti received protection from Russia, and Georgia abjured any dependence on Iran. Seeking to increase its influence in the Caucasus and to secure communications with Kartli and other Christian regions of Transcaucasia that it considered useful in its wars against Persia and Turkey, the Russian Empire began conquering the Northern Caucasus mountains. The Russian Empire used Christianity to justify its conquests, which allowed Islam to spread widely, as it positioned itself as the religion of liberation from tsardom, which viewed the Nakh tribes as "bandits". The ensuing rebellion was led by Mansur Ushurma, a Chechen Naqshbandi (Sufi) sheikh, with wavering military support from other North Caucasian tribes. Mansur hoped to establish a Transcaucasus Islamic state under sharia law. He was unable to fully achieve this because in the course of the war he was betrayed by the Ottomans, handed over to the Russians, and executed in 1794.
After Persia was forced to cede the current territories of Dagestan, most of Azerbaijan, and Georgia to Russia following the Russo-Persian War of 1804–1813 and the resulting Treaty of Gulistan, Russia significantly widened its foothold in the Caucasus at Persia's expense. Another successful war against Persia, beginning in 1826 and ending in 1828 with the Treaty of Turkmenchay, and a successful war against Ottoman Turkey in 1828–1829 enabled Russia to devote a much larger portion of its army to subduing the natives of the North Caucasus.
The resistance of the Nakh tribes never ended and provided fertile ground for a new Muslim Avar commander, Imam Shamil, who fought against the Russians from 1834 to 1859 (see Murid War). In 1859, Shamil was captured by the Russians at the aul of Gunib. Shamil had left Baysangur of Benoa, a Chechen with one arm, one eye, and one leg, in command at Gunib; Baysangur broke through the siege and continued to fight Russia for another two years until he was captured and killed by the Russians. The Russian tsar hoped that by sparing Shamil's life he would end the resistance in the North Caucasus, but it did not stop. Russia then adopted a colonization tactic, destroying Nakh settlements and building Cossack defense lines in the lowlands. The Cossacks suffered defeat after defeat and were constantly attacked by mountaineers, who robbed them of food and weaponry.
The tsarist regime used a different approach at the end of the 1860s: it encouraged Chechens and Ingush to leave the Caucasus for the Ottoman Empire (see Muhajir (Caucasus)). It is estimated that about 80% of Chechens and Ingush left the Caucasus during this resettlement, which weakened the resistance and shifted it from open warfare to insurgency. One of the notable Chechen resistance fighters at the end of the 19th century was the Chechen abrek Zelimkhan Gushmazukaev, together with his comrade-in-arms, the Ingush abrek Sulom-Beck Sagopshinski. They built up small units which constantly harassed Russian military convoys, government mints, and the government postal service, mainly in Ingushetia and Chechnya. The Ingush aul of Kek was completely burned when the Ingush refused to hand over Zelimkhan, who was killed at the beginning of the 20th century. The war between the Nakh tribes and Russia resurfaced during the Russian Revolution, which saw the Nakh struggle against Anton Denikin and later against the Soviet Union.
On 21 December 1917, Ingushetia, Chechnya, and Dagestan declared independence from Russia and formed a single state, the "United Mountain Dwellers of the North Caucasus" (also known as the Mountainous Republic of the Northern Caucasus), which was recognized by major world powers. The capital of the new state was moved to Temir-Khan-Shura (Dagestan). Tapa Tchermoeff, a prominent Chechen statesman, was elected the first prime minister of the state. The second elected prime minister was Vassan-Girey Dzhabagiev, an Ingush statesman who also authored the republic's 1917 constitution; in 1920 he was re-elected for a third term. In 1921 the Russians attacked and occupied the country and forcibly absorbed it into the Soviet state. The Caucasian war for independence restarted, and the government went into exile.
During Soviet rule, Chechnya and Ingushetia were combined to form the Checheno-Ingush Autonomous Soviet Socialist Republic. In the 1930s, Chechnya received many Ukrainians fleeing famine, and many of them settled in the Chechen-Ingush ASSR permanently. Over 50,000 Chechens and over 12,000 Ingush fought against Nazi Germany on the front line (including Heroes of the USSR Abukhadzhi Idrisov, Khanpasha Nuradilov, and Movlid Visaitov), and Nazi German troops advanced only as far as the Ossetian ASSR city of Ordzhonikidze and the Chechen-Ingush ASSR city of Malgobek after capturing half of the Caucasus in less than a month. Nevertheless, Chechens and Ingush were falsely accused of being Nazi supporters, and both nations were deported in their entirety during Operation Lentil to the Kazakh SSR (later Kazakhstan) in 1944, near the end of World War II, where over 60% of the Chechen and Ingush populations perished. American historian Norman Naimark writes:
Troops assembled villagers and townspeople, loaded them onto trucks – many deportees remembered that they were Studebakers, fresh from Lend-Lease deliveries over the Iranian border – and delivered them at previously designated railheads. ... Those who could not be moved were shot. ... [A] few fighters aside, the entire Chechen and Ingush nations, 496,460 people, were deported from their homeland.
The deportation was justified by materials prepared by NKVD officer Bogdan Kobulov accusing the Chechens and Ingush of a mass conspiracy to rebel and of providing assistance to German forces. Many of the materials were later proven to be fabricated. Even distinguished Red Army officers who had fought bravely against the Germans were deported, for example Movlid Visaitov, commander of the 255th Separate Chechen-Ingush Regiment and the first to make contact with American forces at the Elbe River. One theory holds that the real reason for the deportation was Russia's desire to attack Turkey, an anti-communist country, and that the Chechens and Ingush could have impeded such plans. In 2004, the European Parliament recognized the deportation of the Chechens and Ingush as an act of genocide.
The territory of the abolished Chechen-Ingush Autonomous Soviet Socialist Republic was divided between Stavropol Krai (where Grozny Okrug was formed), the Dagestan ASSR, the North Ossetian ASSR, and the Georgian SSR.
The Chechens and Ingush were allowed to return to their land after 1956, during de-Stalinisation under Nikita Khrushchev, when the Chechen-Ingush ASSR was restored, though with both the boundaries and the ethnic composition of the territory significantly changed. There were many (predominantly Russian) migrants from other parts of the Soviet Union, who often settled in the abandoned family homes of Chechens and Ingush. The republic lost its Prigorodny District, which was transferred to the North Ossetian ASSR, but gained the predominantly Russian Naursky and Shelkovskoy districts, which are considered the homeland of the Terek Cossacks.
Russification policies towards Chechens continued after 1956, with Russian-language proficiency required in many aspects of life and for advancement in the Soviet system.
On 26 November 1990, the Supreme Council of the Chechen-Ingush ASSR adopted the "Declaration of State Sovereignty of the Chechen-Ingush Republic". This declaration was part of a planned reorganisation of the Soviet Union under a new union treaty, to be signed on 22 August 1991, which would have transformed the 15 union republics into more than 80 constituent states. The 19–21 August 1991 Soviet coup d'état attempt led to the abandonment of this reorganisation.
With the impending dissolution of the Soviet Union in 1991, an independence movement, the Chechen National Congress, was formed, led by ex-Soviet Air Force general and new Chechen President Dzhokhar Dudayev. It campaigned for the recognition of Chechnya as a separate nation. This movement was opposed by Boris Yeltsin's Russian Federation, which argued that Chechnya had not been an independent entity within the Soviet Union—as the Baltic, Central Asian, and other Caucasian states such as Georgia had—but was part of the Russian Soviet Federative Socialist Republic and hence did not have a right under the Soviet constitution to secede. It also argued that other republics of Russia, such as Tatarstan, would consider seceding from the Russian Federation if Chechnya were granted that right. Finally, it argued that Chechnya was a major hub in the oil infrastructure of Russia and hence its secession would hurt the country's economy and energy access.
During the Chechen Revolution, the Soviet Chechen leader Doku Zavgayev was overthrown and Dzhokhar Dudayev seized power. On 1 November 1991, Dudayev's Chechnya issued a unilateral declaration of independence. For the ensuing decade, the territory was locked in an ongoing struggle between various factions, usually fighting unconventionally.
The First Chechen War took place from 1994 to 1996, when Russian forces attempted to regain control over Chechnya. Despite overwhelming numerical superiority in men, weaponry, and air support, the Russian forces were unable to establish effective permanent control over the mountainous area in the face of numerous successful Chechen full-scale battles and insurgency raids. The Budyonnovsk hospital hostage crisis in 1995 shocked the Russian public.
In April 1996 the first democratically elected president of Chechnya, Dzhokhar Dudayev, was killed by Russian forces using a booby trap bomb and a missile fired from a warplane after he was located by triangulating the position of a satellite phone he was using.
The widespread demoralisation of the Russian forces in the area and a successful offensive to re-take Grozny by Chechen rebel forces led by Aslan Maskhadov prompted Russian President Boris Yeltsin to declare a ceasefire in 1996, and sign a peace treaty a year later that saw a withdrawal of Russian forces.
After the war, parliamentary and presidential elections took place in January 1997 in Chechnya and brought to power new President Aslan Maskhadov, chief of staff and prime minister in the Chechen coalition government, for a five-year term. Maskhadov sought to maintain Chechen sovereignty while pressing the Russian government to help rebuild the republic, whose formal economy and infrastructure were virtually destroyed. Russia continued to send money for the rehabilitation of the republic; it also provided pensions and funds for schools and hospitals. Nearly half a million people (40% of Chechnya's prewar population) had been internally displaced and lived in refugee camps or overcrowded villages. There was an economic downturn. Two Russian brigades were permanently stationed in Chechnya.
In light of the devastated economic structure, kidnapping emerged as the principal source of income countrywide, procuring over US$200 million during the three-year independence of the chaotic fledgling state, although victims were rarely killed. In 1998, 176 people were kidnapped, 90 of whom were released, according to official accounts. President Maskhadov started a major campaign against hostage-takers, and on 25 October 1998, Shadid Bargishev, Chechnya's top anti-kidnapping official, was killed in a remote-controlled car bombing. Bargishev's colleagues insisted they would not be intimidated by the attack and would go ahead with their offensive. Political violence and religious extremism, blamed on "Wahhabism", were rife. In 1998, Grozny authorities declared a state of emergency. Tensions led to open clashes between the Chechen National Guard and Islamist militants, such as the July 1998 confrontation in Gudermes.
The War of Dagestan began on 7 August 1999, when the Islamic International Peacekeeping Brigade (IIPB) launched an unsuccessful incursion into the neighboring Russian republic of Dagestan in support of the Shura of Dagestan, which sought independence from Russia. In September, a series of apartment bombings that killed around 300 people in several Russian cities, including Moscow, was blamed on the Chechen separatists. Some journalists contested the official explanation, instead blaming the Russian secret services for blowing up the buildings to initiate a new military campaign against Chechnya. In response to the bombings, a prolonged air campaign of retaliatory strikes against the Ichkerian regime and a ground offensive that began in October 1999 marked the beginning of the Second Chechen War. In a campaign much better organized and planned than the First Chechen War, the Russian armed forces took control of most regions. Russian forces employed brutal methods, killing 60 Chechen civilians during a mop-up operation in Aldy, Chechnya, on 5 February 2000. After the recapture of Grozny in February 2000, the Ichkerian regime fell apart.
Chechen rebels continued to fight Russian troops and conduct terrorist attacks. In October 2002, 40–50 Chechen rebels seized a Moscow theater and took about 900 civilians hostage. The crisis ended with 117 hostages and up to 50 rebels dead, mostly due to an unknown aerosol pumped into the building by Russian special forces to incapacitate the people inside.
In response to the increasing terrorism, Russia tightened its grip on Chechnya and expanded its anti-terrorist operations throughout the region. Russia installed a pro-Russian Chechen regime. In 2003, a referendum was held on a constitution that reintegrated Chechnya within Russia but provided limited autonomy. According to the Chechen government, the referendum passed with 95.5% of the votes and almost 80% turnout. The Economist was skeptical of the results, arguing that "few outside the Kremlin regard the referendum as fair".
In September 2004, separatist rebels occupied a school in the town of Beslan, North Ossetia, demanding recognition of the independence of Chechnya and a Russian withdrawal. 1,100 people (including 777 children) were taken hostage. The attack lasted three days, resulting in the deaths of over 331 people, including 186 children. After the 2004 school siege, Russian president Vladimir Putin announced sweeping security and political reforms, sealing borders in the Caucasus region and revealing plans to give the central government more power. He also vowed to take tougher action against domestic terrorism, including preemptive strikes against Chechen separatists. In 2005 and 2006, separatist leaders Aslan Maskhadov and Shamil Basayev were killed.
Since 2007, Chechnya has been governed by Ramzan Kadyrov. Kadyrov's rule has been characterized by high-level corruption, a poor human rights record, widespread use of torture, and a growing cult of personality. Allegations of anti-gay purges in Chechnya were initially reported on 1 April 2017.
In April 2009, Russia ended its counter-terrorism operation and pulled out the bulk of its army. The insurgency in the North Caucasus continued even after this date. The Caucasus Emirate fully adopted Salafist jihadist tenets, adhering strictly to the Sunni Hanbali school and a literal interpretation of the Quran and the Sunnah.
The Chechen government has been outspoken in its support for the 2022 Russian invasion of Ukraine, where a Chechen military force, the Kadyrovtsy, which is under Kadyrov's personal command, has played a leading role, notably in the Siege of Mariupol. Meanwhile, a substantial number of Chechen separatists have allied themselves to the Ukrainian cause and are fighting a mutual Russian enemy in the Donbas. In June 2022, the US State Department advised citizens not to travel to Chechnya, due to terrorism, kidnapping, and risk of civil unrest.
Situated in the eastern part of the North Caucasus in Eastern Europe, Chechnya is surrounded on nearly all sides by Russian federal territory. In the west, it borders North Ossetia and Ingushetia; in the north, Stavropol Krai; in the east, Dagestan; and to the south, Georgia. Its capital is Grozny. Chechnya is well known for being mountainous, but it is in fact split between the flatter areas north of the Terek River and the highlands south of it.
Rivers:
Despite a relatively small territory, Chechnya is characterized by a variety of climate conditions. The average temperature in Grozny is 11.2 °C (52.1 °F).
The Chechen Republic is divided into 15 districts and 3 cities of republican significance.
According to the 2021 Census, the population of the republic is 1,510,824, up from 1,268,989 in the 2010 Census.
As of the 2021 Census, Chechens at 1,456,792 make up 96.4% of the republic's population. Other groups include Russians (18,225, or 1.2%), Kumyks (12,184, or 0.8%), and a host of other small groups, each accounting for less than 0.5% of the total population. The birth rate was 25.41 per 1,000 people in 2004 (25.7 in Achkhoi-Martan, 19.8 in Grozny, 17.5 in Kurchaloy, 28.3 in Urus-Martan, and 11.1 in Vedeno).
The languages used in the republic are Chechen and Russian. Chechen belongs to the Vainakh or North-Central Caucasian language family, which also includes Ingush and Batsbi. Some scholars place it in a wider North Caucasian language family.
Despite its difficult past, Chechnya has a high life expectancy, one of the highest in Russia. But the pattern of life expectancy is unusual, and according to numerous statistics Chechnya stands out from the overall picture: in 2020 it had the deepest fall in life expectancy of any Russian region, but in 2021 it had the biggest rise, and it has the highest excess of life expectancy in rural areas over cities.
Islam is the predominant religion in Chechnya, practiced by 95% of those polled in Grozny in 2010. Most of the population is Sunni and follows either the Shafi'i or the Hanafi schools of fiqh (Islamic jurisprudence). The Shafi'i school of jurisprudence has a long tradition among the Chechens, and thus it remains the most practiced. Many Chechens are also Sufis, of either the Qadiri or Naqshbandi orders.
Following the end of the Soviet Union, there has been an Islamic revival in Chechnya. By 2011 it was estimated that there were 465 mosques, including the Akhmad Kadyrov Mosque in Grozny, which accommodates 10,000 worshippers, as well as 31 madrasas, including an Islamic university named Kunta-haji and a Center of Islamic Medicine in Grozny, the largest such institution in Europe.
From the 11th to 13th centuries (i.e. before Mongol invasions of Durdzuketia), there was a mission of Georgian Orthodox missionaries to the Nakh peoples. Their success was limited, though a couple of highland teips did convert (conversion was largely by teips). However, during the Mongol invasions, these Christianized teips gradually reverted to paganism, perhaps due to the loss of trans-Caucasian contacts as the Georgians fought the Mongols and briefly fell under their dominion.
The once-strong Russian minority in Chechnya, mostly Terek Cossacks and estimated at approximately 25,000 in 2012, is predominantly Russian Orthodox, although currently only one church exists in Grozny. In August 2011, Archbishop Zosima of Vladikavkaz and Makhachkala performed the first mass baptism ceremony in the history of the Chechen Republic, in the Terek River of Naursky District, in which 35 citizens of the Naursky and Shelkovskoy districts were converted to Orthodoxy. As of 2020, there are eight Orthodox churches in Chechnya, the largest of which is the temple of the Archangel Michael in Grozny.
Since 1990, the Chechen Republic has had many legal, military, and civil conflicts involving separatist movements and pro-Russian authorities. Today, Chechnya is a relatively stable federal republic, although there is still some separatist movement activity. Its regional constitution entered into effect on 2 April 2003, after an all-Chechen referendum was held on 23 March 2003. Some Chechens were controlled by regional teips, or clans, despite the existence of pro- and anti-Russian political structures.
The former separatist religious leader (mufti) Akhmad Kadyrov, looked upon as a traitor by many separatists, was elected president with 83% of the vote in an internationally monitored election on 5 October 2003. Incidents of ballot stuffing and voter intimidation by Russian soldiers, and the exclusion of separatist parties from the polls, were subsequently reported by Organization for Security and Co-operation in Europe (OSCE) monitors. On 9 May 2004, Kadyrov was assassinated in the Grozny football stadium by a landmine planted beneath a VIP stage and detonated during a parade; Sergey Abramov was appointed acting prime minister after the incident. Since 2005, however, Ramzan Kadyrov (son of Akhmad Kadyrov) has been the caretaker prime minister, and in 2007 he was appointed the new president. Many allege he is the wealthiest and most powerful man in the republic, with control over a large private militia (the Kadyrovites). The militia, which began as his father's security force, has been accused of killings and kidnappings by human rights organisations such as Human Rights Watch.
Ichkeria was a member of the Unrepresented Nations and Peoples Organisation between 1991 and 2010. Zviad Gamsakhurdia, the former president of Georgia deposed in a military coup in 1991 and a participant in the Georgian Civil War, recognized the independence of the Chechen Republic of Ichkeria in 1993. Diplomatic relations with Ichkeria were also established by the partially recognised Islamic Emirate of Afghanistan under the Taliban government on 16 January 2000; this recognition ceased with the fall of the Taliban in 2001. Despite the Taliban's recognition, however, there were no friendly relations between the Taliban and Ichkeria: Maskhadov rejected their recognition, stating that the Taliban were illegitimate. Ichkeria also received vocal support from the Baltic countries, a group of Ukrainian nationalists, and Poland; Estonia once voted to recognize it, but the act was never followed through due to pressure applied by both Russia and the EU.
The president of this government was Aslan Maskhadov, and its foreign minister was Ilyas Akhmadov, who also served as the president's spokesman. Maskhadov had been elected to a four-year term in an internationally monitored election in 1997, which took place after the signing of a peace agreement with Russia. In 2001 he issued a decree prolonging his office for one additional year; he was unable to participate in the 2003 presidential election, since separatist parties were barred by the Russian government and Maskhadov faced accusations of terrorist offenses in Russia. Maskhadov left Grozny and moved to the separatist-controlled areas of the south at the onset of the Second Chechen War. He was unable to influence a number of warlords who retained effective control over Chechen territory, and his power was diminished as a result. Russian forces killed Maskhadov on 8 March 2005, and the assassination was widely criticized because it left no legitimate Chechen separatist leader with whom to conduct peace talks. Akhmed Zakayev, deputy prime minister and foreign minister under Maskhadov, was appointed shortly after the 1997 election and is currently living under asylum in England. He and others chose Abdul Khalim Saidullayev, a relatively unknown Islamic judge who had previously hosted an Islamic program on Chechen television, to replace Maskhadov following his death. On 17 June 2006, it was reported that Russian special forces had killed Abdul Khalim Saidullayev in a raid in the Chechen town of Argun. On 10 July 2006, Shamil Basayev, a leader of the Chechen rebel movement, was killed in a truck explosion during an arms deal.
Saidullayev was succeeded by Doku Umarov. On 31 October 2007, Umarov abolished the Chechen Republic of Ichkeria and its presidency and in its place proclaimed the Caucasus Emirate, with himself as its emir. This change of status has been rejected by many Chechen politicians and military leaders, who continue to support the existence of the republic.
During the 2022 Russian invasion of Ukraine, the Ukrainian parliament voted to recognize the "Chechen Republic of Ichkeria as territory temporarily occupied by the Russian Federation".
The Internal Displacement Monitoring Center reports that after hundreds of thousands of ethnic Russians and Chechens fled their homes following the inter-ethnic and separatist conflicts in Chechnya in 1994 and 1999, more than 150,000 people still remain displaced in Russia today.
Human rights groups criticized the conduct of the 2005 parliamentary elections as unfairly influenced by the central Russian government and military.
In 2006 Human Rights Watch reported that pro-Russian Chechen forces under the command of Ramzan Kadyrov, as well as federal police personnel, used torture to get information about separatist forces. "If you are detained in Chechnya, you face a real and immediate risk of torture. And there is little chance that your torturer will be held accountable", said Holly Cartner, Director of the Europe and Central Asia division of the Human Rights Watch.
In 2009, the US government-financed American organization Freedom House included Chechnya in its "Worst of the Worst" list of the most repressive societies in the world, together with Burma, North Korea, Tibet, and others. Memorial considers Chechnya under Kadyrov to be a totalitarian regime.
On 1 February 2009, The New York Times released extensive evidence to support allegations of consistent torture and executions under the Kadyrov government. The accusations were sparked by the assassination in Austria of a former Chechen rebel who had gained access to Kadyrov's inner circle, 27-year-old Umar Israilov.
On 1 July 2009, Amnesty International released a detailed report covering the human rights violations committed by the Russian Federation against Chechen citizens. Among its most prominent findings was that the abused had no means of redress for assaults ranging from kidnapping to torture, while those responsible were never held accountable. This led to the conclusion that Chechnya was being ruled without law and was being driven into further devastating destabilization.
On 10 March 2011, Human Rights Watch reported that since Chechenization, the government has pushed for an enforced Islamic dress code. President Ramzan Kadyrov is quoted as saying "I have the right to criticize my wife. She doesn't [have the right to criticize me]. With us [in Chechen society], a wife is a housewife. A woman should know her place. A woman should give her love to us [men]... She would be [man's] property. And the man is the owner. Here, if a woman does not behave properly, her husband, father, and brother are responsible. According to our tradition, if a woman fools around, her family members kill her... That's how it happens, a brother kills his sister or a husband kills his wife... As a president, I cannot allow for them to kill. So, let women not wear shorts...". He has also openly defended honor killings on several occasions.
On 9 July 2017, the Russian newspaper Novaya Gazeta reported that a number of people had been subjected to an extrajudicial execution on the night of 26 January 2017. It published the names of 27 people known to be dead but stressed that the list was "not all [of those killed]"; the newspaper asserted that as many as 50 people may have been killed. Some of the dead were gay, but not all; the killings appeared to have been triggered by the death of a policeman, and according to the author of the report, Elena Milashina, the victims were executed for alleged terrorism.
In December 2021, up to 50 family members of critics of the Kadyrov government were abducted in a wave of mass kidnappings beginning on 22 December.
Although homosexuality is officially legal in Chechnya per Russian law, it is de facto illegal. Chechen authorities have reportedly arrested, imprisoned and killed persons based on their perceived sexual orientation.
In 2017, it was reported by Novaya Gazeta and human rights groups that Chechen authorities had set up concentration camps, one of which is in Argun, where gay men are interrogated and subjected to physical violence. On 27 June 2018, the Parliamentary Assembly of the Council of Europe noted "cases of abduction, arbitrary detention and torture ... with the direct involvement of Chechen law enforcement officials and on the orders of top-level Chechen authorities" and expressed dismay "at the statements of Chechen and Russian public officials denying the existence of LGBTI people in the Chechen Republic". Kadyrov's spokesman Alvi Karimov told Interfax that gay people "simply do not exist in the republic" and made an approving reference to honor killings by family members "if there were such people in Chechnya". In a 2021 Council of Europe report into anti-LGBTI hate crimes, rapporteur Foura ben Chikha described the "state-sponsored attacks carried out against LGBTI people in Chechnya in 2017" as "the single most egregious example of violence against LGBTI people in Europe that has occurred in decades".
On 11 January 2019, it was reported that another 'gay purge' had begun in the country in December 2018, with several gay men and women being detained. The Russian LGBT Network believes that around 40 people were detained and two killed.
During the war, the Chechen economy fell apart. In 1994, the separatists planned to introduce a new currency, but the change did not occur due to the re-taking of Chechnya by Russian troops in the Second Chechen War.
The economic situation in Chechnya has improved considerably since 2000. According to the New York Times, major efforts to rebuild Grozny have been made, and improvements in the political situation have led some officials to consider setting up a tourism industry, though there are claims that construction workers are being irregularly paid and that poor people have been displaced.
Chechnya's unemployment was 67% in 2006 and fell to 21.5% in 2014.
Total revenue of the budget of Chechnya for 2017 was 59.2 billion rubles. Of these, 48.5 billion rubles were grants from the federal budget of the Russian Federation.
In the late 1970s, Chechnya produced up to 20 million tons of oil annually. Production declined sharply to approximately 3 million tons in the late 1980s and to below 2 million tons before 1994. The first (1994–1996) and second (1999) Russian invasions of Chechnya inflicted material damage on the oil-sector infrastructure; oil production decreased to 750,000 tons in 2001, rose again to 2 million tons in 2006, and by 2012 stood at 1 million tons.
The culture of Chechnya is based on the native traditions of the Chechen people. Chechen mythology, along with art, has helped shape the culture for over 1,000 years.
"title": "Geography"
},
{
"paragraph_id": 46,
"text": "The Chechen Republic is divided into 15 districts and 3 cities of republican significance.",
"title": "Administrative divisions"
},
{
"paragraph_id": 47,
"text": "According to the 2021 Census, the population of the republic is 1,510,824, up from 1,268,989 in the 2010 Census.",
"title": "Demographics"
},
{
"paragraph_id": 48,
"text": "As of the 2021 Census, Chechens at 1,456,792 make up 96.4% of the republic's population. Other groups include Russians (18,225, or 1.2%), Kumyks (12,184, or 0.8%) and a host of other small groups, each accounting for less than 0.5% of the total population The birth rate was 25.41 in 2004. (25.7 in Achkhoi Martan, 19.8 in Groznyy, 17.5 in Kurchaloi, 28.3 in Urus Martan and 11.1 in Vedeno).",
"title": "Demographics"
},
{
"paragraph_id": 49,
"text": "The languages used in the Republic are Chechen and Russian. Chechen belongs to the Vaynakh or North-central Caucasian language family, which also includes Ingush and Batsb. Some scholars place it in a wider North Caucasian languages.",
"title": "Demographics"
},
{
"paragraph_id": 50,
"text": "Despite its difficult past, Chechnya has a high life expectancy, one of the highest in Russia. But the pattern of life expectancy is unusual, and in according to numerous statistics, Chechnya stands out from the overall picture. In 2020 Chechnya had the deepest fall in life expectancy, but in 2021 it had the biggest rise. Chechnya has the highest excess of life expectancy in rural areas over cities.",
"title": "Demographics"
},
{
"paragraph_id": 51,
"text": "",
"title": "Demographics"
},
{
"paragraph_id": 52,
"text": "(in the territory of modern Chechnya)",
"title": "Demographics"
},
{
"paragraph_id": 53,
"text": "Islam is the predominant religion in Chechnya, practiced by 95% of those polled in Grozny in 2010. Most of the population is Sunni and follows either the Shafi'i or the Hanafi schools of fiqh (Islamic jurisprudence). The Shafi'i school of jurisprudence has a long tradition among the Chechens, and thus it remains the most practiced. Many Chechens are also Sufis, of either the Qadiri or Naqshbandi orders.",
"title": "Demographics"
},
{
"paragraph_id": 54,
"text": "Following the end of the Soviet Union, there has been an Islamic revival in Chechnya, and in 2011 it was estimated that there were 465 mosques, including the Akhmad Kadyrov Mosque in Grozny accommodating 10,000 worshippers, as well 31 madrasas, including an Islamic university named Kunta-haji and a Center of Islamic Medicine in Grozny which is the largest such institution in Europe.",
"title": "Demographics"
},
{
"paragraph_id": 55,
"text": "From the 11th to 13th centuries (i.e. before Mongol invasions of Durdzuketia), there was a mission of Georgian Orthodox missionaries to the Nakh peoples. Their success was limited, though a couple of highland teips did convert (conversion was largely by teips). However, during the Mongol invasions, these Christianized teips gradually reverted to paganism, perhaps due to the loss of trans-Caucasian contacts as the Georgians fought the Mongols and briefly fell under their dominion.",
"title": "Demographics"
},
{
"paragraph_id": 56,
"text": "The once-strong Russian minority in Chechnya, mostly Terek Cossacks and estimated as numbering approximately 25,000 in 2012, are predominantly Russian Orthodox, although currently only one church exists in Grozny. In August 2011, Archbishop Zosima of Vladikavkaz and Makhachkala performed the first mass baptism ceremony in the history of the Chechen Republic in the Terek River of Naursky District in which 35 citizens of Naursky and Shelkovsky districts were converted to Orthodoxy. As of 2020, there are eight Orthodox churches in Chechnya, the largest is the temple of the Archangel Michael in Grozny.",
"title": "Demographics"
},
{
"paragraph_id": 57,
"text": "Since 1990, the Chechen Republic has had many legal, military, and civil conflicts involving separatist movements and pro-Russian authorities. Today, Chechnya is a relatively stable federal republic, although there is still some separatist movement activity. Its regional constitution entered into effect on 2 April 2003, after an all-Chechen referendum was held on 23 March 2003. Some Chechens were controlled by regional teips, or clans, despite the existence of pro- and anti-Russian political structures.",
"title": "Politics"
},
{
"paragraph_id": 58,
"text": "The former separatist religious leader (mufti) Akhmad Kadyrov, looked upon as a traitor by many separatists, was elected president with 83% of the vote in an internationally monitored election on 5th of October 2003. Incidents of ballot stuffing and voter intimidation by Russian soldiers and the exclusion of separatist parties from the polls were subsequently reported by Organization for Security and Co-operation in Europe (OSCE) monitors. On 9 May 2004, Kadyrov was assassinated in Grozny football stadium by a landmine explosion that was planted beneath a VIP stage and detonated during a parade, and Sergey Abramov was appointed acting prime minister after the incident. However, since 2005 Ramzan Kadyrov (son of Akhmad Kadyrov) has been the caretaker prime minister, and in 2007 was appointed as the new president. Many allege he is the wealthiest and most powerful man in the republic, with control over a large private militia (the Kadyrovites). The militia, which began as his father's security force, has been accused of killings and kidnappings by human rights organisations such as Human Rights Watch.",
"title": "Politics"
},
{
"paragraph_id": 59,
"text": "Ichkeria was a member of the Unrepresented Nations and Peoples Organisation between 1991 and 2010. Former president of Georgia Zviad Gamsakhurdia deposed in a military coup of 1991 and a participant of the Georgian Civil War, recognized the independence of the Chechen Republic of Ichkeria in 1993. Diplomatic relations with Ichkeria were also established by the partially recognised Islamic Emirate of Afghanistan under the Taliban government on 16 January 2000. This recognition ceased with the fall of the Taliban in 2001. However, despite Taliban recognition, there were no friendly relations between the Taliban and Ichkeria—Maskhadov rejected their recognition, stating that the Taliban were illegitimate. Ichkeria also received vocal support from the Baltic countries, a group of Ukrainian nationalists, and Poland; Estonia once voted to recognize, but the act never was followed through due to pressure applied by both Russia and the EU.",
"title": "Politics"
},
{
"paragraph_id": 60,
"text": "The president of this government was Aslan Maskhadov, and the foreign minister was Ilyas Akhmadov, who was the spokesman for the president. Maskhadov had been elected for four years in an internationally monitored election in 1997, which took place after signing a peace agreement with Russia. In 2001 he issued a decree prolonging his office for one additional year; he was unable to participate in the 2003 presidential election since separatist parties were barred by the Russian government, and Maskhadov faced accusations of terrorist offenses in Russia. Maskhadov left Grozny and moved to the separatist-controlled areas of the south at the onset of the Second Chechen War. Maskhadov was unable to influence a number of warlords who retain effective control over Chechen territory, and his power was diminished as a result. Russian forces killed Maskhadov on 8 March 2005, and the assassination was widely criticized since it left no legitimate Chechen separatist leader with whom to conduct peace talks. Akhmed Zakayev, deputy prime minister and a foreign minister under Maskhadov, was appointed shortly after the 1997 election and is currently living under asylum in England. He and others chose Abdul Khalim Saidullayev, a relatively unknown Islamic judge who was previously the host of an Islamic program on Chechen television, to replace Maskhadov following his death. On 17 June 2006, it was reported that Russian special forces killed Abdul Khalim Saidullayev in a raid in the Chechen town of Argun. On 10 July 2006, Shamil Basayev, a leader of the Chechen rebel movement, was killed in a truck explosion during an arms deal.",
"title": "Politics"
},
{
"paragraph_id": 61,
"text": "The successor of Saidullayev became Doku Umarov. On 31 October 2007, Umarov abolished the Chechen Republic of Ichkeria and its presidency and in its place proclaimed the Caucasus Emirate with himself as its Emir. This change of status has been rejected by many Chechen politicians and military leaders who continue to support the existence of the republic.",
"title": "Politics"
},
{
"paragraph_id": 62,
"text": "During the 2022 Russian invasion of Ukraine, the Ukrainian parliament voted to recognize the \"Chechen Republic of Ichkeria as territory temporarily occupied by the Russian Federation\".",
"title": "Politics"
},
{
"paragraph_id": 63,
"text": "Тhe Internal Displacement Monitoring Center reports that after hundreds of thousands of ethnic Russians and Chechens fled their homes following inter-ethnic and separatist conflicts in Chechnya in 1994 and 1999, more than 150,000 people still remain displaced in Russia today.",
"title": "Human rights"
},
{
"paragraph_id": 64,
"text": "Нuman rights groups criticized the conduct of the 2005 parliamentary elections as unfairly influenced by the central Russian government and military.",
"title": "Human rights"
},
{
"paragraph_id": 65,
"text": "In 2006 Human Rights Watch reported that pro-Russian Chechen forces under the command of Ramzan Kadyrov, as well as federal police personnel, used torture to get information about separatist forces. \"If you are detained in Chechnya, you face a real and immediate risk of torture. And there is little chance that your torturer will be held accountable\", said Holly Cartner, Director of the Europe and Central Asia division of the Human Rights Watch.",
"title": "Human rights"
},
{
"paragraph_id": 66,
"text": "In 2009, the US government financed American organization Freedom House included Chechnya in the \"Worst of the Worst\" list of most repressive societies in the world, together with Burma, North Korea, Tibet, and others. Memorial considers Chechnya under Kadyrov to be a totalitarian regime.",
"title": "Human rights"
},
{
"paragraph_id": 67,
"text": "On 1 February 2009, The New York Times released extensive evidence to support allegations of consistent torture and executions under the Kadyrov government. The accusations were sparked by the assassination in Austria of a former Chechen rebel who had gained access to Kadyrov's inner circle, 27-year-old Umar Israilov.",
"title": "Human rights"
},
{
"paragraph_id": 68,
"text": "On 1 July 2009, Amnesty International released a detailed report covering the human rights violations committed by the Russian Federation against Chechen citizens. Among the most prominent features was that those abused had no method of redress against assaults, ranging from kidnapping to torture, while those responsible were never held accountable. This led to the conclusion that Chechnya was being ruled without law, being run into further devastating destabilization.",
"title": "Human rights"
},
{
"paragraph_id": 69,
"text": "On 10 March 2011, Human Rights Watch reported that since Chechenization, the government has pushed for enforced Islamic dress code. The president Ramzan Kadyrov is quoted as saying \"I have the right to criticize my wife. She doesn't [have the right to criticize me]. With us [in Chechen society], a wife is a housewife. A woman should know her place. A woman should give her love to us [men]... She would be [man's] property. And the man is the owner. Here, if a woman does not behave properly, her husband, father, and brother are responsible. According to our tradition, if a woman fools around, her family members kill her... That's how it happens, a brother kills his sister or a husband kills his wife... As a president, I cannot allow for them to kill. So, let women not wear shorts...\". He has also openly defended honor killings on several occasions.",
"title": "Human rights"
},
{
"paragraph_id": 70,
"text": "On 9 July 2017, Russian newspaper Novaya Gazeta reported that a number of people were subject to an extrajudicial execution on the night of 26 January 2017. It published 27 names of the people known to be dead, but stressed that the list is \"not all [of those killed]\"; the newspaper asserted that 50 people may have been killed in the execution. Some of the dead were gay, but not all; the deaths appeared to have been triggered by the death of a policeman, and according to the author of the report, Elena Milashina, were executed for alleged terrorism.",
"title": "Human rights"
},
{
"paragraph_id": 71,
"text": "In December 2021, up to 50 family members of critics of the Kadyrov government were abducted in a wave of mass kidnappings beginning on 22 December.",
"title": "Human rights"
},
{
"paragraph_id": 72,
"text": "Although homosexuality is officially legal in Chechnya per Russian law, it is de facto illegal. Chechen authorities have reportedly arrested, imprisoned and killed persons based on their perceived sexual orientation.",
"title": "Human rights"
},
{
"paragraph_id": 73,
"text": "In 2017, it was reported by Novaya Gazeta and human rights groups that Chechen authorities had set up concentration camps, one of which is in Argun, where gay men are interrogated and subjected to physical violence. On 27 June 2018, the Parliamentary Assembly of the Council of Europe noted \"cases of abduction, arbitrary detention and torture ... with the direct involvement of Chechen law enforcement officials and on the orders of top-level Chechen authorities\" and expressed dismay \"at the statements of Chechen and Russian public officials denying the existence of LGBTI people in the Chechen Republic\". Kadyrov's spokesman Alvi Karimov told Interfax that gay people \"simply do not exist in the republic\" and made an approving reference to honor killings by family members \"if there were such people in Chechnya\". In a 2021 Council of Europe report into anti-LGBTI hate crimes, rapporteur Foura ben Chikha described the \"state-sponsored attacks carried out against LGBTI people in Chechnya in 2017\" as \"the single most egregious example of violence against LGBTI people in Europe that has occurred in decades\".",
"title": "Human rights"
},
{
"paragraph_id": 74,
"text": "On 11 January 2019, it was reported that another 'gay purge' had begun in the country in December 2018, with several gay men and women being detained. The Russian LGBT Network believes that around 40 people were detained and two killed.",
"title": "Human rights"
},
{
"paragraph_id": 75,
"text": "During the war, the Chechen economy fell apart. In 1994, the separatists planned to introduce a new currency, but the change did not occur due to the re-taking of Chechnya by Russian troops in the Second Chechen War.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "The economic situation in Chechnya has improved considerably since 2000. According to the New York Times, major efforts to rebuild Grozny have been made, and improvements in the political situation have led some officials to consider setting up a tourism industry, though there are claims that construction workers are being irregularly paid and that poor people have been displaced.",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "Chechnya's unemployment was 67% in 2006 and fell to 21.5% in 2014.",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "Total revenue of the budget of Chechnya for 2017 was 59.2 billion rubles. Of these, 48.5 billion rubles were grants from the federal budget of the Russian Federation.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "In late 1970s, Chechnya produced up to 20 million tons of oil annually, production declined sharply to approximately 3 million tons in the late 1980s, and to below 2 million tons before 1994, first (1994–1996) second Russian invasion of Chechnya (1999) inflicted material damage on the oil-sector infrastructure, oil production decreased to 750,000 tons in 2001 only to increase to 2 million tons in 2006, by 2012 production was 1 million tons.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "The culture of Chechnya is based on the native traditions of Chechen people. Chechen mythology along with art have helped shape the culture for over 1,000 years.",
"title": "Culture"
}
]
| Chechnya, officially the Chechen Republic, is a republic of Russia. It is situated in the North Caucasus of Eastern Europe, close to the Caspian Sea. The republic forms a part of the North Caucasian Federal District, and shares land borders with the country of Georgia to its south; with the Russian republics of Dagestan, Ingushetia, and North Ossetia-Alania to its east, north, and west; and with Stavropol Krai to its northwest. After the dissolution of the Soviet Union in 1991, the Checheno-Ingush ASSR split into two parts: the Republic of Ingushetia and the Chechen Republic. The latter proclaimed the Chechen Republic of Ichkeria, which sought independence, while the former sided with Russia. Following the First Chechen War of 1994–1996 with Russia, Chechnya gained de facto independence as the Chechen Republic of Ichkeria, although de jure it remained a part of Russia. Russian federal control was restored in the Second Chechen War of 1999–2009, with Chechen politics since dominated by the former rebel Akhmad Kadyrov and later by his son Ramzan Kadyrov. The republic covers an area of 17,300 square kilometres, with a population of over 1.5 million residents as of 2021.
It is home to the indigenous Chechens, who are part of the Nakh peoples and are primarily of Muslim faith. Grozny is the capital and largest city. | 2001-08-13T07:21:23Z | 2023-12-25T06:36:44Z | [
"Template:Pp-move-vandalism",
"Template:Convert",
"Template:History of Chechnya",
"Template:Notelist",
"Template:Reflist",
"Template:Chechnya",
"Template:Pp-move-indef",
"Template:Cite journal",
"Template:Dead link",
"Template:RussiaBasicLawRef",
"Template:ISBN",
"Template:Refbegin",
"Template:Infobox Russian federal subject",
"Template:Webarchive",
"Template:Official website",
"Template:Curlie",
"Template:In lang",
"Template:IPAc-en",
"Template:Sfn",
"Template:'\"",
"Template:Ru-pop-ref",
"Template:Cite magazine",
"Template:Efn",
"Template:Fcn",
"Template:Quote",
"Template:Cbignore",
"Template:Use dmy dates",
"Template:IPA-ru",
"Template:Historical populations",
"Template:Unreliable source?",
"Template:Lang",
"Template:Which",
"Template:Commons",
"Template:Distinguish",
"Template:Page needed",
"Template:Citation",
"Template:Refend",
"Template:Quotation",
"Template:Better source needed",
"Template:' \"",
"Template:Cite news",
"Template:Cite Russian law",
"Template:As of",
"Template:Main",
"Template:Citation needed",
"Template:See also",
"Template:POV section",
"Template:Cite web",
"Template:Cite book",
"Template:Countries and regions of the Caucasus",
"Template:Authority control",
"Template:Subdivisions of Russia",
"Template:Short description",
"Template:Respell",
"Template:Lang-ce",
"Template:Largest cities",
"Template:Lang-ru"
]
| https://en.wikipedia.org/wiki/Chechnya |
6,097 | Canonization | Canonization is the declaration of a deceased person as an officially recognized saint, specifically, the official act of a Christian communion declaring a person worthy of public veneration and entering their name in the canon catalogue of saints, or authorized list of that communion's recognized saints.
Canonization is a papal declaration that the Catholic faithful may venerate a particular deceased member of the church. Popes began making such decrees in the tenth century. Up to that point, the local bishops governed the veneration of holy men and women within their own dioceses; and there may have been, for any particular saint, no formal decree at all. In subsequent centuries, the procedures became increasingly regularized and the Popes began restricting to themselves the right to declare someone a Catholic saint. In contemporary usage, the term is understood to refer to the act by which any Christian church declares that a person who has died is a saint, upon which declaration the person is included in the list of recognized saints, called the "canon".
In the Roman Martyrology, the following entry is given for the Penitent Thief: "Commemoration of the holy thief in Jerusalem who confessed to Christ and was canonized by Jesus himself."
The Roman Rite's Canon of the Mass contains only the names of apostles and martyrs, along with that of the Blessed Virgin Mary and, since 1962, that of Saint Joseph her spouse.
By the fourth century, however, "confessors"—people who had confessed their faith not by dying but by word and life—began to be venerated publicly. Examples of such people are Saint Hilarion and Saint Ephrem the Syrian in the East, and Saint Martin of Tours and Saint Hilary of Poitiers in the West. Their names were inserted in the diptychs, the lists of saints explicitly venerated in the liturgy, and their tombs were honoured in like manner as those of the martyrs. Since the witness of their lives was not as unequivocal as that of the martyrs, they were venerated publicly only with the approval by the local bishop. This process is often referred to as "local canonization".
This approval was required even for veneration of a reputed martyr. In his history of the Donatist heresy, Saint Optatus recounts that at Carthage a Catholic matron, named Lucilla, incurred the censures of the Church for having kissed the relics of a reputed martyr whose claims to martyrdom had not been juridically proved. And Saint Cyprian (died 258) recommended that the utmost diligence be observed in investigating the claims of those who were said to have died for the faith. All the circumstances accompanying the martyrdom were to be inquired into; the faith of those who suffered, and the motives that animated them were to be rigorously examined, in order to prevent the recognition of undeserving persons. Evidence was sought from the court records of the trials or from people who had been present at the trials.
Augustine of Hippo (died 430) tells of the procedure which was followed in his day for the recognition of a martyr. The bishop of the diocese in which the martyrdom took place set up a canonical process for conducting the inquiry with the utmost severity. The acts of the process were sent either to the metropolitan or primate, who carefully examined the cause, and, after consultation with the suffragan bishops, declared whether the deceased was worthy of the name of "martyr" and public veneration.
Though not "canonizations" in the narrow sense, acts of formal recognition, such as the erection of an altar over the saint's tomb or transferring the saint's relics to a church, were preceded by formal inquiries into the sanctity of the person's life and the miracles attributed to that person's intercession.
Such acts of recognition of a saint were authoritative, in the strict sense, only for the diocese or ecclesiastical province for which they were issued, but with the spread of the fame of a saint, were often accepted elsewhere also.
In the Catholic Church, both in the Latin and the constituent Eastern churches, the act of canonization is reserved to the Apostolic See and occurs at the conclusion of a long process requiring extensive proof that the candidate for canonization lived and died in such an exemplary and holy way that they are worthy to be recognized as a saint. The Church's official recognition of sanctity implies that the person is now in Heaven and that they may be publicly invoked and mentioned officially in the liturgy of the Church, including in the Litany of the Saints.
In the Catholic Church, canonization is a decree that allows universal veneration of the saint. For permission to venerate merely locally, only beatification is needed.
For several centuries the bishops, or in some places only the primates and patriarchs, could grant martyrs and confessors public ecclesiastical honor; such honor, however, was always decreed only for the local territory of which the grantors had jurisdiction. Only acceptance of the cultus by the Pope made the cultus universal, because he alone can rule the universal Catholic Church. Abuses, however, crept into this discipline, due as well to indiscretions of popular fervor as to the negligence of some bishops in inquiring into the lives of those whom they permitted to be honoured as saints.
In the Medieval West, the Apostolic See was asked to intervene in the question of canonizations so as to ensure more authoritative decisions. The canonization of Saint Udalric, Bishop of Augsburg by Pope John XV in 993 was the first undoubted example of papal canonization of a saint from outside of Rome being declared worthy of liturgical veneration for the entire church.
Thereafter, recourse to the judgment of the Pope occurred more frequently. Toward the end of the 11th century, the Popes began asserting their exclusive right to authorize the veneration of a saint against the older rights of bishops to do so for their dioceses and regions. Popes therefore decreed that the virtues and miracles of persons proposed for public veneration should be examined in councils, more specifically in general councils. Pope Urban II, Pope Calixtus II, and Pope Eugene III conformed to this discipline.
Hugh de Boves, Archbishop of Rouen, canonized Walter of Pontoise, or St. Gaultier, in 1153, the final saint in Western Europe to be canonized by an authority other than the Pope: "The last case of canonization by a metropolitan is said to have been that of St. Gaultier, or Gaucher, [A]bbot of Pontoise, by the Archbishop of Rouen. A decree of Pope Alexander III [in] 1170 gave the prerogative to the [P]ope thenceforth, so far as the Western Church was concerned." In a decretal of 1173, Pope Alexander III reprimanded some bishops for permitting veneration of a man who was merely killed while intoxicated, prohibited veneration of the man, and most significantly decreed that "you shall not therefore presume to honor him in the future; for, even if miracles were worked through him, it is not lawful for you to venerate him as a saint without the authority of the Catholic Church." Theologians disagree as to the full import of the decretal of Pope Alexander III: either a new law was instituted, in which case the Pope then for the first time reserved the right of beatification to himself, or an existing law was confirmed.
However, the procedure initiated by the decretal of Pope Alexander III was confirmed by a bull of Pope Innocent III issued on the occasion of the canonization of Cunigunde of Luxembourg in 1200. The bull of Pope Innocent III resulted in increasingly elaborate inquiries to the Apostolic See concerning canonizations. Because the decretal of Pope Alexander III did not end all controversy and some bishops did not obey it in so far as it regarded beatification, the right of which they had certainly possessed hitherto, Pope Urban VIII issued the Apostolic letter Caelestis Hierusalem cives of 5 July 1634 that exclusively reserved to the Apostolic See both its immemorial right of canonization and that of beatification. He further regulated both of these acts by issuing his Decreta servanda in beatificatione et canonizatione Sanctorum on 12 March 1642.
In his five-volume De Servorum Dei beatificatione et de Beatorum canonizatione, the eminent canonist Prospero Lambertini (1675–1758), who later became Pope Benedict XIV, elaborated on the procedural norms of Pope Urban VIII's Apostolic letter Caelestis Hierusalem cives of 1634 and Decreta servanda in beatificatione et canonizatione Sanctorum of 1642, and on the conventional practice of the time. His work, published from 1734 to 1738, governed the proceedings until 1917. The article "Beatification and canonization process in 1914" describes the procedures followed until the promulgation of the Codex of 1917. The substance of De Servorum Dei beatificatione et de Beatorum canonizatione was incorporated into the Codex Iuris Canonici (Code of Canon Law) of 1917, which governed until the promulgation of the revised Codex Iuris Canonici in 1983 by Pope John Paul II. Prior to promulgation of the revised Codex in 1983, Pope Paul VI initiated a simplification of the procedures.
The Apostolic constitution Divinus Perfectionis Magister of Pope John Paul II of 25 January 1983 and the norms issued by the Congregation for the Causes of Saints on 7 February 1983 to implement the constitution in dioceses, continued the simplification of the process initiated by Pope Paul VI. Contrary to popular belief, the reforms did not eliminate the office of the Promoter of the Faith (Latin: Promotor Fidei), popularly known as the Devil's advocate, whose office is to question the material presented in favor of canonization. The reforms were intended to reduce the adversarial nature of the process. In November 2012 Pope Benedict XVI appointed Monsignor Carmello Pellegrino as Promoter of the Faith.
Candidates for canonization undergo the following process: the cause is opened and the candidate is titled a Servant of God; a declaration of heroic virtue or of martyrdom makes the candidate Venerable; beatification, which ordinarily requires one attested miracle, confers the title Blessed; and canonization, which ordinarily requires a second miracle, declares the candidate a Saint.
Canonization is a statement of the Church that the person certainly enjoys the beatific vision of Heaven. The title of "Saint" (Latin: Sanctus or Sancta) is then proper, reflecting that the saint is a refulgence of the holiness (sanctitas) of God himself, which alone comes from God's gift. The saint is assigned a feast day which may be celebrated anywhere in the universal Church, although it is not necessarily added to the General Roman Calendar or local calendars as an "obligatory" feast; parish churches may be erected in their honor; and the faithful may freely celebrate and honor the saint.
Although recognition of sainthood by the Pope does not directly concern a fact of Divine revelation, nonetheless it must be "definitively held" by the faithful as infallible pursuant to, at the least, the Universal Magisterium of the Church, because it is a truth related to revelation by historical necessity.
Regarding the Eastern Catholic Churches, individual sui juris churches have the right to "glorify" saints for their own jurisdictions, although this has rarely happened.
Popes have several times permitted to the universal Church, without executing the ordinary judicial process of canonization described above, the veneration as a saint, the "cultus" of one long venerated as such locally. This act of a Pope is denominated "equipollent" or "equivalent canonization" and "confirmation of cultus". In such cases, there is no need to have a miracle attributed to the saint to allow their canonization. According to the rules Pope Benedict XIV (regnat 17 August 1740 – 3 May 1758) instituted, there are three conditions for an equipollent canonization: (1) existence of an ancient cultus of the person, (2) a general and constant attestation to the virtues or martyrdom of the person by credible historians, and (3) uninterrupted fame of the person as a worker of miracles.
The majority of Protestant denominations do not formally recognize saints, usually with the claim that there can not be saints since no follower of Christ is more or less worthy of the favor of the Lord than any other. However, some denominations do, as shown below.
The Church of England, the Mother Church of the Anglican Communion, canonized Charles I as a saint, in the Convocations of Canterbury and York of 1660.
The General Conference of the United Methodist Church has formally declared individuals martyrs, including Dietrich Bonhoeffer (in 2008) and Martin Luther King Jr. (in 2012).
Various terms are used for canonization by the autocephalous Eastern Orthodox Churches: канонизация ("canonization") or прославление ("glorification", in the Russian Orthodox Church), კანონიზაცია (kanonizats’ia, Georgian Orthodox Church), канонизација (Serbian Orthodox Church), canonizare (Romanian Orthodox Church), and Канонизация (Bulgarian Orthodox Church). Additional terms are used for canonization by other autocephalous Eastern Orthodox Churches: αγιοκατάταξη (Katharevousa: ἁγιοκατάταξις) agiokatataxi/agiokatataxis, "ranking among saints" (Ecumenical Patriarchate of Constantinople, Church of Cyprus, Church of Greece), kanonizim (Albanian Orthodox Church), kanonizacja (Polish Orthodox Church), and kanonizace/kanonizácia (Czech and Slovak Orthodox Church).
The Orthodox Church in America, an Eastern Orthodox Church partly recognized as autocephalous, uses the term "glorification" for the official recognition of a person as a saint.
Within the Armenian Apostolic Church, part of Oriental Orthodoxy, there had been discussions since the 1980s about canonizing the victims of the Armenian genocide. On 23 April 2015, all of the victims of the genocide were canonized. | [
{
"paragraph_id": 0,
"text": "Canonization is the declaration of a deceased person as an officially recognized saint, specifically, the official act of a Christian communion declaring a person worthy of public veneration and entering their name in the canon catalogue of saints, or authorized list of that communion's recognized saints.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Canonization is a papal declaration that the Catholic faithful may venerate a particular deceased member of the church. Popes began making such decrees in the tenth century. Up to that point, the local bishops governed the veneration of holy men and women within their own dioceses; and there may have been, for any particular saint, no formal decree at all. In subsequent centuries, the procedures became increasingly regularized and the Popes began restricting to themselves the right to declare someone a Catholic saint. In contemporary usage, the term is understood to refer to the act by which any Christian church declares that a person who has died is a saint, upon which declaration the person is included in the list of recognized saints, called the \"canon\".",
"title": "Catholic Church"
},
{
"paragraph_id": 2,
"text": "In the Roman Martyrology, the following entry is given for the Penitent Thief: \"Commemoration of the holy thief in Jerusalem who confessed to Christ and canonized him by Jesus himself.",
"title": "Catholic Church"
},
{
"paragraph_id": 3,
"text": "The Roman Rite's Canon of the Mass contains only the names of apostles and martyrs, along with that of the Blessed Virgin Mary and, since 1962, that of Saint Joseph her spouse.",
"title": "Catholic Church"
},
{
"paragraph_id": 4,
"text": "By the fourth century, however, \"confessors\"—people who had confessed their faith not by dying but by word and life—began to be venerated publicly. Examples of such people are Saint Hilarion and Saint Ephrem the Syrian in the East, and Saint Martin of Tours and Saint Hilary of Poitiers in the West. Their names were inserted in the diptychs, the lists of saints explicitly venerated in the liturgy, and their tombs were honoured in like manner as those of the martyrs. Since the witness of their lives was not as unequivocal as that of the martyrs, they were venerated publicly only with the approval by the local bishop. This process is often referred to as \"local canonization\".",
"title": "Catholic Church"
},
{
"paragraph_id": 5,
"text": "This approval was required even for veneration of a reputed martyr. In his history of the Donatist heresy, Saint Optatus recounts that at Carthage a Catholic matron, named Lucilla, incurred the censures of the Church for having kissed the relics of a reputed martyr whose claims to martyrdom had not been juridically proved. And Saint Cyprian (died 258) recommended that the utmost diligence be observed in investigating the claims of those who were said to have died for the faith. All the circumstances accompanying the martyrdom were to be inquired into; the faith of those who suffered, and the motives that animated them were to be rigorously examined, in order to prevent the recognition of undeserving persons. Evidence was sought from the court records of the trials or from people who had been present at the trials.",
"title": "Catholic Church"
},
{
"paragraph_id": 6,
"text": "Augustine of Hippo (died 430) tells of the procedure which was followed in his day for the recognition of a martyr. The bishop of the diocese in which the martyrdom took place set up a canonical process for conducting the inquiry with the utmost severity. The acts of the process were sent either to the metropolitan or primate, who carefully examined the cause, and, after consultation with the suffragan bishops, declared whether the deceased was worthy of the name of \"martyr\" and public veneration.",
"title": "Catholic Church"
},
{
"paragraph_id": 7,
"text": "Though not \"canonizations\" in the narrow sense, acts of formal recognition, such as the erection of an altar over the saint's tomb or transferring the saint's relics to a church, were preceded by formal inquiries into the sanctity of the person's life and the miracles attributed to that person's intercession.",
"title": "Catholic Church"
},
{
"paragraph_id": 8,
"text": "Such acts of recognition of a saint were authoritative, in the strict sense, only for the diocese or ecclesiastical province for which they were issued, but with the spread of the fame of a saint, were often accepted elsewhere also.",
"title": "Catholic Church"
},
{
"paragraph_id": 9,
"text": "In the Catholic Church, both in the Latin and the constituent Eastern churches, the act of canonization is reserved to the Apostolic See and occurs at the conclusion of a long process requiring extensive proof that the candidate for canonization lived and died in such an exemplary and holy way that they are worthy to be recognized as a saint. The Church's official recognition of sanctity implies that the person is now in Heaven and that they may be publicly invoked and mentioned officially in the liturgy of the Church, including in the Litany of the Saints.",
"title": "Catholic Church"
},
{
"paragraph_id": 10,
"text": "In the Catholic Church, canonization is a decree that allows universal veneration of the saint. For permission to venerate merely locally, only beatification is needed.",
"title": "Catholic Church"
},
{
"paragraph_id": 11,
"text": "For several centuries the bishops, or in some places only the primates and patriarchs, could grant martyrs and confessors public ecclesiastical honor; such honor, however, was always decreed only for the local territory of which the grantors had jurisdiction. Only acceptance of the cultus by the Pope made the cultus universal, because he alone can rule the universal Catholic Church. Abuses, however, crept into this discipline, due as well to indiscretions of popular fervor as to the negligence of some bishops in inquiring into the lives of those whom they permitted to be honoured as saints.",
"title": "Catholic Church"
},
{
"paragraph_id": 12,
"text": "In the Medieval West, the Apostolic See was asked to intervene in the question of canonizations so as to ensure more authoritative decisions. The canonization of Saint Udalric, Bishop of Augsburg by Pope John XV in 993 was the first undoubted example of papal canonization of a saint from outside of Rome being declared worthy of liturgical veneration for the entire church.",
"title": "Catholic Church"
},
{
"paragraph_id": 13,
"text": "Thereafter, recourse to the judgment of the Pope occurred more frequently. Toward the end of the 11th century, the Popes began asserting their exclusive right to authorize the veneration of a saint against the older rights of bishops to do so for their dioceses and regions. Popes therefore decreed that the virtues and miracles of persons proposed for public veneration should be examined in councils, more specifically in general councils. Pope Urban II, Pope Calixtus II, and Pope Eugene III conformed to this discipline.",
"title": "Catholic Church"
},
{
"paragraph_id": 14,
"text": "Hugh de Boves, Archbishop of Rouen, canonized Walter of Pontoise, or St. Gaultier, in 1153, the final saint in Western Europe to be canonized by an authority other than the Pope: \"The last case of canonization by a metropolitan is said to have been that of St. Gaultier, or Gaucher, [A]bbot of Pontoise, by the Archbishop of Rouen. A decree of Pope Alexander III [in] 1170 gave the prerogative to the [P]ope thenceforth, so far as the Western Church was concerned.\" In a decretal of 1173, Pope Alexander III reprimanded some bishops for permitting veneration of a man who was merely killed while intoxicated, prohibited veneration of the man, and most significantly decreed that \"you shall not therefore presume to honor him in the future; for, even if miracles were worked through him, it is not lawful for you to venerate him as a saint without the authority of the Catholic Church.\" Theologians disagree as to the full import of the decretal of Pope Alexander III: either a new law was instituted, in which case the Pope then for the first time reserved the right of beatification to himself, or an existing law was confirmed.",
"title": "Catholic Church"
},
{
"paragraph_id": 15,
"text": "However, the procedure initiated by the decretal of Pope Alexander III was confirmed by a bull of Pope Innocent III issued on the occasion of the canonization of Cunigunde of Luxembourg in 1200. The bull of Pope Innocent III resulted in increasingly elaborate inquiries to the Apostolic See concerning canonizations. Because the decretal of Pope Alexander III did not end all controversy and some bishops did not obey it in so far as it regarded beatification, the right of which they had certainly possessed hitherto, Pope Urban VIII issued the Apostolic letter Caelestis Hierusalem cives of 5 July 1634 that exclusively reserved to the Apostolic See both its immemorial right of canonization and that of beatification. He further regulated both of these acts by issuing his Decreta servanda in beatificatione et canonizatione Sanctorum on 12 March 1642.",
"title": "Catholic Church"
},
{
"paragraph_id": 16,
"text": "In his De Servorum Dei beatificatione et de Beatorum canonizatione of five volumes the eminent canonist Prospero Lambertini (1675–1758), who later became Pope Benedict XIV, elaborated on the procedural norms of Pope Urban VIII's Apostolic letter Caelestis Hierusalem cives of 1634 and Decreta servanda in beatificatione et canonizatione Sanctorum of 1642, and on the conventional practice of the time. His work published from 1734 to 1738 governed the proceedings until 1917. The article \"Beatification and canonization process in 1914\" describes the procedures followed until the promulgation of the Codex of 1917. The substance of De Servorum Dei beatifιcatione et de Beatorum canonizatione was incorporated into the Codex Iuris Canonici (Code of Canon Law) of 1917, which governed until the promulgation of the revised Codex Iuris Canonici in 1983 by Pope John Paul II. Prior to promulgation of the revised Codex in 1983, Pope Paul VI initiated a simplification of the procedures.",
"title": "Catholic Church"
},
{
"paragraph_id": 17,
"text": "The Apostolic constitution Divinus Perfectionis Magister of Pope John Paul II of 25 January 1983 and the norms issued by the Congregation for the Causes of Saints on 7 February 1983 to implement the constitution in dioceses, continued the simplification of the process initiated by Pope Paul VI. Contrary to popular belief, the reforms did not eliminate the office of the Promoter of the Faith (Latin: Promotor Fidei), popularly known as the Devil's advocate, whose office is to question the material presented in favor of canonization. The reforms were intended to reduce the adversarial nature of the process. In November 2012 Pope Benedict XVI appointed Monsignor Carmello Pellegrino as Promoter of the Faith.",
"title": "Catholic Church"
},
{
"paragraph_id": 18,
"text": "Candidates for canonization undergo the following process:",
"title": "Catholic Church"
},
{
"paragraph_id": 19,
"text": "Canonization is a statement of the Church that the person certainly enjoys the beatific vision of Heaven. The title of \"Saint\" (Latin: Sanctus or Sancta) is then proper, reflecting that the saint is a refulgence of the holiness (sanctitas) of God himself, which alone comes from God's gift. The saint is assigned a feast day which may be celebrated anywhere in the universal Church, although it is not necessarily added to the General Roman Calendar or local calendars as an \"obligatory\" feast; parish churches may be erected in their honor; and the faithful may freely celebrate and honor the saint.",
"title": "Catholic Church"
},
{
"paragraph_id": 20,
"text": "Although recognition of sainthood by the Pope does not directly concern a fact of Divine revelation, nonetheless it must be \"definitively held\" by the faithful as infallible pursuant to, at the least, the Universal Magisterium of the Church, because it is a truth related to revelation by historical necessity.",
"title": "Catholic Church"
},
{
"paragraph_id": 21,
"text": "Regarding the Eastern Catholic Churches, individual sui juris churches have the right to \"glorify\" saints for their own jurisdictions, although this has rarely happened.",
"title": "Catholic Church"
},
{
"paragraph_id": 22,
"text": "Popes have several times permitted to the universal Church, without executing the ordinary judicial process of canonization described above, the veneration as a saint, the \"cultus\" of one long venerated as such locally. This act of a Pope is denominated \"equipollent\" or \"equivalent canonization\" and \"confirmation of cultus\". In such cases, there is no need to have a miracle attributed to the saint to allow their canonization. According to the rules Pope Benedict XIV (regnat 17 August 1740 – 3 May 1758) instituted, there are three conditions for an equipollent canonization: (1) existence of an ancient cultus of the person, (2) a general and constant attestation to the virtues or martyrdom of the person by credible historians, and (3) uninterrupted fame of the person as a worker of miracles.",
"title": "Catholic Church"
},
{
"paragraph_id": 23,
"text": "The majority of Protestant denominations do not formally recognize saints, usually with the claim that there can not be saints since no follower of Christ is more or less worthy of the favor of the Lord than any other. However, some denominations do, as shown below.",
"title": "Protestant denominations"
},
{
"paragraph_id": 24,
"text": "The Church of England, the Mother Church of the Anglican Communion, canonized Charles I as a saint, in the Convocations of Canterbury and York of 1660.",
"title": "Protestant denominations"
},
{
"paragraph_id": 25,
"text": "The General Conference of the United Methodist Church has formally declared individuals martyrs, including Dietrich Bonhoeffer (in 2008) and Martin Luther King Jr. (in 2012).",
"title": "Protestant denominations"
},
{
"paragraph_id": 26,
"text": "Various terms are used for canonization by the autocephalous Eastern Orthodox Churches: канонизация (\"canonization\") or прославление (\"glorification\", in the Russian Orthodox Church), კანონიზაცია (kanonizats’ia, Georgian Orthodox Church), канонизација (Serbian Orthodox Church), canonizare (Romanian Orthodox Church), and Канонизация (Bulgarian Orthodox Church). Additional terms are used for canonization by other autocephalous Eastern Orthodox Churches: αγιοκατάταξη (Katharevousa: ἁγιοκατάταξις) agiokatataxi/agiokatataxis, \"ranking among saints\" (Ecumenical Patriarchate of Constantinople, Church of Cyprus, Church of Greece), kanonizim (Albanian Orthodox Church), kanonizacja (Polish Orthodox Church), and kanonizace/kanonizácia (Czech and Slovak Orthodox Church).",
"title": "Eastern Orthodox Church"
},
{
"paragraph_id": 27,
"text": "The Orthodox Church in America, an Eastern Orthodox Church partly recognized as autocephalous, uses the term \"glorification\" for the official recognition of a person as a saint.",
"title": "Eastern Orthodox Church"
},
{
"paragraph_id": 28,
"text": "Within the Armenian Apostolic Church, part of Oriental Orthodoxy, there had been discussions since the 1980s about canonizing the victims of the Armenian genocide. On 23 April 2015, all of the victims of the genocide were canonized.",
"title": "Oriental Orthodox Church"
}
]
| Canonization is the declaration of a deceased person as an officially recognized saint, specifically, the official act of a Christian communion declaring a person worthy of public veneration and entering their name in the canon catalogue of saints, or authorized list of that communion's recognized saints. | 2001-10-16T21:36:10Z | 2023-12-25T13:14:05Z | [
"Template:Refn",
"Template:Cn span",
"Template:Citation",
"Template:Lang",
"Template:Portal",
"Template:Cite book",
"Template:Short description",
"Template:Ordered list",
"Template:Main article",
"Template:Saints",
"Template:Use American English",
"Template:Cite journal",
"Template:Cite news",
"Template:Webarchive",
"Template:Authority control",
"Template:Other uses",
"Template:Further",
"Template:Expand section",
"Template:Cite encyclopedia",
"Template:Wikisource-inline",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite web",
"Template:Canonization",
"Template:In lang",
"Template:Wiktionary",
"Template:Main"
]
| https://en.wikipedia.org/wiki/Canonization |
6,099 | Carboxylic acid | In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (C(=O)OH) attached to an R-group. The general formula of a carboxylic acid is R−COOH or R−CO2H, with R referring to the alkyl, alkenyl, aryl, or other group. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion.
Carboxylic acids are commonly identified by their trivial names. They often have the suffix -ic acid. IUPAC-recommended names also exist; in this system, carboxylic acids have an -oic acid suffix. For example, butyric acid (C3H7CO2H) is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran.
The carboxylate anion (R–COO⁻ or RCO2⁻) of a carboxylic acid is usually named with the suffix -ate, in keeping with the general pattern of -ic acid and -ate for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate.
Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, even though it has a moiety that looks like a COOH group.
Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl –C=O) and hydrogen-bond donors (the hydroxyl –OH), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas larger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer-chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water.
Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilised dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporised, increasing the enthalpy of vaporization requirements significantly.
Carboxylic acids are Brønsted–Lowry acids because they are proton (H⁺) donors. They are the most common type of organic acid.
Carboxylic acids are typically weak acids, meaning that they only partially dissociate into H3O⁺ cations and RCOO⁻ anions in neutral aqueous solution. For example, at room temperature, in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (i.e. about 4×10⁻³ mol out of 1 mol). Electron-withdrawing substituents, such as the -CF3 group, give stronger acids (the pKa of acetic acid is 4.76 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a pKa of 0.23). Electron-donating substituents give weaker acids (the pKa of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a pKa of 4.76).
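This figure can be checked with the standard weak-acid approximation (a back-of-the-envelope sketch, assuming the textbook value Ka ≈ 1.7×10⁻⁵ for acetic acid at 25 °C): for an acid at initial concentration C, the degree of dissociation α is approximately
\[ K_a = \frac{[\mathrm{H_3O^+}][\mathrm{CH_3COO^-}]}{[\mathrm{CH_3COOH}]} = 10^{-4.76} \approx 1.7\times10^{-5}, \qquad \alpha \approx \sqrt{\frac{K_a}{C}} = \sqrt{\frac{1.7\times10^{-5}}{1}} \approx 4.2\times10^{-3} \approx 0.4\%. \]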
Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the −1⁄2 negative charges on the two oxygen atoms.
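Schematically (a line-notation sketch, not a drawn structure), the two equivalent resonance forms behind this delocalization can be written as:
\[ \mathrm{R{-}C({=}O){-}O^{-}} \;\longleftrightarrow\; \mathrm{R{-}C({-}O^{-}){=}O} \]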
Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume.
Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond (νC=O) between 1680 and 1725 cm⁻¹. A characteristic νO–H band appears as a broad peak in the 2500 to 3000 cm⁻¹ region. By ¹H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water.
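Those band positions translate directly into a crude screening rule. A minimal sketch (hypothetical function; a real assignment would also inspect band shape and intensity, since C–H stretches also fall near 2900 cm⁻¹):

```python
def looks_like_carboxylic_acid(peaks_cm1: list[float]) -> bool:
    """Crude IR screen: carbonyl stretch plus broad O-H band."""
    has_co = any(1680 <= p <= 1725 for p in peaks_cm1)  # sharp nu(C=O)
    has_oh = any(2500 <= p <= 3000 for p in peaks_cm1)  # broad nu(O-H)
    return has_co and has_oh

print(looks_like_carboxylic_acid([1710, 2700, 1380]))  # True
print(looks_like_carboxylic_acid([1715, 3400]))        # False: no band in 2500-3000
```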
Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids and polyamides of aminocarboxylic acids are the main components of proteins.
Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps.
In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment.
Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents.
Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest.
The most widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols. Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water:
CH3COOH + NaHCO3 → CH3COONa + CO2 + H2O
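As a rough check on the 1:1:1 stoichiometry of that neutralization, a minimal sketch (molar masses rounded; the amounts are illustrative):

```python
M_ACETIC = 60.05  # g/mol, CH3COOH
M_NAHCO3 = 84.01  # g/mol, NaHCO3
M_CO2 = 44.01     # g/mol

def co2_released_g(acetic_g: float, nahco3_g: float) -> float:
    """CO2 mass from CH3COOH + NaHCO3 -> CH3COONa + CO2 + H2O (1:1:1)."""
    mol = min(acetic_g / M_ACETIC, nahco3_g / M_NAHCO3)  # limiting reagent
    return mol * M_CO2

# ~5 g of acetic acid (about 100 g of 5% vinegar) with excess baking soda
print(round(co2_released_g(5.0, 10.0), 2))  # ~3.66 g of CO2
```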
Carboxylic acids also react with alcohols to give esters. This process is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead, esters are the typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP.
The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters.
Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group.
N,N-Dimethyl(chloromethylene)ammonium chloride (ClHC=N⁺(CH3)2 Cl⁻) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris(t-butoxy)aluminum hydride to afford an aldehyde in a one-pot procedure. This procedure is known to tolerate reactive carbonyl functionalities such as ketones as well as moderately reactive ester, olefin, nitrile, and halide moieties.
The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The carboxyl group tends to dimerise to form oxalic acid. | [
]
| In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (C(=O)OH) attached to an R-group. The general formula of a carboxylic acid is R−COOH or R−CO2H, with R referring to the alkyl, alkenyl, aryl, or other group. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion. | 2001-08-31T00:24:38Z | 2023-12-10T19:08:43Z | [
"Template:Redirect",
"Template:Chem2",
"Template:Anchor",
"Template:Wikiquote",
"Template:Wiktionary",
"Template:Use dmy dates",
"Template:Citation needed",
"Template:GoldBookRef",
"Template:Organic reactions",
"Template:Authority control",
"Template:Short description",
"Template:Distinguish",
"Template:Cite journal",
"Template:Reflist",
"Template:Cite book",
"Template:OrgSynth",
"Template:Functional Groups"
]
| https://en.wikipedia.org/wiki/Carboxylic_acid |
6,100 | Chernobyl | Chernobyl (/tʃɜːrˈnoʊbəl/ chur-NOH-bəl, UK also /tʃɜːrˈnɒbəl/ chur-NOB-əl; Russian: Чернобыль, IPA: [tɕɪrˈnobɨlʲ]) or Chornobyl (Ukrainian: Чорнобиль, IPA: [tʃorˈnɔbɪlʲ]) is a partially abandoned city in the Chernobyl Exclusion Zone, situated in the Vyshhorod Raion of northern Kyiv Oblast, Ukraine. Chernobyl is about 90 kilometres (60 mi) north of Kyiv, and 160 kilometres (100 mi) southwest of the Belarusian city of Gomel. Before its evacuation, the city had about 14,000 residents (considerably fewer than in neighboring Pripyat). While living anywhere within the Chernobyl Exclusion Zone is technically illegal today, authorities tolerate those who choose to live within some of the less irradiated areas, and around 1,000 people live in Chernobyl today.
First mentioned as a ducal hunting lodge in 1193, the city has changed hands multiple times over the course of history. Jews moved into the city in the 16th century, and a now-defunct monastery was established in the area in 1626. By the end of the 18th century, Chernobyl was a major centre of Hasidic Judaism under the Twersky Dynasty, who left Chernobyl after the city was subject to pogroms in the early 20th century. The Jewish community was later murdered during the Holocaust. Chernobyl was chosen as the site of Ukraine's first nuclear power plant in 1972, located 15 kilometres (9 mi) north of the city, which opened in 1977. Chernobyl was evacuated on 5 May 1986, nine days after a catastrophic nuclear disaster at the plant, which was the largest nuclear disaster in history. Along with the residents of the nearby city of Pripyat, which was built as a home for the plant's workers, the population was relocated to the newly built city of Slavutych, and most have never returned.
The city was the administrative centre of Chernobyl Raion (district) from 1923. After the disaster, in 1988, the raion was dissolved and administration was transferred to the neighbouring Ivankiv Raion. The raion was abolished on 18 July 2020 as part of the administrative reform of Ukraine, which reduced the number of raions of Kyiv Oblast to seven. The area of Ivankiv Raion was merged into Vyshhorod Raion.
Although Chernobyl is primarily a ghost town today, a small number of people still live there, in houses marked with signs that read, "Owner of this house lives here", and a small number of animals live there as well. Workers on watch and administrative personnel of the Chernobyl Exclusion Zone are also stationed in the city. The city has two general stores and a hotel.
During the 2022 Russian invasion of Ukraine, Chernobyl was temporarily captured and occupied by Russian forces between 24 February and 2 April. After its capture, it was reported that radiation levels temporarily rose, due to human activities, including earthworks, which disturbed the dust.
The city's name is the same as one of the Ukrainian names for Artemisia vulgaris, mugwort or common wormwood: чорнобиль, chornóbyl' (or more commonly полин звичайний polýn zvycháynyy, 'common artemisia'). The name is inherited from Proto-Slavic *čьrnobylъ or Proto-Slavic *čьrnobyl, a compound of Proto-Slavic *čьrnъ 'black' + Proto-Slavic *bylь 'grass', the parts related to Ukrainian: чорний, romanized: chórnyy, lit. 'black' and било byló, 'stalk', so named in distinction to the lighter-stemmed wormwood A. absinthium.
The name in languages used nearby is:
The name in languages formerly used in the area is:
In English, the Russian-derived spelling Chernobyl has been commonly used, but some style guides recommend the spelling Chornobyl, or the use of romanized Ukrainian names for Ukrainian places generally.
The Polish Geographical Dictionary of the Kingdom of Poland of 1880–1902 states that the time the city was founded is not known.
Some older geographical dictionaries and descriptions of modern Eastern Europe mention "Czernobol" (Chernobyl) with reference to Ptolemy's world map (2nd century AD). Czernobol is identified as Azagarium [uk] "oppidium Sarmatiae" (Lat., "a city in Sarmatia"), by the 1605 Lexicon geographicum of Filippo Ferrari and the 1677 Lexicon Universale of Johann Jakob Hofmann. According to the Dictionary of Ancient Geography of Alexander Macbean (London, 1773), Azagarium is "a town of Sarmatia Europaea, on the Borysthenes" (Dnieper), 36° East longitude and 50°40' latitude. The city is "now supposed to be Czernobol, a town of Poland, in Red Russia [Red Ruthenia], in the Palatinate of Kiow [see Kiev Voivodeship], not far from the Borysthenes."
Whether Azagarium is indeed Czernobol is debatable. The question of Azagarium's correct location was raised in 1842 by the Habsburg-Slovak historian Pavel Jozef Šafárik, who published a book titled "Slavic Ancient History" ("Sławiańskie starożytności"), where he claimed Azagarium to be the hill of Zaguryna, which he found on an old Russian map "Bolzoj czertez" (Big drawing) near the city of Pereiaslav, now in central Ukraine.
In 2019, Ukrainian architect Boris Yerofalov-Pylypchak published a book, Roman Kyiv or Castrum Azagarium at Kyiv-Podil.
The archaeological excavations that were conducted in 2005–2008 found a cultural layer from the 10–12th centuries AD, which predates the first documentary mention of Chernobyl.
Around the 12th century Chernobyl was part of the land of Kievan Rus′. The first known mention of the settlement as Chernobyl is from an 1193 charter, which describes it as a hunting lodge of Knyaz Rurik Rostislavich. In 1362 it was a crown village of the Grand Duchy of Lithuania. Around that time the town had its own castle, which was ruined on at least two occasions, in 1473 and 1482. The Chernobyl castle was rebuilt in the first quarter of the 16th century in a hard-to-reach area near the settlement. With the revival of the castle, Chernobyl became a county seat. In 1552 it accounted for 196 buildings with 1,372 residents, of whom over 1,160 were considered city dwellers. Various craft professions developed in the city, such as blacksmithing and cooperage. Bog iron was excavated near Chernobyl and smelted into iron. The village was granted to Filon Kmita, a captain of the royal cavalry, as a fiefdom in 1566. Following the Union of Lublin, the province where Chernobyl is located was transferred to the Crown of the Kingdom of Poland in 1569. Under the Polish Crown, Chernobyl became a seat of an eldership (starostwo). During that period Chernobyl was inhabited by Ukrainian peasants, some Polish people, and a relatively large number of Jews, who were brought to Chernobyl by Filon Kmita during the Polish campaign of colonization. The first mention of a Jewish community in Chernobyl dates from the 17th century. In 1600 the first Roman Catholic church was built in the town, and the local population was persecuted for holding Eastern Orthodox rite services. The traditionally Eastern Orthodox Ukrainian peasantry around the town were forcibly converted, by Poland, to the Ruthenian Uniate Church. In 1626, during the Counter-Reformation, a Dominican church and monastery were founded by Lukasz Sapieha, while a group of Old Catholics opposed the decrees of the Council of Trent. The Chernobyl residents actively supported the Khmelnytsky Uprising (1648–1657).
With the signing of the Truce of Andrusovo in 1667, Chernobyl was secured for the Sapieha family. Sometime in the 18th century, the place was passed on to the Chodkiewicz family. In the mid-18th century the area around Chernobyl was engulfed in a number of peasant riots, which caused Prince Riepnin to write from Warsaw to Major General Krechetnikov, requesting hussars to be sent from Kharkiv to deal with the uprising near Chernobyl in 1768. The 8th Lithuanian Infantry Regiment was stationed in the town in 1791. By the end of the 18th century, the town accounted for 2,865 residents and had 642 buildings.
Following the Second Partition of Poland, in 1793 Chernobyl was annexed by the Russian Empire and became part of Radomyshl county (uezd) as a supernumerary town ("zashtatny gorod"). Many of the Uniate Church converts returned to Eastern Orthodoxy.
In 1832, following the failed Polish November Uprising, the Dominican monastery was sequestrated. The church of the Old Catholics was disbanded in 1852.
Until the end of the 19th century, Chernobyl was a privately owned city that belonged to the Chodkiewicz family. In 1896 they sold the city to the state, but until 1910 they owned a castle and a house in the city.
In the second half of the 18th century, Chernobyl became a major centre of Hasidic Judaism. The Chernobyl Hasidic dynasty had been founded by Rabbi Menachem Nachum Twersky. The Jewish population suffered greatly from pogroms in October 1905 and in March–April 1919; many Jews were killed or robbed at the instigation of the Russian nationalist Black Hundreds. When the Twersky Dynasty left Chernobyl in 1920, it ceased to exist as a center of Hasidism.
Chernobyl had a population of 10,800 in 1898, including 7,200 Jews. In the beginning of March 1918 Chernobyl was occupied in World War I by German forces (see Treaty of Brest-Litovsk).
Ukrainians and Bolsheviks fought over the city in the ensuing Civil War. In the Polish–Soviet War of 1919–20, Chernobyl was taken first by the Polish Army and then by the cavalry of the Red Army. From 1921 onwards, it was officially incorporated into the Ukrainian SSR.
Between 1929 and 1933, Chernobyl suffered from killings during Stalin's collectivization campaign. It was also affected by the famine that resulted from Stalin's policies. The Polish and German community of Chernobyl was deported to Kazakhstan in 1936, during the Frontier Clearances.
During World War II, Chernobyl was occupied by the German Army from 25 August 1941 to 17 November 1943. When the Germans arrived, only 400 Jews remained in Chernobyl; they were murdered during the Holocaust.
In 1972, construction began on the Duga-1 radio receiver, part of the larger Duga over-the-horizon radar array, 11 km (6.8 mi) west-northwest of Chernobyl. It was the origin of the Russian Woodpecker and was designed as part of an anti-ballistic missile early warning radar network.
On 15 August 1972, construction began on the Chernobyl Nuclear Power Plant (officially the Vladimir Ilyich Lenin Nuclear Power Plant) about 15 km (9.3 mi) northwest of Chernobyl. The plant was built alongside Pripyat, an "atomograd" city founded on 4 February 1970 that was intended to serve the nuclear power plant. The decision to build the power plant was adopted by the Central Committee of the Communist Party of the Soviet Union and the Council of Ministers of the Soviet Union on the recommendation of the State Planning Committee that the Ukrainian SSR be its location. It was the first nuclear power plant to be built in Ukraine.
With the dissolution of the Soviet Union in 1991, Chernobyl remained part of Ukraine within the Chernobyl Exclusion Zone which Ukraine inherited from the Soviet Union.
During the 2022 Russian invasion of Ukraine, Russian forces captured the city on 24 February. After its capture, Ukrainian officials reported that the radiation levels started to rise due to recent military activity causing radioactive dust to ascend into the air. Hundreds of Russian soldiers were suffering from radiation poisoning after digging trenches in a contaminated area, and one died. On 31 March it was reported that Russian forces had left the exclusion zone. Ukrainian authorities reasserted control over the area on 2 April.
Chernobyl is located about 90 kilometres (60 mi) north of Kyiv, and 160 kilometres (100 mi) southwest of the Belarusian city of Gomel.
Chernobyl has a humid continental climate (Dfb) with very warm, wet summers with cool nights and long, cold, and snowy winters.
On 26 April 1986, one of the reactors at the Chernobyl Nuclear Power Plant exploded after plant operators improperly conducted unsanctioned experiments on the reactor. The resulting loss of control was due to design flaws of the RBMK reactor, which made it unstable when operated at low power and prone to thermal runaway, in which increases in temperature increase reactor power output.
Chernobyl city was evacuated nine days after the disaster. The level of contamination with caesium-137 was around 555 kBq/m² (surface ground deposition in 1986).
Later analyses concluded that, even with very conservative estimates, relocation of the city (or of any area below 1500 kBq/m²) could not be justified on the grounds of radiological health. This, however, does not account for the uncertainty in the first few days of the accident about further depositions and weather patterns. Moreover, an earlier short-term evacuation could have averted more significant doses from short-lived isotope radiation (specifically iodine-131, which has a half-life of about eight days). Estimates of health effects are a subject of some controversy; see Effects of the Chernobyl disaster.
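The eight-day half-life mentioned above is what makes evacuation timing so sensitive. A minimal decay sketch (assuming a half-life of 8.02 days for iodine-131; the numbers are illustrative):

```python
import math

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of a radionuclide left after time t (exponential decay)."""
    return math.exp(-math.log(2) * t_days / half_life_days)

# Activity of iodine-131 over the nine days before Chernobyl city was evacuated
print(round(fraction_remaining(9, 8.02), 2))   # ~0.46 of the initial activity
print(round(fraction_remaining(80, 8.02), 4))  # ~0.001 after ~10 half-lives
```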
In 1998, average caesium-137 doses from the accident (estimated at 1–2 mSv per year) did not exceed those from other sources of exposure. Current effective caesium-137 dose rates as of 2019 are 200–250 nSv/h, or roughly 1.7–2.2 mSv per year, which is comparable to the worldwide average background radiation from natural sources.
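Those dose-rate figures are consistent with simple unit arithmetic, and the slow decline tracks the roughly 30-year half-life of caesium-137. A minimal sketch (half-life taken as 30.1 years; treat both constants as assumptions):

```python
HOURS_PER_YEAR = 8766  # 365.25 days

def nsv_per_h_to_msv_per_year(rate_nsv_h: float) -> float:
    """Convert a dose rate in nSv/h to an annual dose in mSv."""
    return rate_nsv_h * HOURS_PER_YEAR / 1e6

def cs137_fraction_left(years: float, half_life_y: float = 30.1) -> float:
    """Fraction of caesium-137 remaining after a given number of years."""
    return 2 ** (-years / half_life_y)

print(round(nsv_per_h_to_msv_per_year(225), 2))    # ~1.97 mSv/yr for 225 nSv/h
print(round(cs137_fraction_left(2019 - 1986), 2))  # ~0.47 of the 1986 deposition
```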
The base of operations for the administration and monitoring of the Chernobyl Exclusion Zone was moved from Pripyat to Chernobyl. Chernobyl currently contains offices for the State Agency of Ukraine on the Exclusion Zone Management and accommodations for visitors. Apartment blocks have been repurposed as accommodations for employees of the State Agency. The length of time that workers may spend within the Chernobyl Exclusion Zone is restricted by regulations that have been implemented to limit radiation exposure. Today, visits are allowed to Chernobyl but limited by strict rules.
In 2003, the United Nations Development Programme launched a project, called the Chernobyl Recovery and Development Programme (CRDP), for the recovery of the affected areas. The main goal of the CRDP's activities is supporting the efforts of the Government of Ukraine to mitigate the long-term social, economic, and ecological consequences of the Chernobyl disaster.
The city has become overgrown and many types of animals live there. According to census information collected over an extended period of time, it is estimated that more mammals live there now than before the disaster.
Notably, Mikhail Gorbachev, the final leader of the Soviet Union, stated in respect to the Chernobyl disaster that, "More than anything else, (Chernobyl) opened the possibility of much greater freedom of expression, to the point that the (Soviet) system as we knew it could no longer continue." | [
]
| Chernobyl or Chornobyl is a partially abandoned city in the Chernobyl Exclusion Zone, situated in the Vyshhorod Raion of northern Kyiv Oblast, Ukraine. Chernobyl is about 90 kilometres (60 mi) north of Kyiv, and 160 kilometres (100 mi) southwest of the Belarusian city of Gomel. Before its evacuation, the city had about 14,000 residents. While living anywhere within the Chernobyl Exclusion Zone is technically illegal today, authorities tolerate those who choose to live within some of the less irradiated areas, and around 1,000 people live in Chernobyl today. First mentioned as a ducal hunting lodge in 1193, the city has changed hands multiple times over the course of history. Jews moved into the city in the 16th century, and a now-defunct monastery was established in the area in 1626. By the end of the 18th century, Chernobyl was a major centre of Hasidic Judaism under the Twersky Dynasty, who left Chernobyl after the city was subject to pogroms in the early 20th century. The Jewish community was later murdered during the Holocaust. Chernobyl was chosen as the site of Ukraine's first nuclear power plant in 1972, located 15 kilometres (9 mi) north of the city, which opened in 1977. Chernobyl was evacuated on 5 May 1986, nine days after a catastrophic nuclear disaster at the plant, which was the largest nuclear disaster in history. Along with the residents of the nearby city of Pripyat, which was built as a home for the plant's workers, the population was relocated to the newly built city of Slavutych, and most have never returned. The city was the administrative centre of Chernobyl Raion (district) from 1923. After the disaster, in 1988, the raion was dissolved and administration was transferred to the neighbouring Ivankiv Raion. The raion was abolished on 18 July 2020 as part of the administrative reform of Ukraine, which reduced the number of raions of Kyiv Oblast to seven. The area of Ivankiv Raion was merged into Vyshhorod Raion. Although Chernobyl is primarily a ghost town today, a small number of people still live there, in houses marked with signs that read, "Owner of this house lives here", and a small number of animals live there as well. Workers on watch and administrative personnel of the Chernobyl Exclusion Zone are also stationed in the city. The city has two general stores and a hotel. During the 2022 Russian invasion of Ukraine, Chernobyl was temporarily captured and occupied by Russian forces between 24 February and 2 April. After its capture, it was reported that radiation levels temporarily rose, due to human activities, including earthworks, which disturbed the dust. | 2001-08-13T17:59:01Z | 2023-12-31T11:13:00Z | [
"Template:IPA-ru",
"Template:Citation needed",
"Template:IPA-yi",
"Template:Wikivoyage",
"Template:Infobox settlement",
"Template:Dubious",
"Template:Commons category",
"Template:When",
"Template:Lang-ru",
"Template:Lang-yi",
"Template:ISBN",
"Template:Cite tech report",
"Template:Cite report",
"Template:Respell",
"Template:Pp-vandalism",
"Template:Use dmy dates",
"Template:Convert",
"Template:Transliteration",
"Template:Lang-pl",
"Template:Cite news",
"Template:Kyiv Oblast",
"Template:Short description",
"Template:Ill",
"Template:IPAc-en",
"Template:Lang-uk",
"Template:IPA-uk",
"Template:Lang",
"Template:Clarify",
"Template:Main",
"Template:Cite web",
"Template:Cite conference",
"Template:About",
"Template:Chernobyl disaster",
"Template:Authority control",
"Template:Chernobyl Exclusion Zone",
"Template:IPA-pl",
"Template:Reflist",
"Template:Webarchive",
"Template:Dead link",
"Template:Proto",
"Template:IPA-be",
"Template:More citations needed section",
"Template:Weather box",
"Template:Cite book",
"Template:Cite journal",
"Template:Lang-be"
]
| https://en.wikipedia.org/wiki/Chernobyl |
6,102 | Cyan | Cyan (/ˈsaɪ.ən, -æn/) is the color between blue and green on the visible spectrum of light. It is evoked by light with a predominant wavelength between 490 and 520 nm, between the wavelengths of green and blue.
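The quoted 490–520 nm band lends itself to a trivial range check. A minimal sketch (hypothetical helper; boundaries between color names are conventional, not sharp):

```python
def is_cyan_wavelength(nm: float) -> bool:
    """True if a dominant wavelength falls in the cyan band quoted above."""
    return 490 <= nm <= 520

print(is_cyan_wavelength(500))  # True
print(is_cyan_wavelength(475))  # False: toward blue
```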
In the subtractive color system, or CMYK color model, which can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta and yellow. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white. Mixing red light and cyan light at the right intensity will make white light.
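Additive mixing can be sketched directly on RGB triples. A minimal Python illustration (8-bit channels; the helper function is hypothetical):

```python
def mix_additive(*colors):
    """Additive light mixing: channel-wise sum, clipped to 8-bit range."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

GREEN, BLUE, RED = (0, 255, 0), (0, 0, 255), (255, 0, 0)

cyan = mix_additive(GREEN, BLUE)
print(cyan)                     # (0, 255, 255): equal green and blue light
print(mix_additive(RED, cyan))  # (255, 255, 255): red plus cyan gives white
```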
Colors in the cyan color range are teal, turquoise, electric blue, aquamarine, and others described as blue-green. Cyan is most often associated with peace, relaxation, healthcare, dreams, and spirituality.
Its name is derived from the Ancient Greek word kyanos (κύανος), meaning "dark blue enamel, Lapis lazuli". It was formerly known as "cyan blue" or cyan-blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower (Centaurea cyanus).
In most languages, 'cyan' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Other English terms for this "borderline" hue region include blue green, aqua, turquoise, teal, and grue.
The web color cyan is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called aqua.
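In CSS and X11 color tables, cyan and aqua resolve to the same hex value, which a short sketch can verify (the small dictionary stands in for a full named-color table):

```python
NAMED = {"cyan": "#00FFFF", "aqua": "#00FFFF"}  # identical in CSS/X11

def hex_to_rgb(h: str) -> tuple:
    """Parse a '#RRGGBB' string into an (r, g, b) tuple of ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb(NAMED["cyan"]))       # (0, 255, 255)
print(NAMED["cyan"] == NAMED["aqua"])  # True
```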
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors cannot be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to printer's cyan, so when it is reproduced in printing it is not a pure subtractive primary. It is called aqua (a name in use since 1598) because it is a color commonly associated with water, such as the appearance of the water at a tropical beach.
Cyan is also one of the common inks used in four-color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK. In printing, the cyan ink is sometimes known as printer's cyan, process cyan, or process blue.
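A nominal CMYK-to-RGB formula makes the comparison with process inks concrete, though, as the next paragraph explains, no fixed conversion actually exists; this sketch is only the idealized arithmetic:

```python
def cmyk_to_rgb(c: float, m: float, y: float, k: float) -> tuple:
    """Naive nominal CMYK -> RGB conversion (components in 0..1).

    Real ink behavior is device-dependent, so this is an idealization,
    not a faithful model of printed process cyan.
    """
    return tuple(round(255 * (1 - v) * (1 - k)) for v in (c, m, y))

print(cmyk_to_rgb(1, 0, 0, 0))  # (0, 255, 255): nominally the RGB secondary cyan
```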
While both the additive secondary and the subtractive primary are called cyan, they can be substantially different from one another. Cyan printing ink is typically more saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered. That is, process cyan is usually outside the RGB gamut, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure cyan ink. This is because real-world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors. Phthalocyanine blue is one such commonly used pigment. | [
]
| Cyan is the color between blue and green on the visible spectrum of light. It is evoked by light with a predominant wavelength between 490 and 520 nm, between the wavelengths of green and blue. In the subtractive color system, or CMYK color model, which can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta and yellow. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white. Mixing red light and cyan light at the right intensity will make white light. Colors in the cyan color range are teal, turquoise, electric blue, aquamarine, and others described as blue-green. Cyan is most often associated with peace, relaxation, healthcare, dreams, and spirituality. | 2001-11-01T03:55:26Z | 2023-12-01T16:58:00Z | [
"Template:Cite Dictionary.com",
"Template:Cite Merriam-Webster",
"Template:Shades of cyan",
"Template:Web colors",
"Template:Infobox color",
"Template:IPAc-en",
"Template:Commons category",
"Template:Cite OED",
"Template:ISBN",
"Template:Cite news",
"Template:Electromagnetic spectrum",
"Template:Short description",
"Template:About",
"Template:Reflist",
"Template:AHDict",
"Template:Cite web",
"Template:Cite journal",
"Template:Clarify",
"Template:Cite book",
"Template:Color topics"
]
| https://en.wikipedia.org/wiki/Cyan |
6,105 | Conventional insulin therapy | Conventional insulin therapy is a therapeutic regimen for the treatment of diabetes mellitus, which contrasts with the newer intensive insulin therapy.
This older method (prior to the development of home blood glucose monitoring) is still in use in a proportion of cases.
Conventional insulin therapy is characterized by:
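Insulin injections of a mixture of regular and intermediate-acting insulin are given twice a day, or, to improve overnight glucose, mixed in the morning to cover breakfast and lunch, with regular-acting insulin alone for dinner and intermediate-acting insulin at bedtime.
Meals are scheduled to match the anticipated peaks in the insulin profiles.
The target range for blood glucose levels is higher than is desired in the intensive regimen.
Frequent measurements of blood glucose levels are not used.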
The downside of this method is that it is difficult to achieve glycemic control as good as that obtained with intensive insulin therapy. The advantage is that, for diabetics with a regular lifestyle, the regimen is less intrusive than intensive therapy.
{
"paragraph_id": 0,
"text": "Conventional insulin therapy is a therapeutic regimen for treatment of diabetes mellitus which contrasts with the newer intensive insulin therapy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "This older method (prior to the development home blood glucose monitoring) is still in use in a proportion of cases.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Conventional insulin therapy is characterized by:",
"title": ""
},
{
"paragraph_id": 3,
"text": "The down side of this method is that it is difficult to achieve as good results of glycemic control as with intensive insulin therapy. The advantage is that, for diabetics with a regular lifestyle, the regime is less intrusive than the intensive therapy.",
"title": ""
},
{
"paragraph_id": 4,
"text": "",
"title": "References"
}
]
| Conventional insulin therapy is a therapeutic regimen for the treatment of diabetes mellitus, which contrasts with the newer intensive insulin therapy. This older method is still in use in a proportion of cases. Conventional insulin therapy is characterized by: Insulin injections of a mixture of regular and intermediate-acting insulin are given twice a day, or, to improve overnight glucose, mixed in the morning to cover breakfast and lunch, with regular-acting insulin alone for dinner and intermediate-acting insulin at bedtime.
Meals are scheduled to match the anticipated peaks in the insulin profiles.
The target range for blood glucose levels is higher than is desired in the intensive regimen.
Frequent measurements of blood glucose levels are not used. The downside of this method is that it is difficult to achieve glycemic control as good as that obtained with intensive insulin therapy. The advantage is that, for diabetics with a regular lifestyle, the regimen is less intrusive than intensive therapy. | 2002-02-25T15:51:15Z | 2023-12-01T06:10:22Z | [
"Template:Cite web",
"Template:Diabetes",
"Template:Treatment-stub",
"Template:Refimprove",
"Template:Infobox medical intervention",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Conventional_insulin_therapy |
6,109 | Cream | Cream is a dairy product composed of the higher-fat layer skimmed from the top of milk before homogenization. In un-homogenized milk, the fat, which is less dense, eventually rises to the top. In the industrial production of cream, this process is accelerated by using centrifuges called "separators". In many countries, it is sold in several grades depending on the total butterfat content. It can be dried to a powder for shipment to distant markets, and contains high levels of saturated fat.
Cream skimmed from milk may be called "sweet cream" to distinguish it from cream skimmed from whey, a by-product of cheese-making. Whey cream has a lower fat content and tastes more salty, tangy, and "cheesy". In many countries partially fermented cream is also sold: sour cream, crème fraîche, and so on. Both forms have many culinary uses in both sweet and savoury dishes.
Cream produced by cattle (particularly Jersey cattle) grazing on natural pasture often contains some carotenoid pigments derived from the plants they eat; traces of these intensely colored pigments give milk a slightly yellow tone, hence the name of the yellowish-white color: cream. Carotenoids are also the origin of butter's yellow color. Cream from goat's milk, water buffalo milk, or from cows fed indoors on grain or grain-based pellets, is white.
Cream is used as an ingredient in many foods, including ice cream, many sauces, soups, stews, puddings, and some custard bases, and is also used for cakes. Whipped cream is served as a topping on ice cream sundaes, milkshakes, lassi, eggnog, sweet pies, strawberries, blueberries, or peaches. Cream is also used in Indian curries such as masala dishes.
Cream (usually light/single cream or half and half) may be added to coffee.
Both single and double cream (see Types for definitions) can be used in cooking. Double cream or full-fat crème fraîche is often used when the cream is added to a hot sauce, to prevent it separating or "splitting". Double cream can be thinned with milk to make an approximation of single cream.
The French word crème denotes not only dairy cream but also other thick liquids such as sweet and savory custards, which are normally made with milk, not cream.
Different grades of cream are distinguished by their fat content, whether they have been heat-treated, whipped, and so on. In many jurisdictions, there are regulations for each type.
The Australia New Zealand Food Standards Code – Standard 2.5.2 – defines cream as a milk product comparatively rich in fat, in the form of an emulsion of fat-in-skim milk, which can be obtained by separation from milk. Cream sold without further specification must contain no less than 350 g/kg (35%) milk fat.
Manufacturers' labels may distinguish between different fat contents; a general guideline is as follows:
Canadian cream definitions are similar to those used in the United States, except for "light cream", which is very low-fat cream, usually with 5 or 6 percent butterfat. Specific product characteristics are generally uniform throughout Canada, but names vary by both geographic and linguistic area and by manufacturer: "coffee cream" may be 10 or 18 percent cream and "half-and-half" (crème légère) may be 3, 5, 6 or 10 percent, all depending on location and brand.
Regulations allow cream to contain acidity regulators and stabilizers. For whipping cream, allowed additives include skim milk powder (≤ 0.25%), glucose solids (≤ 0.1%), calcium sulphate (≤ 0.005%), and xanthan gum (≤ 0.02%). The content of milk fat in canned cream must be displayed as a percentage followed by "milk fat", "B.F", or "M.F".
In France, the use of the term "cream" for food products is defined by decree 80-313 of April 23, 1980. It specifies the minimum rate of milk fat (12%) as well as the rules for pasteurisation or UHT sterilisation. The designation "crème fraîche" (fresh cream) can only be used for pasteurised creams packaged at the production site within 24 hours of pasteurisation. Although food additives complying with French and European law are permitted, plain "crèmes" and "crèmes fraîches" usually contain none apart from lactic ferments (some low-cost creams, or near-creams, may contain thickening agents, but rarely). Fat content is commonly shown as "XX% M.G." ("matière grasse").
Russia, as well as other EAC countries, legally separates cream into two classes: normal (10–34% butterfat) and heavy (35–58%), but the industry has largely standardized around the following types:
In Sweden, cream is usually sold as:
Mellangrädde (27%) is, nowadays, a less common variant. Gräddfil (usually 12%) and Creme Fraiche (usually around 35%) are two common sour cream products.
In Switzerland, the types of cream are legally defined as follows:
Sour cream and crème fraîche (German: Sauerrahm, Crème fraîche; French: crème acidulée, crème fraîche; Italian: panna acidula, crème fraîche) are defined as cream soured by bacterial cultures.
Thick cream (German: verdickter Rahm; French: crème épaissie; Italian: panna addensata) is defined as cream thickened using thickening agents.
In the United Kingdom, these types of cream are produced. Fat content must meet the Food Labelling Regulations 1996.
In the United States, cream is usually sold as:
Not all grades are defined by all jurisdictions, and the exact fat content ranges vary. The above figures, except for "manufacturer's cream", are based on the Code of Federal Regulations, Title 21, Part 131.
Cream may have thickening agents and stabilizers added. Thickeners include sodium alginate, carrageenan, gelatine, sodium bicarbonate, tetrasodium pyrophosphate, and alginic acid.
Other processing may be carried out. For example, cream has a tendency to produce oily globules (called "feathering") when added to coffee. The stability of the cream may be increased by increasing the non-fat solids content, which can be done by partial demineralisation and addition of sodium caseinate, although this is expensive.
Butter is made by churning cream to separate the butterfat and buttermilk. This can be done by hand or by machine.
Whipped cream is made by whisking or mixing air into cream with more than 30% fat, to turn the liquid cream into a soft solid. Nitrous oxide, from whipped-cream chargers may also be used to make whipped cream.
Sour cream, produced in many countries, is cream (12 to 16% or more milk fat) that has been subjected to a bacterial culture that produces lactic acid (0.5%+), which sours and thickens it.
Crème fraîche (28% milk fat) is slightly soured with bacterial culture, but not as sour or as thick as sour cream. Mexican crema (or cream espesa) is similar to crème fraîche.
Smetana is a heavy cream-derived (15–40% milk fat) Central and Eastern European sweet or sour cream.
Rjome or rømme is Norwegian sour cream containing 35% milk fat, similar to Icelandic sýrður rjómi.
Clotted cream in the United Kingdom is made through a process that starts by slowly heating whole milk to produce a very high-fat (55%) product, similar to Indian malai.
Reduced cream is a cream product in New Zealand, often used to make Kiwi dip.
Some non-edible substances are called creams due to their consistency: shoe cream is runny, unlike regular waxy shoe polish; hand/body "creme" or "skin cream" is meant for moisturizing the skin.
Regulations in many jurisdictions restrict the use of the word cream for foods. Words such as creme, kreme, creame, or whipped topping (e.g., Cool Whip) are often used for products which cannot legally be called cream, though in some jurisdictions even these spellings may be disallowed, for example under the doctrine of idem sonans. Oreo and Hydrox cookies are a type of sandwich cookie in which two biscuits have a soft, sweet filling between them that is called "crème filling." In some cases, foods can be described as cream although they do not contain predominantly milk fats; for example, in Britain, "ice cream" can contain non-milk fat (declared on the label) in addition to or instead of cream, and salad cream is the customary name for a non-dairy condiment that has been produced since the 1920s.
In other languages, cognates of "cream" are also sometimes used for non-food products, such as fogkrém (Hungarian for toothpaste), or Sonnencreme (German for sunscreen).
Some products are described as "cream alternatives". For example, Elmlea Double, etc. are blends of buttermilk or lentils and vegetable oil with other additives sold by Upfield in the United Kingdom packaged and shelved in the same way as cream, labelled as having "a creamy taste".
Nutrition chart for heavy cream | [
{
"paragraph_id": 0,
"text": "Cream is a dairy product composed of the higher-fat layer skimmed from the top of milk before homogenization. In un-homogenized milk, the fat, which is less dense, eventually rises to the top. In the industrial production of cream, this process is accelerated by using centrifuges called \"separators\". In many countries, it is sold in several grades depending on the total butterfat content. It can be dried to a powder for shipment to distant markets, and contains high levels of saturated fat.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cream skimmed from milk may be called \"sweet cream\" to distinguish it from cream skimmed from whey, a by-product of cheese-making. Whey cream has a lower fat content and tastes more salty, tangy, and \"cheesy\". In many countries partially fermented cream is also sold: sour cream, crème fraîche, and so on. Both forms have many culinary uses in both sweet and savoury dishes.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cream produced by cattle (particularly Jersey cattle) grazing on natural pasture often contains some carotenoid pigments derived from the plants they eat; traces of these intensely colored pigments give milk a slightly yellow tone, hence the name of the yellowish-white color: cream. Carotenoids are also the origin of butter's yellow color. Cream from goat's milk, water buffalo milk, or from cows fed indoors on grain or grain-based pellets, is white.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cream is used as an ingredient in many foods, including ice cream, many sauces, soups, stews, puddings, and some custard bases, and is also used for cakes. Whipped cream is served as a topping on ice cream sundaes, milkshakes, lassi, eggnog, sweet pies, strawberries, blueberries, or peaches. Cream is also used in Indian curries such as masala dishes.",
"title": "Cuisine"
},
{
"paragraph_id": 4,
"text": "Cream (usually light/single cream or half and half) may be added to coffee.",
"title": "Cuisine"
},
{
"paragraph_id": 5,
"text": "Both single and double cream (see Types for definitions) can be used in cooking. Double cream or full-fat crème fraîche is often used when the cream is added to a hot sauce, to prevent it separating or \"splitting\". Double cream can be thinned with milk to make an approximation of single cream.",
"title": "Cuisine"
},
{
"paragraph_id": 6,
"text": "The French word crème denotes not only dairy cream but also other thick liquids such as sweet and savory custards, which are normally made with milk, not cream.",
"title": "Cuisine"
},
{
"paragraph_id": 7,
"text": "Different grades of cream are distinguished by their fat content, whether they have been heat-treated, whipped, and so on. In many jurisdictions, there are regulations for each type.",
"title": "Types"
},
{
"paragraph_id": 8,
"text": "The Australia New Zealand Food Standards Code – Standard 2.5.2 – Defines cream as a milk product comparatively rich in fat, in the form of an emulsion of fat-in-skim milk, which can be obtained by separation from milk. Cream sold without further specification must contain no less than 350 g/kg (35%) milk fat.",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "Manufacturers labels may distinguish between different fat contents, a general guideline is as follows:",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "Canadian cream definitions are similar to those used in the United States, except for \"light cream\", which is very low-fat cream, usually with 5 or 6 percent butterfat. Specific product characteristics are generally uniform throughout Canada, but names vary by both geographic and linguistic area and by manufacturer: \"coffee cream\" may be 10 or 18 percent cream and \"half-and-half\" (crème légère) may be 3, 5, 6 or 10 percent, all depending on location and brand.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "Regulations allow cream to contain acidity regulators and stabilizers. For whipping cream, allowed additives include skim milk powder (≤ 0.25%), glucose solids (≤ 0.1%), calcium sulphate (≤ 0.005%), and xanthan gum (≤ 0.02%). The content of milk fat in canned cream must be displayed as a percentage followed by \"milk fat\", \"B.F\", or \"M.F\".",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "In France, the use of the term \"cream\" for food products is defined by the decree 80-313 of April 23, 1980. It specifies the minimum rate of milk fat (12%) as well as the rules for pasteurisation or UHT sterilisation. The mention \"crème fraîche\" (fresh cream) can only be used for pasteurised creams conditioned on production site within 24h after pasteurisation. Even if food additives complying with French and European laws are allowed, usually, none will be found in plain \"crèmes\" and \"crèmes fraîches\" apart from lactic ferments (some low cost creams (or close to creams) can contain thickening agents, but rarely). Fat content is commonly shown as \"XX% M.G.\" (\"matière grasse\").",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "Russia, as well as other EAC countries, legally separates cream into two classes: normal (10–34% butterfat) and heavy (35–58%), but the industry has pretty much standardized around the following types:",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "In Sweden, cream is usually sold as:",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Mellangrädde (27%) is, nowadays, a less common variant. Gräddfil (usually 12%) and Creme Fraiche (usually around 35%) are two common sour cream products.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "In Switzerland, the types of cream are legally defined as follows:",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Sour cream and crème fraîche (German: Sauerrahm, Crème fraîche; French: crème acidulée, crème fraîche; Italian: panna acidula, crème fraîche) are defined as cream soured by bacterial cultures.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Thick cream (German: verdickter Rahm; French: crème épaissie; Italian: panna addensata) is defined as cream thickened using thickening agents.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "In the United Kingdom, these types of cream are produced. Fat content must meet the Food Labelling Regulations 1996.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "In the United States, cream is usually sold as:",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "Not all grades are defined by all jurisdictions, and the exact fat content ranges vary. The above figures, except for \"manufacturer's cream\", are based on the Code of Federal Regulations, Title 21, Part 131.",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "Cream may have thickening agents and stabilizers added. Thickeners include sodium alginate, carrageenan, gelatine, sodium bicarbonate, tetrasodium pyrophosphate, and alginic acid.",
"title": "Processing and additives"
},
{
"paragraph_id": 23,
"text": "Other processing may be carried out. For example, cream has a tendency to produce oily globules (called \"feathering\") when added to coffee. The stability of the cream may be increased by increasing the non-fat solids content, which can be done by partial demineralisation and addition of sodium caseinate, although this is expensive.",
"title": "Processing and additives"
},
{
"paragraph_id": 24,
"text": "Butter is made by churning cream to separate the butterfat and buttermilk. This can be done by hand or by machine.",
"title": "Other cream products"
},
{
"paragraph_id": 25,
"text": "Whipped cream is made by whisking or mixing air into cream with more than 30% fat, to turn the liquid cream into a soft solid. Nitrous oxide, from whipped-cream chargers may also be used to make whipped cream.",
"title": "Other cream products"
},
{
"paragraph_id": 26,
"text": "Sour cream, produced in many countries, is cream (12 to 16% or more milk fat) that has been subjected to a bacterial culture that produces lactic acid (0.5%+), which sours and thickens it.",
"title": "Other cream products"
},
{
"paragraph_id": 27,
"text": "Crème fraîche (28% milk fat) is slightly soured with bacterial culture, but not as sour or as thick as sour cream. Mexican crema (or cream espesa) is similar to crème fraîche.",
"title": "Other cream products"
},
{
"paragraph_id": 28,
"text": "Smetana is a heavy cream-derived (15–40% milk fat) Central and Eastern European sweet or sour cream.",
"title": "Other cream products"
},
{
"paragraph_id": 29,
"text": "Rjome or rømme is Norwegian sour cream containing 35% milk fat, similar to Icelandic sýrður rjómi.",
"title": "Other cream products"
},
{
"paragraph_id": 30,
"text": "Clotted cream in the United Kingdom is made through a process that starts by slowly heating whole milk to produce a very high-fat (55%) product, similar to Indian malai.",
"title": "Other cream products"
},
{
"paragraph_id": 31,
"text": "Reduced cream is a cream product in New Zealand, often used to make Kiwi dip.",
"title": "Other cream products"
},
{
"paragraph_id": 32,
"text": "Some non-edible substances are called creams due to their consistency: shoe cream is runny, unlike regular waxy shoe polish; hand/body \"creme\" or \"skin cream\" is meant for moisturizing the skin.",
"title": "Other items called \"cream\""
},
{
"paragraph_id": 33,
"text": "Regulations in many jurisdictions restrict the use of the word cream for foods. Words such as creme, kreme, creame, or whipped topping (e.g., Cool Whip) are often used for products which cannot legally be called cream, though in some jurisdictions even these spellings may be disallowed, for example under the doctrine of idem sonans. Oreo and Hydrox cookies are a type of sandwich cookie in which two biscuits have a soft, sweet filling between them that is called \"crème filling.\" In some cases, foods can be described as cream although they do not contain predominantly milk fats; for example, in Britain, \"ice cream\" can contain non-milk fat (declared on the label) in addition to or instead of cream, and salad cream is the customary name for a non-dairy condiment that has been produced since the 1920s.",
"title": "Other items called \"cream\""
},
{
"paragraph_id": 34,
"text": "In other languages, cognates of \"cream\" are also sometimes used for non-food products, such as fogkrém (Hungarian for toothpaste), or Sonnencreme (German for sunscreen).",
"title": "Other items called \"cream\""
},
{
"paragraph_id": 35,
"text": "Some products are described as \"cream alternatives\". For example, Elmlea Double, etc. are blends of buttermilk or lentils and vegetable oil with other additives sold by Upfield in the United Kingdom packaged and shelved in the same way as cream, labelled as having \"a creamy taste\".",
"title": "Other items called \"cream\""
},
{
"paragraph_id": 36,
"text": "Nutrition chart for heavy cream",
"title": "External links"
}
]
| Cream is a dairy product composed of the higher-fat layer skimmed from the top of milk before homogenization. In un-homogenized milk, the fat, which is less dense, eventually rises to the top. In the industrial production of cream, this process is accelerated by using centrifuges called "separators". In many countries, it is sold in several grades depending on the total butterfat content. It can be dried to a powder for shipment to distant markets, and contains high levels of saturated fat. Cream skimmed from milk may be called "sweet cream" to distinguish it from cream skimmed from whey, a by-product of cheese-making. Whey cream has a lower fat content and tastes more salty, tangy, and "cheesy". In many countries partially fermented cream is also sold: sour cream, crème fraîche, and so on. Both forms have many culinary uses in both sweet and savoury dishes. Cream produced by cattle grazing on natural pasture often contains some carotenoid pigments derived from the plants they eat; traces of these intensely colored pigments give milk a slightly yellow tone, hence the name of the yellowish-white color: cream. Carotenoids are also the origin of butter's yellow color. Cream from goat's milk, water buffalo milk, or from cows fed indoors on grain or grain-based pellets, is white. | 2001-08-16T06:25:44Z | 2023-11-22T20:59:17Z | [
"Template:Milk navbox",
"Template:Citation",
"Template:Commons category",
"Template:Lang",
"Template:Rp",
"Template:Nowrap",
"Template:Section link",
"Template:Reflist",
"Template:Cite swiss law",
"Template:Pp-semi",
"Template:Other uses",
"Template:Authority control",
"Template:Cite book",
"Template:Webarchive",
"Template:Cite journal",
"Template:Short description",
"Template:When?",
"Template:Cite web",
"Template:Cite act",
"Template:Portal bar",
"Template:Cn",
"Template:See also"
]
| https://en.wikipedia.org/wiki/Cream |
6,111 | Chemical vapor deposition | Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.
In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.
Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics.
The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr., who intended to differentiate chemical from physical vapour deposition (PVD).
CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.
Most modern CVD is either LPCVD or UHVCVD.
CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition for depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance, to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. The process has recently been scaled up as an integrated cleanroom process for depositing on large-area substrates, and applications for these films are anticipated in gas sensing and in low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.
Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:
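SiHCl3 → Si + Cl2 + HCl
SiH4 → Si + 2 H2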
This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.
Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:
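SiH4 + O2 → SiO2 + 2 H2
SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl
Si(OC2H5)4 → SiO2 + byproducts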
The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low-temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.
Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:
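4 PH3 + 8 O2 → 2 P2O5 + 6 H2O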
Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.
Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.
Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable tools for diagnosing such problems.
Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:
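3 SiH4 + 4 NH3 → Si3N4 + 12 H2
3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2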
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10^16 Ω·cm and 10 MV/cm, respectively).
Another two reactions may be used in plasma to deposit SiNH:
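2 SiH4 + N2 → 2 SiNH + 3 H2
SiH4 + NH3 → SiNH + 3 H2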
These films have much less tensile stress, but worse electrical properties (resistivity 10^6 to 10^15 Ω·cm, and dielectric strength 1 to 5 MV/cm).
Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), from which tungsten may be deposited in two ways:
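WF6 → W + 3 F2
WF6 + 3 H2 → W + 6 HF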
Other metals, notably aluminium and copper, can be deposited by CVD. As of 2010, commercially cost-effective CVD for copper did not exist, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.
CVD for molybdenum, tantalum, titanium, and nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows:
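2 MCl5 + 5 H2 → 2 M + 10 HCl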
whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:
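Mx(CO)y → x M + y CO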
the decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.
Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:
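2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5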
Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.
The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.
Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. During the growth of graphene, the role of methane is to provide a carbon source, while the role of hydrogen is to provide H atoms that corrode amorphous carbon and improve the quality of the graphene. But excessive H atoms can also corrode the graphene itself. As a result, the integrity of the crystal lattice is destroyed, and the quality of the graphene deteriorates. Therefore, by optimizing the flow rates of methane and hydrogen gases in the growth process, the quality of graphene can be improved.
The use of a catalyst is viable in changing the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup or placed at some distance from the deposition area. Some catalysts require an additional step to remove them from the sample material.
The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.
Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a significant role in the production of graphene.
Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.
On the other hand, temperatures used range from 800 to 1050 °C. High temperatures increase the rate of reaction, but caution has to be exercised, as high temperatures pose greater hazards in addition to greater energy costs.
Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.
Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.
Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.
Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.
Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.
CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.
Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.
The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.
Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements. | [
{
"paragraph_id": 0,
"text": "Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The term chemical vapour deposition was coined 1960 by John M. Blocher, Jr. who intended to differentiate chemical from physical vapour deposition (PVD).",
"title": ""
},
{
"paragraph_id": 4,
"text": "CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.",
"title": "Types"
},
{
"paragraph_id": 5,
"text": "Most modern CVD is either LPCVD or UHVCVD.",
"title": "Types"
},
{
"paragraph_id": 6,
"text": "CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition at depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.",
"title": "Uses"
},
{
"paragraph_id": 7,
"text": "Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 8,
"text": "This reaction is usually performed in LPCVD systems, with either pure silane feedstock, or a solution of silane with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based solution. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 9,
"text": "Silicon dioxide (usually called simply \"oxide\" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 10,
"text": "The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low- temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 11,
"text": "Oxide may also be grown with impurities (alloying or \"doping\"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide (\"P-glass\") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 12,
"text": "Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 13,
"text": "Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 14,
"text": "Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable diagnostic tools for diagnosing such problems.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 15,
"text": "Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 16,
"text": "Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10 Ω·cm and 10 MV/cm, respectively).",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 17,
"text": "Another two reactions may be used in plasma to deposit SiNH:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 18,
"text": "These films have much less tensile stress, but worse electrical properties (resistivity 10 to 10 Ω·cm, and dielectric strength 1 to 5 MV/cm).",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 19,
"text": "Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 20,
"text": "Other metals, notably aluminium and copper, can be deposited by CVD. As of 2010, commercially cost-effective CVD for copper did not exist, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 21,
"text": "CVD for molybdenum, tantalum, titanium, nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 22,
"text": "whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 23,
"text": "the decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 24,
"text": "Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 25,
"text": "Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 26,
"text": "The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 27,
"text": "Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane and hydrogen are not appropriate, it will cause undesirable results. During the growth of graphene, the role of methane is to provide a carbon source, the role of hydrogen is to provide H atoms to corrode amorphous C, and improve the quality of graphene. But excessive H atoms can also corrode graphene. As a result, the integrity of the crystal lattice is destroyed, and the quality of graphene is deteriorated. Therefore, by optimizing the flow rate of methane and hydrogen gases in the growth process, the quality of graphene can be improved.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 28,
"text": "The use of catalyst is viable in changing the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup, or situated at some distance away at the deposition area. Some catalysts require another step to remove them from the sample material.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 29,
"text": "The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 30,
"text": "Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 31,
"text": "Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 32,
"text": "On the other hand, temperatures used range from 800 to 1050 °C. High temperatures translate to an increase of the rate of reaction. Caution has to be exercised as high temperatures do pose higher danger levels in addition to greater energy costs.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 33,
"text": "Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 34,
"text": "Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 35,
"text": "Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 36,
"text": "Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 37,
"text": "Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 38,
"text": "Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 39,
"text": "In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 40,
"text": "CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.",
"title": "Commercially important materials prepared by CVD"
},
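The pressure range quoted above can be cross-checked with standard conversion factors; this is a plain unit-conversion sketch, with nothing CVD-specific assumed.

```python
# Cross-check of the quoted diamond-CVD pressure range (1-27 kPa).
KPA_TO_PSI = 0.1450377
KPA_TO_TORR = 7.50062

for kpa in (1.0, 27.0):
    print(f"{kpa:g} kPa = {kpa * KPA_TO_PSI:.3f} psi = {kpa * KPA_TO_TORR:.1f} Torr")
# 1 kPa  = 0.145 psi = 7.5 Torr
# 27 kPa = 3.916 psi = 202.5 Torr
```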
{
"paragraph_id": 41,
"text": "Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 42,
"text": "The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.",
"title": "Commercially important materials prepared by CVD"
},
{
"paragraph_id": 43,
"text": "CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word \"diamond\" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.",
"title": "Commercially important materials prepared by CVD"
},
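To summarize the process knobs listed above, the sketch below gathers them into a plain record type. The class name, field names, and sample values are hypothetical, chosen only to mirror the parameters named in the text (gas mix, pressure, substrate temperature, and plasma source); they do not describe any validated recipe.

```python
from dataclasses import dataclass

@dataclass
class DiamondCvdRecipe:
    """Hypothetical record of the parameters that determine which
    'diamond' material a CVD run produces (names are illustrative)."""
    gases_sccm: dict[str, float]   # precursor gas flows
    pressure_kpa: float            # chamber pressure
    substrate_temp_c: float        # temperature of the diamond/substrate
    plasma_source: str             # e.g. "hot filament", "microwave"

# Illustrative values only -- not a validated recipe:
recipe = DiamondCvdRecipe(
    gases_sccm={"H2": 300.0, "CH4": 3.0},
    pressure_kpa=13.0,
    substrate_temp_c=850.0,
    plasma_source="microwave",
)
print(recipe)
```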
{
"paragraph_id": 44,
"text": "Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements.",
"title": "Chalcogenides"
}
]
| Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, high-performance solid materials. The process is often used in the semiconductor industry to produce thin films. In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber. Microfabrication processes widely use CVD to deposit materials in various forms, including monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include silicon, carbon, fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics. The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr., who intended to differentiate chemical from physical vapour deposition (PVD). | 2001-08-16T18:58:44Z | 2023-12-27T18:43:13Z | [
"Template:Cite web",
"Template:Refend",
"Template:Reflist",
"Template:See also",
"Template:Div col",
"Template:Cite journal",
"Template:Ullmann",
"Template:Cite book",
"Template:Cite thesis",
"Template:Webarchive",
"Template:For",
"Template:Glass science",
"Template:As of",
"Template:Citation",
"Template:Cite conference",
"Template:Doi",
"Template:Short description",
"Template:Refbegin",
"Template:Div col end"
]
| https://en.wikipedia.org/wiki/Chemical_vapor_deposition |
6,112 | CN Tower | The CN Tower (French: Tour CN) is a 553.3 m-high (1,815.3 ft) concrete communications and observation tower in Toronto, Ontario, Canada. Completed in 1976, it is located in downtown Toronto, built on the former Railway Lands. Its name "CN" referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for the government's real estate portfolio.
The CN Tower held the record for the world's tallest free-standing structure for 32 years, from 1975 until 2007, when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is currently the tenth-tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers.
It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually. It houses several observation decks, a revolving restaurant at some 350 metres (1,150 ft), and an entertainment complex.
The original concept of the CN Tower was first conceived in 1968 when the Canadian National Railway wanted to build a large television and radio communication platform to serve the Toronto area, and to demonstrate the strength of Canadian industry and CN in particular. These plans evolved over the next few years, and the project became official in 1972.
The tower would have been part of Metro Centre (see CityPlace), a large development south of Front Street on the Railway Lands, a large railway switching yard that was being made redundant after the opening of the MacMillan Yard north of the city in 1965 (then known as Toronto Yard). Key project team members were NCK Engineering as structural engineer; John Andrews Architects; Webb, Zerafa, Menkes, Housden Architects; Foundation Building Construction; and Canron (Eastern Structural Division).
As Toronto grew rapidly during the late 1960s and early 1970s, multiple skyscrapers were constructed in the downtown core, most notably First Canadian Place, which houses the Bank of Montreal's head offices. The reflective nature of the new buildings reduced the quality of broadcast signals, requiring new, higher antennas that were at least 300 m (980 ft) tall. The tower's radio antenna is estimated to be 102 metres (335 ft) long, in 44 pieces, the heaviest of which weighs around 8 tonnes (8.8 short tons; 7.9 long tons).
At the time, most data communications took place over point-to-point microwave links, whose dish antennas covered the roofs of large buildings. As each new skyscraper was added to the downtown, former line-of-sight links were no longer possible. CN intended to rent "hub" space for microwave links, visible from almost any building in the Toronto area.
The original plan for the tower envisioned a tripod consisting of three independent cylindrical "pillars" linked at various heights by structural bridges. Had it been built, this design would have been considerably shorter, with the metal antenna located roughly where the concrete section between the main level and the SkyPod lies today. As the design effort continued, it evolved into the current design with a single continuous hexagonal core to the SkyPod, with three support legs blended into the hexagon below the main level, forming a large Y-shape structure at the ground level.
The idea for the main level in its current form evolved around this time, but the Space Deck (later renamed SkyPod) was not part of the plans until later. One engineer in particular felt that visitors would consider the higher observation deck worth paying extra for, and that the additional construction costs were not prohibitive. Also around this time, it was realized that the tower could become the world's tallest free-standing structure to improve signal quality and attract tourists, and plans were changed to incorporate subtle modifications throughout the structure to this end.
The CN Tower was built by Canada Cement Company (also known as the Cement Foundation Company of Canada at the time), a subsidiary of Sweden's Skanska, a global project-development and construction group.
Construction began on February 6, 1973, with massive excavations at the tower base for the foundation. By the time the foundation was complete, 56,000 t (62,000 short tons; 55,000 long tons) of earth and shale had been removed to a depth of 15 m (49.2 ft) in the centre, and a base incorporating 7,000 m³ (9,200 cu yd) of concrete with 450 t (496 short tons; 443 long tons) of rebar and 36 t (40 short tons; 35 long tons) of steel cable had been built to a thickness of 6.7 m (22 ft). This portion of the construction was fairly rapid, with only four months needed between the start and the foundation being ready for construction on top.
To create the main support pillar, workers constructed a hydraulically raised slipform at the base. This was a fairly unprecedented engineering feat on its own, consisting of a large metal platform that raised itself on jacks at about 6 m (20 ft) per day as the concrete below set. Concrete was poured Monday to Friday (not continuously) by a small team of people until February 22, 1974, at which time it had already become the tallest structure in Canada, surpassing the recently built Inco Superstack in Sudbury, built using similar methods.
The tower contains 40,500 m³ (53,000 cu yd) of concrete, all of which was mixed on-site in order to ensure batch consistency. Throughout the pour, the vertical accuracy of the tower was maintained by comparing the slipform's location to massive plumb bobs hanging from it, observed by small telescopes from the ground. Over its full height, the tower deviates from true vertical by only 29 mm (1.1 in).
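For a sense of scale, the quoted 29 mm deviation can be expressed as a lean angle, assuming (as the text states) that the deviation is measured over the full height of the tower.

```python
import math

HEIGHT_M = 553.33     # full tower height, over which the deviation is quoted
DEVIATION_M = 0.029   # 29 mm from true vertical

lean = DEVIATION_M / HEIGHT_M
print(f"Lean ratio: {lean:.2e} (~{math.degrees(math.atan(lean)):.4f} degrees)")
# Lean ratio: 5.24e-05 (~0.0030 degrees)
```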
In August 1974, construction of the main level commenced. Using 45 hydraulic jacks attached to cables strung from a temporary steel crown anchored to the top of the tower, twelve giant steel and wooden bracket forms were slowly raised, ultimately taking about a week to crawl up to their final position. These forms were used to create the brackets that support the main level, as well as a base for the construction of the main level itself. The Space Deck (currently named SkyPod) was built of concrete poured into a wooden frame attached to rebar at the lower level deck, and then reinforced with a large steel compression band around the outside.
While still under construction, the CN Tower officially became the world's tallest free-standing structure on March 31, 1975.
The antenna was originally to be raised by crane as well, but, during construction, the Sikorsky S-64 Skycrane helicopter became available when the United States Army sold one to civilian operators. The helicopter, named "Olga", was first used to remove the crane, and then flew the antenna up in 36 sections.
The flights of the antenna pieces were a minor tourist attraction of their own, and the schedule was printed in local newspapers. Use of the helicopter saved months of construction time, with this phase taking only three and a half weeks instead of the planned six months. The tower was topped-off on April 2, 1975, after 26 months of construction, officially capturing the height record from Moscow's Ostankino Tower, and bringing the total mass to 118,000 t (130,000 short tons; 116,000 long tons).
Two years into the construction, plans for Metro Centre were scrapped, leaving the tower isolated on the Railway Lands in what was then a largely abandoned light-industrial space. This made the tower difficult for tourists to reach. Ned Baldwin, project architect with John Andrews, wrote at the time that "All of the logic which dictated the design of the lower accommodation has been upset," and that "Under such ludicrous circumstances Canadian National would hardly have chosen this location to build."
The CN Tower opened on June 26, 1976. The construction costs of approximately CA$63 million ($287 million in 2021 dollars) were repaid in fifteen years.
From the mid-1970s to the mid-1980s, the CN Tower was practically the only development along Front Street West; it was still possible to see Lake Ontario from the foot of the CN Tower due to the expansive parking lots and lack of development in the area at the time. As the area around the tower was developed, particularly with the completion of the Metro Toronto Convention Centre (north building) in 1984 and SkyDome in 1989 (renamed Rogers Centre in 2005), the former Railway Lands were redeveloped and the tower became the centre of a newly developing entertainment area. Access was greatly improved with the construction of the SkyWalk in 1989, which connected the tower and SkyDome to the nearby Union Station railway and subway station, and, in turn, to the city's Path underground pedestrian system. By the mid-1990s, it was the centre of a thriving tourist district. The entire area continues to be an area of intense building, notably a boom in condominium construction in the first quarter of the 21st century, as well as the 2013 opening of the Ripley's Aquarium by the base of the tower.
When the CN Tower opened in 1976, there were three public observation points: the SkyPod (then known as the Space Deck) that stands at 447 m (1,467 ft), the Indoor Observation Level (later named Indoor Lookout Level) at 346 m (1,135 ft), and the Outdoor Observation Terrace (at the same level as the Glass Floor) at 342 m (1,122 ft). One floor above the Indoor Observation Level was the Top of Toronto Restaurant, which completed a revolution once every 72 minutes.
The tower would garner worldwide media attention when stuntman Dar Robinson jumped off the CN Tower on two occasions in 1979 and 1980. The first was for a scene from the movie Highpoint, in which Robinson received CA$250,000 ($885,000 in 2021 dollars) for the stunt. The second was for a personal documentary. The first stunt had him use a parachute which he deployed three seconds before impact with the ground, while the second one used a wire decelerator attached to his back.
On June 26, 1986, the tenth anniversary of the tower's opening, high-rise firefighting and rescue advocate Dan Goodwin, in a sponsored publicity event, used his hands and feet to climb the outside of the tower, a feat he performed twice on the same day. Following both ascents, he used multiple rappels to descend to the ground.
From 1985 to 1992, the CN Tower basement level hosted the world's first flight simulator ride, Tour of the Universe. The ride was replaced in 1992 with a similar attraction entitled "Space Race." It was later dismantled and replaced by two other rides in 1998 and 1999.
A glass floor at an elevation of 342 m (1,122 ft) was installed in 1994. Canadian National Railway sold the tower to Canada Lands Company prior to privatizing the company in 1995, when it divested all operations not directly related to its core freight shipping businesses. The tower's name and wordmark were adjusted to remove the CN railways logo, and the tower was renamed Canada's National Tower (from Canadian National Tower), though the tower is commonly called the CN Tower.
Further changes were made from 1997 to January 2004: TrizecHahn Corporation managed the tower and instituted several expansion projects, including a CA$26 million entertainment expansion and the 1997 addition of two new elevators (for a total of six), with the consequent relocation of the staircase from the north side leg to inside the core of the building, a conversion that also added nine stairs to the climb. Around the same time, TrizecHahn also owned the Willis Tower (then named the Sears Tower) in Chicago.
In 2007, light-emitting diode (LED) lights replaced the incandescent lights that lit the CN Tower at night. This was done to take advantage of the cost savings of LED lights over incandescent lights; the colour of the LED lights can also be changed, unlike the constant white of the incandescent lights. On September 12, 2007, Burj Khalifa, then under construction and known as Burj Dubai, surpassed the CN Tower as the world's tallest free-standing structure. In 2008, glass panels were installed in one of the CN Tower elevators, establishing a world record (346 m) for the highest glass-floor-panelled elevator.
On August 1, 2011, the CN Tower opened the EdgeWalk, an amusement in which thrill-seekers can walk on and around the roof of the main pod of the tower at 356 m (1,168.0 ft), which is directly above the 360 Restaurant. It is the world's highest full-circle, hands-free walk. Visitors are tethered to an overhead rail system and walk around the edge of the CN Tower's main pod above the 360 Restaurant on a 1.5-metre (4.9 ft) metal floor. The attraction is closed throughout the winter and during periods of electrical storms and high winds.
One of the notable guests who visited EdgeWalk was Canadian comedian Rick Mercer, featured as the first episode of the ninth season of his CBC Television news satire show, Rick Mercer Report. There, he was accompanied by Canadian pop singer Jann Arden. The episode first aired on April 10, 2013.
The tower and surrounding areas were prominent in the 2015 Pan American Games production. In the opening ceremony, a pre-recorded segment featured track-and-field athlete Bruny Surin passing the flame to sprinter Donovan Bailey on the EdgeWalk and parachuting into Rogers Centre. A fireworks display off the tower concluded both the opening and closing ceremonies.
On July 1, 2017, as part of the nationwide celebrations for Canada 150, which celebrated the 150th anniversary of Canadian Confederation, fireworks were once again shot from the tower in a five-minute display coordinated with the tower lights and music broadcast on a local radio station.
The CN Tower consists of several substructures. The main portion of the tower is a hollow concrete hexagonal pillar containing the stairwells and power and plumbing connections. The tower's six elevators are located in the three inverted angles created by the Tower's hexagonal shape (two elevators per angle). Each of the three elevator shafts is lined with glass, allowing for views of the city as the glass-windowed elevators make their way through the tower. The stairwell was originally located in one of these angles (the one facing north), but was moved into the central hollow of the tower; the tower's new fifth and sixth elevators were placed in the hexagonal angle that once contained the stairwell. On top of the main concrete portion of the tower is a 102 m (334.6 ft) tall metal broadcast antenna, carrying television and radio signals. There are three visitor areas: the Glass Floor and Outdoor Observation Terrace, which are both located at an elevation of 342 m (1,122 ft), the Indoor Lookout Level (formerly known as "Indoor Observation Level") located at 346 m (1,135 ft), and the higher SkyPod (formerly known as "Space Deck") at 446.5 m (1,465 ft), just below the metal antenna. The hexagonal shape is visible between the two highest areas; however, below the main deck, three large supporting legs give the tower the appearance of a large tripod.
The main deck level has seven storeys, some of which are open to the public. Below the public areas—at 338 m (1,108.9 ft)—is a large white donut-shaped radome containing the structure's UHF transmitters. The glass floor and outdoor observation deck are at 342 m (1,122.0 ft). The glass floor has an area of 24 m² (258 sq ft) and can withstand a pressure of 4.1 megapascals (595 psi). The floor's thermal glass units are 64 mm (2.5 in) thick, consisting of a pane of 25 mm (1.0 in) laminated glass, 25 mm (1.0 in) airspace and a pane of 13 mm (0.5 in) laminated glass. In 2008, one elevator was upgraded to add a glass floor panel, believed to have the highest vertical rise of any elevator equipped with this feature. The Horizons Cafe and the lookout level are at 346 m (1,135.2 ft). The 360 Restaurant, a revolving restaurant that completes a full rotation once every 72 minutes, is at 351 m (1,151.6 ft). When the tower first opened, it also featured a disco named Sparkles (at the Indoor Observation Level), billed as the highest disco and dance floor in the world.
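A back-of-the-envelope check of the glass-floor figures above: converting the 4.1 MPa rating into an equivalent supported mass, under the simplifying assumption that the rating applies uniformly over the full 24 m² area (real glass load ratings are more nuanced than this).

```python
G = 9.81              # standard gravity, m/s^2

PRESSURE_PA = 4.1e6   # rated pressure, 4.1 MPa
AREA_M2 = 24.0        # glass floor area

mass_kg = PRESSURE_PA * AREA_M2 / G
print(f"Equivalent supported mass: ~{mass_kg / 1000:,.0f} tonnes")  # ~10,031 tonnes
```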
The SkyPod was once the highest public observation deck in the world until it was surpassed by the Shanghai World Financial Center in 2008.
A metal staircase reaches the main deck level after 1,776 steps, and the SkyPod 100 m (328 ft) above after 2,579 steps; it is the tallest metal staircase on Earth. These stairs are intended for emergency use only, except for charity stair-climb events twice a year. The average climber takes approximately 30 minutes to climb to the base of the radome, but the fastest climb on record is 7 minutes and 52 seconds in 1989 by Brendan Keenoy, an Ontario Provincial Police officer. In 2002, Canadian Olympian and Paralympic champion Jeff Adams climbed the stairs of the tower in a specially designed wheelchair. The stairs were originally on one of the three sides of the tower (facing north), with a glass view, but these were later replaced with the third elevator pair and the stairs were moved to the inside of the core. Top climbs on the new, windowless stairwell used since around 2003 have generally been over ten minutes.
A freezing rain storm on March 2, 2007, resulted in a layer of ice several centimetres thick forming on the side of the tower and other downtown buildings. The sun thawed the ice, then winds of up to 90 km/h (56 mph) blew some of it away from the structure. There were fears that cars and windows of nearby buildings would be smashed by large chunks of ice. In response, police closed some streets surrounding the tower. During morning rush hour on March 5 of the same year, police expanded the area of closed streets to include the Gardiner Expressway 310 m (1,017 ft) away from the tower as increased winds blew the ice farther, as far north as King Street West, 490 m (1,608 ft) away, where a taxicab window was shattered. Subsequently, on March 6, 2007, the Gardiner Expressway reopened after winds abated.
On April 16, 2018, falling ice from the CN Tower punctured the roof of the nearby Rogers Centre stadium, causing the Toronto Blue Jays to postpone the game that day to the following day as a doubleheader; this was the third doubleheader held at the Rogers Centre. On April 20 of the same year, the CN Tower reopened.
In August 2000, a fire broke out at the Ostankino Tower in Moscow, killing three people and causing extensive damage. The fire was blamed on poor maintenance and outdated equipment. The failure of the fire-suppression systems and the lack of proper equipment for firefighters allowed the fire to destroy most of the interior and sparked fears the tower might even collapse.
The Ostankino Tower was completed nine years before the CN Tower and is only 13 m (43 ft) shorter. The parallels between the towers led to some concern that the CN Tower could be at risk of a similar tragedy. However, Canadian officials subsequently stated that it is "highly unlikely" that a similar disaster could occur at the CN Tower, as it has important safeguards that were not present in the Ostankino Tower. Specifically, officials cited:
Officials also noted that the CN Tower has an excellent safety record, although there was an electrical fire in the antennas on August 16, 2017 — the tower's first fire. Moreover, other supertall structures built between 1967 and 1976 — such as the Willis Tower (formerly the Sears Tower), the World Trade Center (until its destruction on September 11, 2001), the Fernsehturm Berlin, the Aon Center, 875 North Michigan Avenue (formerly the John Hancock Center), and First Canadian Place — also have excellent safety records, which suggests that the Ostankino Tower accident was a rare safety failure, and that the likelihood of similar events occurring at other supertall structures is extremely low.
The CN Tower was originally lit at night with incandescent lights, which were removed in 1997 because they were inefficient and expensive to repair. In June 2007, the tower was outfitted with 1,330 super-bright LED lights inside the elevator shafts, shooting over the main pod and upward to the top of the tower's mast to light the tower from dusk until 2 a.m. The official opening ceremony took place on June 28, 2007, before the Canada Day holiday weekend.
The tower changes its lighting scheme on holidays and to commemorate major events. After the 95th Grey Cup in Toronto, the tower was lit in green and white to represent the colours of the Grey Cup champion Saskatchewan Roughriders. From sundown on August 27, 2011, to sunrise the following day, the tower was lit in orange, the official colour of the New Democratic Party (NDP), to commemorate the death of federal NDP leader and leader of the official opposition Jack Layton. When former South African president Nelson Mandela died, the tower was lit in the colours of the South African flag. When Jim Flaherty, former federal finance minister under Stephen Harper's Conservatives, died, the tower was lit in green to reflect his Irish Canadian heritage. On the night of the attacks on Paris on November 13, 2015, the tower displayed the colours of the French flag. On June 8, 2021, the tower displayed the colours of the Toronto Maple Leafs' archrivals, the Montreal Canadiens, after they advanced to the semifinals of the 2021 Stanley Cup playoffs. The CN Tower was lit in the colours of the Ukrainian flag at the beginning of the Russian invasion of Ukraine in late February 2022.
Programmed remotely from a desktop computer with a wireless network interface controller in Burlington, Ontario, the LEDs use less energy to light than the previous incandescent lights (10% less energy than the dimly lit version and 60% less than the brightly lit version). The estimated cost to use the LEDs is $1,000 per month.
During the spring and autumn bird migration seasons, the lights are turned off to comply with the voluntary Fatal Light Awareness Program, which "encourages buildings to dim unnecessary exterior lighting to mitigate bird mortality during spring and summer migration."
The CN Tower is the tallest freestanding structure in the Western Hemisphere. As of 2013, there were two other freestanding structures in the Western Hemisphere exceeding 500 m (1,640.4 ft) in height: the Willis Tower in Chicago, which stands at 527 m (1,729.0 ft) when measured to its pinnacle, and One World Trade Center in New York City, which has a pinnacle height of 541.33 m (1,776.0 ft), or approximately 12 m (39.4 ft) shorter than the CN Tower. Due to the symbolism of the number 1776 (the year of the signing of the United States Declaration of Independence), the height of One World Trade Center is unlikely to be increased. The proposed Chicago Spire was expected to exceed the height of the CN Tower, but its construction was halted early due to financial difficulties amid the Great Recession, and was eventually cancelled in 2010.
Guinness World Records has called the CN Tower "the world's tallest self-supporting tower" and "the world's tallest free-standing tower". Although Guinness did list this description of the CN Tower under the heading "tallest building" at least once, it has also listed it under "tallest tower", omitting it from its list of "tallest buildings." In 1996, Guinness changed the tower's classification to "World's Tallest Building and Freestanding Structure". Emporis and the Council on Tall Buildings and Urban Habitat both listed the CN Tower as the world's tallest free-standing structure on land, and specifically state that the CN Tower is not a true building, thereby awarding the title of world's tallest building to Taipei 101, which is 44 m (144 ft) shorter than the CN Tower. The issue of what was tallest became moot when Burj Khalifa, then under construction, exceeded the height of the CN Tower in 2007 (see below).
Although the CN Tower contains a restaurant, a gift shop and multiple observation levels, it does not have floors continuously from the ground, and therefore it is not considered a building by the Council on Tall Buildings and Urban Habitat (CTBUH) or Emporis. CTBUH defines a building as "a structure that is designed for residential, business, or manufacturing purposes. An essential characteristic of a building is that it has floors." The CN Tower and other similar structures—such as the Ostankino Tower in Moscow, Russia; the Oriental Pearl Tower in Shanghai, China; The Strat in Las Vegas, Nevada, United States; and the Eiffel Tower in Paris, France—are categorized as "towers", which are free-standing structures that may have observation decks and a few other habitable levels, but do not have floors from the ground up. The CN Tower was the tallest tower by this definition until 2010 (see below).
Taller than the CN Tower are numerous radio masts and towers, which are held in place by guy-wires, the tallest being the KVLY-TV mast in Blanchard, North Dakota, in the United States at 628 m (2,060 ft) tall, leading to a distinction between these and "free-standing" structures. Additionally, the Petronius Platform stands 610 m (2,001 ft) above its base on the bottom of the Gulf of Mexico near the Mississippi River Delta, but only the top 75 m (246 ft) of this oil and natural gas platform are above water, and the structure is thus partially supported by its buoyancy. Like the CN Tower, none of these taller structures are commonly considered buildings.
On September 12, 2007, Burj Khalifa, which is a hotel, residential and commercial building in Dubai, United Arab Emirates (formerly known as Burj Dubai before opening), passed the CN Tower's 553.33-m height. The CN Tower held the record of the tallest freestanding structure on land for over 30 years.
After Burj Khalifa had been formally recognized by Guinness World Records as the world's tallest freestanding structure, Guinness re-certified the CN Tower as the world's tallest freestanding tower. Guinness used the definition of a tower from the Council on Tall Buildings and Urban Habitat: 'a building in which less than 50% of the construction is usable floor space'. Guinness World Records editor-in-chief Craig Glenday announced that Burj Khalifa was not classified as a tower because it has too much usable floor space. At the time of the recertification, the CN Tower still held the world records for the highest above-ground wine cellar (in the 360 Restaurant) at 351 m, the highest above-ground restaurant at 346 m (Horizons Restaurant), and the tallest free-standing concrete tower. The CN Tower was surpassed as the world's tallest tower in 2009 by the Canton Tower in Guangzhou, China, which stands at 604 m (1,982 ft); that tower was in turn surpassed by the Tokyo Skytree in 2011, which is currently the tallest tower at 634.0 m (2,080.1 ft). The CN Tower, as of 2022, stands as the tenth-tallest free-standing structure on land, remains the tallest free-standing structure in the Western Hemisphere, and is the third-tallest tower.
Since its construction, the tower has gained the following world height records:
The CN Tower has been and continues to be used as a communications tower for a number of different media and by numerous companies.
Source: Vividcomm
There is no AM broadcasting from the CN Tower. The FM transmitters are situated within the 102 m-tall (335 ft) metal broadcast antenna on top of the main concrete portion of the tower, at elevations above 446.5 m (1,465 ft).
Source: Vividcomm
The CN Tower has been featured in numerous films, television shows, music recording covers, and video games. The tower also has its own official mascot, which resembles the tower itself. | [
{
"paragraph_id": 0,
"text": "The CN Tower (French: Tour CN) is a 553.3 m-high (1,815.3 ft) concrete communications and observation tower in Toronto, Ontario, Canada. Completed in 1976, it is located in downtown Toronto, built on the former Railway Lands. Its name \"CN\" referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for the government's real estate portfolio.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The CN Tower held the record for the world's tallest free-standing structure for 32 years, from 1975 until 2007, when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is currently the tenth-tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually. It houses several observation decks, a revolving restaurant at some 350 metres (1,150 ft), and an entertainment complex.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The original concept of the CN Tower was first conceived in 1968 when the Canadian National Railway wanted to build a large television and radio communication platform to serve the Toronto area, and to demonstrate the strength of Canadian industry and CN in particular. These plans evolved over the next few years, and the project became official in 1972.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The tower would have been part of Metro Centre (see CityPlace), a large development south of Front Street on the Railway Lands, a large railway switching yard that was being made redundant after the opening of the MacMillan Yard north of the city in 1965 (then known as Toronto Yard). Key project team members were NCK Engineering as structural engineer; John Andrews Architects; Webb, Zerafa, Menkes, Housden Architects; Foundation Building Construction; and Canron (Eastern Structural Division).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "As Toronto grew rapidly during the late 1960s and early 1970s, multiple skyscrapers were constructed in the downtown core, most notably First Canadian Place, which has Bank of Montreal's head offices. The reflective nature of the new buildings reduced the quality of broadcast signals, requiring new, higher antennas that were at least 300 m (980 ft) tall. The radio wire is estimated to be 102 metres (335 ft) long in 44 pieces, the heaviest of which weighs around 8 tonnes (8.8 short tons; 7.9 long tons).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "At the time, most data communications took place over point-to-point microwave links, whose dish antennas covered the roofs of large buildings. As each new skyscraper was added to the downtown, former line-of-sight links were no longer possible. CN intended to rent \"hub\" space for microwave links, visible from almost any building in the Toronto area.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The original plan for the tower envisioned a tripod consisting of three independent cylindrical \"pillars\" linked at various heights by structural bridges. Had it been built, this design would have been considerably shorter, with the metal antenna located roughly where the concrete section between the main level and the SkyPod lies today. As the design effort continued, it evolved into the current design with a single continuous hexagonal core to the SkyPod, with three support legs blended into the hexagon below the main level, forming a large Y-shape structure at the ground level.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The idea for the main level in its current form evolved around this time, but the Space Deck (later renamed SkyPod) was not part of the plans until later. One engineer in particular felt that visitors would feel the higher observation deck would be worth paying extra for, and the costs in terms of construction were not prohibitive. Also around this time, it was realized that the tower could become the world's tallest free-standing structure to improve signal quality and attract tourists, and plans were changed to incorporate subtle modifications throughout the structure to this end.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The CN Tower was built by Canada Cement Company (also known as the Cement Foundation Company of Canada at the time), a subsidiary of Sweden's Skanska, a global project-development and construction group.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Construction began on February 6, 1973, with massive excavations at the tower base for the foundation. By the time the foundation was complete, 56,000 t (62,000 short tons; 55,000 long tons) of earth and shale were removed to a depth of 15 m (49.2 ft) in the centre, and a base incorporating 7,000 m (9,200 cu yd) of concrete with 450 t (496 short tons; 443 long tons) of rebar and 36 t (40 short tons; 35 long tons) of steel cable had been built to a thickness of 6.7 m (22 ft). This portion of the construction was fairly rapid, with only four months needed between the start and the foundation being ready for construction on top.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "To create the main support pillar, workers constructed a hydraulically raised slipform at the base. This was a fairly unprecedented engineering feat on its own, consisting of a large metal platform that raised itself on jacks at about 6 m (20 ft) per day as the concrete below set. Concrete was poured Monday to Friday (not continuously) by a small team of people until February 22, 1974, at which time it had already become the tallest structure in Canada, surpassing the recently built Inco Superstack in Sudbury, built using similar methods.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The tower contains 40,500 m (53,000 cu yd) of concrete, all of which was mixed on-site in order to ensure batch consistency. Through the pour, the vertical accuracy of the tower was maintained by comparing the slip form's location to massive plumb bobs hanging from it, observed by small telescopes from the ground. Over the height of the tower, it varies from true vertical accuracy by only 29 mm (1.1 in).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In August 1974, construction of the main level commenced. Using 45 hydraulic jacks attached to cables strung from a temporary steel crown anchored to the top of the tower, twelve giant steel and wooden bracket forms were slowly raised, ultimately taking about a week to crawl up to their final position. These forms were used to create the brackets that support the main level, as well as a base for the construction of the main level itself. The Space Deck (currently named SkyPod) was built of concrete poured into a wooden frame attached to rebar at the lower level deck, and then reinforced with a large steel compression band around the outside.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "While still under construction, the CN Tower officially became the world's tallest free-standing structure on March 31, 1975.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The antenna was originally to be raised by crane as well, but, during construction, the Sikorsky S-64 Skycrane helicopter became available when the United States Army sold one to civilian operators. The helicopter, named \"Olga\", was first used to remove the crane, and then flew the antenna up in 36 sections.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The flights of the antenna pieces were a minor tourist attraction of their own, and the schedule was printed in local newspapers. Use of the helicopter saved months of construction time, with this phase taking only three and a half weeks instead of the planned six months. The tower was topped-off on April 2, 1975, after 26 months of construction, officially capturing the height record from Moscow's Ostankino Tower, and bringing the total mass to 118,000 t (130,000 short tons; 116,000 long tons).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Two years into the construction, plans for Metro Centre were scrapped, leaving the tower isolated on the Railway Lands in what was then a largely abandoned light-industrial space. This caused serious problems for tourists to access the tower. Ned Baldwin, project architect with John Andrews, wrote at the time that \"All of the logic which dictated the design of the lower accommodation has been upset,\" and that \"Under such ludicrous circumstances Canadian National would hardly have chosen this location to build.\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The CN Tower opened on June 26, 1976. The construction costs of approximately CA$63 million ($287 million in 2021 dollars) were repaid in fifteen years.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "From the mid-1970s to the mid-1980s, the CN Tower was practically the only development along Front Street West; it was still possible to see Lake Ontario from the foot of the CN Tower due to the expansive parking lots and lack of development in the area at the time. As the area around the tower was developed, particularly with the completion of the Metro Toronto Convention Centre (north building) in 1984 and SkyDome in 1989 (renamed Rogers Centre in 2005), the former Railway Lands were redeveloped and the tower became the centre of a newly developing entertainment area. Access was greatly improved with the construction of the SkyWalk in 1989, which connected the tower and SkyDome to the nearby Union Station railway and subway station, and, in turn, to the city's Path underground pedestrian system. By the mid-1990s, it was the centre of a thriving tourist district. The entire area continues to be an area of intense building, notably a boom in condominium construction in the first quarter of the 21st century, as well as the 2013 opening of the Ripley's Aquarium by the base of the tower.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "When the CN Tower opened in 1976, there were three public observation points: the SkyPod (then known as the Space Deck) that stands at 447 m (1,467 ft), the Indoor Observation Level (later named Indoor Lookout Level) at 346 m (1,135 ft), and the Outdoor Observation Terrace (at the same level as the Glass Floor) at 342 m (1,122 ft). One floor above the Indoor Observation Level was the Top of Toronto Restaurant, which completed a revolution once every 72 minutes.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The tower would garner worldwide media attention when stuntman Dar Robinson jumped off the CN Tower on two occasions in 1979 and 1980. The first was for a scene from the movie Highpoint, in which Robinson received CA$250,000 ($885,000 in 2021 dollars) for the stunt. The second was for a personal documentary. The first stunt had him use a parachute which he deployed three seconds before impact with the ground, while the second one used a wire decelerator attached to his back.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "On June 26, 1986, the tenth anniversary of the tower's opening, high-rise firefighting and rescue advocate Dan Goodwin, in a sponsored publicity event, used his hands and feet to climb the outside of the tower, a feat he performed twice on the same day. Following both ascents, he used multiple rappels to descend to the ground.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "From 1985 to 1992, the CN Tower basement level hosted the world's first flight simulator ride, Tour of the Universe. The ride was replaced in 1992 with a similar attraction entitled \"Space Race.\" It was later dismantled and replaced by two other rides in 1998 and 1999.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "A glass floor at an elevation of 342 m (1,122 ft) was installed in 1994. Canadian National Railway sold the tower to Canada Lands Company prior to privatizing the company in 1995, when it divested all operations not directly related to its core freight shipping businesses. The tower's name and wordmark were adjusted to remove the CN railways logo, and the tower was renamed Canada's National Tower (from Canadian National Tower), though the tower is commonly called the CN Tower.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Further changes were made from 1997 to January 2004: TrizecHahn Corporation managed the tower and instituted several expansion projects including a CA$26 million entertainment expansion, the 1997 addition of two new elevators (to a total of six) and the consequential relocation of the staircase from the north side leg to inside the core of the building, a conversion that also added nine stairs to the climb. TrizecHahn also owned the Willis Tower (Sears Tower at the time) in Chicago approximately at the same time.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In 2007, light-emitting diode (LED) lights replaced the incandescent lights that lit the CN Tower at night. This was done to take advantage of the cost savings of LED lights over incandescent lights. The colour of the LED lights can change, compared to the constant white colour of the incandescent lights. On September 12, 2007, Burj Khalifa, then under construction and known as Burj Dubai, surpassed the CN Tower as the world's tallest free-standing structure. In 2008, glass panels were installed in one of the CN Tower elevators, which established a world record (346 m) for highest glass floor panelled elevator in the world.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "On August 1, 2011, the CN Tower opened the EdgeWalk, an amusement in which thrill-seekers can walk on and around the roof of the main pod of the tower at 356 m (1,168.0 ft), which is directly above the 360 Restaurant. It is the world's highest full-circle, hands-free walk. Visitors are tethered to an overhead rail system and walk around the edge of the CN Tower's main pod above the 360 Restaurant on a 1.5-metre (4.9 ft) metal floor. The attraction is closed throughout the winter and during periods of electrical storms and high winds.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "One of the notable guests who visited EdgeWalk was Canadian comedian Rick Mercer, featured as the first episode of the ninth season of his CBC Television news satire show, Rick Mercer Report. There, he was accompanied by Canadian pop singer Jann Arden. The episode first aired on April 10, 2013.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The tower and surrounding areas were prominent in the 2015 Pan American Games production. In the opening ceremony, a pre-recorded segment featured track-and-field athlete Bruny Surin passing the flame to sprinter Donovan Bailey on the EdgeWalk and parachuting into Rogers Centre. A fireworks display off the tower concluded both the opening and closing ceremonies.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "On July 1, 2017, as part of the nationwide celebrations for Canada 150, which celebrated the 150th anniversary of Canadian Confederation, fireworks were once again shot from the tower in a five-minute display coordinated with the tower lights and music broadcast on a local radio station.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The CN Tower consists of several substructures. The main portion of the tower is a hollow concrete hexagonal pillar containing the stairwells and power and plumbing connections. The tower's six elevators are located in the three inverted angles created by the Tower's hexagonal shape (two elevators per angle). Each of the three elevator shafts is lined with glass, allowing for views of the city as the glass-windowed elevators make their way through the tower. The stairwell was originally located in one of these angles (the one facing north), but was moved into the central hollow of the tower; the tower's new fifth and sixth elevators were placed in the hexagonal angle that once contained the stairwell. On top of the main concrete portion of the tower is a 102 m (334.6 ft) tall metal broadcast antenna, carrying television and radio signals. There are three visitor areas: the Glass Floor and Outdoor Observation Terrace, which are both located at an elevation of 342 m (1,122 ft), the Indoor Lookout Level (formerly known as \"Indoor Observation Level\") located at 346 m (1,135 ft), and the higher SkyPod (formerly known as \"Space Deck\") at 446.5 m (1,465 ft), just below the metal antenna. The hexagonal shape is visible between the two highest areas; however, below the main deck, three large supporting legs give the tower the appearance of a large tripod.",
"title": "Structure"
},
{
"paragraph_id": 32,
"text": "The main deck level has seven storeys, some of which are open to the public. Below the public areas—at 338 m (1,108.9 ft)—is a large white donut-shaped radome containing the structure's UHF transmitters. The glass floor and outdoor observation deck are at 342 m (1,122.0 ft). The glass floor has an area of 24 m (258 sq ft) and can withstand a pressure of 4.1 megapascals (595 psi). The floor's thermal glass units are 64 mm (2.5 in) thick, consisting of a pane of 25 mm (1.0 in) laminated glass, 25 mm (1.0 in) airspace and a pane of 13 mm (0.5 in) laminated glass. In 2008, one elevator was upgraded to add a glass floor panel, believed to have the highest vertical rise of any elevator equipped with this feature. The Horizons Cafe and the lookout level are at 346 m (1,135.2 ft). The 360 Restaurant, a revolving restaurant that completes a full rotation once every 72 minutes, is at 351 m (1,151.6 ft). When the tower first opened, it also featured a disco named Sparkles (at the Indoor Observation Level), billed as the highest disco and dance floor in the world.",
"title": "Structure"
},
{
"paragraph_id": 33,
"text": "The SkyPod was once the highest public observation deck in the world until it was surpassed by the Shanghai World Financial Center in 2008.",
"title": "Structure"
},
{
"paragraph_id": 34,
"text": "A metal staircase reaches the main deck level after 1,776 steps, and the SkyPod 100 m (328 ft) above after 2,579 steps; it is the tallest metal staircase on Earth. These stairs are intended for emergency use only except for charity stair-climb events two times during the year. The average climber takes approximately 30 minutes to climb to the base of the radome, but the fastest climb on record is 7 minutes and 52 seconds in 1989 by Brendan Keenoy, an Ontario Provincial Police officer. In 2002, Canadian Olympian and Paralympic champion Jeff Adams climbed the stairs of the tower in a specially designed wheelchair. The stairs were originally on one of the three sides of the tower (facing north), with a glass view, but these were later replaced with the third elevator pair and the stairs were moved to the inside of the core. Top climbs on the new, windowless stairwell used since around 2003 have generally been over ten minutes.",
"title": "Structure"
},
{
"paragraph_id": 35,
"text": "A freezing rain storm on March 2, 2007, resulted in a layer of ice several centimetres thick forming on the side of the tower and other downtown buildings. The sun thawed the ice, then winds of up to 90 km/h (56 mph) blew some of it away from the structure. There were fears that cars and windows of nearby buildings would be smashed by large chunks of ice. In response, police closed some streets surrounding the tower. During morning rush hour on March 5 of the same year, police expanded the area of closed streets to include the Gardiner Expressway 310 m (1,017 ft) away from the tower as increased winds blew the ice farther, as far north as King Street West, 490 m (1,608 ft) away, where a taxicab window was shattered. Subsequently, on March 6, 2007, the Gardiner Expressway reopened after winds abated.",
"title": "Structure"
},
{
"paragraph_id": 36,
"text": "On April 16, 2018, falling ice from the CN Tower punctured the roof of the nearby Rogers Centre stadium, causing the Toronto Blue Jays to postpone the game that day to the following day as a doubleheader; this was the third doubleheader held at the Rogers Centre. On April 20 of the same year, the CN Tower reopened.",
"title": "Structure"
},
{
"paragraph_id": 37,
"text": "In August 2000, a fire broke out at the Ostankino Tower in Moscow, killing three people and causing extensive damage. The fire was blamed on poor maintenance and outdated equipment. The failure of the fire-suppression systems and the lack of proper equipment for firefighters allowed the fire to destroy most of the interior and sparked fears the tower might even collapse.",
"title": "Structure"
},
{
"paragraph_id": 38,
"text": "The Ostankino Tower was completed nine years before the CN Tower and is only 13 m (43 ft) shorter. The parallels between the towers led to some concern that the CN Tower could be at risk of a similar tragedy. However, Canadian officials subsequently stated that it is \"highly unlikely\" that a similar disaster could occur at the CN Tower, as it has important safeguards that were not present in the Ostankino Tower. Specifically, officials cited:",
"title": "Structure"
},
{
"paragraph_id": 39,
"text": "Officials also noted that the CN Tower has an excellent safety record, although there was an electrical fire in the antennas on August 16, 2017 — the tower's first fire. Moreover, other supertall structures built between 1967 and 1976 — such as the Willis Tower (formerly the Sears Tower), the World Trade Center (until its destruction on September 11, 2001), the Fernsehturm Berlin, the Aon Center, 875 North Michigan Avenue (formerly the John Hancock Center), and First Canadian Place — also have excellent safety records, which suggests that the Ostankino Tower accident was a rare safety failure, and that the likelihood of similar events occurring at other supertall structures is extremely low.",
"title": "Structure"
},
{
"paragraph_id": 40,
"text": "The CN Tower was originally lit at night with incandescent lights, which were removed in 1997 because they were inefficient and expensive to repair. In June 2007, the tower was outfitted with 1,330 super-bright LED lights inside the elevator shafts, shooting over the main pod and upward to the top of the tower's mast to light the tower from dusk until 2 a.m. The official opening ceremony took place on June 28, 2007, before the Canada Day holiday weekend.",
"title": "Lighting"
},
{
"paragraph_id": 41,
"text": "The tower changes its lighting scheme on holidays and to commemorate major events. After the 95th Grey Cup in Toronto, the tower was lit in green and white to represent the colours of the Grey Cup champion Saskatchewan Roughriders. From sundown on August 27, 2011, to sunrise the following day, the tower was lit in orange, the official colour of the New Democratic Party (NDP), to commemorate the death of federal NDP leader and leader of the official opposition Jack Layton. When former South African president Nelson Mandela died, the tower was lit in the colours of the South African flag. When former federal finance minister under Stephen Harper's Conservatives Jim Flaherty died, the tower was lit in green to reflect his Irish Canadian heritage. On the night of the attacks on Paris on November 13, 2015, the tower displayed the colours of the French flag. On June 8, 2021, the tower displayed the colours of the Toronto Maple Leafs' archrivals Montreal Canadiens after they advanced to the semifinals of 2021 Stanley Cup playoffs. The CN Tower was lit in the colours of the Ukrainian flag during the beginning of the Russian invasion of Ukraine in late February 2022.",
"title": "Lighting"
},
{
"paragraph_id": 42,
"text": "Programmed remotely from a desktop computer with a wireless network interface controller in Burlington, Ontario, the LEDs use less energy to light than the previous incandescent lights (10% less energy than the dimly lit version and 60% less than the brightly lit version). The estimated cost to use the LEDs is $1,000 per month.",
"title": "Lighting"
},
{
"paragraph_id": 43,
"text": "During the spring and autumn bird migration seasons, the lights are turned off to comply with the voluntary Fatal Light Awareness Program, which \"encourages buildings to dim unnecessary exterior lighting to mitigate bird mortality during spring and summer migration.\"",
"title": "Lighting"
},
{
"paragraph_id": 44,
"text": "The CN Tower is the tallest freestanding structure in the Western Hemisphere. As of 2013, there were two other freestanding structures in the Western Hemisphere exceeding 500 m (1,640.4 ft) in height: the Willis Tower in Chicago, which stands at 527 m (1,729.0 ft) when measured to its pinnacle, and One World Trade Center in New York City, which has a pinnacle height of 541.33 m (1,776.0 ft), or approximately 12 m (39.4 ft) shorter than the CN Tower. Due to the symbolism of the number 1776 (the year of the signing of the United States Declaration of Independence), the height of One World Trade Center is unlikely to be increased. The proposed Chicago Spire was expected to exceed the height of the CN Tower, but its construction was halted early due to financial difficulties amid the Great Recession, and was eventually cancelled in 2010.",
"title": "Height comparisons"
},
{
"paragraph_id": 45,
"text": "Guinness World Records has called the CN Tower \"the world's tallest self-supporting tower\" and \"the world's tallest free-standing tower\". Although Guinness did list this description of the CN Tower under the heading \"tallest building\" at least once, it has also listed it under \"tallest tower\", omitting it from its list of \"tallest buildings.\" In 1996, Guinness changed the tower's classification to \"World's Tallest Building and Freestanding Structure\". Emporis and the Council on Tall Buildings and Urban Habitat both listed the CN Tower as the world's tallest free-standing structure on land, and specifically state that the CN Tower is not a true building, thereby awarding the title of world's tallest building to Taipei 101, which is 44 m (144 ft) shorter than the CN Tower. The issue of what was tallest became moot when Burj Khalifa, then under construction, exceeded the height of the CN Tower in 2007 (see below).",
"title": "Height comparisons"
},
{
"paragraph_id": 46,
"text": "Although the CN Tower contains a restaurant, a gift shop and multiple observation levels, it does not have floors continuously from the ground, and therefore it is not considered a building by the Council on Tall Buildings and Urban Habitat (CTBUH) or Emporis. CTBUH defines a building as \"a structure that is designed for residential, business, or manufacturing purposes. An essential characteristic of a building is that it has floors.\" The CN Tower and other similar structures—such as the Ostankino Tower in Moscow, Russia; the Oriental Pearl Tower in Shanghai, China; The Strat in Las Vegas, Nevada, United States; and the Eiffel Tower in Paris, France—are categorized as \"towers\", which are free-standing structures that may have observation decks and a few other habitable levels, but do not have floors from the ground up. The CN Tower was the tallest tower by this definition until 2010 (see below).",
"title": "Height comparisons"
},
{
"paragraph_id": 47,
"text": "Taller than the CN Tower are numerous radio masts and towers, which are held in place by guy-wires, the tallest being the KVLY-TV mast in Blanchard, North Dakota, in the United States at 628 m (2,060 ft) tall, leading to a distinction between these and \"free-standing\" structures. Additionally, the Petronius Platform stands 610 m (2,001 ft) above its base on the bottom of the Gulf of Mexico near the Mississippi River Delta, but only the top 75 m (246 ft) of this oil and natural gas platform are above water, and the structure is thus partially supported by its buoyancy. Like the CN Tower, none of these taller structures are commonly considered buildings.",
"title": "Height comparisons"
},
{
"paragraph_id": 48,
"text": "On September 12, 2007, Burj Khalifa, which is a hotel, residential and commercial building in Dubai, United Arab Emirates (formerly known as Burj Dubai before opening), passed the CN Tower's 553.33-m height. The CN Tower held the record of the tallest freestanding structure on land for over 30 years.",
"title": "Height comparisons"
},
{
"paragraph_id": 49,
"text": "After Burj Khalifa had been formally recognized by the Guinness World Records as the world's tallest freestanding structure, Guinness re-certified CN Tower as the world's tallest freestanding tower. The tower definition used by Guinness was defined by the Council on Tall Buildings and Urban Habitat as 'a building in which less than 50% of the construction is usable floor space'. Guinness World Records editor-in-chief Craig Glenday announced that Burj Khalifa was not classified as a tower because it has too much usable floor space to be considered to be a tower. CN Tower still held world records for highest above ground wine cellar (in 360 Restaurant) at 351 m, highest above ground restaurant at 346 m (Horizons Restaurant), and tallest free-standing concrete tower during Guinness's recertification. The CN Tower was surpassed in 2009 by the Canton Tower in Guangzhou, China, which stands at 604 m (1,982 ft) tall, as the world's tallest tower; which in turn was surpassed by the Tokyo Skytree in 2011, which currently is the tallest tower at 634.0 m (2,080.1 ft) in height. The CN Tower, as of 2022, stands as the tenth-tallest free-standing structure on land, remains the tallest free-standing structure in the Western Hemisphere, and is the third-tallest tower.",
"title": "Height comparisons"
},
{
"paragraph_id": 50,
"text": "Since its construction, the tower has gained the following world height records:",
"title": "Height comparisons"
},
{
"paragraph_id": 51,
"text": "The CN Tower has been and continues to be used as a communications tower for a number of different media and by numerous companies.",
"title": "Use"
},
{
"paragraph_id": 52,
"text": "Source: Vividcomm",
"title": "Use"
},
{
"paragraph_id": 53,
"text": "There is no AM broadcasting from the CN Tower. The FM transmitters are situated in a 102 m-tall (335 ft) metal broadcast antenna, on top of the main concrete portion of the tower at an elevation above 446.5 m (1,465 ft).",
"title": "Use"
},
{
"paragraph_id": 54,
"text": "Source: Vividcomm",
"title": "Use"
},
{
"paragraph_id": 55,
"text": "The CN Tower has been featured in numerous films, television shows, music recording covers, and video games. The tower also has its own official mascot, which resembles the tower itself.",
"title": "In popular culture"
}
]
| The CN Tower is a 553.3 m-high (1,815.3 ft) concrete communications and observation tower in Toronto, Ontario, Canada. Completed in 1976, it is located in downtown Toronto, built on the former Railway Lands. Its name "CN" referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for the government's real estate portfolio. The CN Tower held the record for the world's tallest free-standing structure for 32 years, from 1975 until 2007, when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is currently the tenth-tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers. It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually. It houses several observation decks, a revolving restaurant at some 350 metres (1,150 ft), and an entertainment complex. | 2001-08-16T04:02:09Z | 2023-12-25T02:31:11Z | [
"Template:Cite news",
"Template:Cbignore",
"Template:Cite press release",
"Template:S-ttl",
"Template:Lang-fr",
"Template:Formatprice",
"Template:Tallest towers in the world.svg",
"Template:Portal bar",
"Template:Official",
"Template:Pp-move-indef",
"Template:Use mdy dates",
"Template:Convert",
"Template:Wide image",
"Template:Cite web",
"Template:Webarchive",
"Template:Infobox building",
"Template:Reflist",
"Template:Supertall",
"Template:Clear",
"Template:Inflation",
"Template:S-start",
"Template:S-aft",
"Template:S-end",
"Template:Toronto landmarks",
"Template:About",
"Template:CAD",
"Template:NEXTYEAR",
"Template:Commons category",
"Template:Cite journal",
"Template:Cite book",
"Template:S-bef",
"Template:Short description",
"Template:Pp-vandalism",
"Template:Use Canadian English",
"Template:See also",
"Template:Dead link",
"Template:Cite tweet",
"Template:S-ach",
"Template:Inflation-year",
"Template:Inflation-fn",
"Template:Toronto skyscrapers"
]
| https://en.wikipedia.org/wiki/CN_Tower |
6,113 | Chain rule | In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if h = f ∘ g {\displaystyle h=f\circ g} is the function such that h ( x ) = f ( g ( x ) ) {\displaystyle h(x)=f(g(x))} for every x, then the chain rule is, in Lagrange's notation,
or, equivalently,
The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x (that is, y and z are dependent variables), then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as
and
for indicating at which points the derivatives have to be evaluated.
In integration, the counterpart to the chain rule is the substitution rule.
Intuitively, the chain rule states that knowing the instantaneous rate of change of z relative to y and that of y relative to x allows one to calculate the instantaneous rate of change of z relative to x as the product of the two rates of change.
As put by George F. Simmons: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."
The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is d z d y = 2. {\textstyle {\frac {dz}{dy}}=2.} Similarly, d y d x = 4. {\textstyle {\frac {dy}{dx}}=4.} So, the rate of change of the relative positions of the car and the walking man is
The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is,
or, equivalently,
which is also an application of the chain rule.
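In symbols, this is just the chain rule applied to the positions stated above (a minimal sketch of the computation): {\displaystyle {\frac {dz}{dx}}={\frac {dz}{dy}}\cdot {\frac {dy}{dx}}=2\cdot 4=8.}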
The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of a + b z + c z 2 {\displaystyle {\sqrt {a+bz+cz^{2}}}} as the composite of the square root function and the function a + b z + c z 2 {\displaystyle a+bz+cz^{2}\!} . He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. It is believed that the first "modern" version of the chain rule appears in Lagrange’s 1797 Théorie des fonctions analytiques; it also appears in Cauchy’s 1823 Résumé des Leçons données à L’École Royale Polytechnique sur Le Calcul Infinitesimal.
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e. the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g {\displaystyle f\circ g} is differentiable at c, and the derivative is
The rule is sometimes abbreviated as
If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as:
The points where the derivatives are evaluated may also be stated explicitly:
Carrying the same reasoning further, given n functions f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}\!} with the composite function f 1 ∘ ( f 2 ∘ ⋯ ( f n − 1 ∘ f n ) ) {\displaystyle f_{1}\circ (f_{2}\circ \cdots (f_{n-1}\circ f_{n}))\!} , if each function f i {\displaystyle f_{i}\!} is differentiable at its immediate input, then the composite function is also differentiable by repeated application of the chain rule, where the derivative is (in Leibniz's notation):
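The displayed equations of this section are elided in this dump; a standard reconstruction, consistent with the surrounding definitions, is
{\displaystyle (f\circ g)'(c)=f'(g(c))\,g'(c),\qquad {\frac {dy}{dx}}={\frac {dy}{du}}\cdot {\frac {du}{dx}},\qquad \left.{\frac {dy}{dx}}\right|_{x}=\left.{\frac {dy}{du}}\right|_{u=g(x)}\cdot \left.{\frac {du}{dx}}\right|_{x},}
and, for the n-fold composite,
{\displaystyle (f_{1}\circ f_{2}\circ \cdots \circ f_{n})'(x)=f_{1}'\!\left(f_{2}(\cdots f_{n}(x))\right)\cdot f_{2}'\!\left(f_{3}(\cdots f_{n}(x))\right)\cdots f_{n}'(x).}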
The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again.
For concreteness, consider the function
This can be decomposed as the composite of three functions:
So that y = f ( g ( h ( x ) ) ) {\displaystyle y=f(g(h(x)))} .
Their derivatives are:
The chain rule states that the derivative of their composite at the point x = a is:
In Leibniz's notation, this is:
or for short,
The derivative function is therefore:
Another way of computing this derivative is to view the composite function f ∘ g ∘ h as the composite of f ∘ g and h. Applying the chain rule in this manner would yield:
This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h).
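As an illustrative check of this associativity, here is a short SymPy sketch (hedged: the concrete function of the worked example above is elided in this dump, so the classic choice y = e^{sin(x^2)} is assumed):

import sympy as sp

x = sp.symbols('x')
# Assume the composite y = f(g(h(x))) with f = exp, g = sin, h(x) = x**2.
y = sp.exp(sp.sin(x**2))

# Chain rule by hand: f'(g(h(x))) * g'(h(x)) * h'(x).
manual = sp.exp(sp.sin(x**2)) * sp.cos(x**2) * 2*x

# SymPy applies the chain rule internally; the difference simplifies to 0,
# regardless of whether we group the composite as (f o g) o h or f o (g o h).
assert sp.simplify(sp.diff(y, x) - manual) == 0
print(sp.diff(y, x))  # 2*x*exp(sin(x**2))*cos(x**2)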
Sometimes, it is necessary to differentiate an arbitrarily long composition of the form f 1 ∘ f 2 ∘ ⋯ ∘ f n − 1 ∘ f n {\displaystyle f_{1}\circ f_{2}\circ \cdots \circ f_{n-1}\circ f_{n}\!} . In this case, define
where f a . . a = f a {\displaystyle f_{a\,.\,.\,a}=f_{a}} and f a . . b ( x ) = x {\displaystyle f_{a\,.\,.\,b}(x)=x} when b < a {\displaystyle b<a} . Then the chain rule takes the form
or, in the Lagrange notation,
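A standard reconstruction of the elided Lagrange-notation formula, using the conventions just defined, is
{\displaystyle f_{1\,.\,.\,n}'(x)=\prod _{k=1}^{n}f_{k}'\!\left(f_{k+1\,.\,.\,n}(x)\right)=f_{1}'\!\left(f_{2\,.\,.\,n}(x)\right)\,f_{2}'\!\left(f_{3\,.\,.\,n}(x)\right)\cdots f_{n}'(x),}
where the convention f_{n+1\,.\,.\,n}(x) = x makes the last factor equal f_{n}'(x).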
The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule:
To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is − 1 / x 2 {\displaystyle -1/x^{2}\!} . By applying the chain rule, the last expression becomes:
which is the usual formula for the quotient rule.
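Written out (a reconstruction of the elided display steps):
{\displaystyle {\frac {d}{dx}}\left({\frac {f(x)}{g(x)}}\right)=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot \left(-{\frac {1}{g(x)^{2}}}\right)\cdot g'(x)={\frac {f'(x)g(x)-f(x)g'(x)}{g(x)^{2}}}.}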
Suppose that y = g(x) has an inverse function. Call its inverse function f so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula
And because the functions f ( g ( x ) ) {\displaystyle f(g(x))} and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f ( g ( x ) ) {\displaystyle f(g(x))} is determined by the chain rule. Therefore, we have that:
To express f' as a function of an independent variable y, we substitute f ( y ) {\displaystyle f(y)} for x wherever it appears. Then we can solve for f'.
For example, consider the function g(x) = e^x. It has an inverse f(y) = ln y. Because g′(x) = e^x, the above formula says that
This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x^3. Its inverse is f(y) = y^{1/3}, which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.
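In display form, the inverse-function computation sketched above reads (a standard reconstruction of the elided formulas):
{\displaystyle f'(g(x))\,g'(x)=1,\qquad f'(y)={\frac {1}{g'(f(y))}},}
so that, for g(x) = e^x, {\displaystyle f'(y)={\frac {1}{e^{\ln y}}}={\frac {1}{y}},} the familiar derivative of ln y.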
Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that y = f(u) and u = g(x), then the first few derivatives are:
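A standard reconstruction of those first few derivatives (the displayed formulas are elided in this dump):
{\displaystyle {\frac {dy}{dx}}={\frac {dy}{du}}{\frac {du}{dx}},\qquad {\frac {d^{2}y}{dx^{2}}}={\frac {d^{2}y}{du^{2}}}\left({\frac {du}{dx}}\right)^{2}+{\frac {dy}{du}}{\frac {d^{2}u}{dx^{2}}},}
{\displaystyle {\frac {d^{3}y}{dx^{3}}}={\frac {d^{3}y}{du^{3}}}\left({\frac {du}{dx}}\right)^{3}+3\,{\frac {d^{2}y}{du^{2}}}{\frac {du}{dx}}{\frac {d^{2}u}{dx^{2}}}+{\frac {dy}{du}}{\frac {d^{3}u}{dx^{3}}}.}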
One proof of the chain rule begins by defining the derivative of the composite function f ∘ g, where we take the limit of the difference quotient for f ∘ g as x approaches a:
Assume for the moment that g ( x ) {\displaystyle g(x)\!} does not equal g ( a ) {\displaystyle g(a)} for any x {\displaystyle x} near a {\displaystyle a} . Then the previous expression is equal to the product of two factors:
If g {\displaystyle g} oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) = g(a). For example, this happens near a = 0 for the continuous function g defined by g(x) = 0 for x = 0 and g(x) = x^2 sin(1/x) otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q {\displaystyle Q} as follows:
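The definition of Q is elided in this dump; the standard choice is
{\displaystyle Q(y)={\begin{cases}{\dfrac {f(y)-f(g(a))}{y-g(a)}},&y\neq g(a),\\f'(g(a)),&y=g(a).\end{cases}}}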
We will show that the difference quotient for f ∘ g is always equal to:
Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and (g(x) − g(a)) / (x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a).
As for Q(g(x)), notice that Q is defined wherever f is. Furthermore, f is differentiable at g(a) by assumption, so Q is continuous at g(a), by definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)).
This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a))g′(a).
Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore
Here the left-hand side represents the true difference between the value of g at a and at a + h, whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have
The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set η(0) = 0, then η is continuous at 0.
Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as h tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of g at a:
The next step is to use the definition of differentiability of f at g(a). This requires a term of the form f(g(a) + k) for some k. In the above equation, the correct k varies with h. Set k_h = g′(a) h + ε(h) h and the right-hand side becomes f(g(a) + k_h) − f(g(a)). Applying the definition of the derivative gives:
To study the behavior of this expression as h tends to zero, expand k_h. After regrouping the terms, the right-hand side becomes:
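A reconstruction of that regrouped right-hand side (hedged, following the grouping described next):
{\displaystyle f(g(a+h))-f(g(a))=f'(g(a))g'(a)\,h+[f'(g(a))\,\varepsilon (h)]h+[\eta (k_{h})\,g'(a)]h+[\eta (k_{h})\,\varepsilon (h)]h.}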
Because ε(h) and η(k_h) tend to zero as h tends to zero, the first two bracketed terms tend to zero as h tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a)) g′(a).
The role of Q in the first proof is played by η in this proof. They are related by the equation:
The need to define Q at g(a) is analogous to the need to define η at zero.
Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.
Under this definition, a function f is differentiable at a point a if and only if there is a function q, continuous at a and such that f(x) − f(a) = q(x)(x − a). There is at most one such function, and if f is differentiable at a then f ′(a) = q(a).
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions q, continuous at g(a), and r, continuous at a, and such that,
and
Therefore,
but the function given by h(x) = q(g(x))r(x) is continuous at a, and we get, for this a
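Spelling this out (a sketch of the elided displays): f(g(x)) − f(g(a)) = q(g(x))(g(x) − g(a)) = q(g(x)) r(x)(x − a) = h(x)(x − a), and continuity of h at a gives
{\displaystyle (f\circ g)'(a)=h(a)=q(g(a))\,r(a)=f'(g(a))\,g'(a).}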
A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
If y = f ( x ) {\displaystyle y=f(x)} and x = g ( t ) {\displaystyle x=g(t)} then choosing infinitesimal Δ t ≠ 0 {\displaystyle \Delta t\not =0} we compute the corresponding Δ x = g ( t + Δ t ) − g ( t ) {\displaystyle \Delta x=g(t+\Delta t)-g(t)} and then the corresponding Δ y = f ( x + Δ x ) − f ( x ) {\displaystyle \Delta y=f(x+\Delta x)-f(x)} , so that
and applying the standard part we obtain
which is the chain rule.
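Written out (a sketch that ignores the degenerate case Δx = 0, which needs separate care):
{\displaystyle {\frac {dy}{dt}}=\operatorname {st} \left({\frac {\Delta y}{\Delta t}}\right)=\operatorname {st} \left({\frac {\Delta y}{\Delta x}}\cdot {\frac {\Delta x}{\Delta t}}\right)=\operatorname {st} \left({\frac {\Delta y}{\Delta x}}\right)\cdot \operatorname {st} \left({\frac {\Delta x}{\Delta t}}\right)={\frac {dy}{dx}}\cdot {\frac {dx}{dt}}.}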
The full generalization of the chain rule to multi-variable functions (such as f : R m → R n {\displaystyle f:\mathbb {R} ^{m}\to \mathbb {R} ^{n}} ) is rather technical. However, it is simpler to write in the case of functions of the form
where f : R k → R {\displaystyle f:\mathbb {R} ^{k}\to \mathbb {R} } , and g i : R → R {\displaystyle g_{i}:\mathbb {R} \to \mathbb {R} } for each i = 1 , 2 , … , k . {\displaystyle i=1,2,\dots ,k.}
As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
Let f : R k → R {\displaystyle f:\mathbb {R} ^{k}\to \mathbb {R} } , and g i : R → R {\displaystyle g_{i}:\mathbb {R} \to \mathbb {R} } for each i = 1 , 2 , … , k . {\displaystyle i=1,2,\dots ,k.} To write the chain rule for the composition of functions
one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use D-notation, and to denote by
the partial derivative of f with respect to its ith argument, and by
the value of this derivative at z.
With this notation, the chain rule is
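A standard reconstruction of the elided formula:
{\displaystyle {\frac {d}{dx}}f(g_{1}(x),\dots ,g_{k}(x))=\sum _{i=1}^{k}\left(D_{i}f\right)(g_{1}(x),\dots ,g_{k}(x))\cdot g_{i}'(x).}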
If the function f is addition, that is, if
then D 1 f = ∂ f ∂ u = 1 {\textstyle D_{1}f={\frac {\partial f}{\partial u}}=1} and D 2 f = ∂ f ∂ v = 1 {\textstyle D_{2}f={\frac {\partial f}{\partial v}}=1} . Thus, the chain rule gives
For multiplication
the partials are D 1 f = v {\displaystyle D_{1}f=v} and D 2 f = u {\displaystyle D_{2}f=u} . Thus,
The case of exponentiation
is slightly more complicated, as
and, as u v = e v ln u , {\displaystyle u^{v}=e^{v\ln u},}
It follows that
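A standard reconstruction of the resulting formula, combining the two partials D_1 f = v u^{v−1} and D_2 f = u^v ln u:
{\displaystyle {\frac {d}{dx}}\left(u^{v}\right)=v\,u^{v-1}\,{\frac {du}{dx}}+u^{v}\ln u\,{\frac {dv}{dx}}.}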
The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions f : R^m → R^k and g : R^n → R^m, and a point a in R^n. Let Da g denote the total derivative of g at a and Dg(a) f denote the total derivative of f at g(a). These two derivatives are linear transformations R^n → R^m and R^m → R^k, respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of f ∘ g at a:
or for short,
The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.
Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:
or for short,
That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).
The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If k, m, and n are 1, so that f : R → R and g : R → R, then the Jacobian matrices of f and g are 1 × 1. Specifically, they are:
The Jacobian of f ∘ g is the product of these 1 × 1 matrices, so it is f′(g(a))⋅g′(a), as expected from the one-dimensional chain rule. In the language of linear transformations, Da(g) is the function which scales a vector by a factor of g′(a) and Dg(a)(f) is the function which scales a vector by a factor of f′(g(a)). The chain rule says that the composite of these two linear transformations is the linear transformation Da(f ∘ g), and therefore it is the function that scales a vector by f′(g(a))⋅g′(a).
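A short SymPy sketch of the Jacobian form of the rule (the functions below are illustrative choices, not from the source): the Jacobian of f ∘ g equals the product of the Jacobians, with the outer one evaluated at g(x).

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1, u2 = sp.symbols('u1 u2')

g = sp.Matrix([x1 + x2, x1 * x2])       # g : R^2 -> R^2
f = sp.Matrix([sp.sin(u1), u1 * u2])    # f : R^2 -> R^2

Jg = g.jacobian([x1, x2])               # Jacobian matrix of g in x
Jf = f.jacobian([u1, u2])               # Jacobian matrix of f in u

# Chain rule for Jacobians: J(f o g)(x) = Jf(g(x)) * Jg(x).
Jf_at_g = Jf.subs({u1: g[0], u2: g[1]})
composite = f.subs({u1: g[0], u2: g[1]})  # f o g expressed directly in x
assert sp.simplify(composite.jacobian([x1, x2]) - Jf_at_g * Jg) == sp.zeros(2, 2)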
Another way of writing the chain rule is used when f and g are expressed in terms of their components as y = f(u) = (f1(u), …, fk(u)) and u = g(x) = (g1(x), …, gm(x)). In this case, the above rule for Jacobian matrices is usually written as:
The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the ith coordinate direction is found by multiplying the Jacobian matrix by the ith basis vector. By doing this to the formula above, we find:
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
More conceptually, this rule expresses the fact that a change in the xi direction may change all of g1 through gm, and any of these changes may affect f.
In the special case where k = 1, so that f is a real-valued function, this formula simplifies even further:
This can be rewritten as a dot product. Recalling that u = (g1, …, gm), the partial derivative ∂u / ∂xi is also a vector, and the chain rule says that:
Given u(x, y) = x^2 + 2y where x(r, t) = r sin(t) and y(r, t) = sin^2(t), determine the value of ∂u / ∂r and ∂u / ∂t using the chain rule.
and
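The two requested derivatives, reconstructed under the restored exponents u = x^2 + 2y and y = sin^2(t):
{\displaystyle {\frac {\partial u}{\partial r}}={\frac {\partial u}{\partial x}}{\frac {\partial x}{\partial r}}+{\frac {\partial u}{\partial y}}{\frac {\partial y}{\partial r}}=(2x)(\sin t)+(2)(0)=2r\sin ^{2}t,}
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {\partial u}{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial u}{\partial y}}{\frac {\partial y}{\partial t}}=(2x)(r\cos t)+(2)(2\sin t\cos t)=2r^{2}\sin t\cos t+4\sin t\cos t.}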
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If y = f(u) is a function of u = g(x) as above, then the second derivative of f ∘ g is:
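A standard reconstruction of that second derivative (the displayed formula is elided in this dump):
{\displaystyle {\frac {\partial ^{2}(f\circ g)}{\partial x_{i}\,\partial x_{j}}}=\sum _{k}{\frac {\partial y}{\partial u_{k}}}{\frac {\partial ^{2}u_{k}}{\partial x_{i}\,\partial x_{j}}}+\sum _{k,\ell }{\frac {\partial ^{2}y}{\partial u_{k}\,\partial u_{\ell }}}{\frac {\partial u_{k}}{\partial x_{i}}}{\frac {\partial u_{\ell }}{\partial x_{j}}}.}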
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.
One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of f ∘ g is the composite of the derivative of f and the derivative of g. This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.
The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.
In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings f : R → S determines a morphism of Kähler differentials Df : ΩR → ΩS which sends an element dr to d(f(r)), the exterior differential of f(r). The formula D(f ∘ g) = Df ∘ Dg holds in this context as well.
The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a C^∞-manifold to a C^∞-manifold (its tangent bundle) and a C^∞-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula D(f ∘ g) = Df ∘ Dg.
There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) dX_t with a twice-differentiable function f. In Itō's lemma, the derivative of the composite function depends not only on dX_t and the derivative of f but also on the second derivative of f. The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types. | [
{
"paragraph_id": 0,
"text": "In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if h = f ∘ g {\\displaystyle h=f\\circ g} is the function such that h ( x ) = f ( g ( x ) ) {\\displaystyle h(x)=f(g(x))} for every x, then the chain rule is, in Lagrange's notation,",
"title": ""
},
{
"paragraph_id": 1,
"text": "or, equivalently,",
"title": ""
},
{
"paragraph_id": 2,
"text": "The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x (that is, y and z are dependent variables), then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as",
"title": ""
},
{
"paragraph_id": 3,
"text": "and",
"title": ""
},
{
"paragraph_id": 4,
"text": "for indicating at which points the derivatives have to be evaluated.",
"title": ""
},
{
"paragraph_id": 5,
"text": "In integration, the counterpart to the chain rule is the substitution rule.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Intuitively, the chain rule states that knowing the instantaneous rate of change of z relative to y and that of y relative to x allows one to calculate the instantaneous rate of change of z relative to x as the product of the two rates of change.",
"title": "Intuitive explanation"
},
{
"paragraph_id": 7,
"text": "As put by George F. Simmons: \"If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man.\"",
"title": "Intuitive explanation"
},
{
"paragraph_id": 8,
"text": "The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is d z d y = 2. {\\textstyle {\\frac {dz}{dy}}=2.} Similarly, d y d x = 4. {\\textstyle {\\frac {dy}{dx}}=4.} So, the rate of change of the relative positions of the car and the walking man is",
"title": "Intuitive explanation"
},
{
"paragraph_id": 9,
"text": "The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is,",
"title": "Intuitive explanation"
},
{
"paragraph_id": 10,
"text": "or, equivalently,",
"title": "Intuitive explanation"
},
{
"paragraph_id": 11,
"text": "which is also an application of the chain rule.",
"title": "Intuitive explanation"
},
{
"paragraph_id": 12,
"text": "The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of a + b z + c z 2 {\\displaystyle {\\sqrt {a+bz+cz^{2}}}} as the composite of the square root function and the function a + b z + c z 2 {\\displaystyle a+bz+cz^{2}\\!} . He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery.. It is believed that the first \"modern\" version of the chain rule appears in Lagrange’s 1797 Théorie des fonctions analytiques; it also appears in Cauchy’s 1823 Résumé des Leçons données a L’École Royale Polytechnique sur Le Calcul Infinitesimal.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e. the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g {\\displaystyle f\\circ g} is differentiable at c, and the derivative is",
"title": "Statement"
},
{
"paragraph_id": 14,
"text": "The rule is sometimes abbreviated as",
"title": "Statement"
},
{
"paragraph_id": 15,
"text": "If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as:",
"title": "Statement"
},
{
"paragraph_id": 16,
"text": "The points where the derivatives are evaluated may also be stated explicitly:",
"title": "Statement"
},
{
"paragraph_id": 17,
"text": "Carrying the same reasoning further, given n functions f 1 , … , f n {\\displaystyle f_{1},\\ldots ,f_{n}\\!} with the composite function f 1 ∘ ( f 2 ∘ ⋯ ( f n − 1 ∘ f n ) ) {\\displaystyle f_{1}\\circ (f_{2}\\circ \\cdots (f_{n-1}\\circ f_{n}))\\!} , if each function f i {\\displaystyle f_{i}\\!} is differentiable at its immediate input, then the composite function is also differentiable by the repeated application of Chain Rule, where the derivative is (in Leibniz's notation):",
"title": "Statement"
},
{
"paragraph_id": 18,
"text": "The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again.",
"title": "Applications"
},
{
"paragraph_id": 19,
"text": "For concreteness, consider the function",
"title": "Applications"
},
{
"paragraph_id": 20,
"text": "This can be decomposed as the composite of three functions:",
"title": "Applications"
},
{
"paragraph_id": 21,
"text": "So that y = f ( g ( h ( x ) ) ) {\\displaystyle y=f(g(h(x)))} .",
"title": "Applications"
},
{
"paragraph_id": 22,
"text": "Their derivatives are:",
"title": "Applications"
},
{
"paragraph_id": 23,
"text": "The chain rule states that the derivative of their composite at the point x = a is:",
"title": "Applications"
},
{
"paragraph_id": 24,
"text": "In Leibniz's notation, this is:",
"title": "Applications"
},
{
"paragraph_id": 25,
"text": "or for short,",
"title": "Applications"
},
{
"paragraph_id": 26,
"text": "The derivative function is therefore:",
"title": "Applications"
},
{
"paragraph_id": 27,
"text": "Another way of computing this derivative is to view the composite function f ∘ g ∘ h as the composite of f ∘ g and h. Applying the chain rule in this manner would yield:",
"title": "Applications"
},
{
"paragraph_id": 28,
"text": "This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h).",
"title": "Applications"
},
{
"paragraph_id": 29,
"text": "Sometimes, it is necessary to differentiate an arbitrarily long composition of the form f 1 ∘ f 2 ∘ ⋯ ∘ f n − 1 ∘ f n {\\displaystyle f_{1}\\circ f_{2}\\circ \\cdots \\circ f_{n-1}\\circ f_{n}\\!} . In this case, define",
"title": "Applications"
},
{
"paragraph_id": 30,
"text": "where f a . . a = f a {\\displaystyle f_{a\\,.\\,.\\,a}=f_{a}} and f a . . b ( x ) = x {\\displaystyle f_{a\\,.\\,.\\,b}(x)=x} when b < a {\\displaystyle b<a} . Then the chain rule takes the form",
"title": "Applications"
},
{
"paragraph_id": 31,
"text": "or, in the Lagrange notation,",
"title": "Applications"
},
{
"paragraph_id": 32,
"text": "The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule:",
"title": "Applications"
},
{
"paragraph_id": 33,
"text": "To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is − 1 / x 2 {\\displaystyle -1/x^{2}\\!} . By applying the chain rule, the last expression becomes:",
"title": "Applications"
},
{
"paragraph_id": 34,
"text": "which is the usual formula for the quotient rule.",
"title": "Applications"
},
{
"paragraph_id": 35,
"text": "Suppose that y = g(x) has an inverse function. Call its inverse function f so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula",
"title": "Applications"
},
{
"paragraph_id": 36,
"text": "And because the functions f ( g ( x ) ) {\\displaystyle f(g(x))} and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f ( g ( x ) ) {\\displaystyle f(g(x))} is determined by the chain rule. Therefore, we have that:",
"title": "Applications"
},
{
"paragraph_id": 37,
"text": "To express f' as a function of an independent variable y, we substitute f ( y ) {\\displaystyle f(y)} for x wherever it appears. Then we can solve for f'.",
"title": "Applications"
},
{
"paragraph_id": 38,
"text": "For example, consider the function g(x) = e. It has an inverse f(y) = ln y. Because g′(x) = e, the above formula says that",
"title": "Applications"
},
{
"paragraph_id": 39,
"text": "This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x. Its inverse is f(y) = y, which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.",
"title": "Applications"
},
{
"paragraph_id": 40,
"text": "Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that y = f(u) and u = g(x), then the first few derivatives are:",
"title": "Higher derivatives"
},
{
"paragraph_id": 41,
"text": "One proof of the chain rule begins by defining the derivative of the composite function f ∘ g, where we take the limit of the difference quotient for f ∘ g as x approaches a:",
"title": "Proofs"
},
{
"paragraph_id": 42,
"text": "Assume for the moment that g ( x ) {\\displaystyle g(x)\\!} does not equal g ( a ) {\\displaystyle g(a)} for any x {\\displaystyle x} near a {\\displaystyle a} . Then the previous expression is equal to the product of two factors:",
"title": "Proofs"
},
{
"paragraph_id": 43,
"text": "If g {\\displaystyle g} oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) = g(a). For example, this happens near a = 0 for the continuous function g defined by g(x) = 0 for x = 0 and g(x) = x sin(1/x) otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q {\\displaystyle Q} as follows:",
"title": "Proofs"
},
{
"paragraph_id": 44,
"text": "We will show that the difference quotient for f ∘ g is always equal to:",
"title": "Proofs"
},
{
"paragraph_id": 45,
"text": "Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.",
"title": "Proofs"
},
{
"paragraph_id": 46,
"text": "To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and (g(x) − g(a)) / (x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a).",
"title": "Proofs"
},
{
"paragraph_id": 47,
"text": "As for Q(g(x)), notice that Q is defined wherever f is. Furthermore, f is differentiable at g(a) by assumption, so Q is continuous at g(a), by definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)).",
"title": "Proofs"
},
{
"paragraph_id": 48,
"text": "This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a))g′(a).",
"title": "Proofs"
},
{
"paragraph_id": 49,
"text": "Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore",
"title": "Proofs"
},
{
"paragraph_id": 50,
"text": "Here the left-hand side represents the true difference between the value of g at a and at a + h, whereas the right-hand side represents the approximation determined by the derivative plus an error term.",
"title": "Proofs"
},
{
"paragraph_id": 51,
"text": "In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have",
"title": "Proofs"
},
{
"paragraph_id": 52,
"text": "The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set η(0) = 0, then η is continuous at 0.",
"title": "Proofs"
},
{
"paragraph_id": 53,
"text": "Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as h tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of g at a:",
"title": "Proofs"
},
{
"paragraph_id": 54,
"text": "The next step is to use the definition of differentiability of f at g(a). This requires a term of the form f(g(a) + k) for some k. In the above equation, the correct k varies with h. Set kh = g′(a) h + ε(h) h and the right hand side becomes f(g(a) + kh) − f(g(a)). Applying the definition of the derivative gives:",
"title": "Proofs"
},
{
"paragraph_id": 55,
"text": "To study the behavior of this expression as h tends to zero, expand kh. After regrouping the terms, the right-hand side becomes:",
"title": "Proofs"
},
{
"paragraph_id": 56,
"text": "Because ε(h) and η(kh) tend to zero as h tends to zero, the first two bracketed terms tend to zero as h tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a)) g′(a).",
"title": "Proofs"
},
{
"paragraph_id": 57,
"text": "The role of Q in the first proof is played by η in this proof. They are related by the equation:",
"title": "Proofs"
},
{
"paragraph_id": 58,
"text": "The need to define Q at g(a) is analogous to the need to define η at zero.",
"title": "Proofs"
},
{
"paragraph_id": 59,
"text": "Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.",
"title": "Proofs"
},
{
"paragraph_id": 60,
"text": "Under this definition, a function f is differentiable at a point a if and only if there is a function q, continuous at a and such that f(x) − f(a) = q(x)(x − a). There is at most one such function, and if f is differentiable at a then f ′(a) = q(a).",
"title": "Proofs"
},
{
"paragraph_id": 61,
"text": "Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions q, continuous at g(a), and r, continuous at a, and such that,",
"title": "Proofs"
},
{
"paragraph_id": 62,
"text": "and",
"title": "Proofs"
},
{
"paragraph_id": 63,
"text": "Therefore,",
"title": "Proofs"
},
{
"paragraph_id": 64,
"text": "but the function given by h(x) = q(g(x))r(x) is continuous at a, and we get, for this a",
"title": "Proofs"
},
{
"paragraph_id": 65,
"text": "A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.",
"title": "Proofs"
},
{
"paragraph_id": 66,
"text": "If y = f ( x ) {\\displaystyle y=f(x)} and x = g ( t ) {\\displaystyle x=g(t)} then choosing infinitesimal Δ t ≠ 0 {\\displaystyle \\Delta t\\not =0} we compute the corresponding Δ x = g ( t + Δ t ) − g ( t ) {\\displaystyle \\Delta x=g(t+\\Delta t)-g(t)} and then the corresponding Δ y = f ( x + Δ x ) − f ( x ) {\\displaystyle \\Delta y=f(x+\\Delta x)-f(x)} , so that",
"title": "Proofs"
},
{
"paragraph_id": 67,
"text": "and applying the standard part we obtain",
"title": "Proofs"
},
{
"paragraph_id": 68,
"text": "which is the chain rule.",
"title": "Proofs"
},
{
"paragraph_id": 69,
"text": "The full generalization of the chain rule to multi-variable functions (such as f : R m → R n {\\displaystyle f:\\mathbb {R} ^{m}\\to \\mathbb {R} ^{n}} ) is rather technical. However, it is simpler to write in the case of functions of the form",
"title": "Multivariable case"
},
{
"paragraph_id": 70,
"text": "where f : R k → R {\\displaystyle f:\\mathbb {R} ^{k}\\to \\mathbb {R} } , and g i : R → R {\\displaystyle g_{i}:\\mathbb {R} \\to \\mathbb {R} } for each i = 1 , 2 , … , k . {\\displaystyle i=1,2,\\dots ,k.}",
"title": "Multivariable case"
},
{
"paragraph_id": 71,
"text": "As this case occurs often in the study of functions of a single variable, it is worth describing it separately.",
"title": "Multivariable case"
},
{
"paragraph_id": 72,
"text": "Let f : R k → R {\\displaystyle f:\\mathbb {R} ^{k}\\to \\mathbb {R} } , and g i : R → R {\\displaystyle g_{i}:\\mathbb {R} \\to \\mathbb {R} } for each i = 1 , 2 , … , k . {\\displaystyle i=1,2,\\dots ,k.} To write the chain rule for the composition of functions",
"title": "Multivariable case"
},
{
"paragraph_id": 73,
"text": "one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use D-Notation, and to denote by",
"title": "Multivariable case"
},
{
"paragraph_id": 74,
"text": "the partial derivative of f with respect to its ith argument, and by",
"title": "Multivariable case"
},
{
"paragraph_id": 75,
"text": "the value of this derivative at z.",
"title": "Multivariable case"
},
{
"paragraph_id": 76,
"text": "With this notation, the chain rule is",
"title": "Multivariable case"
},
{
"paragraph_id": 77,
"text": "If the function f is addition, that is, if",
"title": "Multivariable case"
},
{
"paragraph_id": 78,
"text": "then D 1 f = ∂ f ∂ u = 1 {\\textstyle D_{1}f={\\frac {\\partial f}{\\partial u}}=1} and D 2 f = ∂ f ∂ v = 1 {\\textstyle D_{2}f={\\frac {\\partial f}{\\partial v}}=1} . Thus, the chain rule gives",
"title": "Multivariable case"
},
{
"paragraph_id": 79,
"text": "For multiplication",
"title": "Multivariable case"
},
{
"paragraph_id": 80,
"text": "the partials are D 1 f = v {\\displaystyle D_{1}f=v} and D 2 f = u {\\displaystyle D_{2}f=u} . Thus,",
"title": "Multivariable case"
},
{
"paragraph_id": 81,
"text": "The case of exponentiation",
"title": "Multivariable case"
},
{
"paragraph_id": 82,
"text": "is slightly more complicated, as",
"title": "Multivariable case"
},
{
"paragraph_id": 83,
"text": "and, as u v = e v ln u , {\\displaystyle u^{v}=e^{v\\ln u},}",
"title": "Multivariable case"
},
{
"paragraph_id": 84,
"text": "It follows that",
"title": "Multivariable case"
},
{
"paragraph_id": 85,
"text": "The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions f : R → R and g : R → R, and a point a in R. Let Da g denote the total derivative of g at a and Dg(a) f denote the total derivative of f at g(a). These two derivatives are linear transformations R → R and R → R, respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of f ∘ g at a:",
"title": "Multivariable case"
},
{
"paragraph_id": 86,
"text": "or for short,",
"title": "Multivariable case"
},
{
"paragraph_id": 87,
"text": "The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.",
"title": "Multivariable case"
},
{
"paragraph_id": 88,
"text": "Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:",
"title": "Multivariable case"
},
{
"paragraph_id": 89,
"text": "or for short,",
"title": "Multivariable case"
},
{
"paragraph_id": 90,
"text": "That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).",
"title": "Multivariable case"
},
{
"paragraph_id": 91,
"text": "The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If k, m, and n are 1, so that f : R → R and g : R → R, then the Jacobian matrices of f and g are 1 × 1. Specifically, they are:",
"title": "Multivariable case"
},
{
"paragraph_id": 92,
"text": "The Jacobian of f ∘ g is the product of these 1 × 1 matrices, so it is f′(g(a))⋅g′(a), as expected from the one-dimensional chain rule. In the language of linear transformations, Da(g) is the function which scales a vector by a factor of g′(a) and Dg(a)(f) is the function which scales a vector by a factor of f′(g(a)). The chain rule says that the composite of these two linear transformations is the linear transformation Da(f ∘ g), and therefore it is the function that scales a vector by f′(g(a))⋅g′(a).",
"title": "Multivariable case"
},
{
"paragraph_id": 93,
"text": "Another way of writing the chain rule is used when f and g are expressed in terms of their components as y = f(u) = (f1(u), …, fk(u)) and u = g(x) = (g1(x), …, gm(x)). In this case, the above rule for Jacobian matrices is usually written as:",
"title": "Multivariable case"
},
{
"paragraph_id": 94,
"text": "The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the ith coordinate direction is found by multiplying the Jacobian matrix by the ith basis vector. By doing this to the formula above, we find:",
"title": "Multivariable case"
},
{
"paragraph_id": 95,
"text": "Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:",
"title": "Multivariable case"
},
{
"paragraph_id": 96,
"text": "More conceptually, this rule expresses the fact that a change in the xi direction may change all of g1 through gm, and any of these changes may affect f.",
"title": "Multivariable case"
},
{
"paragraph_id": 97,
"text": "In the special case where k = 1, so that f is a real-valued function, then this formula simplifies even further:",
"title": "Multivariable case"
},
{
"paragraph_id": 98,
"text": "This can be rewritten as a dot product. Recalling that u = (g1, …, gm), the partial derivative ∂u / ∂xi is also a vector, and the chain rule says that:",
"title": "Multivariable case"
},
{
"paragraph_id": 99,
"text": "Given u(x, y) = x + 2y where x(r, t) = r sin(t) and y(r,t) = sin(t), determine the value of ∂u / ∂r and ∂u / ∂t using the chain rule.",
"title": "Multivariable case"
},
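Assuming the squared terms reconstructed above (u = x^2 + 2y, x = r sin t, y = sin^2 t), the chain rule gives ∂u/∂r = 2x · sin t + 2 · 0 = 2r sin^2 t and ∂u/∂t = 2x · r cos t + 2 · 2 sin t cos t. A short sketch checking this against finite differences (the helper names are illustrative):

    import math

    def u_of(r, t):
        x = r * math.sin(t)       # x(r, t) = r sin t
        y = math.sin(t) ** 2      # y(r, t) = sin^2 t
        return x ** 2 + 2 * y     # u(x, y) = x^2 + 2y

    def du_dr(r, t):
        # (du/dx)(dx/dr) + (du/dy)(dy/dr) = 2x sin t + 0 = 2 r sin^2 t
        return 2 * r * math.sin(t) ** 2

    def du_dt(r, t):
        # (du/dx)(dx/dt) + (du/dy)(dy/dt) = 2 r^2 sin t cos t + 4 sin t cos t
        return (2 * r ** 2 + 4) * math.sin(t) * math.cos(t)

    r, t, h = 1.5, 0.8, 1e-6
    assert abs((u_of(r + h, t) - u_of(r - h, t)) / (2 * h) - du_dr(r, t)) < 1e-6
    assert abs((u_of(r, t + h) - u_of(r, t - h)) / (2 * h) - du_dt(r, t)) < 1e-6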
{
"paragraph_id": 100,
"text": "and",
"title": "Multivariable case"
},
{
"paragraph_id": 101,
"text": "Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If y = f(u) is a function of u = g(x) as above, then the second derivative of f ∘ g is:",
"title": "Multivariable case"
},
{
"paragraph_id": 102,
"text": "All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.",
"title": "Further generalizations"
},
{
"paragraph_id": 103,
"text": "One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of f ∘ g is the composite of the derivative of f and the derivative of g. This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.",
"title": "Further generalizations"
},
{
"paragraph_id": 104,
"text": "The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.",
"title": "Further generalizations"
},
{
"paragraph_id": 105,
"text": "In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings f : R → S determines a morphism of Kähler differentials Df : ΩR → ΩS which sends an element dr to d(f(r)), the exterior differential of f(r). The formula D(f ∘ g) = Df ∘ Dg holds in this context as well.",
"title": "Further generalizations"
},
{
"paragraph_id": 106,
"text": "The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a C-manifold to a C-manifold (its tangent bundle) and a C-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula D(f ∘ g) = Df ∘ Dg.",
"title": "Further generalizations"
},
{
"paragraph_id": 107,
"text": "There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) dXt with a twice-differentiable function f. In Itō's lemma, the derivative of the composite function depends not only on dXt and the derivative of f but also on the second derivative of f. The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types.",
"title": "Further generalizations"
}
]
| In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if h = f ∘ g is the function such that h(x) = f(g(x)) for every x, then the chain rule is, in Lagrange's notation, h′(x) = f′(g(x)) g′(x), or, equivalently, (f ∘ g)′ = (f′ ∘ g) · g′. The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x, then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as dz/dx = dz/dy · dy/dx, and dz/dx|x = dz/dy|y(x) · dy/dx|x for indicating at which points the derivatives have to be evaluated. In integration, the counterpart to the chain rule is the substitution rule. | 2001-09-02T14:21:23Z | 2023-12-27T19:22:14Z | [
"Template:About",
"Template:Short description",
"Template:Mvar",
"Template:Math",
"Template:Springer",
"Template:Calculus",
"Template:Cn",
"Template:Annotated link",
"Template:Reflist",
"Template:Cite book",
"Template:MathWorld",
"Template:Citation needed",
"Template:Main",
"Template:Cite journal",
"Template:Calculus topics",
"Template:See also"
]
| https://en.wikipedia.org/wiki/Chain_rule |
6,115 | P versus NP problem | If the solution to a problem is easy to check for correctness, must the problem be easy to solve?
The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for "nondeterministic polynomial time".
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven.
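To make "quickly checkable" concrete, here is a minimal sketch (not from the article) of a polynomial-time checker for a completed standard 9 x 9 Sudoku grid; it touches each cell a constant number of times, whereas no comparably fast general-purpose solver is known:

    def is_valid_sudoku(grid):
        # grid: 9 lists of 9 ints; valid iff every row, column and
        # 3x3 box contains the digits 1..9 exactly once
        digits = set(range(1, 10))
        rows = [set(row) for row in grid]
        cols = [set(col) for col in zip(*grid)]
        boxes = [{grid[3 * br + i][3 * bc + j] for i in range(3) for j in range(3)}
                 for br in range(3) for bc in range(3)]
        return all(group == digits for group in rows + cols + boxes)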
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, in which he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply anything about whether P = NP is true, as stated by Gasarch himself: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP problem can be reduced in polynomial time and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into an instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many such NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems have been shown to be NP-complete, and no fast algorithm for any of them is known.
Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: given a description of a Turing machine M guaranteed to halt in polynomial time, does there exist a polynomial-size input that M will accept? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^{p(n)}) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^{2^{cn}} for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
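The shift from decision to counting can be shown with subset-sum (a toy sketch; both functions below use brute force purely to state the two questions, not to answer them efficiently):

    from itertools import combinations

    def has_subset_sum(nums, target):
        # NP-style decision question: is there any subset summing to target?
        return any(sum(c) == target
                   for r in range(len(nums) + 1)
                   for c in combinations(nums, r))

    def count_subset_sums(nums, target):
        # corresponding #P-style question: how many such subsets are there?
        return sum(sum(c) == target
                   for r in range(len(nums) + 1)
                   for c in combinations(nums, r))

    print(has_subset_sum([3, 5, 7, 10], 15))     # True  ({5, 10} works)
    print(count_subset_sums([3, 5, 7, 10], 15))  # 2     ({5, 10} and {3, 5, 7})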
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
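For contrast with Babai's quasi-polynomial algorithm, the obvious brute-force test (a toy sketch, not a serious algorithm) checks all n! relabelings and so takes factorial time:

    from itertools import permutations

    def isomorphic(edges_a, edges_b, n):
        # brute force over all n! vertex relabelings of an undirected graph
        ea = {frozenset(e) for e in edges_a}
        eb = {frozenset(e) for e in edges_b}
        if len(ea) != len(eb):
            return False
        return any({frozenset((p[u], p[v])) for u, v in ea} == eb
                   for p in permutations(range(n)))

    # a 4-cycle relabeled is still a 4-cycle
    print(isomorphic([(0, 1), (1, 2), (2, 3), (3, 0)],
                     [(0, 2), (2, 1), (1, 3), (3, 0)], 4))  # True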
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time roughly exp(O(n^{1/3} (log n)^{2/3}))
to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
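The decision form of factoring is easy to state in code (a sketch; trial division below takes time exponential in the bit-length of the input, so it says nothing about membership in P):

    def has_factor_below(n, k):
        # decide whether n has a nontrivial divisor d with 1 < d < k
        return any(n % d == 0 for d in range(2, min(k, n)))

    print(has_factor_below(91, 10))  # True: 7 divides 91
    print(has_factor_below(97, 97))  # False: 97 is prime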
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats.
First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, thus rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) (using Knuth's up-arrow notation), where h is the number of vertices in H.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
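A standard illustration of "NP-complete yet often tractable in practice" is the dynamic-programming solution to the 0/1 knapsack problem (a sketch; its O(n · capacity) running time is pseudo-polynomial, since capacity is exponential in its bit-length, but it is fast on the moderate capacities common in real instances):

    def knapsack(values, weights, capacity):
        # best[c] = best total value achievable with total weight <= c
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):  # downward: each item used once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220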
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience.
If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found.
On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made:
The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution of Fermat's Last Theorem also shows that very simple questions may be settled only by very deep theories.
Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN≠NLIN.
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
An example of a field that could be upended by a solution showing P = NP is cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P–NP inequivalence.
On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable, it could spur considerable advances in life sciences and biotechnology.
But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:
If there really were a machine with φ(n) ∼ k⋅n (or even ∼ k⋅n^2), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number n so large that when the machine does not deliver a result, it makes no sense to think more about the problem.
Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says:
... it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems.
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:
[...] if you imagine a number M that's finite but incredibly large—like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"—then there's a humongous number of possible algorithms that do n^M bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail. My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive.
A proof showing that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
Also, P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP:
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP.
The P = NP problem can be restated in terms of the expressibility of certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). These algorithms do not qualify as polynomial time, however, because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (stated without citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
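A hedged Python sketch of this Levin-style universal search (the enumeration `programs`, its calling convention, and the toy `brute_force` entry are illustrative assumptions, not Levin's exact construction):

    from itertools import chain, combinations, count

    def brute_force(nums, target, steps):
        # one toy entry in the program enumeration: tries subsets, gives up
        # once its step budget is exhausted
        subsets = chain.from_iterable(
            combinations(nums, r) for r in range(len(nums) + 1))
        for tried, c in enumerate(subsets):
            if tried >= steps:
                return None
            if sum(c) == target:
                return list(c)
        return None

    def levin_search(nums, target, programs):
        # run ever more programs with ever larger step budgets; accept only
        # after verifying the returned certificate, so a wrong program
        # cannot cause a wrong answer; may loop forever on "no" instances
        for budget in count(1):
            for program in programs[:budget]:
                answer = program(nums, target, budget)
                if answer is not None and sum(answer) == target:
                    return answer

    print(levin_search([3, 5, 7, 10], 15, [brute_force]))  # e.g. [5, 10]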
If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2^b − 1 other programs first.
Conceptually speaking, a decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine. That is,
P = { L : L = L(M) for some deterministic polynomial-time Turing machine M }, where L(M) = { w ∈ Σ* : M accepts w },
and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies the following two conditions: M halts on all inputs w, and there exists k such that T(n) ∈ O(n^k), where T(n) denotes the maximum number of steps that M takes on any input of length n.
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of certificate and verifier. Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of "verifier" is defined as follows.
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation R ⊂ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied: for all x ∈ Σ*, x ∈ L if and only if there exists y ∈ Σ* with (x, y) ∈ R and |y| ∈ O(|x|^k); and the language LR = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable in polynomial time by a deterministic Turing machine.
A Turing machine that decides LR is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
In general, a verifier does not have to be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
Let COMPOSITE = { x ∈ N | x = pq for integers p, q > 1 }.
Clearly, the question of whether a given x is a composite is equivalent to the question of whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
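A minimal sketch of such a verifier (not from the article): the certificate for membership in COMPOSITE is a claimed nontrivial divisor, and the check is a single modular reduction, which is polynomial in the bit-length of x:

    def composite_verifier(x, certificate):
        # accept iff certificate is a nontrivial divisor of x
        d = certificate
        return 1 < d < x and x % d == 0

    print(composite_verifier(221, 13))  # True: 221 = 13 * 17
    print(composite_verifier(221, 11))  # False: 11 does not divide 221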
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied: L ∈ NP, and every problem L′ in NP is polynomial-time reducible to L.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of purported proofs from 1986 to 2016, of which 62 were proofs of P = NP, 50 were proofs of P ≠ NP, 2 were proofs the problem is unprovable, and one was a proof that it is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted.
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In the sixth episode of The Simpsons' seventh season "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X" revolves around Sherlock and Watson investigating the murders of mathematicians who were attempting to solve P versus NP. | [
{
"paragraph_id": 0,
"text": "If the solution to a problem is easy to check for correctness, must the problem be easy to solve?",
"title": ""
},
{
"paragraph_id": 1,
"text": "The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is \"P\" or \"class P\". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for \"nondeterministic polynomial time\".",
"title": ""
},
{
"paragraph_id": 3,
"text": "An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.",
"title": ""
},
{
"paragraph_id": 5,
"text": "It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven.",
"title": "Example"
},
{
"paragraph_id": 7,
"text": "The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper \"The complexity of theorem proving procedures\" (and independently by Leonid Levin in 1973).",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, in which he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).",
"title": "Context"
},
{
"paragraph_id": 10,
"text": "In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).",
"title": "Context"
},
{
"paragraph_id": 11,
"text": "In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:",
"title": "Context"
},
{
"paragraph_id": 12,
"text": "Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply anything about whether P = NP is true, as stated by Gasarch himself: \"This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era.\"",
"title": "Context"
},
{
"paragraph_id": 13,
"text": "To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP problem can be reduced in polynomial time and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as \"tough\" as any other problem in NP.",
"title": "NP-completeness"
},
{
"paragraph_id": 14,
"text": "NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.",
"title": "NP-completeness"
},
{
"paragraph_id": 15,
"text": "For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into an instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many such NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems have been shown to be NP-complete, and no fast algorithm for any of them is known.",
"title": "NP-completeness"
},
{
"paragraph_id": 16,
"text": "Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: given a description of a Turing machine M guaranteed to halt in polynomial time, does there exist a polynomial-size input that M will accept? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.",
"title": "NP-completeness"
},
{
"paragraph_id": 17,
"text": "The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense \"the same problem\".",
"title": "NP-completeness"
},
{
"paragraph_id": 18,
"text": "Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.",
"title": "Harder problems"
},
{
"paragraph_id": 19,
"text": "The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2 2 c n {\\displaystyle 2^{2^{cn}}} for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.",
"title": "Harder problems"
},
{
"paragraph_id": 20,
"text": "It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks \"Are there any solutions?\", the corresponding #P problem asks \"How many solutions are there?\". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.",
"title": "Harder problems"
},
{
"paragraph_id": 21,
"text": "In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.",
"title": "Problems in NP not known to be in P or NP-complete"
},
{
"paragraph_id": 22,
"text": "The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.",
"title": "Problems in NP not known to be in P or NP-complete"
},
{
"paragraph_id": 23,
"text": "The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time",
"title": "Problems in NP not known to be in P or NP-complete"
},
{
"paragraph_id": 24,
"text": "to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.",
"title": "Problems in NP not known to be in P or NP-complete"
},
{
"paragraph_id": 25,
"text": "All of the above discussion has assumed that P means \"easy\" and \"not in P\" means \"difficult\", an assumption known as Cobham's thesis. It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats.",
"title": "Does P mean \"easy\"?"
},
{
"paragraph_id": 26,
"text": "First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, thus rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2 ↑↑ ( 2 ↑↑ ( 2 ↑↑ ( h / 2 ) ) ) {\\displaystyle 2\\uparrow \\uparrow (2\\uparrow \\uparrow (2\\uparrow \\uparrow (h/2)))} (using Knuth's up-arrow notation), and where h is the number of vertices in H.",
"title": "Does P mean \"easy\"?"
},
{
"paragraph_id": 27,
"text": "On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.",
"title": "Does P mean \"easy\"?"
},
{
"paragraph_id": 28,
"text": "Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.",
"title": "Does P mean \"easy\"?"
},
{
"paragraph_id": 29,
"text": "Cook provides a restatement of the problem in The P Versus NP Problem as \"Does P = NP?\" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 30,
"text": "It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 31,
"text": "If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in \"creative leaps\", no fundamental gap between solving a problem and recognizing the solution once it's found.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 32,
"text": "On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made:",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 33,
"text": "The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. [...] The resolution of Fermat's Last Theorem also shows that very simple questions may be settled only by very deep theories.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 34,
"text": "Being attached to a speculation is not a good guide to research planning. One should always try both directions of every problem. Prejudice has caused famous mathematicians to fail to solve famous problems whose solution was opposite to their expectations, even though they had developed all the methods required.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 35,
"text": "When one substitutes \"linear time on a multitape Turing machine\" for \"polynomial time\" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN≠NLIN.",
"title": "Reasons to believe P ≠ NP or P = NP"
},
{
"paragraph_id": 36,
"text": "One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.",
"title": "Consequences of solution"
},
{
"paragraph_id": 37,
"text": "A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.",
"title": "Consequences of solution"
},
{
"paragraph_id": 38,
"text": "It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.",
"title": "Consequences of solution"
},
{
"paragraph_id": 39,
"text": "An example of a field that could be upended by a solution showing P = NP is cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:",
"title": "Consequences of solution"
},
{
"paragraph_id": 40,
"text": "These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P–NP inequivalence.",
"title": "Consequences of solution"
},
{
"paragraph_id": 41,
"text": "On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable, it could spur considerable advances in life sciences and biotechnology.",
"title": "Consequences of solution"
},
{
"paragraph_id": 42,
"text": "But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:",
"title": "Consequences of solution"
},
{
"paragraph_id": 43,
"text": "If there really were a machine with φ(n) ∼ k⋅n (or even ∼ k⋅n), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number n so large that when the machine does not deliver a result, it makes no sense to think more about the problem.",
"title": "Consequences of solution"
},
{
"paragraph_id": 44,
"text": "Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says:",
"title": "Consequences of solution"
},
{
"paragraph_id": 45,
"text": "... it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems.",
"title": "Consequences of solution"
},
{
"paragraph_id": 46,
"text": "Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a \"reasonable\" size, would essentially end this struggle.",
"title": "Consequences of solution"
},
{
"paragraph_id": 47,
"text": "Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:",
"title": "Consequences of solution"
},
{
"paragraph_id": 48,
"text": "[...] if you imagine a number M that's finite but incredibly large—like say the number 10↑↑↑↑3 discussed in my paper on \"coping with finiteness\"—then there's a humongous number of possible algorithms that do n bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail. My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive.",
"title": "Consequences of solution"
},
{
"paragraph_id": 49,
"text": "A proof showing that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.",
"title": "Consequences of solution"
},
{
"paragraph_id": 50,
"text": "Also, P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical \"worlds\" that could result from different possible resolutions to the average-case complexity question. These range from \"Algorithmica\", where P = NP and problems like SAT can be solved efficiently in all instances, to \"Cryptomania\", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The \"world\" where P ≠ NP but all problems in NP are tractable in the average case is called \"Heuristica\" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.",
"title": "Consequences of solution"
},
{
"paragraph_id": 51,
"text": "Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required.",
"title": "Results about difficulty of proof"
},
{
"paragraph_id": 52,
"text": "As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP:",
"title": "Results about difficulty of proof"
},
{
"paragraph_id": 53,
"text": "These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.",
"title": "Results about difficulty of proof"
},
{
"paragraph_id": 54,
"text": "These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP.",
"title": "Results about difficulty of proof"
},
{
"paragraph_id": 55,
"text": "The P = NP problem can be restated in terms of expressible certain classes of logical statements, as a result of work in descriptive complexity.",
"title": "Logical characterizations"
},
{
"paragraph_id": 56,
"text": "Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.",
"title": "Logical characterizations"
},
{
"paragraph_id": 57,
"text": "Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question \"is P a proper subset of NP\" can be reformulated as \"is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?\". The word \"existential\" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).",
"title": "Logical characterizations"
},
{
"paragraph_id": 58,
"text": "No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances are not polynomial. The following algorithm, due to Levin (without any citation), is such an example below. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:",
"title": "Polynomial-time algorithms"
},
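A minimal Python sketch of the Levin-style universal search behind the algorithm just described. The helpers program(i) (the i-th program in some fixed effective enumeration) and run_steps (a step-bounded interpreter) are hypothetical stand-ins, so this is an illustration of the dovetailing idea rather than a complete implementation:

    from collections import Counter

    def program(i):
        raise NotImplementedError  # hypothetical: the i-th program in a fixed enumeration

    def run_steps(p, x, n):
        raise NotImplementedError  # hypothetical: run p on input x for n steps; return its output, or None

    def accepts_subset_sum(nums, target):
        phase = 1
        while True:
            # Dovetail: in each phase, program i gets 2**(phase - i) steps, so every
            # program eventually receives an unbounded total step budget.
            for i in range(1, phase + 1):
                out = run_steps(program(i), (nums, target), 2 ** (phase - i))
                # Trust no program: verify any claimed certificate before accepting.
                if out is not None and sum(out) == target and not (Counter(out) - Counter(nums)):
                    return True  # may instead loop forever on "no" instances
            phase += 1

If P = NP, some fixed program in the enumeration solves SUBSET-SUM in polynomial time, and the interleaving then accepts every member of SUBSET-SUM after only a polynomial slowdown.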
{
"paragraph_id": 59,
"text": "If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. \"Accepting\" means it gives \"yes\" answers in polynomial time, but is allowed to run forever when the answer is \"no\" (also known as a semi-algorithm).",
"title": "Polynomial-time algorithms"
},
{
"paragraph_id": 60,
"text": "This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least 2 − 1 other programs first.",
"title": "Polynomial-time algorithms"
},
{
"paragraph_id": 61,
"text": "Conceptually speaking, a decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs \"yes\" or \"no\". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length n in at most cn steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine. That is,",
"title": "Formal definitions"
},
{
"paragraph_id": 62,
"text": "where",
"title": "Formal definitions"
},
{
"paragraph_id": 63,
"text": "and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies the following two conditions:",
"title": "Formal definitions"
},
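The display formulas elided in the three entries above can be reconstructed from the standard definitions (a hedged reconstruction; notation follows the surrounding text):

    P = { L | L = L(M) for some deterministic polynomial-time Turing machine M },

where

    L(M) = { w ∈ Σ* | M accepts w },

and the two conditions on M are: (1) M halts on every input w; and (2) there exists k such that T_M(n) ∈ O(nᵏ), where T_M(n) denotes the maximum number of steps M takes on any input of length n.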
{
"paragraph_id": 64,
"text": "NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of certificate and verifier. Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of \"verifier\" is defined as follows.",
"title": "Formal definitions"
},
{
"paragraph_id": 65,
"text": "Let L be a language over a finite alphabet, Σ.",
"title": "Formal definitions"
},
{
"paragraph_id": 66,
"text": "L ∈ NP if, and only if, there exists a binary relation R ⊂ Σ ∗ × Σ ∗ {\\displaystyle R\\subset \\Sigma ^{*}\\times \\Sigma ^{*}} and a positive integer k such that the following two conditions are satisfied:",
"title": "Formal definitions"
},
{
"paragraph_id": 67,
"text": "A Turing machine that decides LR is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L.",
"title": "Formal definitions"
},
{
"paragraph_id": 68,
"text": "In general, a verifier does not have to be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.",
"title": "Formal definitions"
},
{
"paragraph_id": 69,
"text": "Let",
"title": "Formal definitions"
},
{
"paragraph_id": 70,
"text": "Clearly, the question of whether a given x is a composite is equivalent to the question of whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).",
"title": "Formal definitions"
},
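A short, hedged Python illustration (not from the article) of what such a verification looks like: a nontrivial divisor serves as a certificate of compositeness that can be checked in time polynomial in the number of digits of x.

    def verify_composite(x: int, y: int) -> bool:
        # Certificate y is a claimed nontrivial divisor of x; a single
        # division on n-bit numbers checks it in polynomial time.
        return 1 < y < x and x % y == 0

    assert verify_composite(91, 7)      # 91 = 7 * 13, so 7 certifies membership
    assert not verify_composite(97, 5)  # 97 is prime; no valid certificate exists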
{
"paragraph_id": 71,
"text": "COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.",
"title": "Formal definitions"
},
{
"paragraph_id": 72,
"text": "There are many equivalent ways of describing NP-completeness.",
"title": "Formal definitions"
},
{
"paragraph_id": 73,
"text": "Let L be a language over a finite alphabet Σ.",
"title": "Formal definitions"
},
{
"paragraph_id": 74,
"text": "L is NP-complete if, and only if, the following two conditions are satisfied:",
"title": "Formal definitions"
},
{
"paragraph_id": 75,
"text": "Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.",
"title": "Formal definitions"
},
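As a hedged illustration (not from the article) of such a reduction, the textbook polynomial-time reduction from CLIQUE to INDEPENDENT-SET simply complements the edge set, since a vertex set is a clique in G exactly when it is independent in the complement of G:

    from itertools import combinations

    def reduce_clique_to_independent_set(vertices, edges, k):
        # Build the complement graph: a pair is an edge there iff it is
        # not an edge of the original graph.
        all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
        present = {frozenset(e) for e in edges}
        return vertices, all_pairs - present, k  # an INDEPENDENT-SET instance

    # The triangle {a, b, c} is a 3-clique, so the reduced instance has an
    # independent set of size 3 in the complement graph.
    print(reduce_clique_to_independent_set(["a", "b", "c", "d"],
                                           [("a", "b"), ("b", "c"), ("a", "c")], 3))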
{
"paragraph_id": 76,
"text": "While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 62 purported proofs of P = NP from 1986 to 2016, of which 50 were proofs of P ≠ NP, 2 were proofs the problem is unprovable, and one was a proof that it is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted.",
"title": "Claimed solutions"
},
{
"paragraph_id": 77,
"text": "The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.",
"title": "Popular culture"
},
{
"paragraph_id": 78,
"text": "In the sixth episode of The Simpsons' seventh season \"Treehouse of Horror VI\", the equation P = NP is seen shortly after Homer accidentally stumbles into the \"third dimension\".",
"title": "Popular culture"
},
{
"paragraph_id": 79,
"text": "In the second episode of season 2 of Elementary, \"Solve for X\" revolves around Sherlock and Watson investigating the murders of mathematicians who were attempting to solve P versus NP.",
"title": "Popular culture"
}
]
| The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved. The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm. The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. | 2001-08-26T23:04:30Z | 2023-12-27T12:39:28Z | [
"Template:Short description",
"Template:Cite news",
"Template:Pp-move-indef",
"Template:Blockquote",
"Template:Quote",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Use dmy dates",
"Template:Sister project links",
"Template:Cbignore",
"Template:Cite conference",
"Template:Webarchive",
"Template:Unsolved",
"Template:See also",
"Template:Cite journal",
"Template:Garey-Johnson",
"Template:Cite magazine",
"Template:Citation needed",
"Template:Main",
"Template:Cite tech report",
"Template:Millennium Problems",
"Template:Math",
"Template:ComplexityClasses",
"Template:'",
"Template:Cite web",
"Template:Main article",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/P_versus_NP_problem |
6,117 | Charles Sanders Peirce | Charles Sanders Peirce (/pɜːrs/ PURSS; September 10, 1839 – April 19, 1914) was an American scientist, mathematician, logician, and philosopher who is sometimes known as "the father of pragmatism". According to philosopher Paul Weiss, Peirce was "the most original and versatile of America's philosophers and America's greatest logician". Bertrand Russell wrote "he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".
Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories of relations and quantification. C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." For Peirce, logic also encompassed much of what is now called epistemology and the philosophy of science. He saw logic as the formal branch of semiotics or study of signs, of which he is a founder, which foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Peirce's study of signs also included a tripartite theory of predication.
Additionally, he defined the concept of abductive reasoning, as well as rigorously formulating mathematical induction and deductive reasoning. He was one of the founders of statistics. As early as 1886, he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.
For metaphysics, Peirce was an "objective idealist" in the tradition of German philosopher Immanuel Kant as well as a scholastic realist about universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeled synechism and tychism respectively. Peirce believed an epistemic fallibilism and anti-skepticism went along with these views.
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of mathematics and astronomy at Harvard University. At age 12, Charles read his older brother's copy of Richard Whately's Elements of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning.
He suffered from his late teens onward from a nervous condition then known as "facial neuralgia", which would today be diagnosed as trigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain "he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper". Its consequences may have led to the social isolation of his later life.
Peirce went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 the Lawrence Scientific School awarded him a Bachelor of Science degree, Harvard's first summa cum laude chemistry degree. His academic record was otherwise undistinguished. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This proved fateful, because Eliot, while President of Harvard (1869–1909—a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.
Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey, which in 1878 was renamed the United States Coast and Geodetic Survey, where he enjoyed his highly influential father's protection until the latter's death in 1880. At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to determine small local variations in the Earth's gravity.
This employment exempted Peirce from having to take part in the American Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with the Confederacy. No members of the Peirce family volunteered or enlisted. Peirce grew up in a home where white supremacy was taken for granted, and slavery was considered natural. Peirce's father had described himself as a secessionist until the outbreak of the war, after which he became a Union partisan, providing donations to the Sanitary Commission, the leading Northern war charity.
Peirce liked to use the following syllogism to illustrate the unreliability of traditional forms of logic (for the first premise arguably assumes the conclusion):
All Men are equal in their political rights. Negroes are Men. Therefore, negroes are equal in political rights to whites.
He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. The Survey sent him to Europe five times, first in 1871 as part of a group sent to observe a solar eclipse. There, he sought out Augustus De Morgan, William Stanley Jevons, and William Kingdon Clifford, British mathematicians and logicians whose turn of mind resembled his own.
From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way. In 1872 he founded the Metaphysical Club, a conversational philosophical club that Peirce, the future Supreme Court Justice Oliver Wendell Holmes Jr., the philosopher and psychologist William James, amongst others, formed in January 1872 in Cambridge, Massachusetts, and dissolved in December 1872. Other members of the club included Chauncey Wright, John Fiske, Francis Ellingwood Abbot, Nicholas St. John Green, and Joseph Bangs Warner. The discussions eventually birthed Peirce's notion of pragmatism.
On April 20, 1877, he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency, the kind of definition employed from 1960 to 1983.
In 1879 Peirce developed the Peirce quincuncial projection, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of n sides (known as the Schwarz–Christoffel mapping).
During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedic Century Dictionary. In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds. In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.
In 1879, Peirce was appointed lecturer in logic at Johns Hopkins University, which had strong departments in areas that interested him, such as philosophy (Royce and Dewey completed their Ph.D.s at Hopkins), psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce's work on mathematics and logic). His Studies in Logic by Members of the Johns Hopkins University (1883) contained works by himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom were his graduate students. Peirce's nontenured position at Hopkins was the only academic appointment he ever held.
Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day, Simon Newcomb. Newcomb had been a favourite student of Peirce's father; although "no doubt quite bright", "like Salieri in Peter Shaffer’s Amadeus he also had just enough talent to recognize he was not a genius and just enough pettiness to resent someone who was". Additionally "an intensely devout and literal-minded Christian of rigid moral standards", he was appalled by what he considered Peirce's personal shortcomings. Peirce's efforts may also have been hampered by what Brent characterizes as "his difficult personality". In contrast, Keith Devlin believes that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.
Peirce's personal life undoubtedly worked against his professional success. After his first wife, Harriet Melusina Fay ("Zina"), left him in 1875, Peirce, while still legally married, became involved with Juliette, whose last name (given variously as Froissy and Pourtalai) and nationality (she spoke French) remain uncertain. When his divorce from Zina became final in 1883, he married Juliette. That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884. Over the years Peirce sought academic employment at various universities without success. He had no children by either marriage.
In 1887, Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km²) of rural land near Milford, Pennsylvania, which never yielded an economic return. There he had an 1854 farmhouse remodeled to his design. The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives, Charles writing prolifically, with much of his work remaining unpublished to this day (see Works). Living beyond their means soon led to grave financial and legal difficulties. Charles spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while. Several people, including his brother James Mills Peirce and his neighbors, relatives of Gifford Pinchot, settled his debts and paid his property taxes and mortgage.
Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries, and reviews for The Nation (with whose editor, Wendell Phillips Garrison, he became friendly). He did translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing. He began but did not complete several books. In 1888, President Grover Cleveland appointed him to the Assay Commission.
From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago, who introduced Peirce to editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal The Monist, which eventually published at least 14 articles by Peirce. He wrote many texts in James Mark Baldwin's Dictionary of Philosophy and Psychology (1901–1905); half of those credited to him appear to have been written actually by Christine Ladd-Franklin under his supervision. He applied in 1902 to the newly formed Carnegie Institution for a grant to write a systematic book describing his life's work. The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.
The one who did the most to help Peirce in these desperate times was his old friend William James, dedicating his Will to Believe (1897) to Peirce, and arranging for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903). Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him. It has been believed that this was also why Peirce used "Santiago" ("St. James" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references).
Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.
Bertrand Russell (1959) wrote "Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever". Russell and Whitehead's Principia Mathematica, published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later). A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own "process" thinking. (On Peirce and process metaphysics, see Lowe 1964.) Karl Popper viewed Peirce as "one of the greatest philosophers of all times". Yet Peirce's achievements were not immediately recognized. His imposing contemporaries William James and Josiah Royce admired him, and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate effect.
The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled Chance, Love, and Logic (1923), and the author of the first bibliography of Peirce's scattered writings. John Dewey studied under Peirce at Johns Hopkins. From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is much influenced by Peirce. The publication of the first six volumes of Collected Papers (1931–1935), the most important event to date in Peirce studies and one that Cohen made possible by raising the needed funds, did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its Transactions, an academic quarterly specializing in Peirce's pragmatism and American philosophy, has appeared since 1965. (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.)
By 1943 such was Peirce's reputation, in the US at least, that Webster's Biographical Dictionary said that Peirce was "now regarded as the most original thinker and greatest logician of his time".
In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, “the mathematician and scientist,” culminating in Eisele (1976, 1979, 1985). Beginning around 1960, the philosopher and historian of ideas Max Fisch (1900–1995) emerged as an authority on Peirce (Fisch, 1986). He includes many of his relevant articles in a survey (Fisch 1986: 422–448) of the impact of Peirce's thought through 1983.
Peirce has gained an international following, marked by university research centers devoted to Peirce studies and pragmatism in Brazil (CeneP/CIEP), Finland (HPRC and Commens), Germany (Wirth's group, Hoffman's and Otte's group, and Deuser's and Härle's group), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was the University of Toronto, thanks in part to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana University – Purdue University Indianapolis (home of the Peirce Edition Project, PEP) and at Pennsylvania State University.
Currently, considerable interest is being taken in Peirce's ideas by researchers wholly outside the arena of academic philosophy. The interest comes from industry, business, technology, intelligence organizations, and the military; and it has resulted in the existence of a substantial number of agencies, institutes, businesses, and laboratories in which ongoing research into and development of Peircean concepts are being vigorously undertaken.
In recent years, Peirce's trichotomy of signs is exploited by a growing number of practitioners for marketing and design tasks.
John Deely writes that Peirce was the last of the "moderns" and "first of the postmoderns". He lauds Peirce's doctrine of signs as a contribution to the dawn of the Postmodern epoch. Deely additionally comments that "Peirce stands...in a position analogous to the position occupied by Augustine as last of the Western Fathers and first of the medievals".
Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such as Proceedings of the American Academy of Arts and Sciences, the Journal of Speculative Philosophy, The Monist, Popular Science Monthly, the American Journal of Mathematics, Memoirs of the National Academy of Sciences, The Nation, and others. See Articles by Peirce, published in his lifetime for an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime was Photometric Researches (1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he edited Studies in Logic (1883), containing chapters by himself and his graduate students. Besides lectures during his years (1879–1884) as lecturer in Logic at Johns Hopkins, he gave at least nine series of lectures, many now published; see Lectures by Peirce.
After Peirce's death, Harvard University obtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964. Only after Richard Robin (1967) catalogued this Nachlass did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages, mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989). Reportedly the papers remain in unsatisfactory condition.
The first published anthology of Peirce's articles was the one-volume Chance, Love and Logic: Philosophical Essays, edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included:
1931–1958: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes. Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.
1975–1987: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online.
1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print.
1977: Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby (2nd edition 2001), included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of the Collected Papers, and the 20-odd pre-1890 items included so far in the Writings. Edited by Charles S. Hardwick with James Cook, out of print.
1982–now: Writings of Charles S. Peirce, A Chronological Edition (W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of the Collected Papers led Max Fisch and others in the 1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work. Writings of Charles S. Peirce, 8 was published in November 2010; and work continues on Writings of Charles S. Peirce, 7, 9, and 11. In print and online.
1985: Historical Perspectives on Peirce's Logic of Science: A History of Science, 2 volumes. Auspitz has said, "The extent of Peirce's immersion in the science of his day is evident in his reviews in the Nation [...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science", referring latterly to Historical Perspectives. Edited by Carolyn Eisele, back in print.
1992: Reasoning and the Logic of Things collects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.
1992–1998: The Essential Peirce (EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) by Peirce Edition Project editors, in print.
1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce's 1903 Harvard "Lectures on Pragmatism" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in The Essential Peirce, 2. Edited by Patricia Ann Turisi, in print.
2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print.
Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity.
He worked on applied mathematics in economics, engineering, and map projections, and was especially active in probability and statistics.
Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died:
In 1860 he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des Unendlichen.
In 1880–1881 he showed how Boolean algebra could be carried out via repeated application of a single sufficient binary operation (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's Laws.)
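A small Python sketch (an illustration, not Peirce's notation) of that sufficiency claim: negation, disjunction, and conjunction can all be defined from NOR alone, and an exhaustive truth-table check confirms the definitions.

    def nor(a: bool, b: bool) -> bool:
        return not (a or b)           # the single primitive: "neither a nor b"

    def not_(a):
        return nor(a, a)              # ¬a = a NOR a

    def or_(a, b):
        return not_(nor(a, b))        # a ∨ b = ¬(a NOR b)

    def and_(a, b):
        return nor(not_(a), not_(b))  # a ∧ b = ¬a NOR ¬b

    for a in (False, True):
        for b in (False, True):
            assert not_(a) == (not a)
            assert or_(a, b) == (a or b)
            assert and_(a, b) == (a and b)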
In 1881 he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.
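In modern notation (a standard gloss, not Peirce's own symbolism), the definition he anticipated reads:

    S is Dedekind-infinite  ⇔  there exists an injective f : S → S with f(S) ⊊ S,

that is, S can be put into one-to-one correspondence with a proper subset of itself; S is Dedekind-finite otherwise.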
In 1885 he distinguished between first-order and second-order quantification. In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000, pp. 132–133).
In 1886, he saw that Boolean calculations could be carried out via electrical switches, anticipating Claude Shannon by more than 50 years. By the later 1890s he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning.
Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared in The New Elements of Mathematics by Charles S. Peirce (1976), edited by mathematician Carolyn Eisele.
Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science about drawing conclusions necessary and otherwise.
Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity (absolute chance) is real (see Tychism on his view). Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability before Karl Popper. Peirce (sometimes with Joseph Jastrow) investigated the probability judgments of experimental subjects, "perhaps the very first" elicitation and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statistics.
Peirce was one of the founders of statistics. He formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, Charles Sanders Peirce and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884 (Hacking 1990:205) (before Ronald A. Fisher). He invented optimal design for experiments on gravity, in which he "corrected the means". He used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father. He introduced the terms "confidence" and "likelihood" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and Ian Hacking 1990.)
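A hedged Python sketch of the core of the Peirce–Jastrow design (an illustration of the idea, not their actual protocol): the order of two nearly equal stimuli is randomized on every trial, the subject sees only the stimuli, and the proportion of correct judgments is recorded across repeated measures.

    import random

    def run_trials(judge, w_light=100.0, w_heavy=100.5, n_trials=200, seed=0):
        rng = random.Random(seed)
        correct = 0
        for _ in range(n_trials):
            heavy_first = rng.random() < 0.5           # randomized, hidden order
            first, second = ((w_heavy, w_light) if heavy_first
                             else (w_light, w_heavy))
            says_first_heavier = judge(first, second)  # the subject sees only the stimuli
            correct += (says_first_heavier == heavy_first)
        return correct / n_trials

    def noisy_judge(first, second, noise=0.4):
        # A simulated subject whose discrimination blurs with sensory noise.
        return first + random.gauss(0, noise) > second + random.gauss(0, noise)

    print(run_trials(noisy_judge))  # estimated proportion of correct judgments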
It is not sufficiently recognized that Peirce's career was that of a scientist, not a philosopher; and that during his lifetime he was known and valued chiefly as a scientist, only secondarily as a logician, and scarcely at all as a philosopher. Even his work in philosophy and logic will not be understood until this fact becomes a standing premise of Peircean studies.
Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Immanuel Kant's Critique of Pure Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines, including mathematics, logic, philosophy, statistics, astronomy, metrology, geodesy, experimental psychology, economics, linguistics, and the history and philosophy of science. This work has enjoyed renewed interest and approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration of how philosophy can be applied effectively to human problems.
Peirce's philosophy includes (see below in related sections) a pervasive three-category system: belief that truth is immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism), logic as formal semiotic on signs, on arguments, and on inquiry's ways—including philosophical pragmatism (which he founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of continuity and of absolute chance, mechanical necessity, and creative love. In his work, fallibilism and pragmatism may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce, fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity, and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–457).
For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at any waking moment, and does not settle questions by resorting to special experiences. He divided such philosophy into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics, and logic), and (3) metaphysics; his views on them are discussed in order below.
Peirce's recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it:
Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object.
As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877) and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism. Peirce (CP 5.11–12), like James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems. Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical moods.
In 1905 Peirce coined the new name pragmaticism "for the precise purpose of expressing the original definition", saying that "all went happily" with James's and F.C.S. Schiller's variant uses of the old name "pragmatism" and that he coined the new name because of the old name's growing use in "literary journals, where it gets abused". Yet he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini's declaration of pragmatism's indefinability. Peirce in any case regarded his views that truth is immutable and infinity is real, as being opposed by the other pragmatists, but he remained allied with them on other issues.
Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce's pragmatism is a method of clarification of conceptions of objects. It equates any conception of an object to a conception of that object's effects to a general extent of the effects' conceivable implications for informed practice. It is a method of sorting out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his "Illustrations of the Logic of Science" series of articles. In the second one, "How to Make Our Ideas Clear", Peirce discussed three grades of clearness of conception:
By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of the presuppositions of reasoning in general. In clearness's second grade (the "nominal" grade), he defined truth as a sign's correspondence to its object, and the real as the object of such correspondence, such that truth and the real are independent of that which you or I or any actual, definite community of inquirers think. After that needful but confined step, next in clearness's third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance for the long-run validity of the rule of induction. Peirce argued that even to argue against the independence and discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth with just such independence and discoverability.
Peirce said that a conception's meaning consists in "all general modes of rational conduct" implied by "acceptance" of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such consequent general modes is the whole meaning. His pragmatism does not equate a conception's meaning, its intellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda), outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism also bears no resemblance to "vulgar" pragmatism, which misleadingly connotes a ruthless and Machiavellian search for mercenary or political advantage. Instead the pragmatic maxim is the heart of his pragmatism as a method of experimentational mental reflection arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use and improvement of verification.
Peirce's pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry, which he variously called speculative, general, formal or universal rhetoric or simply methodeutic. He applied his pragmatism as a method throughout his work.
In "The Fixation of Belief" (1877), Peirce gives his take on the psychological origin and aim of inquiry. On his view, individuals are motivated to inquiry by desire to escape the feelings of anxiety and unease which Peirce takes to be characteristic of the state of doubt. Doubt is described by Peirce as an "uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief." Peirce uses words like "irritation" to describe the experience of being in doubt and to explain why he thinks we find such experiences to be motivating. The irritating feeling of doubt is appeased, Peirce says, through our efforts to achieve a settled state of satisfaction with what we land on as our answer to the question which led to that doubt in the first place. This settled state, namely, belief, is described by Peirce as "a calm and satisfactory state which we do not wish to avoid." Our efforts to achieve the satisfaction of belief, by whichever methods we may pursue, are what Peirce calls "inquiry". Four methods which Peirce describes as having been actually pursued throughout the history of thought are summarized below in the section after next.
Critical common-sensism, treated by Peirce as a consequence of his pragmatism, is his combination of Thomas Reid's common-sense philosophy with a fallibilism that recognizes that propositions of our more or less vague common sense, now indubitable, may later come into question, for example because of transformations of our world through science. It includes efforts to work up, by way of tests, genuine doubts about a core group of common indubitables that vary slowly if at all.
In "The Fixation of Belief" (1877), Peirce described inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like, and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome, or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from least to most successful:
Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and traditional sentiment, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. Scientific method excels over the others finally by being deliberately designed to arrive—eventually—at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.
Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predictions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident truths, or rationalism; and induction from experiential phenomena, or empiricism.
Based on his critique of three modes of argument, and different from either foundationalism or coherentism, Peirce's approach seeks to justify claims by a three-phase dynamic of inquiry: (1) abduction, the genesis of an explanatory hypothesis, with no prior assurance of its truth; (2) deduction, the clarification of the hypothesis into testable practical implications; and (3) induction, the testing and evaluation of the hypothesis against its deduced consequences.
Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce's pragmatism was the first time the scientific method was proposed as an epistemology for philosophical questions.
A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This is an operational notion of truth used by scientists.
Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning.
Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first be denoted. Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief. No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the integrity of inquiry strongly limits the effective modularity of its principal components.
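The division of labor just described lends itself to a schematic rendering. Below is a minimal sketch (Python; the function names, the placeholder test, and the "proportion supported" framing are illustrative assumptions, not Peirce's notation), showing abduction proposing, deduction explicating, and induction evaluating:

```python
# Schematic sketch of the inquiry cycle described above: abduction proposes
# an explanation, deduction clarifies it into implications, induction tests.

def abduce(surprising_fact: str) -> str:
    """Guess a hypothesis that would make the surprising fact a matter of course."""
    return f"hypothesis accounting for {surprising_fact!r}"

def deduce(hypothesis: str) -> list[str]:
    """Explicate the hypothesis into conditional predictions open to test."""
    return [f"{hypothesis} predicts outcome {i}" for i in (1, 2, 3)]

def test(prediction: str) -> bool:
    return True  # placeholder for an actual observation or experiment

def induce(predictions: list[str]) -> float:
    """Infer, on the basis of tests, the proportion of truth in the hypothesis."""
    results = [test(p) for p in predictions]
    return sum(results) / len(results)

hypothesis = abduce("an anomalous observation")
print(hypothesis, "- proportion supported:", induce(deduce(hypothesis)))
```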
Peirce's outline of the scientific method in §III–IV of "A Neglected Argument" is summarized below (except as otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).
1. Abductive (or retroductive) phase. Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and, for example, at any stage of an inquiry already under way). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicated phenomenon. The modicum of success in our guesses far exceeds that of random luck, and seems born of attunement to nature by developed or inherent instincts, especially insofar as best guesses are optimally plausible and simple in the sense of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and it has no substitute in expediting us toward new truths. In 1903, Peirce called pragmatism "the logic of abduction". Coordinative method leads from abducting a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if not costly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be selected for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions; see the first sketch after this list), breadth, or incomplexity. One can discover only that which would be revealed through one's sufficient experience anyway, and so the point is to expedite it; the economy of research demands the leap, so to speak, of abduction and governs its art.
2. Deductive phase. Two stages: explication, an analysis of the hypothesis so as to render its parts as distinct as possible; and demonstration, deductive argumentation tracing out the hypothesis's conditional consequences.
3. Inductive phase. Evaluation of the hypothesis, inferring from observational or experimental tests of its deduced consequences. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real "is only the object of the final opinion to which sufficient investigation would lead"; in other words, anything excluding such a process would never be real. Induction involving the ongoing accumulation of evidence follows "a method which, sufficiently persisted in", will "diminish the error below any predesignate degree" (illustrated in the second sketch after this list). Three stages:
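Peirce's Twenty Questions example of cautious selection, mentioned in the abductive phase above, admits a simple quantitative gloss. A minimal sketch (Python; the idealization that every question exactly halves the field is our assumption):

```python
# Cautious strategy in Peirce's Twenty Questions example: each well-chosen
# yes/no question can halve the remaining candidates.
candidates = 2 ** 20      # 1,048,576 initial possibilities
for _ in range(20):
    candidates //= 2      # an ideally cautious question splits the field evenly
print(candidates)         # 1 -- a single candidate remains after twenty questions
```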
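The inductive-phase claim that a method, sufficiently persisted in, will diminish the error below any predesignate degree can likewise be illustrated by simulation. A minimal sketch (Python; the true proportion and the sample sizes are arbitrary illustrative choices):

```python
import random

# Estimating a proportion by ongoing accumulation of evidence: the error of
# the estimate shrinks as sampling is "sufficiently persisted in".
random.seed(0)
TRUE_PROPORTION = 0.3  # the proportion of truth the method is settling on

for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < TRUE_PROPORTION for _ in range(n))
    print(f"n = {n:>9}: error = {abs(hits / n - TRUE_PROPORTION):.5f}")
```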
Peirce drew on the methodological implications of the four incapacities—no genuine introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the absolutely incognizable—to attack philosophical Cartesianism, of which he said that:
On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication, involving three universal categories that Peirce developed in response to reading Aristotle, Immanuel Kant, and G. W. F. Hegel, categories that Peirce applied throughout his work for the rest of his life. Peirce scholars generally regard the "New List" as foundational or breaking the ground for Peirce's "architectonic", his blueprint for a pragmatic philosophy. In the categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How To Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work.
"On a New List of Categories" is cast as a Kantian deduction; it is short but dense and difficult to summarize. The following table is compiled from that and later works. In 1893, Peirce restated most of it for a less advanced audience.
*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.
In 1918 the logician C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century."
Beginning with his first paper on the "Logic of Relatives" (1870), Peirce extended the theory of relations pioneered by Augustus De Morgan. Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra.
Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow's association with Tarski at City College of New York.
On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982) documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege. They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics.
Peirce's work on formal logic had admirers besides Ernst Schröder:
A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982); the Introduction in Nathan Houser et al. (1997); and Randall Dipert's chapter in Cheryl Misak (2004).
Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more basic than metaphysics, and as "the art of devising methods of research". More generally, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited. Peirce called (with no sense of deprecation) "mathematics of logic" much of the kind of thing which, in current research and applications, is called simply "logic". He was productive in both (philosophical) logic and logic's mathematics, which were connected deeply in his work and thought.
Peirce argued that logic is formal semiotic: the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that "all this universe is perfused with signs, if it is not composed exclusively of signs", along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs and sign processes ("semiosis") such as the inquiry process. He divided logic into: (1) speculative grammar, or stechiology, on how signs can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal rhetoric, or methodeutic, the philosophical theory of inquiry, including pragmatism.
In his "F.R.L." [First Rule of Logic] (1899), Peirce states that the first, and "in one sense, the sole", rule of reason is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to think. So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of theories:
...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.
Peirce adds that method and economy are best in research, but that no outright sin inheres in trying any theory, in the sense that investigation via its trial adoption can proceed unimpeded and undiscouraged, and that "the one unpardonable offence" is a philosophical barricade against truth's advance, an offense to which "metaphysicians in all ages have shown themselves the most addicted". Peirce holds in many writings that logic precedes metaphysics (ontological, religious, and physical).
Peirce goes on to list four common barriers to inquiry: (1) Assertion of absolute certainty; (2) maintaining that something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into refusals to set up any of the listed barriers. Peirce elsewhere argues (1897) that logic's presupposition of fallibilism leads at length to the view that chance and continuity are very real (tychism and synechism).
The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic; presuppositions, for instance, that truth and the real do not depend on your or my opinion of them but do depend on representational relation and consist in the destined end of investigation taken far enough (see below). He describes such ideas as, collectively, hopes which, in particular cases, one is unable seriously to doubt.
In three articles in 1868–1869, Peirce rejected mere verbal or hyperbolic doubt and first or ultimate principles, and argued that we have (as he numbered them): (1) no power of introspection, all knowledge of the internal world coming by hypothetical reasoning from known external facts; (2) no power of intuition in the sense of cognition without logical determination by previous cognitions; (3) no power of thinking without signs; and (4) no conception of the absolutely incognizable.
(The above sense of the term "intuition" is almost Kant's, said Peirce. It differs from the current looser sense that encompasses instinctive or anyway half-conscious inference.)
Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes of reasoning, and the falsity of philosophical Cartesianism (see below).
Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself and later said that to "dismiss make-believes" is a prerequisite for pragmatism.
Peirce sought, through his wide-ranging studies over the decades, formal philosophical ways to articulate thought's processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference, and, as its culmination, a theory of inquiry for the task of saying 'how science works' and devising research methods. This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way to the principles of all methods. Influences radiate from points on parallel lines of inquiry in Aristotle's work, in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in On Interpretation; and the differentiation of inference into three modes that are commonly translated into English as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by Aristotle), which Peirce regarded as involving the other three modes.
Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic sign relation, and defined semiosis as "action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs". As to signs in thought, Peirce emphasized the reverse: "To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs."
Peirce held that all thought is in signs, issuing in and from interpretation, where sign is the word for the broadest variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the work of crystals or bees—the focus is on sign action in general rather than on psychology, linguistics, or social studies (fields which he also pursued).
Inquiry is a kind of inference process, a manner of thinking and semiosis. Global divisions of ways for phenomena to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of inquiry on semiotics' three levels:
Peirce often uses examples from common experience, but defines and discusses such things as assertion and interpretation in terms of philosophical logic. In a formal vein, Peirce said:
On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings something, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no more involves any reference to human thought than does the definition of a line as the place within which a particle lies during a lapse of time. It is from this definition that I deduce the principles of logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of Weierstrassian severity, and that is perfectly evident. The word "formal" in the definition is also defined.
Peirce's theory of signs is among the most complex semiotic theories, owing to the generality of its claim: anything is a sign—not absolutely as itself, but instead in some relation or other. The sign relation is the key. It defines three roles encompassing (1) the sign, (2) the sign's subject matter, called its object, and (3) the sign's meaning or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further interpretants.
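The triadic structure can be made concrete by modeling the three roles directly. A minimal sketch (Python; the class, its field names, and the smoke example are illustrative assumptions, not Peirce's terminology):

```python
from dataclasses import dataclass
from typing import Optional

# The triadic sign relation: a sign stands for an object to an interpretant,
# and the interpretant is itself a sign that may lead to a further interpretant.
@dataclass
class Sign:
    vehicle: str                    # the sign itself, e.g. a word, symptom, diagram
    obj: str                        # the sign's object, its subject matter
    interpretant: Optional["Sign"]  # a further sign, e.g. a translation or inference

smoke = Sign(
    vehicle="smoke on the horizon",
    obj="a fire",
    interpretant=Sign(
        vehicle="the thought 'that smoke means fire'",
        obj="a fire",       # the interpretant represents the same object, as a sign of it
        interpretant=None,  # in principle the chain continues indefinitely
    ),
)

node: Optional[Sign] = smoke
while node is not None:
    print(f"{node.vehicle!r} stands for {node.obj!r}")
    node = node.interpretant
```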
Extension × intension = information. Two traditional approaches to sign relation, necessary though insufficient, are the way of extension (a sign's objects, also called breadth, denotation, or application) and the way of intension (the objects' characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or connotation). Peirce adds a third, the way of information, including change of information, to integrate the other two approaches into a unified whole. For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies.
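A worked illustration of the equation above, treating breadth and depth as simple counts (a simplification: Peirce's quantities are not strictly numeric, so the figures below are purely illustrative):

```latex
% Information as the product of breadth (extension) and depth (intension).
\[ I = B \times D \]
% Holding a term's information fixed at I = 12: deepening its intension from
% D = 2 to D = 3 characteristics forces its breadth down from B = 6 to B = 4.
\[ B = \frac{I}{D}: \qquad \frac{12}{2} = 6 \quad\longrightarrow\quad \frac{12}{3} = 4 \]
```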
Determination. A sign depends on its object in such a way as to represent its object—the object enables and, in a sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object—an object determines a sign to determine an interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic. For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents something as a sign representing the object. The object (be it a quality or fact or law or even fictional) determines the sign to an interpretant through one's collateral experience with the object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an absent object. Peirce used the word "determine" not in a strictly deterministic sense, but in a sense of "specializes", bestimmt, involving variable amount, like an influence. Peirce came to define representation and interpretation in terms of (triadic) determination. The object determines the sign to determine another sign—the interpretant—to be related to the object as the sign is related to the object, hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.
Peirce held there are exactly three basic elements in semiosis (sign action):
Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign denotes, the mind needs some experience of that sign's object, experience outside of, and collateral to, that sign or sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all in much the same terms.
Among Peirce's many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its interpretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce's three phenomenological categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.
I. Qualisign, sinsign, legisign (also called tone, token, type, and also called potisign, actisign, famisign): This typology classifies every sign according to the sign's own phenomenological category—the qualisign is a quality, a possibility, a "First"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a "Second"; and the legisign is a habit, a rule, a representational relation, a "Third".
II. Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign's way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual connection to its object, and the symbol by a habit or rule for its interpretant.
III. Rheme, dicisign, argument (also called sumisign, dicisign, suadisign; also seme, pheme, delome; and regarded as very broadened versions of the traditional term, proposition, and argument): This typology classifies every sign according to the category which the interpretant attributes to the sign's way of denoting its object—the rheme, for example a term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural element of inference.
Every sign belongs to one class or another within (I) and within (II) and within (III). Thus each of the three typologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality, and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified at this level of analysis.
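The count of ten can be checked mechanically. In the sketch below (Python; the numeric encoding and the ordering constraint are our formalization of the restrictions just described, with Firstness = 1, Secondness = 2, Thirdness = 3), the stated lacks mean that no later parameter may outrank an earlier one:

```python
from itertools import product

TYPOLOGY_I   = {1: "qualisign", 2: "sinsign",  3: "legisign"}   # the sign itself
TYPOLOGY_II  = {1: "icon",      2: "index",    3: "symbol"}     # how it denotes its object
TYPOLOGY_III = {1: "rheme",     2: "dicisign", 3: "argument"}   # how it stands to its interpretant

def possible(i: int, ii: int, iii: int) -> bool:
    # No habit-taking or singular reaction in a quality, and no habit-taking
    # in a singular reaction: each parameter is bounded by the one before it.
    return iii <= ii <= i

classes = [t for t in product((1, 2, 3), repeat=3) if possible(*t)]
for i, ii, iii in classes:
    print(TYPOLOGY_I[i], TYPOLOGY_II[ii], TYPOLOGY_III[iii])
print(len(classes))  # 10 classes, not the 27 unconstrained combinations
```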
Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference—abduction, deduction, and induction—in his "critique of arguments" or "logic proper". Peirce also called abduction "retroduction", "presumption", and, earliest of all, "hypothesis". He characterized it as guessing and as inference to an explanatory hypothesis. He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in "Deduction, Induction, and Hypothesis" (1878). He does this by rearranging the rule (Barbara's major premise), the case (Barbara's minor premise), and the result (Barbara's conclusion):
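The rearrangement can be exhibited with the bag-of-beans example Peirce used in that 1878 paper. A minimal sketch (Python; the dictionary framing is ours, while the three propositions follow Peirce's example):

```python
# Barbara's three components, in Peirce's 1878 bean example.
rule   = "All the beans from this bag are white."
case   = "These beans are from this bag."
result = "These beans are white."

# Each mode of inference is a different rearrangement of premises and conclusion.
modes = {
    "Deduction":  ((rule, case),   result),  # rule + case   => result
    "Induction":  ((case, result), rule),    # case + result => rule
    "Hypothesis": ((rule, result), case),    # rule + result => case (abduction)
}

for name, (premises, conclusion) in modes.items():
    print(name)
    for premise in premises:
        print("  premise:   ", premise)
    print("  conclusion:", conclusion)
```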
In "A Theory of Probable Inference" (Studies in Logic, 1883), Peirce equated hypothetical inference with the induction of characters of objects (as he had in effect done before). Eventually dissatisfied, by 1900 he distinguished them once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and comprehension as being less basic than he had earlier thought. In 1903 he presented the following logical form for abductive inference:
The surprising fact, C, is observed;
But if A were true, C would be a matter of course;
Hence, there is reason to suspect that A is true.
The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. "Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be." Peirce did not remain quite convinced that one logical form covers all abduction. In his methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical explanation, deductive prediction, inductive testing.
Peirce did not write extensively in aesthetics and ethics, but came by 1902 to hold that aesthetics, ethics, and logic, in that order, comprise the normative sciences. He characterized aesthetics as the study of the good (grasped as the admirable), and thus of the ends governing all conduct and thought.
Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics.
On the issue of universals, Peirce was a scholastic realist, declaring the reality of generals as early as 1868. According to Peirce, the category he called "thirdness", comprising the more general facts about the world, consists of extra-mental realities. Regarding modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just how positively real the modalities are. In his 1897 "The Logic of Relatives" he wrote:
I formerly defined the possible as that which in a given state of information (real or feigned) we do not know not to be true. But this definition today seems to me only a twisted phrase which, by means of two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not true, because we see they are impossible.
Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional propositions about how they would behave under certain circumstances.
Continuity and synechism are central in Peirce's philosophy: "I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy".
From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum; that a true continuum is the real subject matter of analysis situs (topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—any Aleph number (any infinite multitude as he called it) of instants.
In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): "It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude. From now on, there are different kinds of continua, which have different properties."
Peirce believed in God, and characterized such belief as founded in an instinct explorable in musing over the worlds of ideas, brute facts, and evolving habits—a belief in God not as an actual or existent being (in Peirce's sense of those words), but all the same as a real being. In "A Neglected Argument for the Reality of God" (1908), Peirce sketches, for God's reality, an argument to a hypothesis of God as the Necessary Being. He describes the hypothesis in terms of how it would tend to develop and become compelling in musement and inquiry by a normal person who is led, by the hypothesis, to consider as purposed the features of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such purposefulness will "stand or fall with the hypothesis". Meanwhile, according to Peirce, the hypothesis, in supposing an "infinitely incomprehensible" being, starts off at odds with its own nature as a purportedly true conception, and so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though God as the Necessary Being is not vague or growing; still, the hypothesis will hold it to be more false to say the opposite, that God is purposeless. Peirce also argued that the will is free and (see Synechism) that there is at least an attenuated kind of immortality.
Peirce held the view, which he called objective idealism, that "matter is effete mind, inveterate habits becoming physical laws". Peirce observed that "Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop".
Peirce asserted the reality of (1) absolute chance (his tychist view), (2) mechanical necessity (anancist view), and (3) that which he called the law of love (agapist view), echoing his categories Firstness, Secondness, and Thirdness, respectively. He held that fortuitous variation (which he also called "sporting"), mechanical necessity, and creative love are the three modes of evolution (modes called "tychasm", "anancasm", and "agapasm") of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution; the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that overall he was a synechist, holding with reality of continuity, especially of space, time, and law.
Peirce outlined two fields, "Cenoscopy" and "Science of Review", both of which he called philosophy. Both included philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:
Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are of interest both as a map for navigating his philosophy and as an accomplished polymath's survey of research in his time. | [
{
"paragraph_id": 0,
"text": "Charles Sanders Peirce (/pɜːrs/ PURSS; September 10, 1839 – April 19, 1914) was an American scientist, mathematician, logician, and philosopher who is sometimes known as \"the father of pragmatism\". According to philosopher Paul Weiss, Peirce was \"the most original and versatile of America's philosophers and America's greatest logician\". Bertrand Russell wrote \"he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories of relations and quantification. C. I. Lewis wrote, \"The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century.\" For Peirce, logic also encompassed much of what is now called epistemology and the philosophy of science. He saw logic as the formal branch of semiotics or study of signs, of which he is a founder, which foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Peirce's study of signs also included a tripartite theory of predication.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Additionally, he defined the concept of abductive reasoning, as well as rigorously formulating mathematical induction and deductive reasoning. He was one of the founders of statistics. As early as 1886, he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.",
"title": ""
},
{
"paragraph_id": 3,
"text": "For metaphysics, Peirce was an \"objective idealist\" in the tradition of German philosopher Immanuel Kant as well as a scholastic realist about universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeled synechism and tychism respectively. Peirce believed an epistemic fallibilism and anti-skepticism went along with these views.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of mathematics and astronomy at Harvard University. At age 12, Charles read his older brother's copy of Richard Whately's Elements of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "He suffered from his late teens onward from a nervous condition then known as \"facial neuralgia\", which would today be diagnosed as trigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain \"he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper\". Its consequences may have led to the social isolation of his later life.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Peirce went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 the Lawrence Scientific School awarded him a Bachelor of Science degree, Harvard's first summa cum laude chemistry degree. His academic record was otherwise undistinguished. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This proved fateful, because Eliot, while President of Harvard (1869–1909—a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey, which in 1878 was renamed the United States Coast and Geodetic Survey, where he enjoyed his highly influential father's protection until the latter's death in 1880. At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to determine small local variations in the Earth's gravity.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "This employment exempted Peirce from having to take part in the American Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with the Confederacy. No members of the Peirce family volunteered or enlisted. Peirce grew up in a home where white supremacy was taken for granted, and slavery was considered natural. Peirce's father had described himself as a secessionist until the outbreak of the war, after which he became a Union partisan, providing donations to the Sanitary Commission, the leading Northern war charity.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "Peirce liked to use the following syllogism to illustrate the unreliability of traditional forms of logic (for the first premise arguably assumes the conclusion):",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "All Men are equal in their political rights. Negroes are Men. Therefore, negroes are equal in political rights to whites.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. The Survey sent him to Europe five times, first in 1871 as part of a group sent to observe a solar eclipse. There, he sought out Augustus De Morgan, William Stanley Jevons, and William Kingdon Clifford, British mathematicians and logicians whose turn of mind resembled his own.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way. In 1872 he founded the Metaphysical Club, a conversational philosophical club that Peirce, the future Supreme Court Justice Oliver Wendell Holmes Jr., the philosopher and psychologist William James, amongst others, formed in January 1872 in Cambridge, Massachusetts, and dissolved in December 1872. Other members of the club included Chauncey Wright, John Fiske, Francis Ellingwood Abbot, Nicholas St. John Green, and Joseph Bangs Warner. The discussions eventually birthed Peirce's notion of pragmatism.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "On April 20, 1877, he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency, the kind of definition employed from 1960 to 1983.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "In 1879 Peirce developed Peirce quincuncial projection, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of n sides (known as the Schwarz–Christoffel mapping).",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedic Century Dictionary. In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds. In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "In 1879, Peirce was appointed lecturer in logic at Johns Hopkins University, which had strong departments in areas that interested him, such as philosophy (Royce and Dewey completed their Ph.D.s at Hopkins), psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce's work on mathematics and logic). His Studies in Logic by Members of the Johns Hopkins University (1883) contained works by himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom were his graduate students. Peirce's nontenured position at Hopkins was the only academic appointment he ever held.",
"title": "Biography"
},
{
"paragraph_id": 17,
"text": "Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day, Simon Newcomb. Newcomb had been a favourite student of Peirce's father; although \"no doubt quite bright\", \"like Salieri in Peter Shaffer’s Amadeus he also had just enough talent to recognize he was not a genius and just enough pettiness to resent someone who was\". Additionally \"an intensely devout and literal-minded Christian of rigid moral standards\", he was appalled by what he considered Peirce's personal shortcomings. Peirce's efforts may also have been hampered by what Brent characterizes as \"his difficult personality\". In contrast, Keith Devlin believes that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.",
"title": "Biography"
},
{
"paragraph_id": 18,
"text": "Peirce's personal life undoubtedly worked against his professional success. After his first wife, Harriet Melusina Fay (\"Zina\"), left him in 1875, Peirce, while still legally married, became involved with Juliette, whose last name, given variously as Froissy and Pourtalai, and nationality (she spoke French) remains uncertain. When his divorce from Zina became final in 1883, he married Juliette. That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884. Over the years Peirce sought academic employment at various universities without success. He had no children by either marriage.",
"title": "Biography"
},
{
"paragraph_id": 19,
"text": "In 1887, Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km) of rural land near Milford, Pennsylvania, which never yielded an economic return. There he had an 1854 farmhouse remodeled to his design. The Peirces named the property \"Arisbe\". There they lived with few interruptions for the rest of their lives, Charles writing prolifically, with much of his work remaining unpublished to this day (see Works). Living beyond their means soon led to grave financial and legal difficulties. Charles spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while. Several people, including his brother James Mills Peirce and his neighbors, relatives of Gifford Pinchot, settled his debts and paid his property taxes and mortgage.",
"title": "Biography"
},
{
"paragraph_id": 20,
"text": "Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries, and reviews for The Nation (with whose editor, Wendell Phillips Garrison, he became friendly). He did translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing. He began but did not complete several books. In 1888, President Grover Cleveland appointed him to the Assay Commission.",
"title": "Biography"
},
{
"paragraph_id": 21,
"text": "From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago, who introduced Peirce to editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal The Monist, which eventually published at least 14 articles by Peirce. He wrote many texts in James Mark Baldwin's Dictionary of Philosophy and Psychology (1901–1905); half of those credited to him appear to have been written actually by Christine Ladd-Franklin under his supervision. He applied in 1902 to the newly formed Carnegie Institution for a grant to write a systematic book describing his life's work. The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.",
"title": "Biography"
},
{
"paragraph_id": 22,
"text": "The one who did the most to help Peirce in these desperate times was his old friend William James, dedicating his Will to Believe (1897) to Peirce, and arranging for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903). Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him. It has been believed that this was also why Peirce used \"Santiago\" (\"St. James\" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references).",
"title": "Biography"
},
{
"paragraph_id": 23,
"text": "Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.",
"title": "Death and legacy"
},
{
"paragraph_id": 24,
"text": "Bertrand Russell (1959) wrote \"Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever\". Russell and Whitehead's Principia Mathematica, published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later). A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own \"process\" thinking. (On Peirce and process metaphysics, see Lowe 1964.) Karl Popper viewed Peirce as \"one of the greatest philosophers of all times\". Yet Peirce's achievements were not immediately recognized. His imposing contemporaries William James and Josiah Royce admired him and Cassius Jackson Keyser, at Columbia and C. K. Ogden, wrote about Peirce with respect but to no immediate effect.",
"title": "Death and legacy"
},
{
"paragraph_id": 25,
"text": "The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled Chance, Love, and Logic (1923), and the author of the first bibliography of Peirce's scattered writings. John Dewey studied under Peirce at Johns Hopkins. From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is much influenced by Peirce. The publication of the first six volumes of Collected Papers (1931–1935), the most important event to date in Peirce studies and one that Cohen made possible by raising the needed funds, did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its Transactions, an academic quarterly specializing in Peirce's pragmatism and American philosophy has appeared since 1965. (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.)",
"title": "Death and legacy"
},
{
"paragraph_id": 26,
"text": "By 1943 such was Peirce's reputation, in the US at least, that Webster's Biographical Dictionary said that Peirce was \"now regarded as the most original thinker and greatest logician of his time\".",
"title": "Death and legacy"
},
{
"paragraph_id": 27,
"text": "In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, “the mathematician and scientist,” culminating in Eisele (1976, 1979, 1985). Beginning around 1960, the philosopher and historian of ideas Max Fisch (1900–1995) emerged as an authority on Peirce (Fisch, 1986). He includes many of his relevant articles in a survey (Fisch 1986: 422–448) of the impact of Peirce's thought through 1983.",
"title": "Death and legacy"
},
{
"paragraph_id": 28,
"text": "Peirce has gained an international following, marked by university research centers devoted to Peirce studies and pragmatism in Brazil (CeneP/CIEP), Finland (HPRC and Commens), Germany (Wirth's group, Hoffman's and Otte's group, and Deuser's and Härle's group), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was the University of Toronto, thanks in part to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana University – Purdue University Indianapolis, home of the Peirce Edition Project (PEP) –, and Pennsylvania State University.",
"title": "Death and legacy"
},
{
"paragraph_id": 29,
"text": "Currently, considerable interest is being taken in Peirce's ideas by researchers wholly outside the arena of academic philosophy. The interest comes from industry, business, technology, intelligence organizations, and the military; and it has resulted in the existence of a substantial number of agencies, institutes, businesses, and laboratories in which ongoing research into and development of Peircean concepts are being vigorously undertaken.",
"title": "Death and legacy"
},
{
"paragraph_id": 30,
"text": "In recent years, Peirce's trichotomy of signs is exploited by a growing number of practitioners for marketing and design tasks.",
"title": "Death and legacy"
},
{
"paragraph_id": 31,
"text": "John Deely writes that Peirce was the last of the \"moderns\" and \"first of the postmoderns\". He lauds Peirce's doctrine of signs as a contribution to the dawn of the Postmodern epoch. Deely additionally comments that \"Peirce stands...in a position analogous to the position occupied by Augustine as last of the Western Fathers and first of the medievals\".",
"title": "Death and legacy"
},
{
"paragraph_id": 32,
"text": "Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such as Proceedings of the American Academy of Arts and Sciences, the Journal of Speculative Philosophy, The Monist, Popular Science Monthly, the American Journal of Mathematics, Memoirs of the National Academy of Sciences, The Nation, and others. See Articles by Peirce, published in his lifetime for an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime was Photometric Researches (1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he edited Studies in Logic (1883), containing chapters by himself and his graduate students. Besides lectures during his years (1879–1884) as lecturer in Logic at Johns Hopkins, he gave at least nine series of lectures, many now published; see Lectures by Peirce.",
"title": "Works"
},
{
"paragraph_id": 33,
"text": "After Peirce's death, Harvard University obtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964. Only after Richard Robin (1967) catalogued this Nachlass did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages, mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989). Reportedly the papers remain in unsatisfactory condition.",
"title": "Works"
},
{
"paragraph_id": 34,
"text": "The first published anthology of Peirce's articles was the one-volume Chance, Love and Logic: Philosophical Essays, edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included:",
"title": "Works"
},
{
"paragraph_id": 35,
"text": "1931–1958: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes. Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.",
"title": "Works"
},
{
"paragraph_id": 36,
"text": "1975–1987: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online.",
"title": "Works"
},
{
"paragraph_id": 37,
"text": "1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print.",
"title": "Works"
},
{
"paragraph_id": 38,
"text": "1977: Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby (2nd edition 2001), included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of the Collected Papers, and the 20-odd pre-1890 items included so far in the Writings. Edited by Charles S. Hardwick with James Cook, out of print.",
"title": "Works"
},
{
"paragraph_id": 39,
"text": "1982–now: Writings of Charles S. Peirce, A Chronological Edition (W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of the Collected Papers led Max Fisch and others in the 1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work. Writings of Charles S. Peirce, 8 was published in November 2010; and work continues on Writings of Charles S. Peirce, 7, 9, and 11. In print and online.",
"title": "Works"
},
{
"paragraph_id": 40,
"text": "1985: Historical Perspectives on Peirce's Logic of Science: A History of Science, 2 volumes. Auspitz has said, \"The extent of Peirce's immersion in the science of his day is evident in his reviews in the Nation [...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science\", referring latterly to Historical Perspectives. Edited by Carolyn Eisele, back in print.",
"title": "Works"
},
{
"paragraph_id": 41,
"text": "1992: Reasoning and the Logic of Things collects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.",
"title": "Works"
},
{
"paragraph_id": 42,
"text": "1992–1998: The Essential Peirce (EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) by Peirce Edition Project editors, in print.",
"title": "Works"
},
{
"paragraph_id": 43,
"text": "1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce's 1903 Harvard \"Lectures on Pragmatism\" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in The Essential Peirce, 2. Edited by Patricia Ann Turisi, in print.",
"title": "Works"
},
{
"paragraph_id": 44,
"text": "2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print.",
"title": "Works"
},
{
"paragraph_id": 45,
"text": "Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity.",
"title": "Mathematics"
},
{
"paragraph_id": 46,
"text": "He worked on applied mathematics in economics, engineering, and map projections, and was especially active in probability and statistics.",
"title": "Mathematics"
},
{
"paragraph_id": 47,
"text": "Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died:",
"title": "Mathematics"
},
{
"paragraph_id": 48,
"text": "In 1860 he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des Unendlichen.",
"title": "Mathematics"
},
{
"paragraph_id": 49,
"text": "In 1880–1881 he showed how Boolean algebra could be done via a repeated sufficient single binary operation (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's Laws.)",
"title": "Mathematics"
},
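The sufficiency of NOR that Peirce anticipated is easy to check mechanically; the following Python sketch (illustrative only, not Peirce's notation) derives NOT, OR, and AND from NOR alone and verifies them over all truth assignments:

```python
# Minimal sketch: logical NOR as a single sufficient binary operation.
from itertools import product

def nor(a: bool, b: bool) -> bool:
    return not (a or b)

# Every basic Boolean connective can be built from NOR alone:
def not_(a):    return nor(a, a)                   # NOT a  =  a NOR a
def or_(a, b):  return nor(nor(a, b), nor(a, b))   # a OR b =  NOT(a NOR b)
def and_(a, b): return nor(nor(a, a), nor(b, b))   # a AND b = (NOT a) NOR (NOT b)

# Verify against Python's native operators over all truth assignments.
for a, b in product([False, True], repeat=2):
    assert not_(a) == (not a)
    assert or_(a, b) == (a or b)
    assert and_(a, b) == (a and b)
print("NOR suffices for NOT, OR, and AND")
```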
{
"paragraph_id": 50,
"text": "In 1881 he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as \"Dedekind-finite\", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.",
"title": "Mathematics"
},
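In modern notation, the cardinal definitions credited to Peirce above can be stated compactly; the following is a standard modern restatement in LaTeX, not a quotation from the 1881 paper:

```latex
% A set is Dedekind-infinite iff it maps injectively onto a proper
% subset of itself; it is Dedekind-finite otherwise.
A \text{ is Dedekind-infinite} \iff
\exists\, f : A \to A \ \text{injective with}\ f(A) \subsetneq A.
% Example: the successor map s(n) = n + 1 on the natural numbers is
% injective with s(\mathbb{N}) = \mathbb{N} \setminus \{0\},
% so \mathbb{N} is Dedekind-infinite.
```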
{
"paragraph_id": 51,
"text": "In 1885 he distinguished between first-order and second-order quantification. In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000, pp. 132–133).",
"title": "Mathematics"
},
{
"paragraph_id": 52,
"text": "In 1886, he saw that Boolean calculations could be carried out via electrical switches, anticipating Claude Shannon by more than 50 years. By the later 1890s he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning.",
"title": "Mathematics"
},
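The insight that switching circuits compute Boolean functions rests on two facts: switches in series conduct only if all are closed (AND), while switches in parallel conduct if any is closed (OR). A minimal Python model of this idea follows; the example circuit is invented and not drawn from Peirce's or Shannon's texts:

```python
# Sketch of switch-circuit logic: series wiring conducts only if every
# switch is closed (AND); parallel wiring conducts if any switch is (OR).

def series(*switches: bool) -> bool:
    """Current flows through a series circuit iff every switch is closed."""
    return all(switches)

def parallel(*switches: bool) -> bool:
    """Current flows through a parallel circuit iff any switch is closed."""
    return any(switches)

# Example: a lamp controlled by (s1 AND s2) OR s3.
s1, s2, s3 = True, False, True
lamp_on = parallel(series(s1, s2), s3)
print(lamp_on)  # True: s3 alone completes the parallel branch
```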
{
"paragraph_id": 53,
"text": "Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared in The New Elements of Mathematics by Charles S. Peirce (1976), edited by mathematician Carolyn Eisele.",
"title": "Mathematics"
},
{
"paragraph_id": 54,
"text": "Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science about drawing conclusions necessary and otherwise.",
"title": "Mathematics"
},
{
"paragraph_id": 55,
"text": "Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity (absolute chance) is real (see Tychism on his view). Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the \"propensity\" theory of probability before Karl Popper. Peirce (sometimes with Joseph Jastrow) investigated the probability judgments of experimental subjects, \"perhaps the very first\" elicitation and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statistics.",
"title": "Mathematics"
},
{
"paragraph_id": 56,
"text": "Peirce was one of the founders of statistics. He formulated modern statistics in \"Illustrations of the Logic of Science\" (1877–1878) and \"A Theory of Probable Inference\" (1883). With a repeated measures design, Charles Sanders Peirce and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884 (Hacking 1990:205) (before Ronald A. Fisher). He invented optimal design for experiments on gravity, in which he \"corrected the means\". He used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father. He introduced terms \"confidence\" and \"likelihood\" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and Ian Hacking 1990.)",
"title": "Mathematics"
},
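One element of the Peirce–Jastrow design, randomizing the order and arrangement of trials so that no fixed pattern can bias subject or experimenter, translates directly into a few lines of modern code. The sketch below is purely illustrative; its stimulus values are invented, and the original 1884 procedure used cards drawn from a shuffled pack:

```python
# Sketch of randomized trial ordering for a blinded repeated-measures design.
import random

random.seed(42)  # reproducibility for the example

# Hypothetical stimulus pairs: (reference weight, comparison weight) in grams.
trials = [(100, 101), (100, 102), (100, 104)] * 10  # repeated measures

random.shuffle(trials)  # randomize presentation order

# Also randomize, per trial, which member of the pair comes first, so the
# subject cannot infer the "heavier" position from a fixed pattern.
schedule = [(t, random.choice(["reference_first", "comparison_first"]))
            for t in trials]
print(schedule[:3])
```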
{
"paragraph_id": 57,
"text": "It is not sufficiently recognized that Peirce's career was that of a scientist, not a philosopher; and that during his lifetime he was known and valued chiefly as a scientist, only secondarily as a logician, and scarcely at all as a philosopher. Even his work in philosophy and logic will not be understood until this fact becomes a standing premise of Peircean studies.",
"title": "Philosophy"
},
{
"paragraph_id": 58,
"text": "Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Immanuel Kant's Critique of Pure Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines, including mathematics, logic, philosophy, statistics, astronomy, metrology, geodesy, experimental psychology, economics, linguistics, and the history and philosophy of science. This work has enjoyed renewed interest and approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration of how philosophy can be applied effectively to human problems.",
"title": "Philosophy"
},
{
"paragraph_id": 59,
"text": "Peirce's philosophy includes (see below in related sections) a pervasive three-category system: belief that truth is immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism), logic as formal semiotic on signs, on arguments, and on inquiry's ways—including philosophical pragmatism (which he founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of continuity and of absolute chance, mechanical necessity, and creative love. In his work, fallibilism and pragmatism may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce, fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity, and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–457).",
"title": "Philosophy"
},
{
"paragraph_id": 60,
"text": "For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at any waking moment, and does not settle questions by resorting to special experiences. He divided such philosophy into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics, and logic), and (3) metaphysics; his views on them are discussed in order below.",
"title": "Philosophy"
},
{
"paragraph_id": 61,
"text": "Peirce's recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it:",
"title": "Philosophy"
},
{
"paragraph_id": 62,
"text": "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object.",
"title": "Philosophy"
},
{
"paragraph_id": 63,
"text": "As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the Metaphysical Club. James among others regarded some articles by Peirce such as \"The Fixation of Belief\" (1877) and especially \"How to Make Our Ideas Clear\" (1878) as foundational to pragmatism. Peirce (CP 5.11–12), like James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems. Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical moods.",
"title": "Philosophy"
},
{
"paragraph_id": 64,
"text": "In 1905 Peirce coined the new name pragmaticism \"for the precise purpose of expressing the original definition\", saying that \"all went happily\" with James's and F.C.S. Schiller's variant uses of the old name \"pragmatism\" and that he coined the new name because of the old name's growing use in \"literary journals, where it gets abused\". Yet he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini's declaration of pragmatism's indefinability. Peirce in any case regarded his views that truth is immutable and infinity is real, as being opposed by the other pragmatists, but he remained allied with them on other issues.",
"title": "Philosophy"
},
{
"paragraph_id": 65,
"text": "Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce's pragmatism is a method of clarification of conceptions of objects. It equates any conception of an object to a conception of that object's effects to a general extent of the effects' conceivable implications for informed practice. It is a method of sorting out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his \"Illustrations of the Logic of Science\" series of articles. In the second one, \"How to Make Our Ideas Clear\", Peirce discussed three grades of clearness of conception:",
"title": "Philosophy"
},
{
"paragraph_id": 66,
"text": "By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of the presuppositions of reasoning in general. In clearness's second grade (the \"nominal\" grade), he defined truth as a sign's correspondence to its object, and the real as the object of such correspondence, such that truth and the real are independent of that which you or I or any actual, definite community of inquirers think. After that needful but confined step, next in clearness's third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance for the long-run validity of the rule of induction. Peirce argued that even to argue against the independence and discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth with just such independence and discoverability.",
"title": "Philosophy"
},
{
"paragraph_id": 67,
"text": "Peirce said that a conception's meaning consists in \"all general modes of rational conduct\" implied by \"acceptance\" of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such consequent general modes is the whole meaning. His pragmatism does not equate a conception's meaning, its intellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda), outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism also bears no resemblance to \"vulgar\" pragmatism, which misleadingly connotes a ruthless and Machiavellian search for mercenary or political advantage. Instead the pragmatic maxim is the heart of his pragmatism as a method of experimentational mental reflection arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use and improvement of verification.",
"title": "Philosophy"
},
{
"paragraph_id": 68,
"text": "Peirce's pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry, which he variously called speculative, general, formal or universal rhetoric or simply methodeutic. He applied his pragmatism as a method throughout his work.",
"title": "Philosophy"
},
{
"paragraph_id": 69,
"text": "In \"The Fixation of Belief\" (1877), Peirce gives his take on the psychological origin and aim of inquiry. On his view, individuals are motivated to inquiry by desire to escape the feelings of anxiety and unease which Peirce takes to be characteristic of the state of doubt. Doubt is described by Peirce as an \"uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief.\" Peirce uses words like \"irritation\" to describe the experience of being in doubt and to explain why he thinks we find such experiences to be motivating. The irritating feeling of doubt is appeased, Peirce says, through our efforts to achieve a settled state of satisfaction with what we land on as our answer to the question which led to that doubt in the first place. This settled state, namely, belief, is described by Peirce as \"a calm and satisfactory state which we do not wish to avoid.\" Our efforts to achieve the satisfaction of belief, by whichever methods we may pursue, are what Peirce calls \"inquiry\". Four methods which Peirce describes as having been actually pursued throughout the history of thought are summarized below in the section after next.",
"title": "Philosophy"
},
{
"paragraph_id": 70,
"text": "Critical common-sensism, treated by Peirce as a consequence of his pragmatism, is his combination of Thomas Reid's common-sense philosophy with a fallibilism that recognizes that propositions of our more or less vague common sense now indubitable may later come into question, for example because of transformations of our world through science. It includes efforts to work up in tests genuine doubts for a core group of common indubitables that vary slowly if at all.",
"title": "Philosophy"
},
{
"paragraph_id": 71,
"text": "In \"The Fixation of Belief\" (1877), Peirce described inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like, and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome, or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from least to most successful:",
"title": "Philosophy"
},
{
"paragraph_id": 72,
"text": "Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and traditional sentiment, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's \"first rule\" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. Scientific method excels over the others finally by being deliberately designed to arrive—eventually—at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.",
"title": "Philosophy"
},
{
"paragraph_id": 73,
"text": "Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predictions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident truths, or rationalism; and induction from experiential phenomena, or empiricism.",
"title": "Philosophy"
},
{
"paragraph_id": 74,
"text": "Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce's approach seeks to justify claims by a three-phase dynamic of inquiry:",
"title": "Philosophy"
},
{
"paragraph_id": 75,
"text": "Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce's pragmatism was the first time the scientific method was proposed as an epistemology for philosophical questions.",
"title": "Philosophy"
},
{
"paragraph_id": 76,
"text": "A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This is an operational notion of truth used by scientists.",
"title": "Philosophy"
},
{
"paragraph_id": 77,
"text": "Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning.",
"title": "Philosophy"
},
{
"paragraph_id": 78,
"text": "Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first be denoted. Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief. No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the integrity of inquiry strongly limits the effective modularity of its principal components.",
"title": "Philosophy"
},
{
"paragraph_id": 79,
"text": "Peirce's outline of the scientific method in §III–IV of \"A Neglected Argument\" is summarized below (except as otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).",
"title": "Philosophy"
},
{
"paragraph_id": 80,
"text": "1. Abductive (or retroductive) phase. Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicated phenomenon. The modicum of success in our guesses far exceeds that of random luck, and seems born of attunement to nature by developed or inherent instincts, especially insofar as best guesses are optimally plausible and simple in the sense of the \"facile and natural\", as by Galileo's natural light of reason and as distinct from \"logical simplicity\". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and it has no substitute in expediting us toward new truths. In 1903, Peirce called pragmatism \"the logic of abduction\". Coordinative method leads from abducting a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if not costly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be selected for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, or incomplexity. One can discover only that which would be revealed through their sufficient experience anyway, and so the point is to expedite it; economy of research demands the leap, so to speak, of abduction and governs its art.",
"title": "Philosophy"
},
{
"paragraph_id": 81,
"text": "2. Deductive phase. Two stages:",
"title": "Philosophy"
},
{
"paragraph_id": 82,
"text": "3. Inductive phase. Evaluation of the hypothesis, inferring from observational or experimental tests of its deduced consequences. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real \"is only the object of the final opinion to which sufficient investigation would lead\"; in other words, anything excluding such a process would never be real. Induction involving the ongoing accumulation of evidence follows \"a method which, sufficiently persisted in\", will \"diminish the error below any predesignate degree\". Three stages:",
"title": "Philosophy"
},
{
"paragraph_id": 83,
"text": "Peirce drew on the methodological implications of the four incapacities—no genuine introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the absolutely incognizable—to attack philosophical Cartesianism, of which he said that:",
"title": "Philosophy"
},
{
"paragraph_id": 84,
"text": "On May 14, 1867, the 27-year-old Peirce presented a paper entitled \"On a New List of Categories\" to the American Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication, involving three universal categories that Peirce developed in response to reading Aristotle, Immanuel Kant, and G. W. F. Hegel, categories that Peirce applied throughout his work for the rest of his life. Peirce scholars generally regard the \"New List\" as foundational or breaking the ground for Peirce's \"architectonic\", his blueprint for a pragmatic philosophy. In the categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in \"How To Make Our Ideas Clear\" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work.",
"title": "Philosophy"
},
{
"paragraph_id": 85,
"text": "\"On a New List of Categories\" is cast as a Kantian deduction; it is short but dense and difficult to summarize. The following table is compiled from that and later works. In 1893, Peirce restated most of it for a less advanced audience.",
"title": "Philosophy"
},
{
"paragraph_id": 86,
"text": "*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.",
"title": "Philosophy"
},
{
"paragraph_id": 87,
"text": "In 1918 the logician C. I. Lewis wrote, \"The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century.\"",
"title": "Philosophy"
},
{
"paragraph_id": 88,
"text": "Beginning with his first paper on the \"Logic of Relatives\" (1870), Peirce extended the theory of relations pioneered by Augustus De Morgan. Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra.",
"title": "Philosophy"
},
{
"paragraph_id": 89,
"text": "Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow's association with Tarski at City College of New York.",
"title": "Philosophy"
},
{
"paragraph_id": 90,
"text": "On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982) documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege. They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics.",
"title": "Philosophy"
},
{
"paragraph_id": 91,
"text": "Peirce's work on formal logic had admirers besides Ernst Schröder:",
"title": "Philosophy"
},
{
"paragraph_id": 92,
"text": "A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982); the Introduction in Nathan Houser et al. (1997); and Randall Dipert's chapter in Cheryl Misak (2004).",
"title": "Philosophy"
},
{
"paragraph_id": 93,
"text": "Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more basic than metaphysics, and as \"the art of devising methods of research\". More generally, as inference, \"logic is rooted in the social principle\", since inference depends on a standpoint that, in a sense, is unlimited. Peirce called (with no sense of deprecation) \"mathematics of logic\" much of the kind of thing which, in current research and applications, is called simply \"logic\". He was productive in both (philosophical) logic and logic's mathematics, which were connected deeply in his work and thought.",
"title": "Philosophy"
},
{
"paragraph_id": 94,
"text": "Peirce argued that logic is formal semiotic: the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that \"all this universe is perfused with signs, if it is not composed exclusively of signs\", along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs and sign processes (\"semiosis\") such as the inquiry process. He divided logic into: (1) speculative grammar, or stechiology, on how signs can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal rhetoric, or methodeutic, the philosophical theory of inquiry, including pragmatism.",
"title": "Philosophy"
},
{
"paragraph_id": 95,
"text": "In his \"F.R.L.\" [First Rule of Logic] (1899), Peirce states that the first, and \"in one sense, the sole\", rule of reason is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to think. So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of theories:",
"title": "Philosophy"
},
{
"paragraph_id": 96,
"text": "...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.",
"title": "Philosophy"
},
{
"paragraph_id": 97,
"text": "Peirce adds, that method and economy are best in research but no outright sin inheres in trying any theory in the sense that the investigation via its trial adoption can proceed unimpeded and undiscouraged, and that \"the one unpardonable offence\" is a philosophical barricade against truth's advance, an offense to which \"metaphysicians in all ages have shown themselves the most addicted\". Peirce in many writings holds that logic precedes metaphysics (ontological, religious, and physical).",
"title": "Philosophy"
},
{
"paragraph_id": 98,
"text": "Peirce goes on to list four common barriers to inquiry: (1) Assertion of absolute certainty; (2) maintaining that something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into refusals to set up any of the listed barriers. Peirce elsewhere argues (1897) that logic's presupposition of fallibilism leads at length to the view that chance and continuity are very real (tychism and synechism).",
"title": "Philosophy"
},
{
"paragraph_id": 99,
"text": "The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic; presuppositions, for instance, that truth and the real do not depend on yours or my opinion of them but do depend on representational relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as, collectively, hopes which, in particular cases, one is unable seriously to doubt.",
"title": "Philosophy"
},
{
"paragraph_id": 100,
"text": "In three articles in 1868–1869, Peirce rejected mere verbal or hyperbolic doubt and first or ultimate principles, and argued that we have (as he numbered them):",
"title": "Philosophy"
},
{
"paragraph_id": 101,
"text": "(The above sense of the term \"intuition\" is almost Kant's, said Peirce. It differs from the current looser sense that encompasses instinctive or anyway half-conscious inference.)",
"title": "Philosophy"
},
{
"paragraph_id": 102,
"text": "Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes of reasoning, and the falsity of philosophical Cartesianism (see below).",
"title": "Philosophy"
},
{
"paragraph_id": 103,
"text": "Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself and later said that to \"dismiss make-believes\" is a prerequisite for pragmatism.",
"title": "Philosophy"
},
{
"paragraph_id": 104,
"text": "Peirce sought, through his wide-ranging studies through the decades, formal philosophical ways to articulate thought's processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference, and, as its culmination, a theory of inquiry for the task of saying 'how science works' and devising research methods. This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way to the principles of all methods. Influences radiate from points on parallel lines of inquiry in Aristotle's work, in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in On Interpretation; and the differentiation of inference into three modes that are commonly translated into English as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by Aristotle), which Peirce regarded as involving the other three modes.",
"title": "Philosophy"
},
{
"paragraph_id": 105,
"text": "Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic sign relation, and defined semiosis as \"action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs\". As to signs in thought, Peirce emphasized the reverse: \"To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs.\"",
"title": "Philosophy"
},
{
"paragraph_id": 106,
"text": "Peirce held that all thought is in signs, issuing in and from interpretation, where sign is the word for the broadest variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the work of crystals or bees—the focus is on sign action in general rather than on psychology, linguistics, or social studies (fields which he also pursued).",
"title": "Philosophy"
},
{
"paragraph_id": 107,
"text": "Inquiry is a kind of inference process, a manner of thinking and semiosis. Global divisions of ways for phenomena to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of inquiry on semiotics' three levels:",
"title": "Philosophy"
},
{
"paragraph_id": 108,
"text": "Peirce uses examples often from common experience, but defines and discusses such things as assertion and interpretation in terms of philosophical logic. In a formal vein, Peirce said:",
"title": "Philosophy"
},
{
"paragraph_id": 109,
"text": "On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings something, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no more involves any reference to human thought than does the definition of a line as the place within which a particle lies during a lapse of time. It is from this definition that I deduce the principles of logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of Weierstrassian severity, and that is perfectly evident. The word \"formal\" in the definition is also defined.",
"title": "Philosophy"
},
{
"paragraph_id": 110,
"text": "Peirce's theory of signs is known to be one of the most complex semiotic theories due to its generalistic claim. Anything is a sign—not absolutely as itself, but instead in some relation or other. The sign relation is the key. It defines three roles encompassing (1) the sign, (2) the sign's subject matter, called its object, and (3) the sign's meaning or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further interpretants.",
"title": "Philosophy"
},
{
"paragraph_id": 111,
"text": "Extension × intension = information. Two traditional approaches to sign relation, necessary though insufficient, are the way of extension (a sign's objects, also called breadth, denotation, or application) and the way of intension (the objects' characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or connotation). Peirce adds a third, the way of information, including change of information, to integrate the other two approaches into a unified whole. For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies.",
"title": "Philosophy"
},
{
"paragraph_id": 112,
"text": "Determination. A sign depends on its object in such a way as to represent its object—the object enables and, in a sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object—an object determines a sign to determine an interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic. For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents something as a sign representing the object. The object (be it a quality or fact or law or even fictional) determines the sign to an interpretant through one's collateral experience with the object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an absent object. Peirce used the word \"determine\" not in a strictly deterministic sense, but in a sense of \"specializes\", bestimmt, involving variable amount, like an influence. Peirce came to define representation and interpretation in terms of (triadic) determination. The object determines the sign to determine another sign—the interpretant—to be related to the object as the sign is related to the object, hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.",
"title": "Philosophy"
},
{
"paragraph_id": 113,
"text": "Peirce held there are exactly three basic elements in semiosis (sign action):",
"title": "Philosophy"
},
{
"paragraph_id": 114,
"text": "Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign denotes, the mind needs some experience of that sign's object, experience outside of, and collateral to, that sign or sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all in much the same terms.",
"title": "Philosophy"
},
{
"paragraph_id": 115,
"text": "Among Peirce's many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its interpretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce's three phenomenological categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.",
"title": "Philosophy"
},
{
"paragraph_id": 116,
"text": "I. Qualisign, sinsign, legisign (also called tone, token, type, and also called potisign, actisign, famisign): This typology classifies every sign according to the sign's own phenomenological category—the qualisign is a quality, a possibility, a \"First\"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a \"Second\"; and the legisign is a habit, a rule, a representational relation, a \"Third\".",
"title": "Philosophy"
},
{
"paragraph_id": 117,
"text": "II. Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign's way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual connection to its object, and the symbol by a habit or rule for its interpretant.",
"title": "Philosophy"
},
{
"paragraph_id": 118,
"text": "III. Rheme, dicisign, argument (also called sumisign, dicisign, suadisign, also seme, pheme, delome, and regarded as very broadened versions of the traditional term, proposition, argument): This typology classifies every sign according to the category which the interpretant attributes to the sign's way of denoting its object—the rheme, for example a term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural element of inference.",
"title": "Philosophy"
},
{
"paragraph_id": 119,
"text": "Every sign belongs to one class or another within (I) and within (II) and within (III). Thus each of the three typologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality, and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified at this level of analysis.",
"title": "Philosophy"
},
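The count of ten can be verified directly. Reading the restrictions above as the rule that each successive typology's category cannot exceed the preceding one's, the admissible triples are exactly the non-increasing ones; this phrasing of the rule is a common scholarly reconstruction rather than Peirce's own wording. A short Python check:

```python
# Count Peirce's sign classes: each sign gets a category value (1, 2, 3)
# in three typologies, and later typologies cannot exceed earlier ones.
from itertools import product

CATEGORIES = (1, 2, 3)  # Firstness, Secondness, Thirdness

valid = [(i, ii, iii)
         for i, ii, iii in product(CATEGORIES, repeat=3)
         if i >= ii >= iii]  # qualification rule: non-increasing triples

print(len(valid))  # 10, not 3**3 == 27
# e.g. (1, 1, 1) is the (rhematic iconic) qualisign; (3, 3, 3) the argument.
```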
{
"paragraph_id": 120,
"text": "Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference—abduction, deduction, and induction—in his \"critique of arguments\" or \"logic proper\". Peirce also called abduction \"retroduction\", \"presumption\", and, earliest of all, \"hypothesis\". He characterized it as guessing and as inference to an explanatory hypothesis. He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in \"Deduction, Induction, and Hypothesis\" (1878). He does this by rearranging the rule (Barbara's major premise), the case (Barbara's minor premise), and the result (Barbara's conclusion):",
"title": "Philosophy"
},
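Peirce's bean example from that 1878 paper makes the rearrangement concrete; the following schematic summary, in LaTeX, is a standard rendering of it rather than a verbatim quotation:

```latex
% Rearranging Barbara's rule, case, and result (Peirce 1878, bean example):
\begin{aligned}
&\textbf{Deduction:}  && \text{Rule} \wedge \text{Case}   \Rightarrow \text{Result}\\
&\textbf{Induction:}  && \text{Case} \wedge \text{Result} \Rightarrow \text{Rule}\\
&\textbf{Hypothesis (abduction):} && \text{Rule} \wedge \text{Result} \Rightarrow \text{Case}\\[4pt]
&\text{Rule: all the beans from this bag are white.}\\
&\text{Case: these beans are from this bag.}\\
&\text{Result: these beans are white.}
\end{aligned}
```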
{
"paragraph_id": 121,
"text": "Peirce 1883 in \"A Theory of Probable Inference\" (Studies in Logic) equated hypothetical inference with the induction of characters of objects (as he had done in effect before). Eventually dissatisfied, by 1900 he distinguished them once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive inference:",
"title": "Philosophy"
},
{
"paragraph_id": 122,
"text": "The surprising fact, C, is observed;",
"title": "Philosophy"
},
{
"paragraph_id": 123,
"text": "The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. \"Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be.\" Peirce did not remain quite convinced that one logical form covers all abduction. In his methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical explanation, deductive prediction, inductive testing",
"title": "Philosophy"
},
{
"paragraph_id": 124,
"text": "Peirce did not write extensively in aesthetics and ethics, but came by 1902 to hold that aesthetics, ethics, and logic, in that order, comprise the normative sciences. He characterized aesthetics as the study of the good (grasped as the admirable), and thus of the ends governing all conduct and thought.",
"title": "Philosophy"
},
{
"paragraph_id": 125,
"text": "Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics.",
"title": "Philosophy"
},
{
"paragraph_id": 126,
"text": "On the issue of universals, Peirce was a scholastic realist, declaring the reality of generals as early as 1868. According to Peirce, his category he called \"thirdness\", the more general facts about the world, are extra-mental realities. Regarding modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just how positively real the modalities are. In his 1897 \"The Logic of Relatives\" he wrote:",
"title": "Philosophy"
},
{
"paragraph_id": 127,
"text": "I formerly defined the possible as that which in a given state of information (real or feigned) we do not know not to be true. But this definition today seems to me only a twisted phrase which, by means of two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not true, because we see they are impossible.",
"title": "Philosophy"
},
{
"paragraph_id": 128,
"text": "Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional propositions about how they would behave under certain circumstances.",
"title": "Philosophy"
},
{
"paragraph_id": 129,
"text": "Continuity and synechism are central in Peirce's philosophy: \"I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy\".",
"title": "Philosophy"
},
{
"paragraph_id": 130,
"text": "From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum; that a true continuum is the real subject matter of analysis situs (topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—any Aleph number (any infinite multitude as he called it) of instants.",
"title": "Philosophy"
},
{
"paragraph_id": 131,
"text": "In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): \"It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude. From now on, there are different kinds of continua, which have different properties.\"",
"title": "Philosophy"
},
{
"paragraph_id": 132,
"text": "Peirce believed in God, and characterized such belief as founded in an instinct explorable in musing over the worlds of ideas, brute facts, and evolving habits—and it is a belief in God not as an actual or existent being (in Peirce's sense of those words), but all the same as a real being. In \"A Neglected Argument for the Reality of God\" (1908), Peirce sketches, for God's reality, an argument to a hypothesis of God as the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such purposefulness will \"stand or fall with the hypothesis\"; meanwhile, according to Peirce, the hypothesis, in supposing an \"infinitely incomprehensible\" being, starts off at odds with its own nature as a purportively true conception, and so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though God as the Necessary Being is not vague or growing; but the hypothesis will hold it to be more false to say the opposite, that God is purposeless. Peirce also argued that the will is free and (see Synechism) that there is at least an attenuated kind of immortality.",
"title": "Philosophy"
},
{
"paragraph_id": 133,
"text": "Peirce held the view, which he called objective idealism, that \"matter is effete mind, inveterate habits becoming physical laws\". Peirce observed that \"Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop\".",
"title": "Philosophy"
},
{
"paragraph_id": 134,
"text": "Peirce asserted the reality of (1) absolute chance (his tychist view), (2) mechanical necessity (anancist view), and (3) that which he called the law of love (agapist view), echoing his categories Firstness, Secondness, and Thirdness, respectively. He held that fortuitous variation (which he also called \"sporting\"), mechanical necessity, and creative love are the three modes of evolution (modes called \"tychasm\", \"anancasm\", and \"agapasm\") of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution; the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that overall he was a synechist, holding with reality of continuity, especially of space, time, and law.",
"title": "Philosophy"
},
{
"paragraph_id": 135,
"text": "Peirce outlined two fields, \"Cenoscopy\" and \"Science of Review\", both of which he called philosophy. Both included philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:",
"title": "Philosophy"
},
{
"paragraph_id": 136,
"text": "Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are of interest both as a map for navigating his philosophy and as an accomplished polymath's survey of research in his time.",
"title": "Philosophy"
}
]
| Charles Sanders Peirce was an American scientist, mathematician, logician, and philosopher who is sometimes known as "the father of pragmatism". According to philosopher Paul Weiss, Peirce was "the most original and versatile of America's philosophers and America's greatest logician". Bertrand Russell wrote "he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever". Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories of relations and quantification. C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." For Peirce, logic also encompassed much of what is now called epistemology and the philosophy of science. He saw logic as the formal branch of semiotics or study of signs, of which he is a founder, which foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Peirce's study of signs also included a tripartite theory of predication. Additionally, he defined the concept of abductive reasoning, as well as rigorously formulating mathematical induction and deductive reasoning. He was one of the founders of statistics. As early as 1886, he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers. For metaphysics, Peirce was an "objective idealist" in the tradition of German philosopher Immanuel Kant as well as a scholastic realist about universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeled synechism and tychism respectively. Peirce believed an epistemic fallibilism and anti-skepticism went along with these views. | 2001-08-17T00:44:48Z | 2023-12-01T21:03:58Z | [
"Template:See also",
"Template:Semiotics",
"Template:Col-break",
"Template:Cite encyclopedia",
"Template:Lang",
"Template:Classical logic",
"Template:Authority control",
"Template:Infobox philosopher",
"Template:Not a typo",
"Template:Main",
"Template:Webarchive",
"Template:Cite book",
"Template:Pragmatism",
"Template:Use mdy dates",
"Template:C. S. Peirce articles",
"Template:IPAc-en",
"Template:Cite web",
"Template:Librivox author",
"Template:Pp-move-indef",
"Template:Em",
"Template:C. S. Peirce ninefold sign table",
"Template:Div col end",
"Template:Harvnb",
"Template:Anchor",
"Template:Efn",
"Template:According to whom",
"Template:Col-begin",
"Template:Cite news",
"Template:Cite journal",
"Template:Slink",
"Template:Reflist",
"Template:Cite dictionary",
"Template:JSTOR",
"Template:Sister project links",
"Template:Quote",
"Template:Div col",
"Template:Notelist",
"Template:Audio",
"Template:Philosophy of science",
"Template:Metaphysics",
"Template:Hatnote",
"Template:Col-end",
"Template:Short description",
"Template:Respell",
"Template:Convert",
"Template:Blockquote",
"Template:Sic",
"Template:C. S. Peirce categorial table",
"Template:Doi",
"Template:MathGenealogy"
]
| https://en.wikipedia.org/wiki/Charles_Sanders_Peirce |
6,118 | Carnot heat engine | A Carnot heat engine is a theoretical heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible. The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine.
The Carnot engine is a theoretical construct, useful for exploring the efficiency limits of other heat engines. An actual Carnot engine, however, would be completely impractical to build.
In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are "two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body A to body B. Operated in reverse, the same cycle carries heat from body B to body A, so the device can also act as a refrigerator.
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although engines in those early years came in a number of configurations, typically Q_H was supplied by a boiler, wherein water was boiled over a furnace, and Q_C was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, W, is transmitted by the movement of the piston as it turns a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".
The Carnot cycle, when acting as a heat engine, consists of the following steps:

1. Reversible isothermal expansion of the gas at the hot temperature T_H, during which the gas absorbs heat Q_H from the hot reservoir.
2. Reversible adiabatic (isentropic) expansion, during which the gas does work on the surroundings and cools from T_H to T_C.
3. Reversible isothermal compression of the gas at the cold temperature T_C, during which the gas rejects heat |Q_C| to the cold reservoir.
4. Reversible adiabatic (isentropic) compression, during which the surroundings do work on the gas, its temperature rises from T_C back to T_H, and it returns to its initial state.
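To make the bookkeeping of these four steps concrete, the following minimal sketch (not part of the original article; the function name and numerical values are illustrative assumptions) computes the heat exchanged by an ideal monatomic gas over one cycle and recovers the efficiency bound derived below:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def carnot_cycle(t_hot, t_cold, v1, v2, n=1.0, gamma=5.0 / 3.0):
    """Heat and work for n moles of an ideal monatomic gas over one Carnot cycle.

    v1 -> v2: isothermal expansion at t_hot (step 1)
    v2 -> v3: adiabatic expansion, cooling to t_cold (step 2)
    v3 -> v4: isothermal compression at t_cold (step 3)
    v4 -> v1: adiabatic compression back to t_hot (step 4)
    """
    # The adiabatic steps obey T * V**(gamma - 1) = const, which fixes v3 and v4.
    k = (t_hot / t_cold) ** (1.0 / (gamma - 1.0))
    v3, v4 = v2 * k, v1 * k
    q_hot = n * R * t_hot * math.log(v2 / v1)    # heat absorbed, positive
    q_cold = n * R * t_cold * math.log(v4 / v3)  # heat rejected, negative
    return q_hot, q_cold, q_hot + q_cold         # last entry is the net work W

q_hot, q_cold, work = carnot_cycle(t_hot=500.0, t_cold=300.0, v1=1.0, v2=2.0)
print(round(work / q_hot, 3))  # 0.4
print(1.0 - 300.0 / 500.0)     # 0.4, the Carnot bound 1 - T_C/T_H
```

The ratio W/Q_H equals 1 − T_C/T_H regardless of the volumes chosen, because the two adiabatic steps force V4/V3 = V1/V2.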
Carnot's theorem is a formal statement of the fact that the Carnot engine's efficiency is an upper bound: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs. The maximum efficiency is:
η_I = W/Q_H = 1 − T_C/T_H
This maximum efficiency η_I is defined as above, where W is the work done by the system (energy exiting the system as work), Q_H is the heat put into the system (heat energy entering the system), T_C is the absolute temperature of the cold reservoir, and T_H is the absolute temperature of the hot reservoir.
A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
It is easily shown that the efficiency η is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.)
Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section.
When the cycle is run in reverse so that the device operates as a heat pump, the coefficient of performance (COP) for heating is the reciprocal of the Carnot engine's efficiency: COP = Q_H/W = T_H/(T_H − T_C) = 1/η_I.
For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0.
The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system W is equal to the net heat put into the system, the sum of the heat Q_H > 0 taken up and the waste heat Q_C < 0 given off:

W = Q_H + Q_C = Q_H − |Q_C| (1)

so that the efficiency, defined as the ratio of work extracted to heat taken in, is

η = W/Q_H = 1 − |Q_C|/Q_H (2)
For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place.
During heat transfer from the hot reservoir at T_H to the fluid, the fluid has a slightly lower temperature than T_H, and the process for the fluid may not necessarily remain isothermal. Let ΔS_H be the total entropy change of the fluid in the process of intake of heat:

ΔS_H = ∫ dQ_H / T (3)
where the temperature of the fluid T is always slightly less than T_H during this process.
So, one would get:

Q_H ≤ T_H ΔS_H (4)
Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the magnitude |ΔS_C| of the (negative) total entropy change ΔS_C < 0 of the fluid in the process of expelling heat:

|ΔS_C| = ∫ |dQ_C| / T (5)
where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid T is always slightly greater than T_C, so that

|Q_C| ≥ T_C |ΔS_C| (6)
We have only considered the magnitude of the entropy change here. Since the total change of entropy of the fluid system for the cyclic process is 0, we must have

ΔS_H = |ΔS_C| ≡ ΔS
The previous three equations combine to give:

Q_H/T_H ≤ |Q_C|/T_C (7)
Equations (2) and (7) combine to give

η = 1 − |Q_C|/Q_H ≤ 1 − T_C/T_H
Hence,

η ≤ η_I
where η = W/Q_H is the efficiency of the real engine, and η_I is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures T_H and T_C. For the Carnot engine, the entire process is reversible, and Equation (7) is an equality. Hence, the efficiency of the real engine is always less than that of the ideal Carnot engine.
Equation (7) signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as |Q_C| flows into it at the fixed temperature T_C is greater than the entropy loss of the hot reservoir as Q_H leaves it at its fixed temperature T_H. The inequality in Equation (7) is essentially the statement of the Clausius theorem.
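As a numerical illustration of this surroundings-based accounting (a sketch with made-up example values, not data from the article):

```python
def cycle_entropy_change(q_hot, q_cold, t_hot, t_cold):
    """Total entropy change of the two reservoirs over one engine cycle.

    Sign convention as in the text: q_hot > 0 is taken from the hot
    reservoir, q_cold < 0 is dumped into the cold reservoir.
    """
    ds_hot = -q_hot / t_hot     # hot reservoir loses entropy
    ds_cold = -q_cold / t_cold  # cold reservoir gains entropy
    return ds_hot + ds_cold     # >= 0; zero only for a reversible (Carnot) engine

# Hypothetical real engine: absorbs 1000 J at 500 K, rejects 700 J at 300 K.
print(round(cycle_entropy_change(1000.0, -700.0, 500.0, 300.0), 3))  # 0.333 J/K > 0
print(1.0 - 700.0 / 1000.0)  # its efficiency, 0.30, below the Carnot bound of 0.40
```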
According to Carnot's second theorem, "The efficiency of the Carnot engine is independent of the nature of the working substance".
In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed the practical engine that bears his name.
The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem.
For the idea to work, a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres at 800 °C (1,472 °F), then cycle to one atmosphere at 20 °C (68 °F). However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, could it have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.)
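The 73% figure can be checked against the Carnot efficiency formula given earlier (an illustrative computation; the function name is an assumption):

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot bound 1 - T_C/T_H; both temperatures in kelvin."""
    if not t_hot > t_cold > 0:
        raise ValueError("require t_hot > t_cold > 0 in kelvin")
    return 1.0 - t_cold / t_hot

# Diesel's proposed cycle: 800 C = 1073.15 K hot, 20 C = 293.15 K cold.
print(round(carnot_efficiency(1073.15, 293.15), 2))  # 0.73
```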
Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published the "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and Maschinenfabrik Augsburg. He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could they have, since such an engine requires so enormous a quantity of air that it cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter.
Even so, Diesel's design slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression: in a word, the diesel engine. Today its efficiency is about 40%.
Episode 46. Engine of Nature: The Carnot engine, part one, beginning with simple steam engines. The Mechanical Universe. Caltech – via YouTube.
| 2001-09-24T02:59:18Z | 2023-10-09T07:01:20Z | https://en.wikipedia.org/wiki/Carnot_heat_engine |
6,119 | Context-sensitive | Context-sensitive is an adjective meaning "depending on context" or "depending on circumstances". It may refer to:
| Context-sensitive is an adjective meaning "depending on context" or "depending on circumstances". It may refer to: Context-sensitive meaning, where meaning depends on context
Context-sensitive grammar, a formal grammar in which the left-hand sides and right-hand sides of any production rules may be surrounded by a context of terminal and nonterminal symbols
Context-sensitive language, a formal language that can be defined by a context-sensitive grammar. Context-sensitive is one of the four types of grammars in the Chomsky hierarchy
Context-sensitive help, a kind of online help that is obtained from a specific point in the state of the software, providing help for the situation that is associated with that state
Context-sensitive solutions, a theoretical and practical approach to transportation decision-making and design that takes into consideration the communities and lands through which streets, roads, and highways pass
Context-sensitive user interface, in computing | 2019-02-06T21:06:16Z | https://en.wikipedia.org/wiki/Context-sensitive |
|
6,121 | Central America | Central America is a subregion of the Americas, frequently considered part of North America. Its political boundaries are defined as bordering Mexico to the north, Colombia to the south, the Caribbean Sea to the east, and the Pacific Ocean to the west. Central America usually consists of seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Within Central America is the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a high amount of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury, and property damage.
In the pre-Columbian era, Central America was inhabited by the Indigenous peoples of Mesoamerica to the north and west and the Isthmo-Colombian peoples to the south and east. Following Christopher Columbus's voyages to the Americas, Spain began to colonize the Americas. From 1609 to 1821, the majority of Central American territories (except for what would become Belize and Panama, and including the modern Mexican state of Chiapas) were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence from Spain. On 15 September 1821, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire and provide for the establishment of a new Central American state. Some of New Spain's provinces in the Central American region (i.e. what would become Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica) were annexed to the First Mexican Empire; however, in 1823 they seceded from Mexico to form the Federal Republic of Central America, which lasted until 1838.
In 1838, Costa Rica, Guatemala, Honduras, and Nicaragua became the first of Central America's seven states to become independent countries, followed by El Salvador in 1841, Panama in 1903, and Belize in 1981. Despite the dissolution of the Federal Republic of Central America, countries like Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua continue to maintain a Central American identity. The Belizeans are usually identified as culturally Caribbean rather than Central American, while the Panamanians identify themselves more broadly with their South American neighbours.
The Spanish-speaking countries officially include both North America and South America as a single continent, América, which is split into four subregions: North America (Northern America and Mexico), Central America, South America, and Insular America (the West Indies).
"Central America" may mean different things to various people, based upon different contexts:
Central America was formed more than 3 million years ago, when the emergence of the Isthmus of Panama connected North and South America and separated the Atlantic and Pacific Oceans.
In the Pre-Columbian era, the northern areas of Central America were inhabited by the indigenous peoples of Mesoamerica. Most notable among these were the Mayans, who had built numerous cities throughout the region, and the Aztecs, who had created a vast empire. The pre-Columbian cultures of eastern El Salvador, eastern Honduras, Caribbean Nicaragua, most of Costa Rica and Panama were predominantly speakers of the Chibchan languages at the time of European contact and are considered by some culturally different and grouped in the Isthmo-Colombian Area.
Following Christopher Columbus's voyages to the Americas, the Spanish sent many expeditions to the region, and they began their conquest of Maya territory in 1523. Soon after the conquest of the Aztec Empire, Spanish conquistador Pedro de Alvarado commenced the conquest of northern Central America for the Spanish Empire. Beginning with his arrival in Soconusco in 1523, Alvarado's forces systematically conquered and subjugated most of the major Maya kingdoms, including the K'iche', Tz'utujil, Pipil, and the Kaqchikel. By 1528, the conquest of Guatemala was nearly complete, with only the Petén Basin remaining outside the Spanish sphere of influence. The last independent Maya kingdoms – the Kowoj and the Itza people – were finally defeated in 1697, as part of the Spanish conquest of Petén.
In 1538, Spain established the Real Audiencia of Panama, which had jurisdiction over all land from the Strait of Magellan to the Gulf of Fonseca. This entity was dissolved in 1543, and most of the territory within Central America then fell under the jurisdiction of the Audiencia Real de Guatemala. This area included the current territories of Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Mexican state of Chiapas, but excluded the lands that would become Belize and Panama. The president of the Audiencia, which had its seat in Antigua Guatemala, was the governor of the entire area. In 1609 the area became a captaincy general and the governor was also granted the title of captain general. The Captaincy General of Guatemala encompassed most of Central America, with the exception of present-day Belize and Panama.
The Captaincy General of Guatemala lasted for more than two centuries, but began to fray after a rebellion in 1811 which began in the Intendancy of San Salvador. The Captaincy General formally ended on 15 September 1821, with the signing of the Act of Independence of Central America. Mexican independence was achieved at virtually the same time with the signing of the Treaty of Córdoba and the Declaration of Independence of the Mexican Empire, and the entire region was finally independent from Spanish authority by 28 September 1821.
From its independence from Spain in 1821 until 1823, the former Captaincy General remained intact as part of the short-lived First Mexican Empire. When the Emperor of Mexico abdicated on 19 March 1823, Central America again became independent. On 1 July 1823, the Congress of Central America peacefully seceded from Mexico and declared absolute independence from all foreign nations, and the region formed the Federal Republic of Central America.
The Federal Republic of Central America was a representative democracy with its capital at Guatemala City. This union consisted of the provinces of Costa Rica, El Salvador, Guatemala, Honduras, Los Altos, Mosquito Coast, and Nicaragua. The lowlands of southwest Chiapas, including Soconusco, initially belonged to the Republic until 1824, when Mexico annexed most of Chiapas and began its claims to Soconusco. The Republic lasted from 1823 to 1838, when it disintegrated as a result of civil wars.
The territory that now makes up Belize was heavily contested in a dispute that continued for decades after Guatemala achieved independence. Spain, and later Guatemala, considered this land a Guatemalan department. In 1862, Britain formally declared it a British colony and named it British Honduras. It became independent as Belize in 1981.
Panama, situated in the southernmost part of Central America on the Isthmus of Panama, has for most of its history been culturally and politically linked to South America. Panama was part of the Province of Tierra Firme from 1510 until 1538 when it came under the jurisdiction of the newly formed Audiencia Real de Panama. Beginning in 1543, Panama was administered as part of the Viceroyalty of Peru, along with all other Spanish possessions in South America. Panama remained as part of the Viceroyalty of Peru until 1739, when it was transferred to the Viceroyalty of New Granada, the capital of which was located at Santa Fé de Bogotá. Panama remained as part of the Viceroyalty of New Granada until the disestablishment of that viceroyalty in 1819. A series of military and political struggles took place from that time until 1822, the result of which produced the republic of Gran Colombia. After the dissolution of Gran Colombia in 1830, Panama became part of a successor state, the Republic of New Granada. From 1855 until 1886, Panama existed as Panama State, first within the Republic of New Granada, then within the Granadine Confederation, and finally within the United States of Colombia. The United States of Colombia was replaced by the Republic of Colombia in 1886. As part of the Republic of Colombia, Panama State was abolished and it became the Isthmus Department. Despite the many political reorganizations, Colombia was still deeply plagued by conflict, which eventually led to the secession of Panama on 3 November 1903. Only after that time did some begin to regard Panama as a North or Central American entity.
By the 1930s the United Fruit Company owned 14,000 square kilometres (3.5 million acres) of land in Central America and the Caribbean and was the single largest land owner in Guatemala. Such holdings gave it great power over the governments of small countries. That was one of the factors that led to the coining of the phrase banana republic.
After more than two hundred years of social unrest, violent conflict, and revolution, Central America today remains in a period of political transformation. Poverty, social injustice, and violence are still widespread. Nicaragua is the second poorest country in the western hemisphere (only Haiti is poorer).
Central America is a part of North America consisting of a tapering isthmus running from the southern extent of Mexico to the northwestern portion of South America. Central America has the Gulf of Mexico, a body of water within the Atlantic Ocean, to the north; the Caribbean Sea, also part of the Atlantic Ocean, to the northeast; and the Pacific Ocean to the southwest. Some physiographists define the Isthmus of Tehuantepec as the northern geographic border of Central America, while others use the northwestern borders of Belize and Guatemala. From there, the Central American land mass extends southeastward to the Atrato River, where it connects to the Pacific Lowlands in northwestern South America.
Of the many mountain ranges within Central America, the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia and the Cordillera de Talamanca. At 4,220 meters (13,850 ft), Volcán Tajumulco is the highest peak in Central America.
Between the mountain ranges lie fertile valleys that are suitable for the raising of livestock and for the production of coffee, tobacco, beans and other crops. Most of the population of Honduras, Costa Rica and Guatemala lives in valleys.
Trade winds have a significant effect upon the climate of Central America. Temperatures in Central America are highest just prior to the summer wet season, and are lowest during the winter dry season, when trade winds contribute to a cooler climate. The highest temperatures occur in April, due to higher levels of sunlight, lower cloud cover and a decrease in trade winds.
Central America is part of the Mesoamerican biodiversity hotspot, boasting 7% of the world's biodiversity. The Pacific Flyway is a major north–south flyway for migratory birds in the Americas, extending from Alaska to Tierra del Fuego. Due to the funnel-like shape of its land mass, migratory birds can be seen in very high concentrations in Central America, especially in the spring and autumn. As a bridge between North America and South America, Central America has many species from the Nearctic and the Neotropical realms. However, the southern countries (Costa Rica and Panama) of the region have more biodiversity than the northern countries (Guatemala and Belize), while the central countries (Honduras, Nicaragua and El Salvador) have the least biodiversity.
Over 300 species of the region's flora and fauna are threatened, 107 of which are classified as critically endangered. The underlying problems are deforestation, which is estimated by FAO at 1.2% per year in Central America and Mexico combined, fragmentation of rainforests and the fact that 80% of the vegetation in Central America has already been converted to agriculture.
Efforts to protect fauna and flora in the region are made by creating ecoregions and nature reserves. 36% of Belize's land territory falls under some form of official protected status, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas. In addition, 13% of Belize's marine territory is also protected. A large coral reef extends from Mexico to Honduras: the Mesoamerican Barrier Reef System. The Belize Barrier Reef is part of this. The Belize Barrier Reef is home to a large diversity of plants and animals, and is one of the most diverse ecosystems of the world. It is home to 70 hard coral species, 36 soft coral species, 500 species of fish and hundreds of invertebrate species. So far only about 10% of the species in the Belize Barrier Reef have been discovered.
From 2001 to 2010, 5,376 square kilometers (2,076 sq mi) of forest were lost in the region. In 2010 Belize had 63% of remaining forest cover, Costa Rica 46%, Panama 45%, Honduras 41%, Guatemala 37%, Nicaragua 29%, and El Salvador 21%. Most of the loss occurred in the moist forest biome, with 12,201 square kilometers (4,711 sq mi). Woody vegetation loss was partially offset by a gain in the coniferous forest biome with 4,730 square kilometers (1,830 sq mi), and a gain in the dry forest biome at 2,054 square kilometers (793 sq mi). Mangroves and deserts contributed only 1% to the loss in forest vegetation. The bulk of the deforestation was located at the Caribbean slopes of Nicaragua with a loss of 8,574 square kilometers (3,310 sq mi) of forest in the period from 2001 to 2010. The most significant regrowth of 3,050 square kilometers (1,180 sq mi) of forest was seen in the coniferous woody vegetation of Honduras.
The Central American pine-oak forests ecoregion, in the tropical and subtropical coniferous forests biome, is found in Central America and southern Mexico. The Central American pine-oak forests occupy an area of 111,400 square kilometers (43,000 sq mi), extending along the mountainous spine of Central America, extending from the Sierra Madre de Chiapas in Mexico's Chiapas state through the highlands of Guatemala, El Salvador, and Honduras to central Nicaragua. The pine-oak forests lie between 600–1,800 metres (2,000–5,900 ft) elevation, and are surrounded at lower elevations by tropical moist forests and tropical dry forests. Higher elevations above 1,800 metres (5,900 ft) are usually covered with Central American montane forests. The Central American pine-oak forests are composed of many species characteristic of temperate North America including oak, pine, fir, and cypress.
Laurel forest is the most common type of Central American temperate evergreen cloud forest, found in almost all Central American countries, normally more than 1,000 meters (3,300 ft) above sea level. Tree species include evergreen oaks, members of the laurel family, species of Weinmannia and Magnolia, and Drimys granadensis. The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua, cloud forests are situated near the border with Honduras, but many were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica, there are laurel forests in the Cordillera de Tilarán and Volcán Arenal, called Monteverde, also in the Cordillera de Talamanca.
The Central American montane forests are an ecoregion of the tropical and subtropical moist broadleaf forests biome, as defined by the World Wildlife Fund. These forests are of the moist deciduous and the semi-evergreen seasonal subtype of tropical and subtropical moist broadleaf forests and receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Central American montane forests consist of forest patches located at altitudes ranging from 1,800–4,000 metres (5,900–13,100 ft), on the summits and slopes of the highest mountains in Central America ranging from Southern Mexico, through Guatemala, El Salvador, and Honduras, to northern Nicaragua. The entire ecoregion covers an area of 13,200 square kilometers (5,100 sq mi) and has a temperate climate with relatively high precipitation levels.
Ecoregions are not only established to protect the forests themselves but also because they are habitats for an incomparably rich and often endemic fauna. Almost half of the bird population of the Talamancan montane forests in Costa Rica and Panama are endemic to this region. Several birds are listed as threatened, most notably the resplendent quetzal (Pharomacrus mocinno), three-wattled bellbird (Procnias tricarunculata), bare-necked umbrellabird (Cephalopterus glabricollis), and black guan (Chamaepetes unicolor). Many of the amphibians are endemic and depend on the existence of forest. The golden toad that once inhabited a small region in the Monteverde Reserve, which is part of the Talamancan montane forests, has not been seen alive since 1989 and is listed as extinct by IUCN. The exact causes for its extinction are unknown. Global warming may have played a role, because the conditions on which the development of this frog depended in its small range may have been compromised. Seven small mammals are endemic to the Costa Rica-Chiriqui highlands within the Talamancan montane forest region. Jaguars, cougars, spider monkeys, tapirs, and anteaters live in the woods of Central America. The Central American red brocket is a brocket deer found in Central America's tropical forest.
Central America is geologically very active, with volcanic eruptions and earthquakes occurring frequently, and tsunamis occurring occasionally. Many thousands of people have died as a result of these natural disasters.
Most of Central America rests atop the Caribbean Plate. This tectonic plate converges with the Cocos, Nazca, and North American plates to form the Middle America Trench, a major subduction zone. The Middle America Trench is situated some 60–160 kilometers (37–99 mi) off the Pacific coast of Central America and runs roughly parallel to it. Many large earthquakes have occurred as a result of seismic activity at the Middle America Trench. For example, subduction of the Cocos Plate beneath the North American Plate at the Middle America Trench is believed to have caused the 1985 Mexico City earthquake that killed as many as 40,000 people. Seismic activity at the Middle America Trench is also responsible for earthquakes in 1902, 1942, 1956, 1982, 1992, January 2001, February 2001, 2007, 2012, 2014, and many other earthquakes throughout Central America.
The Middle America Trench is not the only source of seismic activity in Central America. The Motagua Fault is an onshore continuation of the Cayman Trough which forms part of the tectonic boundary between the North American Plate and the Caribbean Plate. This transform fault cuts right across Guatemala and then continues offshore until it merges with the Middle America Trench along the Pacific coast of Mexico, near Acapulco. Seismic activity at the Motagua Fault has been responsible for earthquakes in 1717, 1773, 1902, 1976, 1980, and 2009.
Another onshore continuation of the Cayman Trough is the Chixoy-Polochic Fault, which runs parallel to, and roughly 80 kilometers (50 mi) to the north, of the Motagua Fault. Though less active than the Motagua Fault, seismic activity at the Chixoy-Polochic Fault is still thought to be capable of producing very large earthquakes, such as the 1816 earthquake of Guatemala.
Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972.
Volcanic eruptions are also common in Central America. In 1968 the Arenal Volcano, in Costa Rica, erupted, killing 87 people as the three villages of Tabacon, Pueblo Nuevo and San Luis were buried under pyroclastic flows and debris. Fertile soils from weathered volcanic lava have made it possible to sustain dense populations in the agriculturally productive highland areas.
List of countries by life expectancy at birth for 2021, according to the World Bank Group; the list of Central American countries is expanded by Mexico and Colombia.
The population of Central America is estimated at 50,956,791 as of 2021. With an area of 523,780 square kilometers (202,230 sq mi), it has a population density of 97.3 per square kilometre (252 per square mile). Human Development Index values are from the estimates for 2017.
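The quoted density follows directly from the stated population and area (an illustrative check, not from the source):

```python
population = 50_956_791   # 2021 estimate
area_km2 = 523_780        # area in square kilometers

print(round(population / area_km2, 1))           # 97.3 people per km2
print(round(population / (area_km2 / 2.58999)))  # about 252 per square mile
```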
Spanish is the official language in all Central American countries except Belize, where the official language is English. Mayan languages constitute a language family consisting of about 26 related languages. Guatemala formally recognized 21 of these in 1996. Xinca, Miskito, and Garifuna are also present in Central America.
This region of the continent is very rich in terms of ethnic groups. The majority of the population is mestizo, with sizable Mayan and African descendent populations present, along with numerous other indigenous groups such as the Miskito people. The immigration of Arabs, Jews, Chinese, Europeans and others brought additional groups to the area.
The predominant religion in Central America is Christianity (95.6%). Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion.
Source: Jason Mandrik, Operation World Statistics (2020).
Central America is currently undergoing a process of political, economic and cultural transformation that started in 1907 with the creation of the Central American Court of Justice.
In 1951 the integration process continued with the signature of the San Salvador Treaty, which created the ODECA, the Organization of Central American States. However, the unity of the ODECA was limited by conflicts between several member states.
In 1991, the integration agenda was further advanced by the creation of the Central American Integration System (Sistema para la Integración Centroamericana, or SICA). SICA provides a clear legal basis to avoid disputes between the member states. SICA membership includes the 7 nations of Central America plus the Dominican Republic, a state that is traditionally considered part of the Caribbean.
On 6 December 2008, SICA announced an agreement to pursue a common currency and common passport for the member nations. No timeline for implementation was discussed.
Central America already has several supranational institutions such as the Central American Parliament, the Central American Bank for Economic Integration and the Central American Common Market.
On 22 July 2011, President Mauricio Funes of El Salvador became the first president pro tempore to SICA. El Salvador also became the headquarters of SICA with the inauguration of a new building.
Until recently, all Central American countries maintained diplomatic relations with Taiwan instead of China. President Óscar Arias of Costa Rica, however, established diplomatic relations with China in 2007, severing formal diplomatic ties with Taiwan. After breaking off relations with the Republic of China in 2017, Panama established diplomatic relations with the People's Republic of China. In August 2018, El Salvador also severed ties with Taiwan to formally recognize the People's Republic of China as the sole China, a move many considered lacked transparency due to its abruptness and reports of the Chinese government's desire to invest in the department of La Union while also promising to fund the ruling party's reelection campaign. On 9 December 2021, Nicaragua resumed relations with the PRC.
The Central American Parliament (aka PARLACEN) is a political and parliamentary body of SICA. The parliament started around 1980, and its primary goal was to resolve conflicts in Nicaragua, Guatemala, and El Salvador. Although the group was disbanded in 1986, ideas of unity of Central Americans still remained, so a treaty was signed in 1987 to create the Central American Parliament and other political bodies. Its original members were Guatemala, El Salvador, Nicaragua and Honduras. The parliament is the political organ of Central America, and is part of SICA. New members have since then joined including Panama and the Dominican Republic.
Costa Rica is not a member state of the Central American Parliament, and accession remains a very unpopular topic at all levels of Costa Rican society, owing to strong political criticism of the regional parliament, which Costa Ricans regard as a threat to democratic accountability and to the effectiveness of integration efforts. Excessively high salaries for its members, legal immunity of jurisdiction from any member state, corruption, the lack of binding force and effectiveness in the regional parliament's decisions, high operating costs, and the immediate membership granted to Central American presidents once they leave office are the most common reasons invoked by Costa Ricans against the Central American Parliament.
Signed in 2004, the Central American Free Trade Agreement (CAFTA) is an agreement between the United States, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic. The treaty is aimed at promoting free trade among its members.
Guatemala has the largest economy in the region. Its main exports are coffee, sugar, bananas, petroleum, clothing, and cardamom. Of its 10.29 billion dollar annual exports, 40.2% go to the United States, 11.1% to neighboring El Salvador, 8% to Honduras, 5.5% to Mexico, 4.7% to Nicaragua, and 4.3% to Costa Rica.
The region is particularly attractive for companies (especially clothing companies) because of its geographical proximity to the United States, very low wages and considerable tax advantages. In addition, the decline in the prices of coffee and other export products and the structural adjustment measures promoted by the international financial institutions have partly ruined agriculture, favouring the emergence of maquiladoras. This sector accounts for 42 per cent of total exports from El Salvador, 55 per cent from Guatemala, and 65 per cent from Honduras. However, its contribution to the economies of these countries is disputed; raw materials are imported, jobs are precarious and low-paid, and tax exemptions weaken public finances.
They are also criticised for the working conditions of employees: insults and physical violence, abusive dismissals (especially of pregnant workers), excessive working hours, and non-payment of overtime. According to Lucrecia Bautista, coordinator of the maquilas sector of the audit firm Coverco, labour law regulations are regularly violated in maquilas and there is no political will to enforce their application. In the case of infringements, the labour inspectorate shows remarkable leniency, so as not to discourage investors. Trade unionists are subject to pressure, and sometimes to kidnapping or murder. In some cases, business leaders have used the services of the maras. Finally, blacklists containing the names of trade unionists or political activists circulate in employers' circles.
Economic growth in Central America is projected to slow slightly in 2014–15, as country-specific domestic factors offset the positive effects from stronger economic activity in the United States.
Tourism in Belize has grown considerably in more recent times, and it is now the second largest industry in the nation. Belizean Prime Minister Dean Barrow has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Belize's tourism-driven economy have been significant, with the nation welcoming almost one million tourists in a calendar year for the first time in its history in 2012. Belize is also the only country in Central America with English as its official language, making this country a comfortable destination for English-speaking tourists.
Costa Rica is the most visited nation in Central America. Tourism in Costa Rica is one of the fastest growing economic sectors of the country, having become the largest source of foreign revenue by 1995. Since 1999, tourism has earned more foreign exchange than bananas, pineapples and coffee exports combined. The tourism boom began in 1987, with the number of visitors up from 329,000 in 1988, through 1.03 million in 1999, to a historical record of 2.43 million foreign visitors and $1.92 billion in revenue in 2013. In 2012, tourism contributed 12.5% of the country's GDP and was responsible for 11.7% of direct and indirect employment.
Tourism in Nicaragua has grown considerably recently, and it is now the second largest industry in the nation. Nicaraguan President Daniel Ortega has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Nicaragua's tourism-driven economy have been significant, with the nation welcoming one million tourists in a calendar year for the first time in its history in 2010.
The Inter-American Highway is the Central American section of the Pan-American Highway, and spans 5,470 kilometers (3,400 mi) between Nuevo Laredo, Mexico, and Panama City, Panama. Because of the 87-kilometer (54 mi) break in the highway known as the Darién Gap, it is not possible to cross between Central America and South America in an automobile. | [
{
"paragraph_id": 0,
"text": "Central America is a subregion of the Americas, frequently considered part of North America. Its political boundaries are defined as bordering Mexico to the north, Colombia to the south, the Caribbean Sea to the east, and the Pacific Ocean to the west. Central America usually consists of seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Within Central America is the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a high amount of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury, and property damage.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the pre-Columbian era, Central America was inhabited by the Indigenous peoples of Mesoamerica to the north and west and the Isthmo-Colombian peoples to the south and east. Following the Spanish expedition of Christopher Columbus' voyages to the Americas, Spain began to colonize the Americas. From 1609 to 1821, the majority of Central American territories (except for what would become Belize and Panama, and including the modern Mexican state of Chiapas) were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence from Spain. On 15 September 1821, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire and provide for the establishment of a new Central American state. Some of New Spain's provinces in the Central American region (i.e. what would become Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica) were annexed to the First Mexican Empire; however in 1823 they seceded from Mexico to form the Federal Republic of Central America until 1838.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1838, Costa Rica, Guatemala, Honduras, and Nicaragua became the first of Central America's seven states to become independent countries, followed by El Salvador in 1841, Panama in 1903, and Belize in 1981. Despite the dissolution of the Federal Republic of Central America, countries like Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua continue to maintain a Central American identity. The Belizeans are usually identified as culturally Caribbean rather than Central American, while the Panamanians identify themselves more broadly with their South American neighbours.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Spanish-speaking countries officially include both North America and South America as a single continent, América, which is split into four subregions: North America (Northern America and Mexico), Central America, South America, and Insular America (the West Indies).",
"title": ""
},
{
"paragraph_id": 4,
"text": "\"Central America\" may mean different things to various people, based upon different contexts:",
"title": "Different definitions"
},
{
"paragraph_id": 5,
"text": "Central America was formed more than 3 million years ago, as part of the Isthmus of Panama, when its portion of land connected each side of water.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the Pre-Columbian era, the northern areas of Central America were inhabited by the indigenous peoples of Mesoamerica. Most notable among these were the Mayans, who had built numerous cities throughout the region, and the Aztecs, who had created a vast empire. The pre-Columbian cultures of eastern El Salvador, eastern Honduras, Caribbean Nicaragua, most of Costa Rica and Panama were predominantly speakers of the Chibchan languages at the time of European contact and are considered by some culturally different and grouped in the Isthmo-Colombian Area.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Following the Spanish expedition of Christopher Columbus's voyages to the Americas, the Spanish sent many expeditions to the region, and they began their conquest of Maya territory in 1523. Soon after the conquest of the Aztec Empire, Spanish conquistador Pedro de Alvarado commenced the conquest of northern Central America for the Spanish Empire. Beginning with his arrival in Soconusco in 1523, Alvarado's forces systematically conquered and subjugated most of the major Maya kingdoms, including the K'iche', Tz'utujil, Pipil, and the Kaqchikel. By 1528, the conquest of Guatemala was nearly complete, with only the Petén Basin remaining outside the Spanish sphere of influence. The last independent Maya kingdoms – the Kowoj and the Itza people – were finally defeated in 1697, as part of the Spanish conquest of Petén.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1538, Spain established the Real Audiencia of Panama, which had jurisdiction over all land from the Strait of Magellan to the Gulf of Fonseca. This entity was dissolved in 1543, and most of the territory within Central America then fell under the jurisdiction of the Audiencia Real de Guatemala. This area included the current territories of Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Mexican state of Chiapas, but excluded the lands that would become Belize and Panama. The president of the Audiencia, which had its seat in Antigua Guatemala, was the governor of the entire area. In 1609 the area became a captaincy general and the governor was also granted the title of captain general. The Captaincy General of Guatemala encompassed most of Central America, with the exception of present-day Belize and Panama.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Captaincy General of Guatemala lasted for more than two centuries, but began to fray after a rebellion in 1811 which began in the Intendancy of San Salvador. The Captaincy General formally ended on 15 September 1821, with the signing of the Act of Independence of Central America. Mexican independence was achieved at virtually the same time with the signing of the Treaty of Córdoba and the Declaration of Independence of the Mexican Empire, and the entire region was finally independent from Spanish authority by 28 September 1821.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "From its independence from Spain in 1821 until 1823, the former Captaincy General remained intact as part of the short-lived First Mexican Empire. When the Emperor of Mexico abdicated on 19 March 1823, Central America again became independent. On 1 July 1823, the Congress of Central America peacefully seceded from Mexico and declared absolute independence from all foreign nations, and the region formed the Federal Republic of Central America.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The Federal Republic of Central America was a representative democracy with its capital at Guatemala City. This union consisted of the provinces of Costa Rica, El Salvador, Guatemala, Honduras, Los Altos, Mosquito Coast, and Nicaragua. The lowlands of southwest Chiapas, including Soconusco, initially belonged to the Republic until 1824, when Mexico annexed most of Chiapas and began its claims to Soconusco. The Republic lasted from 1823 to 1838, when it disintegrated as a result of civil wars.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The territory that now makes up Belize was heavily contested in a dispute that continued for decades after Guatemala achieved independence. Spain, and later Guatemala, considered this land a Guatemalan department. In 1862, Britain formally declared it a British colony and named it British Honduras. It became independent as Belize in 1981.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Panama, situated in the southernmost part of Central America on the Isthmus of Panama, has for most of its history been culturally and politically linked to South America. Panama was part of the Province of Tierra Firme from 1510 until 1538 when it came under the jurisdiction of the newly formed Audiencia Real de Panama. Beginning in 1543, Panama was administered as part of the Viceroyalty of Peru, along with all other Spanish possessions in South America. Panama remained as part of the Viceroyalty of Peru until 1739, when it was transferred to the Viceroyalty of New Granada, the capital of which was located at Santa Fé de Bogotá. Panama remained as part of the Viceroyalty of New Granada until the disestablishment of that viceroyalty in 1819. A series of military and political struggles took place from that time until 1822, the result of which produced the republic of Gran Colombia. After the dissolution of Gran Colombia in 1830, Panama became part of a successor state, the Republic of New Granada. From 1855 until 1886, Panama existed as Panama State, first within the Republic of New Granada, then within the Granadine Confederation, and finally within the United States of Colombia. The United States of Colombia was replaced by the Republic of Colombia in 1886. As part of the Republic of Colombia, Panama State was abolished and it became the Isthmus Department. Despite the many political reorganizations, Colombia was still deeply plagued by conflict, which eventually led to the secession of Panama on 3 November 1903. Only after that time did some begin to regard Panama as a North or Central American entity.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "By the 1930s the United Fruit Company owned 14,000 square kilometres (3.5 million acres) of land in Central America and the Caribbean and was the single largest land owner in Guatemala. Such holdings gave it great power over the governments of small countries. That was one of the factors that led to the coining of the phrase banana republic.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "After more than two hundred years of social unrest, violent conflict, and revolution, Central America today remains in a period of political transformation. Poverty, social injustice, and violence are still widespread. Nicaragua is the second poorest country in the western hemisphere (only Haiti is poorer).",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Central America is a part of North America consisting of a tapering isthmus running from the southern extent of Mexico to the northwestern portion of South America. Central America has the Gulf of Mexico, a body of water within the Atlantic Ocean, to the north; the Caribbean Sea, also part of the Atlantic Ocean, to the northeast; and the Pacific Ocean to the southwest. Some physiographists define the Isthmus of Tehuantepec as the northern geographic border of Central America, while others use the northwestern borders of Belize and Guatemala. From there, the Central American land mass extends southeastward to the Atrato River, where it connects to the Pacific Lowlands in northwestern South America.",
"title": "Geography"
},
{
"paragraph_id": 17,
"text": "Of the many mountain ranges within Central America, the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia and the Cordillera de Talamanca. At 4,220 meters (13,850 ft), Volcán Tajumulco is the highest peak in Central America. Other high points of Central America are as listed in the table below:",
"title": "Geography"
},
{
"paragraph_id": 18,
"text": "Between the mountain ranges lie fertile valleys that are suitable for the raising of livestock and for the production of coffee, tobacco, beans and other crops. Most of the population of Honduras, Costa Rica and Guatemala lives in valleys.",
"title": "Geography"
},
{
"paragraph_id": 19,
"text": "Trade winds have a significant effect upon the climate of Central America. Temperatures in Central America are highest just prior to the summer wet season, and are lowest during the winter dry season, when trade winds contribute to a cooler climate. The highest temperatures occur in April, due to higher levels of sunlight, lower cloud cover and a decrease in trade winds.",
"title": "Geography"
},
{
"paragraph_id": 20,
"text": "Central America is part of the Mesoamerican biodiversity hotspot, boasting 7% of the world's biodiversity. The Pacific Flyway is a major north–south flyway for migratory birds in the Americas, extending from Alaska to Tierra del Fuego. Due to the funnel-like shape of its land mass, migratory birds can be seen in very high concentrations in Central America, especially in the spring and autumn. As a bridge between North America and South America, Central America has many species from the Nearctic and the Neotropical realms. However the southern countries (Costa Rica and Panama) of the region have more biodiversity than the northern countries (Guatemala and Belize), meanwhile the central countries (Honduras, Nicaragua and El Salvador) have the least biodiversity. The table below shows recent statistics:",
"title": "Geography"
},
{
"paragraph_id": 21,
"text": "Over 300 species of the region's flora and fauna are threatened, 107 of which are classified as critically endangered. The underlying problems are deforestation, which is estimated by FAO at 1.2% per year in Central America and Mexico combined, fragmentation of rainforests and the fact that 80% of the vegetation in Central America has already been converted to agriculture.",
"title": "Geography"
},
{
"paragraph_id": 22,
"text": "Efforts to protect fauna and flora in the region are made by creating ecoregions and nature reserves. 36% of Belize's land territory falls under some form of official protected status, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas. In addition, 13% of Belize's marine territory are also protected. A large coral reef extends from Mexico to Honduras: the Mesoamerican Barrier Reef System. The Belize Barrier Reef is part of this. The Belize Barrier Reef is home to a large diversity of plants and animals, and is one of the most diverse ecosystems of the world. It is home to 70 hard coral species, 36 soft coral species, 500 species of fish and hundreds of invertebrate species. So far only about 10% of the species in the Belize barrier reef have been discovered.",
"title": "Geography"
},
{
"paragraph_id": 23,
"text": "From 2001 to 2010, 5,376 square kilometers (2,076 sq mi) of forest were lost in the region. In 2010 Belize had 63% of remaining forest cover, Costa Rica 46%, Panama 45%, Honduras 41%, Guatemala 37%, Nicaragua 29%, and El Salvador 21%. Most of the loss occurred in the moist forest biome, with 12,201 square kilometers (4,711 sq mi). Woody vegetation loss was partially set off by a gain in the coniferous forest biome with 4,730 square kilometers (1,830 sq mi), and a gain in the dry forest biome at 2,054 square kilometers (793 sq mi). Mangroves and deserts contributed only 1% to the loss in forest vegetation. The bulk of the deforestation was located at the Caribbean slopes of Nicaragua with a loss of 8,574 square kilometers (3,310 sq mi) of forest in the period from 2001 to 2010. The most significant regrowth of 3,050 square kilometers (1,180 sq mi) of forest was seen in the coniferous woody vegetation of Honduras.",
"title": "Geography"
},
{
"paragraph_id": 24,
"text": "The Central American pine-oak forests ecoregion, in the tropical and subtropical coniferous forests biome, is found in Central America and southern Mexico. The Central American pine-oak forests occupy an area of 111,400 square kilometers (43,000 sq mi), extending along the mountainous spine of Central America, extending from the Sierra Madre de Chiapas in Mexico's Chiapas state through the highlands of Guatemala, El Salvador, and Honduras to central Nicaragua. The pine-oak forests lie between 600–1,800 metres (2,000–5,900 ft) elevation, and are surrounded at lower elevations by tropical moist forests and tropical dry forests. Higher elevations above 1,800 metres (5,900 ft) are usually covered with Central American montane forests. The Central American pine-oak forests are composed of many species characteristic of temperate North America including oak, pine, fir, and cypress.",
"title": "Geography"
},
{
"paragraph_id": 25,
"text": "Laurel forest is the most common type of Central American temperate evergreen cloud forest, found in almost all Central American countries, normally more than 1,000 meters (3,300 ft) above sea level. Tree species include evergreen oaks, members of the laurel family, species of Weinmannia and Magnolia, and Drimys granadensis. The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua, cloud forests are situated near the border with Honduras, but many were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica, there are laurel forests in the Cordillera de Tilarán and Volcán Arenal, called Monteverde, also in the Cordillera de Talamanca.",
"title": "Geography"
},
{
"paragraph_id": 26,
"text": "The Central American montane forests are an ecoregion of the tropical and subtropical moist broadleaf forests biome, as defined by the World Wildlife Fund. These forests are of the moist deciduous and the semi-evergreen seasonal subtype of tropical and subtropical moist broadleaf forests and receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Central American montane forests consist of forest patches located at altitudes ranging from 1,800–4,000 metres (5,900–13,100 ft), on the summits and slopes of the highest mountains in Central America ranging from Southern Mexico, through Guatemala, El Salvador, and Honduras, to northern Nicaragua. The entire ecoregion covers an area of 13,200 square kilometers (5,100 sq mi) and has a temperate climate with relatively high precipitation levels.",
"title": "Geography"
},
{
"paragraph_id": 27,
"text": "Ecoregions are not only established to protect the forests themselves but also because they are habitats for an incomparably rich and often endemic fauna. Almost half of the bird population of the Talamancan montane forests in Costa Rica and Panama are endemic to this region. Several birds are listed as threatened, most notably the resplendent quetzal (Pharomacrus mocinno), three-wattled bellbird (Procnias tricarunculata), bare-necked umbrellabird (Cephalopterus glabricollis), and black guan (Chamaepetes unicolor). Many of the amphibians are endemic and depend on the existence of forest. The golden toad that once inhabited a small region in the Monteverde Reserve, which is part of the Talamancan montane forests, has not been seen alive since 1989 and is listed as extinct by IUCN. The exact causes for its extinction are unknown. Global warming may have played a role, because the development of that frog is typical for this area may have been compromised. Seven small mammals are endemic to the Costa Rica-Chiriqui highlands within the Talamancan montane forest region. Jaguars, cougars, spider monkeys, as well as tapirs, and anteaters live in the woods of Central America. The Central American red brocket is a brocket deer found in Central America's tropical forest.",
"title": "Geography"
},
{
"paragraph_id": 28,
"text": "Central America is geologically very active, with volcanic eruptions and earthquakes occurring frequently, and tsunamis occurring occasionally. Many thousands of people have died as a result of these natural disasters.",
"title": "Geography"
},
{
"paragraph_id": 29,
"text": "Most of Central America rests atop the Caribbean Plate. This tectonic plate converges with the Cocos, Nazca, and North American plates to form the Middle America Trench, a major subduction zone. The Middle America Trench is situated some 60–160 kilometers (37–99 mi) off the Pacific coast of Central America and runs roughly parallel to it. Many large earthquakes have occurred as a result of seismic activity at the Middle America Trench. For example, subduction of the Cocos Plate beneath the North American Plate at the Middle America Trench is believed to have caused the 1985 Mexico City earthquake that killed as many as 40,000 people. Seismic activity at the Middle America Trench is also responsible for earthquakes in 1902, 1942, 1956, 1982, 1992, January 2001, February 2001, 2007, 2012, 2014, and many other earthquakes throughout Central America.",
"title": "Geography"
},
{
"paragraph_id": 30,
"text": "The Middle America Trench is not the only source of seismic activity in Central America. The Motagua Fault is an onshore continuation of the Cayman Trough which forms part of the tectonic boundary between the North American Plate and the Caribbean Plate. This transform fault cuts right across Guatemala and then continues offshore until it merges with the Middle America Trench along the Pacific coast of Mexico, near Acapulco. Seismic activity at the Motagua Fault has been responsible for earthquakes in 1717, 1773, 1902, 1976, 1980, and 2009.",
"title": "Geography"
},
{
"paragraph_id": 31,
"text": "Another onshore continuation of the Cayman Trough is the Chixoy-Polochic Fault, which runs parallel to, and roughly 80 kilometers (50 mi) to the north, of the Motagua Fault. Though less active than the Motagua Fault, seismic activity at the Chixoy-Polochic Fault is still thought to be capable of producing very large earthquakes, such as the 1816 earthquake of Guatemala.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972.",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "Volcanic eruptions are also common in Central America. In 1968 the Arenal Volcano, in Costa Rica, erupted killing 87 people as the 3 villages of Tabacon, Pueblo Nuevo and San Luis were buried under pyroclastic flows and debris. Fertile soils from weathered volcanic lava have made it possible to sustain dense populations in the agriculturally productive highland areas.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "List of countries by life expectancy at birth for 2021, according to the World Bank Group. List of Central American countries is expanded by Mexico and Colombia.",
"title": "Demographics"
},
{
"paragraph_id": 35,
"text": "The population of Central America is estimated at 50,956,791 as of 2021. With an area of 523,780 square kilometers (202,230 sq mi), it has a population density of 97.3 per square kilometre (252 per square mile). Human Development Index values are from the estimates for 2017.",
"title": "Demographics"
},
{
"paragraph_id": 36,
"text": "The official language majority in all Central American countries is Spanish, except in Belize, where the official language is English. Mayan languages constitute a language family consisting of about 26 related languages. Guatemala formally recognized 21 of these in 1996. Xinca, Miskito, and Garifuna are also present in Central America.",
"title": "Demographics"
},
{
"paragraph_id": 37,
"text": "This region of the continent is very rich in terms of ethnic groups. The majority of the population is mestizo, with sizable Mayan and African descendent populations present, along with numerous other indigenous groups such as the Miskito people. The immigration of Arabs, Jews, Chinese, Europeans and others brought additional groups to the area.",
"title": "Demographics"
},
{
"paragraph_id": 38,
"text": "The predominant religion in Central America is Christianity (95.6%). Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion.",
"title": "Demographics"
},
{
"paragraph_id": 39,
"text": "Source: Jason Mandrik, Operation World Statistics (2020).",
"title": "Demographics"
},
{
"paragraph_id": 40,
"text": "Central America is currently undergoing a process of political, economic and cultural transformation that started in 1907 with the creation of the Central American Court of Justice.",
"title": "Politics"
},
{
"paragraph_id": 41,
"text": "In 1951 the integration process continued with the signature of the San Salvador Treaty, which created the ODECA, the Organization of Central American States. However, the unity of the ODECA was limited by conflicts between several member states.",
"title": "Politics"
},
{
"paragraph_id": 42,
"text": "In 1991, the integration agenda was further advanced by the creation of the Central American Integration System (Sistema para la Integración Centroamericana, or SICA). SICA provides a clear legal basis to avoid disputes between the member states. SICA membership includes the 7 nations of Central America plus the Dominican Republic, a state that is traditionally considered part of the Caribbean.",
"title": "Politics"
},
{
"paragraph_id": 43,
"text": "On 6 December 2008, SICA announced an agreement to pursue a common currency and common passport for the member nations. No timeline for implementation was discussed.",
"title": "Politics"
},
{
"paragraph_id": 44,
"text": "Central America already has several supranational institutions such as the Central American Parliament, the Central American Bank for Economic Integration and the Central American Common Market.",
"title": "Politics"
},
{
"paragraph_id": 45,
"text": "On 22 July 2011, President Mauricio Funes of El Salvador became the first president pro tempore to SICA. El Salvador also became the headquarters of SICA with the inauguration of a new building.",
"title": "Politics"
},
{
"paragraph_id": 46,
"text": "Until recently, all Central American countries maintained diplomatic relations with Taiwan instead of China. President Óscar Arias of Costa Rica, however, established diplomatic relations with China in 2007, severing formal diplomatic ties with Taiwan. After breaking off relations with the Republic of China in 2017, Panama established diplomatic relations with the People's Republic of China. In August 2018, El Salvador also severed ties with Taiwan to formally start recognizing the People's Republic of China as sole China, a move many considered lacked transparency due to its abruptness and reports of the Chinese government's desires to invest in the department of La Union while also promising to fund the ruling party's reelection campaign. The President of El Salvador, Nayib Bukele, broke diplomatic relations with Taiwan and established ties with China. On 9 December 2021, Nicaragua resumed relations with the PRC.",
"title": "Politics"
},
{
"paragraph_id": 47,
"text": "The Central American Parliament (aka PARLACEN) is a political and parliamentary body of SICA. The parliament started around 1980, and its primary goal was to resolve conflicts in Nicaragua, Guatemala, and El Salvador. Although the group was disbanded in 1986, ideas of unity of Central Americans still remained, so a treaty was signed in 1987 to create the Central American Parliament and other political bodies. Its original members were Guatemala, El Salvador, Nicaragua and Honduras. The parliament is the political organ of Central America, and is part of SICA. New members have since then joined including Panama and the Dominican Republic.",
"title": "Politics"
},
{
"paragraph_id": 48,
"text": "Costa Rica is not a member State of the Central American Parliament and its adhesion remains as a very unpopular topic at all levels of the Costa Rican society due to existing strong political criticism towards the regional parliament, since it is regarded by Costa Ricans as a menace to democratic accountability and effectiveness of integration efforts. Excessively high salaries for its members, legal immunity of jurisdiction from any member State, corruption, lack of a binding nature and effectiveness of the regional parliament's decisions, high operative costs and immediate membership of Central American Presidents once they leave their office and presidential terms, are the most common reasons invoked by Costa Ricans against the Central American Parliament.",
"title": "Politics"
},
{
"paragraph_id": 49,
"text": "Signed in 2004, the Central American Free Trade Agreement (CAFTA) is an agreement between the United States, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic. The treaty is aimed at promoting free trade among its members.",
"title": "Economy"
},
{
"paragraph_id": 50,
"text": "Guatemala has the largest economy in the region. Its main exports are coffee, sugar, bananas, petroleum, clothing, and cardamom. Of its 10.29 billion dollar annual exports, 40.2% go to the United States, 11.1% to neighboring El Salvador, 8% to Honduras, 5.5% to Mexico, 4.7% to Nicaragua, and 4.3% to Costa Rica.",
"title": "Economy"
},
{
"paragraph_id": 51,
"text": "The region is particularly attractive for companies (especially clothing companies) because of its geographical proximity to the United States, very low wages and considerable tax advantages. In addition, the decline in the prices of coffee and other export products and the structural adjustment measures promoted by the international financial institutions have partly ruined agriculture, favouring the emergence of maquiladoras. This sector accounts for 42 per cent of total exports from El Salvador, 55 per cent from Guatemala, and 65 per cent from Honduras. However, its contribution to the economies of these countries is disputed; raw materials are imported, jobs are precarious and low-paid, and tax exemptions weaken public finances.",
"title": "Economy"
},
{
"paragraph_id": 52,
"text": "They are also criticised for the working conditions of employees: insults and physical violence, abusive dismissals (especially of pregnant workers), working hours, non-payment of overtime. According to Lucrecia Bautista, coordinator of the maquilas sector of the audit firm Coverco, labour law regulations are regularly violated in maquilas and there is no political will to enforce their application. In the case of infringements, the labour inspectorate shows remarkable leniency. It is a question of not discouraging investors. Trade unionists are subject to pressure, and sometimes to kidnapping or murder. In some cases, business leaders have used the services of the maras. Finally, black lists containing the names of trade unionists or political activists are circulating in employers' circles.",
"title": "Economy"
},
{
"paragraph_id": 53,
"text": "Economic growth in Central America is projected to slow slightly in 2014–15, as country-specific domestic factors offset the positive effects from stronger economic activity in the United States.",
"title": "Economy"
},
{
"paragraph_id": 54,
"text": "Tourism in Belize has grown considerably in more recent times, and it is now the second largest industry in the nation. Belizean Prime Minister Dean Barrow has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Belize's tourism-driven economy have been significant, with the nation welcoming almost one million tourists in a calendar year for the first time in its history in 2012. Belize is also the only country in Central America with English as its official language, making this country a comfortable destination for English-speaking tourists.",
"title": "Economy"
},
{
"paragraph_id": 55,
"text": "Costa Rica is the most visited nation in Central America. Tourism in Costa Rica is one of the fastest growing economic sectors of the country, having become the largest source of foreign revenue by 1995. Since 1999, tourism has earned more foreign exchange than bananas, pineapples and coffee exports combined. The tourism boom began in 1987, with the number of visitors up from 329,000 in 1988, through 1.03 million in 1999, to a historical record of 2.43 million foreign visitors and $1.92-billion in revenue in 2013. In 2012 tourism contributed with 12.5% of the country's GDP and it was responsible for 11.7% of direct and indirect employment.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "Tourism in Nicaragua has grown considerably recently, and it is now the second largest industry in the nation. Nicaraguan President Daniel Ortega has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Nicaragua's tourism-driven economy have been significant, with the nation welcoming one million tourists in a calendar year for the first time in its history in 2010.",
"title": "Economy"
},
{
"paragraph_id": 57,
"text": "The Inter-American Highway is the Central American section of the Pan-American Highway, and spans 5,470 kilometers (3,400 mi) between Nuevo Laredo, Mexico, and Panama City, Panama. Because of the 87-kilometer (54 mi) break in the highway known as the Darién Gap, it is not possible to cross between Central America and South America in an automobile.",
"title": "Transport"
}
]
| Central America is a subregion of the Americas, frequently considered part of North America. Its political boundaries are defined as bordering Mexico to the north, Colombia to the south, the Caribbean Sea to the east, and the Pacific Ocean to the west. Central America usually consists of seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Within Central America is the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a high amount of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury, and property damage. In the pre-Columbian era, Central America was inhabited by the Indigenous peoples of Mesoamerica to the north and west and the Isthmo-Colombian peoples to the south and east. Following the Spanish expedition of Christopher Columbus' voyages to the Americas, Spain began to colonize the Americas. From 1609 to 1821, the majority of Central American territories were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence from Spain. On 15 September 1821, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire and provide for the establishment of a new Central American state. Some of New Spain's provinces in the Central American region were annexed to the First Mexican Empire; however in 1823 they seceded from Mexico to form the Federal Republic of Central America until 1838. In 1838, Costa Rica, Guatemala, Honduras, and Nicaragua became the first of Central America's seven states to become independent countries, followed by El Salvador in 1841, Panama in 1903, and Belize in 1981. Despite the dissolution of the Federal Republic of Central America, countries like Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua continue to maintain a Central American identity. The Belizeans are usually identified as culturally Caribbean rather than Central American, while the Panamanians identify themselves more broadly with their South American neighbours. The Spanish-speaking countries officially include both North America and South America as a single continent, América, which is split into four subregions: North America, Central America, South America, and Insular America. | 2001-08-17T14:08:16Z | 2023-12-26T21:43:31Z | [
"Template:Convert",
"Template:Central American volcanoes",
"Template:When",
"Template:In lang",
"Template:Authority control",
"Template:Hatgrp",
"Template:Static row numbers",
"Template:Sort row",
"Template:UN Population",
"Template:Notelist",
"Template:Dead link",
"Template:Central America topic",
"Template:Citation needed",
"Template:Lang",
"Template:Tooltip",
"Template:Clear left",
"Template:Reflist",
"Template:Cite book",
"Template:Music of Central America",
"Template:Latin America topic",
"Template:Central America series",
"Template:Further",
"Template:Infobox geopolitical organization",
"Template:Div col end",
"Template:Use dmy dates",
"Template:See also",
"Template:Val",
"Template:Cbignore",
"Template:Cite web",
"Template:Central American and Caribbean Games",
"Template:Efn",
"Template:Main",
"Template:Flaglist",
"Template:Div col",
"Template:Infobox Continent",
"Template:CN",
"Template:Nts",
"Template:Portal",
"Template:Cite news",
"Template:Sister project links",
"Template:Short description",
"Template:Flag",
"Template:Image frame",
"Template:Multiple image",
"Template:Regions of the world"
]
| https://en.wikipedia.org/wiki/Central_America |
6,122 | Continuous function | In mathematics, a continuous function is a function such that a continuous variation (that is, a change without jump) of the argument induces a continuous variation of the value of the function. This means there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of $y = f(x)$ as follows: an infinitely small increment $\alpha$ of the independent variable $x$ always produces an infinitely small change $f(x + \alpha) - f(x)$ of the dependent variable $y$ (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point $c$ unless it was defined at and on both sides of $c$, but Édouard Goursat allowed the function to be defined only at and on one side of $c$, and Camille Jordan allowed it even if the function was defined only at $c$. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function $f$ with variable $x$ is continuous at the real number $c$, if the limit of $f(x)$, as $x$ tends to $c$, is equal to $f(c)$.
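As a minimal worked instance of this limit criterion (the squaring function is our assumed example, not one given by the source), continuity of $f(x) = x^2$ at $c = 3$ follows because the limit can be evaluated directly:

```latex
\lim_{x \to 3} x^{2} = \Bigl(\lim_{x \to 3} x\Bigr)^{2} = 3^{2} = 9 = f(3).
```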
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval $(-\infty, +\infty)$ (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function $f(x) = \sqrt{x}$ is continuous on its whole domain, which is the closed interval $[0, +\infty)$.
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples are the functions $x \mapsto \frac{1}{x}$ and $x \mapsto \tan x$. When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions $x \mapsto \frac{1}{x}$ and $x \mapsto \sin\left(\frac{1}{x}\right)$ are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let
$$f : D \to \mathbb{R}$$
be a function defined on a subset $D$ of the set $\mathbb{R}$ of real numbers.
This subset $D$ is the domain of $f$. Some possible choices include $D = \mathbb{R}$ (the whole real line), $D$ an open interval, or $D$ a semi-open or closed interval.
In the case of the domain $D$ being defined as an open interval, $a$ and $b$ do not belong to $D$, and the values of $f(a)$ and $f(b)$ do not matter for continuity on $D$.
The function $f$ is continuous at some point $c$ of its domain if the limit of $f(x)$, as $x$ approaches $c$ through the domain of $f$, exists and is equal to $f(c)$. In mathematical notation, this is written as
$$\lim_{x \to c} f(x) = f(c).$$
In detail this means three conditions: first, $f$ has to be defined at $c$ (guaranteed by the requirement that $c$ is in the domain of $f$). Second, the limit on the left-hand side of that equation has to exist. Third, the value of this limit must equal $f(c)$.
(Here, we have assumed that the domain of f does not have any isolated points.)
A neighborhood of a point $c$ is a set that contains, at least, all points within some fixed distance of $c$. Intuitively, a function is continuous at a point $c$ if the range of $f$ over the neighborhood of $c$ shrinks to a single point $f(c)$ as the width of the neighborhood around $c$ shrinks to zero. More precisely, a function $f$ is continuous at a point $c$ of its domain if, for any neighborhood $N_1(f(c))$ there is a neighborhood $N_2(c)$ in its domain such that $f(x) \in N_1(f(c))$ whenever $x \in N_2(c)$.
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
One can instead require that for any sequence $(x_n)_{n \in \mathbb{N}}$ of points in the domain which converges to $c$, the corresponding sequence $\left(f(x_n)\right)_{n \in \mathbb{N}}$ converges to $f(c)$. In mathematical notation,
$$\forall (x_n)_{n \in \mathbb{N}} \subset D : \lim_{n \to \infty} x_n = c \implies \lim_{n \to \infty} f(x_n) = f(c).$$
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function $f : D \to \mathbb{R}$ as above and an element $x_0$ of the domain $D$, $f$ is said to be continuous at the point $x_0$ when the following holds: For any positive real number $\varepsilon > 0$, however small, there exists some positive real number $\delta > 0$ such that for all $x$ in the domain of $f$ with $x_0 - \delta < x < x_0 + \delta$, the value of $f(x)$ satisfies
$$f(x_0) - \varepsilon < f(x) < f(x_0) + \varepsilon.$$
Alternatively written, continuity of $f : D \to \mathbb{R}$ at $x_0 \in D$ means that for every $\varepsilon > 0$, there exists a $\delta > 0$ such that for all $x \in D$:
$$|x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon.$$
More intuitively, we can say that if we want to get all the $f(x)$ values to stay in some small neighborhood around $f(x_0)$, we need to choose a small enough neighborhood for the $x$ values around $x_0$. If we can do that no matter how small the $f(x_0)$ neighborhood is, then $f$ is continuous at $x_0$.
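A minimal worked instance of this definition (the linear function is our assumed example): for $f(x) = 2x + 1$ and any point $x_0$, the choice $\delta = \varepsilon/2$ works, since

```latex
|x - x_0| < \delta = \tfrac{\varepsilon}{2}
\;\implies\;
|f(x) - f(x_0)| = |(2x + 1) - (2x_0 + 1)| = 2\,|x - x_0| < 2 \cdot \tfrac{\varepsilon}{2} = \varepsilon.
```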
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval $x_0 - \delta < x < x_0 + \delta$ be entirely within the domain $D$, but Jordan removed that restriction.
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function $C : [0, \infty) \to [0, \infty]$ is called a control function if $C$ is non-decreasing and $\inf_{\delta > 0} C(\delta) = 0$.
A function $f : D \to \mathbb{R}$ is $C$-continuous at $x_0$ if there exists such a neighbourhood $N(x_0)$ that
$$|f(x) - f(x_0)| \leq C(|x - x_0|) \quad \text{for all } x \in D \cap N(x_0).$$
A function is continuous in $x_0$ if it is $C$-continuous for some control function $C$.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions $\mathcal{C}$ a function is $\mathcal{C}$-continuous if it is $C$-continuous for some $C \in \mathcal{C}$. For example, the Lipschitz and Hölder continuous functions of exponent $\alpha$ below are defined by the set of control functions
$$\mathcal{C}_{\mathrm{Lipschitz}} = \{ C : C(\delta) = K \delta,\ K > 0 \}$$
respectively
$$\mathcal{C}_{\text{Hölder}\text{-}\alpha} = \{ C : C(\delta) = K \delta^{\alpha},\ K > 0 \}.$$
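For instance (our illustration, relying on the mean value theorem rather than the source), the sine function is Lipschitz continuous with constant $K = 1$, so the single control function $C(\delta) = \delta$ witnesses its $\mathcal{C}$-continuity for the Lipschitz class:

```latex
|\sin x - \sin x_0| \le |x - x_0| \quad \text{for all } x, x_0 \in \mathbb{R},
\qquad \text{i.e. } C(\delta) = \delta .
```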
Continuity can also be defined in terms of oscillation: a function $f$ is continuous at a point $x_0$ if and only if its oscillation at that point is zero; in symbols, $\omega_f(x_0) = 0$. A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than $\varepsilon$ (hence a $G_\delta$ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the $\varepsilon$-$\delta$ definition by a simple re-arrangement and by using a limit ($\limsup$, $\liminf$) to define oscillation: if (at a given point) for a given $\varepsilon_0$ there is no $\delta$ that satisfies the $\varepsilon$-$\delta$ definition, then the oscillation is at least $\varepsilon_0$, and conversely if for every $\varepsilon$ there is a desired $\delta$, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
A real-valued function $f$ is continuous at $x$ if its natural extension to the hyperreals has the property that for all infinitesimal $dx$, the difference $f(x + dx) - f(x)$ is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given
$$f, g \colon D \to \mathbb{R},$$
both continuous on $D$, then the sum of continuous functions
$$s = f + g$$
(defined by $s(x) = f(x) + g(x)$ for all $x \in D$) is continuous in $D$.
The same holds for the product of continuous functions,
$$p = f \cdot g$$
(defined by $p(x) = f(x) \cdot g(x)$ for all $x \in D$), which is continuous in $D$.
Combining the above preservations of continuity and the continuity of constant functions and of the identity function $I(x) = x$ on $\mathbb{R}$, one arrives at the continuity of all polynomial functions on $\mathbb{R}$, such as the example pictured on the right.
In the same way, it can be shown that the reciprocal of a continuous function
$$r = \frac{1}{f}$$
(defined by $r(x) = 1/f(x)$ for all $x \in D$ such that $f(x) \neq 0$) is continuous in $D \setminus \{x : f(x) = 0\}$.
This implies that, excluding the roots of $g$, the quotient of continuous functions
$$q = \frac{f}{g}$$
(defined by $q(x) = f(x)/g(x)$ for all $x \in D$, such that $g(x) \neq 0$) is also continuous on $D \setminus \{x : g(x) = 0\}$.
For example, the function (pictured)
is defined for all real numbers $x \neq -2$ and is continuous at every such point. Thus, it is a continuous function. The question of continuity at $x = -2$ does not arise since $x = -2$ is not in the domain of $y$. There is no continuous function $F : \mathbb{R} \to \mathbb{R}$ that agrees with $y(x)$ for all $x \neq -2$.
Since the function sine is continuous on all reals, the sinc function $G(x) = \sin(x)/x$ is defined and continuous for all real $x \neq 0$. However, unlike the previous example, $G$ can be extended to a continuous function on all real numbers, by defining the value $G(0)$ to be 1, which is the limit of $G(x)$ when $x$ approaches 0, i.e.,
$$\lim_{x \to 0} \frac{\sin x}{x} = 1.$$
Thus, by setting
$$G(0) = 1,$$
the sinc function becomes a continuous function on all real numbers. The term removable singularity is used in such cases, when (re)defining the values of a function to coincide with the appropriate limits makes the function continuous at specific points.
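A short numerical sketch of this extension (the function name `sinc_extended` and the sample points are our assumptions; NumPy is used only for `sin`):

```python
import numpy as np

def sinc_extended(x: float) -> float:
    """sin(x)/x with the removable singularity at 0 filled by its limit value, 1."""
    return 1.0 if x == 0 else float(np.sin(x) / x)

# As x approaches 0 the values approach 1, the value chosen at the singularity,
# so the extended function is continuous there.
for t in [0.1, 0.01, 0.001, 0.0]:
    print(t, sinc_extended(t))   # ~0.99833, ~0.99998, ~0.9999998, 1.0
```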
A more involved construction of continuous functions is the function composition. Given two continuous functions
$$g : D_g \to \mathbb{R} \qquad \text{and} \qquad f : D_f \to \mathbb{R} \quad \text{with } f(D_f) \subseteq D_g,$$
their composition, denoted as $c = g \circ f : D_f \to \mathbb{R}$, and defined by $c(x) = g(f(x))$, is continuous.
This construction allows stating, for example, that
is continuous for all x > 0. {\displaystyle x>0.}
An example of a discontinuous function is the Heaviside step function $H$, defined by
$$H(x) = {\begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0. \end{cases}}$$
Pick for instance $\varepsilon = 1/2$. Then there is no $\delta$-neighborhood around $x = 0$, i.e. no open interval $(-\delta, \delta)$ with $\delta > 0$, that will force all the $H(x)$ values to be within the $\varepsilon$-neighborhood of $H(0)$, i.e. within $(1/2, 3/2)$. Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
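The failure described above can be made concrete with a small sketch (our illustration; `H` follows the piecewise form given earlier):

```python
def H(x: float) -> float:
    """Heaviside step function: 1 for x >= 0, 0 for x < 0."""
    return 1.0 if x >= 0 else 0.0

# No matter how small delta is, the interval (-delta, delta) contains points
# whose image 0.0 lies outside (1/2, 3/2), the epsilon = 1/2 neighborhood of H(0) = 1.
for delta in [1.0, 1e-3, 1e-9]:
    x = -delta / 2               # a point inside (-delta, delta), left of 0
    print(delta, H(x))           # prints 0.0 every time
```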
Similarly, the signum or sign function
$$\operatorname{sgn}(x) = {\begin{cases} 1 & \text{if } x > 0 \\ 0 & \text{if } x = 0 \\ -1 & \text{if } x < 0 \end{cases}}$$
is discontinuous at $x = 0$ but continuous everywhere else. Yet another example: the function
is continuous everywhere apart from $x = 0$.
Besides plausible continuities and discontinuities like those above, there are also functions with behavior often described as pathological, for example, Thomae's function,
$$f(x) = {\begin{cases} 1 & \text{if } x = 0 \\ \frac{1}{q} & \text{if } x = \frac{p}{q} \text{ is rational, with } p \in \mathbb{Z} \text{ and } q \in \mathbb{N} \text{ coprime} \\ 0 & \text{if } x \text{ is irrational,} \end{cases}}$$
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
$$\mathbf{1}_{\mathbb{Q}}(x) = {\begin{cases} 1 & \text{if } x \text{ is rational} \\ 0 & \text{if } x \text{ is irrational,} \end{cases}}$$
is nowhere continuous.
Let $f(x)$ be a function that is continuous at a point $x_0$, and let $y_0$ be a value such that $f(x_0) \neq y_0$. Then $f(x) \neq y_0$ throughout some neighbourhood of $x_0$.
Proof: By the definition of continuity, take $\varepsilon = \frac{|y_0 - f(x_0)|}{2} > 0$; then there exists $\delta > 0$ such that
$$\left| f(x) - f(x_0) \right| < \frac{|y_0 - f(x_0)|}{2} \quad \text{whenever } |x - x_0| < \delta.$$
Suppose there is a point in the neighbourhood $|x - x_0| < \delta$ for which $f(x) = y_0$; then we have the contradiction
$$\left| f(x_0) - y_0 \right| = \left| f(x_0) - f(x) \right| < \frac{|f(x_0) - y_0|}{2}.$$
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function $f$ is continuous on the closed interval $[a, b]$ and $k$ is some number between $f(a)$ and $f(b)$, then there is some number $c \in [a, b]$ such that $f(c) = k$.
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if $f$ is continuous on $[a, b]$ and $f(a)$ and $f(b)$ differ in sign, then, at some point $c \in [a, b]$, $f(c)$ must equal zero.
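This sign-change corollary is the basis of root-finding by bisection; the sketch below is our illustration (the example polynomial is an assumption, not from the source), repeatedly halving an interval on which the sign change, and hence a zero, persists:

```python
def bisect(f, a: float, b: float, tol: float = 1e-12) -> float:
    """Approximate a root of a continuous f on [a, b], given f(a), f(b) differ in sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        # Keep the half-interval on which the sign change (and hence, by the
        # intermediate value theorem, a root) persists.
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# Example: x^2 - 2 changes sign on [1, 2], so the root sqrt(2) is located.
print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # ~1.4142135623730951
```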
The extreme value theorem states that if a function $f$ is defined on a closed interval $[a, b]$ (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists $c \in [a, b]$ with $f(c) \geq f(x)$ for all $x \in [a, b]$. The same is true of the minimum of $f$. These statements are not, in general, true if the function is defined on an open interval $(a, b)$ (or any set that is not both closed and bounded), as, for example, the continuous function $f(x) = \frac{1}{x}$, defined on the open interval (0,1), does not attain a maximum, being unbounded above.
Every differentiable function
is continuous, as can be shown. The converse does not hold: for example, the absolute value function
is everywhere continuous. However, it is not differentiable at x = 0 {\displaystyle x=0} , though it is differentiable at every other point. Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C 1 ( ( a , b ) ) . {\displaystyle C^{1}((a,b)).} More generally, the set of functions
(from an open interval (or open subset of R {\displaystyle \mathbb {R} } ) Ω {\displaystyle \Omega } to the reals) such that f is n {\displaystyle n} times differentiable and such that the n {\displaystyle n} -th derivative of f is continuous is denoted C n ( Ω ) . {\displaystyle C^{n}(\Omega ).} See differentiability class. In the field of computer graphics, properties related (but not identical) to C 0 , C 1 , C 2 {\displaystyle C^{0},C^{1},C^{2}} are sometimes called G 0 {\displaystyle G^{0}} (continuity of position), G 1 {\displaystyle G^{1}} (continuity of tangency), and G 2 {\displaystyle G^{2}} (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function
is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
Given a sequence
of functions such that the limit
exists for all x ∈ D , {\displaystyle x\in D,} the resulting function f ( x ) {\displaystyle f(x)} is referred to as the pointwise limit of the sequence of functions ( f n ) n ∈ N . {\displaystyle \left(f_{n}\right)_{n\in N}.} The pointwise limit function need not be continuous, even if all functions f n {\displaystyle f_{n}} are continuous, as the example sketched below shows. However, f is continuous if all functions f n {\displaystyle f_{n}} are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
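A numeric sketch of this phenomenon, using the classical example f_n(x) = x^n on [0, 1] (an assumed example): each f_n is continuous, but the pointwise limit is 0 for x < 1 and 1 at x = 1, hence discontinuous, and the convergence is not uniform.

```python
def f(n: int, x: float) -> float:
    return x ** n

for x in (0.5, 0.9, 0.99, 1.0):
    print(f"x={x}: " + ", ".join(f"f_{n}(x)={f(n, x):.3g}" for n in (10, 100, 1000)))
# For each x < 1 the values tend to 0, while f_n(1) = 1 for every n,
# so the limit function jumps at x = 1.
```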
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0 {\displaystyle \varepsilon >0} however small, there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with c < x < c + δ , {\displaystyle c<x<c+\delta ,} the value of f ( x ) {\displaystyle f(x)} will satisfy
This is the same condition as for continuous functions, except that it is required to hold only for x strictly greater than c. Requiring it instead for all x with c − δ < x < c {\displaystyle c-\delta <x<c} yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0 , {\displaystyle \varepsilon >0,} there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with | x − c | < δ , {\displaystyle |x-c|<\delta ,} the value of f ( x ) {\displaystyle f(x)} satisfies
The reverse condition is upper semi-continuity.
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X {\displaystyle X} equipped with a function (called metric) d X , {\displaystyle d_{X},} that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces ( X , d X ) {\displaystyle \left(X,d_{X}\right)} and ( Y , d Y ) {\displaystyle \left(Y,d_{Y}\right)} and a function
then f {\displaystyle f} is continuous at the point c ∈ X {\displaystyle c\in X} (with respect to the given metrics) if for any positive real number ε > 0 , {\displaystyle \varepsilon >0,} there exists a positive real number δ > 0 {\displaystyle \delta >0} such that all x ∈ X {\displaystyle x\in X} satisfying d X ( x , c ) < δ {\displaystyle d_{X}(x,c)<\delta } will also satisfy d Y ( f ( x ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(x),f(c))<\varepsilon .} As in the case of real functions above, this is equivalent to the condition that for every sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit lim x n = c , {\displaystyle \lim x_{n}=c,} we have lim f ( x n ) = f ( c ) . {\displaystyle \lim f\left(x_{n}\right)=f(c).} The latter condition can be weakened as follows: f {\displaystyle f} is continuous at the point c {\displaystyle c} if and only if for every convergent sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit c {\displaystyle c} , the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} is a Cauchy sequence, and c {\displaystyle c} is in the domain of f {\displaystyle f} .
The set of points at which a function between metric spaces is continuous is a G δ {\displaystyle G_{\delta }} set – this follows from the ε − δ {\displaystyle \varepsilon -\delta } definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator
between normed vector spaces V {\displaystyle V} and W {\displaystyle W} (which are vector spaces equipped with a compatible norm, denoted ‖ x ‖ {\displaystyle \|x\|} ) is continuous if and only if it is bounded, that is, there is a constant K {\displaystyle K} such that
for all x ∈ V . {\displaystyle x\in V.}
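As a concrete finite-dimensional sketch (the matrix and sampling scheme are assumed examples, not from the text): a linear map on R² is bounded, hence continuous, and a valid constant K is the operator norm, estimated here by sampling unit vectors.

```python
import math

A = [[2.0, 1.0],
     [0.0, 3.0]]

def apply(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def norm(x):
    return math.hypot(x[0], x[1])

# Sample ||Ax|| over unit vectors x on the circle; the maximum approximates K.
K = max(norm(apply(A, [math.cos(2 * math.pi * t / 1000),
                       math.sin(2 * math.pi * t / 1000)]))
        for t in range(1000))
print(f"estimated bound: ||Ax|| <= {K:.4f} * ||x|| (up to sampling error)")
```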
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ {\displaystyle \delta } depends on ε {\displaystyle \varepsilon } and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ {\displaystyle \delta } does not depend on the point c. More precisely, it is required that for every real number ε > 0 {\displaystyle \varepsilon >0} there exists δ > 0 {\displaystyle \delta >0} such that for every c , b ∈ X {\displaystyle c,b\in X} with d X ( b , c ) < δ , {\displaystyle d_{X}(b,c)<\delta ,} we have that d Y ( f ( b ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(b),f(c))<\varepsilon .} Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
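A numeric sketch of the failure of uniform continuity, using the standard example f(x) = 1/x on (0, 1) (assumed here): f is continuous at every point, yet for a fixed δ the differences |f(b) − f(c)| with |b − c| < δ grow without bound as c approaches 0, so no single δ works for all points.

```python
f = lambda x: 1.0 / x

delta = 0.01
for c in (0.5, 0.1, 0.02, 0.011):
    b = c - delta / 2  # within delta of c and still inside (0, 1)
    print(f"c={c}: |f(b) - f(c)| = {abs(f(b) - f(c)):.4g}")
# The differences blow up near 0 even though |b - c| stays fixed at delta/2.
```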
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b , c ∈ X , {\displaystyle b,c\in X,} the inequality
holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 {\displaystyle \alpha =1} is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality
holds for any b , c ∈ X . {\displaystyle b,c\in X.} The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
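For instance, the square root function on [0, 1] is Hölder continuous with exponent α = 1/2 (one may take K = 1) but not Lipschitz continuous. The following Python sketch (an assumed illustrative example) compares the two difference quotients near 0:

```python
import math

b = 0.0
for c in (0.1, 0.01, 0.0001):
    diff = abs(math.sqrt(b) - math.sqrt(c))
    lipschitz_quotient = diff / abs(b - c)      # grows like 1/sqrt(c)
    holder_quotient = diff / abs(b - c) ** 0.5  # stays bounded by K = 1
    print(f"c={c}: Lipschitz quotient={lipschitz_quotient:.1f}, "
          f"Hölder(1/2) quotient={holder_quotient:.3f}")
```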
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function
between two topological spaces X and Y is continuous if for every open set V ⊆ Y , {\displaystyle V\subseteq Y,} the inverse image
is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T X {\displaystyle T_{X}} ), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions
to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
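On finite sets, the open-set criterion can be checked directly. The following Python sketch (all names and the example spaces are illustrative assumptions) tests continuity by verifying that the preimage of every open set is open, and reproduces the two extreme cases just described:

```python
def preimage(f: dict, V: frozenset) -> frozenset:
    return frozenset(x for x in f if f[x] in V)

def is_continuous(f: dict, topology_X: set, topology_Y: set) -> bool:
    # f is continuous iff every open set of Y pulls back to an open set of X.
    return all(preimage(f, V) in topology_X for V in topology_Y)

X = {1, 2}
discrete = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
indiscrete = {frozenset(), frozenset({1, 2})}

f = {1: 2, 2: 1}  # the swap map on X

print(is_continuous(f, discrete, discrete))      # True: discrete domain
print(is_continuous(f, indiscrete, discrete))    # False: {1} pulls back to {2}, not open
print(is_continuous(f, indiscrete, indiscrete))  # True: indiscrete codomain
```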
The translation in the language of neighborhoods of the ( ε , δ ) {\displaystyle (\varepsilon ,\delta )} -definition of continuity leads to the following definition of the continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and f − 1 ( V ) {\displaystyle f^{-1}(V)} is the largest subset U of X such that f ( U ) ⊆ V , {\displaystyle f(U)\subseteq V,} this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function f : X → Y {\displaystyle f:X\to Y} is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε − δ {\displaystyle \varepsilon -\delta } definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given x ∈ X , {\displaystyle x\in X,} a map f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if whenever B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} that converges to x {\displaystyle x} in X , {\displaystyle X,} which is expressed by writing B → x , {\displaystyle {\mathcal {B}}\to x,} then necessarily f ( B ) → f ( x ) {\displaystyle f({\mathcal {B}})\to f(x)} in Y . {\displaystyle Y.} If N ( x ) {\displaystyle {\mathcal {N}}(x)} denotes the neighborhood filter at x {\displaystyle x} then f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if f ( N ( x ) ) → f ( x ) {\displaystyle f({\mathcal {N}}(x))\to f(x)} in Y . {\displaystyle Y.} Moreover, this happens if and only if the prefilter f ( N ( x ) ) {\displaystyle f({\mathcal {N}}(x))} is a filter base for the neighborhood filter of f ( x ) {\displaystyle f(x)} in Y . {\displaystyle Y.}
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function f : X → Y {\displaystyle f:X\to Y} is sequentially continuous if whenever a sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} converges to a limit x , {\displaystyle x,} the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} converges to f ( x ) . {\displaystyle f(x).} Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X {\displaystyle X} is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X {\displaystyle X} is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
Theorem — A function f : A ⊆ R → R {\displaystyle f:A\subseteq \mathbb {R} \to \mathbb {R} } is continuous at x 0 {\displaystyle x_{0}} if and only if it is sequentially continuous at that point.
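A small numeric illustration of this equivalence, using the sign function from earlier (an assumed example): the sequence x_n = −1/n converges to 0, but the image sequence stays at −1 and does not converge to sgn(0) = 0, certifying that sgn is not (sequentially) continuous at 0.

```python
def sgn(x: float) -> int:
    return (x > 0) - (x < 0)

for n in (1, 10, 100, 1000):
    x_n = -1.0 / n
    print(f"x_{n} = {x_n:g}, sgn(x_{n}) = {sgn(x_n)}")
print("sgn(0) =", sgn(0.0))  # the image sequence is constantly -1, not 0
```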
In terms of the interior operator, a function f : X → Y {\displaystyle f:X\to Y} between topological spaces is continuous if and only if for every subset B ⊆ Y , {\displaystyle B\subseteq Y,}
In terms of the closure operator, f : X → Y {\displaystyle f:X\to Y} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,}
That is to say, given any element x ∈ X {\displaystyle x\in X} that belongs to the closure of a subset A ⊆ X , {\displaystyle A\subseteq X,} f ( x ) {\displaystyle f(x)} necessarily belongs to the closure of f ( A ) {\displaystyle f(A)} in Y . {\displaystyle Y.} If we declare that a point x {\displaystyle x} is close to a subset A ⊆ X {\displaystyle A\subseteq X} if x ∈ cl X A , {\displaystyle x\in \operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: f {\displaystyle f} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,} f {\displaystyle f} maps points that are close to A {\displaystyle A} to points that are close to f ( A ) . {\displaystyle f(A).} Similarly, f {\displaystyle f} is continuous at a fixed given point x ∈ X {\displaystyle x\in X} if and only if whenever x {\displaystyle x} is close to a subset A ⊆ X , {\displaystyle A\subseteq X,} then f ( x ) {\displaystyle f(x)} is close to f ( A ) . {\displaystyle f(A).}
Instead of specifying topological spaces by their open subsets, any topology on X {\displaystyle X} can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset A {\displaystyle A} of a topological space X {\displaystyle X} to its topological closure cl X A {\displaystyle \operatorname {cl} _{X}A} satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl A {\displaystyle A\mapsto \operatorname {cl} A} there exists a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { X ∖ cl A : A ⊆ X } {\displaystyle \tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}} ) such that for every subset A ⊆ X , {\displaystyle A\subseteq X,} cl A {\displaystyle \operatorname {cl} A} is equal to the topological closure cl ( X , τ ) A {\displaystyle \operatorname {cl} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with closure operators (both denoted by cl {\displaystyle \operatorname {cl} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f ( cl A ) ⊆ cl ( f ( A ) ) {\displaystyle f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))} for every subset A ⊆ X . {\displaystyle A\subseteq X.}
Similarly, the map that sends a subset A {\displaystyle A} of X {\displaystyle X} to its topological interior int X A {\displaystyle \operatorname {int} _{X}A} defines an interior operator. Conversely, any interior operator A ↦ int A {\displaystyle A\mapsto \operatorname {int} A} induces a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { int A : A ⊆ X } {\displaystyle \tau :=\{\operatorname {int} A:A\subseteq X\}} ) such that for every A ⊆ X , {\displaystyle A\subseteq X,} int A {\displaystyle \operatorname {int} A} is equal to the topological interior int ( X , τ ) A {\displaystyle \operatorname {int} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with interior operators (both denoted by int {\displaystyle \operatorname {int} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f − 1 ( int B ) ⊆ int ( f − 1 ( B ) ) {\displaystyle f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)} for every subset B ⊆ Y . {\displaystyle B\subseteq Y.}
Continuity can also be characterized in terms of filters. A function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if whenever a filter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} converges in X {\displaystyle X} to a point x ∈ X , {\displaystyle x\in X,} then the prefilter f ( B ) {\displaystyle f({\mathcal {B}})} converges in Y {\displaystyle Y} to f ( x ) . {\displaystyle f(x).} This characterization remains true if the word "filter" is replaced by "prefilter."
If f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} are continuous, then so is the composition g ∘ f : X → Z . {\displaystyle g\circ f:X\to Z.} If f : X → Y {\displaystyle f:X\to Y} is continuous and X {\displaystyle X} is compact, then f ( X ) {\displaystyle f(X)} is compact; likewise, if X {\displaystyle X} is connected, then f ( X ) {\displaystyle f(X)} is connected.
The possible topologies on a fixed set X are partially ordered: a topology τ 1 {\displaystyle \tau _{1}} is said to be coarser than another topology τ 2 {\displaystyle \tau _{2}} (notation: τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} ) if every open subset with respect to τ 1 {\displaystyle \tau _{1}} is also open with respect to τ 2 . {\displaystyle \tau _{2}.} Then, the identity map
is continuous if and only if τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} (see also comparison of topologies). More generally, a continuous function
stays continuous if the topology τ Y {\displaystyle \tau _{Y}} is replaced by a coarser topology and/or τ X {\displaystyle \tau _{X}} is replaced by a finer topology.
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f − 1 {\displaystyle f^{-1}} need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
Given a function
where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f − 1 ( A ) {\displaystyle f^{-1}(A)} is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f − 1 ( U ) {\displaystyle A=f^{-1}(U)} for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
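For finite sets, both constructions can be computed exhaustively. The following Python sketch (all names and the example spaces are illustrative assumptions) computes the final topology induced by a surjection, which in that case coincides with the quotient topology:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def final_topology(f: dict, topology_X: set, S: set) -> set:
    # A subset of S is open iff its preimage under f is open in X.
    return {A for A in powerset(S)
            if frozenset(x for x in f if f[x] in A) in topology_X}

X = {1, 2, 3}
topology_X = {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset(X)}
S = {"a", "b"}
f = {1: "a", 2: "b", 3: "b"}  # a surjection identifying 2 and 3

print(final_topology(f, topology_X, S))
# Here every subset of S qualifies, so the final topology on S is discrete.
```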
A topology on a set S is uniquely determined by the class of all continuous functions S → X {\displaystyle S\to X} into all topological spaces X. Dually, a similar idea can be applied to maps X → S . {\displaystyle X\to S.}
If f : S → Y {\displaystyle f:S\to Y} is a continuous function from some subset S {\displaystyle S} of a topological space X {\displaystyle X} then a continuous extension of f {\displaystyle f} to X {\displaystyle X} is any continuous function F : X → Y {\displaystyle F:X\to Y} such that F ( s ) = f ( s ) {\displaystyle F(s)=f(s)} for every s ∈ S , {\displaystyle s\in S,} a condition that is often written as f = F | S . {\displaystyle f=F{\big \vert }_{S}.} In words, it is any continuous function F : X → Y {\displaystyle F:X\to Y} that restricts to f {\displaystyle f} on S . {\displaystyle S.} This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f : S → Y {\displaystyle f:S\to Y} is not continuous, then it cannot have a continuous extension. If Y {\displaystyle Y} is a Hausdorff space and S {\displaystyle S} is a dense subset of X {\displaystyle X} then a continuous extension of f : S → Y {\displaystyle f:S\to Y} to X , {\displaystyle X,} if one exists, will be unique. The Blumberg theorem states that if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is an arbitrary function then there exists a dense subset D {\displaystyle D} of R {\displaystyle \mathbb {R} } such that the restriction f | D : D → R {\displaystyle f{\big \vert }_{D}:D\to \mathbb {R} } is continuous; in other words, every function R → R {\displaystyle \mathbb {R} \to \mathbb {R} } can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y {\displaystyle f:X\to Y} between particular types of partially ordered sets X {\displaystyle X} and Y {\displaystyle Y} is continuous if for each directed subset A {\displaystyle A} of X , {\displaystyle X,} we have sup f ( A ) = f ( sup A ) . {\displaystyle \sup f(A)=f(\sup A).} Here sup {\displaystyle \,\sup \,} is the supremum with respect to the orderings in X {\displaystyle X} and Y , {\displaystyle Y,} respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor
between two categories is called continuous if it commutes with small limits. That is to say,
for any small (that is, indexed by a set I , {\displaystyle I,} as opposed to a class) diagram of objects in C {\displaystyle {\mathcal {C}}} .
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
{
"paragraph_id": 0,
"text": "In mathematics, a continuous function is a function such that a continuous variation (that is, a change without jump) of the argument induces a continuous variation of the value of the function. This means there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.",
"title": ""
},
{
"paragraph_id": 3,
"text": "As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it \"jumps\" at each point in time when money is deposited or withdrawn.",
"title": ""
},
{
"paragraph_id": 4,
"text": "A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f ( x ) {\\displaystyle y=f(x)} as follows: an infinitely small increment α {\\displaystyle \\alpha } of the independent variable x always produces an infinitely small change f ( x + α ) − f ( x ) {\\displaystyle f(x+\\alpha )-f(x)} of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.",
"title": "Real functions"
},
{
"paragraph_id": 6,
"text": "Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of f ( x ) , {\\displaystyle f(x),} as x tends to c, is equal to f ( c ) . {\\displaystyle f(c).}",
"title": "Real functions"
},
{
"paragraph_id": 7,
"text": "There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.",
"title": "Real functions"
},
{
"paragraph_id": 8,
"text": "A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval ( − ∞ , + ∞ ) {\\displaystyle (-\\infty ,+\\infty )} (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.",
"title": "Real functions"
},
{
"paragraph_id": 9,
"text": "A function is continuous on a semi-open or a closed interval; if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f ( x ) = x {\\displaystyle f(x)={\\sqrt {x}}} is continuous on its whole domain, which is the closed interval [ 0 , + ∞ ) . {\\displaystyle [0,+\\infty ).}",
"title": "Real functions"
},
{
"paragraph_id": 10,
"text": "Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples are the functions x ↦ 1 x {\\textstyle x\\mapsto {\\frac {1}{x}}} and x ↦ tan x . {\\displaystyle x\\mapsto \\tan x.} When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.",
"title": "Real functions"
},
{
"paragraph_id": 11,
"text": "A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1 x {\\textstyle x\\mapsto {\\frac {1}{x}}} and x ↦ sin ( 1 x ) {\\textstyle x\\mapsto \\sin({\\frac {1}{x}})} are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.",
"title": "Real functions"
},
{
"paragraph_id": 12,
"text": "Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.",
"title": "Real functions"
},
{
"paragraph_id": 13,
"text": "Let",
"title": "Real functions"
},
{
"paragraph_id": 14,
"text": "be a function defined on a subset D {\\displaystyle D} of the set R {\\displaystyle \\mathbb {R} } of real numbers.",
"title": "Real functions"
},
{
"paragraph_id": 15,
"text": "This subset D {\\displaystyle D} is the domain of f. Some possible choices include",
"title": "Real functions"
},
{
"paragraph_id": 16,
"text": "In the case of the domain D {\\displaystyle D} being defined as an open interval, a {\\displaystyle a} and b {\\displaystyle b} do not belong to D {\\displaystyle D} , and the values of f ( a ) {\\displaystyle f(a)} and f ( b ) {\\displaystyle f(b)} do not matter for continuity on D {\\displaystyle D} .",
"title": "Real functions"
},
{
"paragraph_id": 17,
"text": "The function f is continuous at some point c of its domain if the limit of f ( x ) , {\\displaystyle f(x),} as x approaches c through the domain of f, exists and is equal to f ( c ) . {\\displaystyle f(c).} In mathematical notation, this is written as",
"title": "Real functions"
},
{
"paragraph_id": 18,
"text": "In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal f ( c ) . {\\displaystyle f(c).}",
"title": "Real functions"
},
{
"paragraph_id": 19,
"text": "(Here, we have assumed that the domain of f does not have any isolated points.)",
"title": "Real functions"
},
{
"paragraph_id": 20,
"text": "A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f ( c ) {\\displaystyle f(c)} as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N 1 ( f ( c ) ) {\\displaystyle N_{1}(f(c))} there is a neighborhood N 2 ( c ) {\\displaystyle N_{2}(c)} in its domain such that f ( x ) ∈ N 1 ( f ( c ) ) {\\displaystyle f(x)\\in N_{1}(f(c))} whenever x ∈ N 2 ( c ) . {\\displaystyle x\\in N_{2}(c).}",
"title": "Real functions"
},
{
"paragraph_id": 21,
"text": "As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.",
"title": "Real functions"
},
{
"paragraph_id": 22,
"text": "One can instead require that for any sequence ( x n ) n ∈ N {\\displaystyle (x_{n})_{n\\in \\mathbb {N} }} of points in the domain which converges to c, the corresponding sequence ( f ( x n ) ) n ∈ N {\\displaystyle \\left(f(x_{n})\\right)_{n\\in \\mathbb {N} }} converges to f ( c ) . {\\displaystyle f(c).} In mathematical notation,",
"title": "Real functions"
},
{
"paragraph_id": 23,
"text": "Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → R {\\displaystyle f:D\\to \\mathbb {R} } as above and an element x 0 {\\displaystyle x_{0}} of the domain D {\\displaystyle D} , f {\\displaystyle f} is said to be continuous at the point x 0 {\\displaystyle x_{0}} when the following holds: For any positive real number ε > 0 , {\\displaystyle \\varepsilon >0,} however small, there exists some positive real number δ > 0 {\\displaystyle \\delta >0} such that for all x {\\displaystyle x} in the domain of f {\\displaystyle f} with x 0 − δ < x < x 0 + δ , {\\displaystyle x_{0}-\\delta <x<x_{0}+\\delta ,} the value of f ( x ) {\\displaystyle f(x)} satisfies",
"title": "Real functions"
},
{
"paragraph_id": 24,
"text": "Alternatively written, continuity of f : D → R {\\displaystyle f:D\\to \\mathbb {R} } at x 0 ∈ D {\\displaystyle x_{0}\\in D} means that for every ε > 0 , {\\displaystyle \\varepsilon >0,} there exists a δ > 0 {\\displaystyle \\delta >0} such that for all x ∈ D {\\displaystyle x\\in D} :",
"title": "Real functions"
},
{
"paragraph_id": 25,
"text": "More intuitively, we can say that if we want to get all the f ( x ) {\\displaystyle f(x)} values to stay in some small neighborhood around f ( x 0 ) , {\\displaystyle f\\left(x_{0}\\right),} we need to choose a small enough neighborhood for the x {\\displaystyle x} values around x 0 . {\\displaystyle x_{0}.} If we can do that no matter how small the f ( x 0 ) {\\displaystyle f(x_{0})} neighborhood is, then f {\\displaystyle f} is continuous at x 0 . {\\displaystyle x_{0}.}",
"title": "Real functions"
},
{
"paragraph_id": 26,
"text": "In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.",
"title": "Real functions"
},
{
"paragraph_id": 27,
"text": "Weierstrass had required that the interval x 0 − δ < x < x 0 + δ {\\displaystyle x_{0}-\\delta <x<x_{0}+\\delta } be entirely within the domain D {\\displaystyle D} , but Jordan removed that restriction.",
"title": "Real functions"
},
{
"paragraph_id": 28,
"text": "In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function C : [ 0 , ∞ ) → [ 0 , ∞ ] {\\displaystyle C:[0,\\infty )\\to [0,\\infty ]} is called a control function if",
"title": "Real functions"
},
{
"paragraph_id": 29,
"text": "A function f : D → R {\\displaystyle f:D\\to R} is C-continuous at x 0 {\\displaystyle x_{0}} if there exists such a neighbourhood N ( x 0 ) {\\textstyle N(x_{0})} that",
"title": "Real functions"
},
{
"paragraph_id": 30,
"text": "A function is continuous in x 0 {\\displaystyle x_{0}} if it is C-continuous for some control function C.",
"title": "Real functions"
},
{
"paragraph_id": 31,
"text": "This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions C {\\displaystyle {\\mathcal {C}}} a function is C {\\displaystyle {\\mathcal {C}}} -continuous if it is C {\\displaystyle C} -continuous for some C ∈ C . {\\displaystyle C\\in {\\mathcal {C}}.} For example, the Lipschitz and Hölder continuous functions of exponent α below are defined by the set of control functions",
"title": "Real functions"
},
{
"paragraph_id": 32,
"text": "respectively",
"title": "Real functions"
},
{
"paragraph_id": 33,
"text": "Continuity can also be defined in terms of oscillation: a function f is continuous at a point x 0 {\\displaystyle x_{0}} if and only if its oscillation at that point is zero; in symbols, ω f ( x 0 ) = 0. {\\displaystyle \\omega _{f}(x_{0})=0.} A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.",
"title": "Real functions"
},
{
"paragraph_id": 34,
"text": "This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε {\\displaystyle \\varepsilon } (hence a G δ {\\displaystyle G_{\\delta }} set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.",
"title": "Real functions"
},
{
"paragraph_id": 35,
"text": "The oscillation is equivalent to the ε − δ {\\displaystyle \\varepsilon -\\delta } definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε 0 {\\displaystyle \\varepsilon _{0}} there is no δ {\\displaystyle \\delta } that satisfies the ε − δ {\\displaystyle \\varepsilon -\\delta } definition, then the oscillation is at least ε 0 , {\\displaystyle \\varepsilon _{0},} and conversely if for every ε {\\displaystyle \\varepsilon } there is a desired δ , {\\displaystyle \\delta ,} the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.",
"title": "Real functions"
},
{
"paragraph_id": 36,
"text": "Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.",
"title": "Real functions"
},
{
"paragraph_id": 37,
"text": "(see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.",
"title": "Real functions"
},
{
"paragraph_id": 38,
"text": "Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given",
"title": "Real functions"
},
{
"paragraph_id": 39,
"text": "then the sum of continuous functions",
"title": "Real functions"
},
{
"paragraph_id": 40,
"text": "(defined by s ( x ) = f ( x ) + g ( x ) {\\displaystyle s(x)=f(x)+g(x)} for all x ∈ D {\\displaystyle x\\in D} ) is continuous in D . {\\displaystyle D.}",
"title": "Real functions"
},
{
"paragraph_id": 41,
"text": "The same holds for the product of continuous functions,",
"title": "Real functions"
},
{
"paragraph_id": 42,
"text": "(defined by p ( x ) = f ( x ) ⋅ g ( x ) {\\displaystyle p(x)=f(x)\\cdot g(x)} for all x ∈ D {\\displaystyle x\\in D} ) is continuous in D . {\\displaystyle D.}",
"title": "Real functions"
},
{
"paragraph_id": 43,
"text": "Combining the above preservations of continuity and the continuity of constant functions and of the identity function I ( x ) = x {\\displaystyle I(x)=x} on R {\\displaystyle \\mathbb {R} } , one arrives at the continuity of all polynomial functions on R {\\displaystyle \\mathbb {R} } , such as",
"title": "Real functions"
},
{
"paragraph_id": 44,
"text": "(pictured on the right).",
"title": "Real functions"
},
{
"paragraph_id": 45,
"text": "In the same way, it can be shown that the reciprocal of a continuous function",
"title": "Real functions"
},
{
"paragraph_id": 46,
"text": "(defined by r ( x ) = 1 / f ( x ) {\\displaystyle r(x)=1/f(x)} for all x ∈ D {\\displaystyle x\\in D} such that f ( x ) ≠ 0 {\\displaystyle f(x)\\neq 0} ) is continuous in D ∖ { x : f ( x ) = 0 } . {\\displaystyle D\\setminus \\{x:f(x)=0\\}.}",
"title": "Real functions"
},
{
"paragraph_id": 47,
"text": "This implies that, excluding the roots of g , {\\displaystyle g,} the quotient of continuous functions",
"title": "Real functions"
},
{
"paragraph_id": 48,
"text": "(defined by q ( x ) = f ( x ) / g ( x ) {\\displaystyle q(x)=f(x)/g(x)} for all x ∈ D {\\displaystyle x\\in D} , such that g ( x ) ≠ 0 {\\displaystyle g(x)\\neq 0} ) is also continuous on D ∖ { x : g ( x ) = 0 } {\\displaystyle D\\setminus \\{x:g(x)=0\\}} .",
"title": "Real functions"
},
{
"paragraph_id": 49,
"text": "For example, the function (pictured)",
"title": "Real functions"
},
{
"paragraph_id": 50,
"text": "is defined for all real numbers x ≠ − 2 {\\displaystyle x\\neq -2} and is continuous at every such point. Thus, it is a continuous function. The question of continuity at x = − 2 {\\displaystyle x=-2} does not arise since x = − 2 {\\displaystyle x=-2} is not in the domain of y . {\\displaystyle y.} There is no continuous function F : R → R {\\displaystyle F:\\mathbb {R} \\to \\mathbb {R} } that agrees with y ( x ) {\\displaystyle y(x)} for all x ≠ − 2. {\\displaystyle x\\neq -2.}",
"title": "Real functions"
},
{
"paragraph_id": 51,
"text": "Since the function sine is continuous on all reals, the sinc function G ( x ) = sin ( x ) / x , {\\displaystyle G(x)=\\sin(x)/x,} is defined and continuous for all real x ≠ 0. {\\displaystyle x\\neq 0.} However, unlike the previous example, G can be extended to a continuous function on all real numbers, by defining the value G ( 0 ) {\\displaystyle G(0)} to be 1, which is the limit of G ( x ) , {\\displaystyle G(x),} when x approaches 0, i.e.,",
"title": "Real functions"
},
{
"paragraph_id": 52,
"text": "Thus, by setting",
"title": "Real functions"
},
{
"paragraph_id": 53,
"text": "the sinc-function becomes a continuous function on all real numbers. The term removable singularity is used in such cases when (re)defining values of a function to coincide with the appropriate limits make a function continuous at specific points.",
"title": "Real functions"
},
{
"paragraph_id": 54,
"text": "A more involved construction of continuous functions is the function composition. Given two continuous functions",
"title": "Real functions"
},
{
"paragraph_id": 55,
"text": "their composition, denoted as c = g ∘ f : D f → R , {\\displaystyle c=g\\circ f:D_{f}\\to \\mathbb {R} ,} and defined by c ( x ) = g ( f ( x ) ) , {\\displaystyle c(x)=g(f(x)),} is continuous.",
"title": "Real functions"
},
{
"paragraph_id": 56,
"text": "This construction allows stating, for example, that",
"title": "Real functions"
},
{
"paragraph_id": 57,
"text": "is continuous for all x > 0. {\\displaystyle x>0.}",
"title": "Real functions"
},
{
"paragraph_id": 58,
"text": "An example of a discontinuous function is the Heaviside step function H {\\displaystyle H} , defined by",
"title": "Real functions"
},
{
"paragraph_id": 59,
"text": "Pick for instance ε = 1 / 2 {\\displaystyle \\varepsilon =1/2} . Then there is no δ {\\displaystyle \\delta } -neighborhood around x = 0 {\\displaystyle x=0} , i.e. no open interval ( − δ , δ ) {\\displaystyle (-\\delta ,\\;\\delta )} with δ > 0 , {\\displaystyle \\delta >0,} that will force all the H ( x ) {\\displaystyle H(x)} values to be within the ε {\\displaystyle \\varepsilon } -neighborhood of H ( 0 ) {\\displaystyle H(0)} , i.e. within ( 1 / 2 , 3 / 2 ) {\\displaystyle (1/2,\\;3/2)} . Intuitively, we can think of this type of discontinuity as a sudden jump in function values.",
"title": "Real functions"
},
{
"paragraph_id": 60,
"text": "Similarly, the signum or sign function",
"title": "Real functions"
},
{
"paragraph_id": 61,
"text": "is discontinuous at x = 0 {\\displaystyle x=0} but continuous everywhere else. Yet another example: the function",
"title": "Real functions"
},
{
"paragraph_id": 62,
"text": "is continuous everywhere apart from x = 0 {\\displaystyle x=0} .",
"title": "Real functions"
},
{
"paragraph_id": 63,
"text": "Besides plausible continuities and discontinuities like above, there are also functions with a behavior, often coined pathological, for example, Thomae's function,",
"title": "Real functions"
},
{
"paragraph_id": 64,
"text": "is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,",
"title": "Real functions"
},
{
"paragraph_id": 65,
"text": "is nowhere continuous.",
"title": "Real functions"
},
{
"paragraph_id": 66,
"text": "Let f ( x ) {\\displaystyle f(x)} be a function that is continuous at a point x 0 , {\\displaystyle x_{0},} and y 0 {\\displaystyle y_{0}} be a value such f ( x 0 ) ≠ y 0 . {\\displaystyle f\\left(x_{0}\\right)\\neq y_{0}.} Then f ( x ) ≠ y 0 {\\displaystyle f(x)\\neq y_{0}} throughout some neighbourhood of x 0 . {\\displaystyle x_{0}.}",
"title": "Real functions"
},
{
"paragraph_id": 67,
"text": "Proof: By the definition of continuity, take ε = | y 0 − f ( x 0 ) | 2 > 0 {\\displaystyle \\varepsilon ={\\frac {|y_{0}-f(x_{0})|}{2}}>0} , then there exists δ > 0 {\\displaystyle \\delta >0} such that",
"title": "Real functions"
},
{
"paragraph_id": 68,
"text": "Suppose there is a point in the neighbourhood | x − x 0 | < δ {\\displaystyle |x-x_{0}|<\\delta } for which f ( x ) = y 0 ; {\\displaystyle f(x)=y_{0};} then we have the contradiction",
"title": "Real functions"
},
{
"paragraph_id": 69,
"text": "The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:",
"title": "Real functions"
},
{
"paragraph_id": 70,
"text": "For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.",
"title": "Real functions"
},
{
"paragraph_id": 71,
"text": "As a consequence, if f is continuous on [ a , b ] {\\displaystyle [a,b]} and f ( a ) {\\displaystyle f(a)} and f ( b ) {\\displaystyle f(b)} differ in sign, then, at some point c ∈ [ a , b ] , {\\displaystyle c\\in [a,b],} f ( c ) {\\displaystyle f(c)} must equal zero.",
"title": "Real functions"
},
{
"paragraph_id": 72,
"text": "The extreme value theorem states that if a function f is defined on a closed interval [ a , b ] {\\displaystyle [a,b]} (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists c ∈ [ a , b ] {\\displaystyle c\\in [a,b]} with f ( c ) ≥ f ( x ) {\\displaystyle f(c)\\geq f(x)} for all x ∈ [ a , b ] . {\\displaystyle x\\in [a,b].} The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval ( a , b ) {\\displaystyle (a,b)} (or any set that is not both closed and bounded), as, for example, the continuous function f ( x ) = 1 x , {\\displaystyle f(x)={\\frac {1}{x}},} defined on the open interval (0,1), does not attain a maximum, being unbounded above.",
"title": "Real functions"
},
{
"paragraph_id": 73,
"text": "Every differentiable function",
"title": "Real functions"
},
{
"paragraph_id": 74,
"text": "is continuous, as can be shown. The converse does not hold: for example, the absolute value function",
"title": "Real functions"
},
{
"paragraph_id": 75,
"text": "is everywhere continuous. However, it is not differentiable at x = 0 {\\displaystyle x=0} (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.",
"title": "Real functions"
},
{
"paragraph_id": 76,
"text": "The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C 1 ( ( a , b ) ) . {\\displaystyle C^{1}((a,b)).} More generally, the set of functions",
"title": "Real functions"
},
{
"paragraph_id": 77,
"text": "(from an open interval (or open subset of R {\\displaystyle \\mathbb {R} } ) Ω {\\displaystyle \\Omega } to the reals) such that f is n {\\displaystyle n} times differentiable and such that the n {\\displaystyle n} -th derivative of f is continuous is denoted C n ( Ω ) . {\\displaystyle C^{n}(\\Omega ).} See differentiability class. In the field of computer graphics, properties related (but not identical) to C 0 , C 1 , C 2 {\\displaystyle C^{0},C^{1},C^{2}} are sometimes called G 0 {\\displaystyle G^{0}} (continuity of position), G 1 {\\displaystyle G^{1}} (continuity of tangency), and G 2 {\\displaystyle G^{2}} (continuity of curvature); see Smoothness of curves and surfaces.",
"title": "Real functions"
},
{
"paragraph_id": 78,
"text": "Every continuous function",
"title": "Real functions"
},
{
"paragraph_id": 79,
"text": "is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.",
"title": "Real functions"
},
{
"paragraph_id": 80,
"text": "Given a sequence",
"title": "Real functions"
},
{
"paragraph_id": 81,
"text": "of functions such that the limit",
"title": "Real functions"
},
{
"paragraph_id": 82,
"text": "exists for all x ∈ D , {\\displaystyle x\\in D,} , the resulting function f ( x ) {\\displaystyle f(x)} is referred to as the pointwise limit of the sequence of functions ( f n ) n ∈ N . {\\displaystyle \\left(f_{n}\\right)_{n\\in N}.} The pointwise limit function need not be continuous, even if all functions f n {\\displaystyle f_{n}} are continuous, as the animation at the right shows. However, f is continuous if all functions f n {\\displaystyle f_{n}} are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.",
"title": "Real functions"
},
{
"paragraph_id": 83,
"text": "Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0 {\\displaystyle \\varepsilon >0} however small, there exists some number δ > 0 {\\displaystyle \\delta >0} such that for all x in the domain with c < x < c + δ , {\\displaystyle c<x<c+\\delta ,} the value of f ( x ) {\\displaystyle f(x)} will satisfy",
"title": "Real functions"
},
{
"paragraph_id": 84,
"text": "This is the same condition as continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c {\\displaystyle c-\\delta <x<c} yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.",
"title": "Real functions"
},
{
"paragraph_id": 85,
"text": "A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0 , {\\displaystyle \\varepsilon >0,} there exists some number δ > 0 {\\displaystyle \\delta >0} such that for all x in the domain with | x − c | < δ , {\\displaystyle |x-c|<\\delta ,} the value of f ( x ) {\\displaystyle f(x)} satisfies",
"title": "Real functions"
},
{
"paragraph_id": 86,
"text": "The reverse condition is upper semi-continuity.",
"title": "Real functions"
},
{
"paragraph_id": 87,
"text": "",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 88,
"text": "The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X {\\displaystyle X} equipped with a function (called metric) d X , {\\displaystyle d_{X},} that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 89,
"text": "that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces ( X , d X ) {\\displaystyle \\left(X,d_{X}\\right)} and ( Y , d Y ) {\\displaystyle \\left(Y,d_{Y}\\right)} and a function",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 90,
"text": "then f {\\displaystyle f} is continuous at the point c ∈ X {\\displaystyle c\\in X} (with respect to the given metrics) if for any positive real number ε > 0 , {\\displaystyle \\varepsilon >0,} there exists a positive real number δ > 0 {\\displaystyle \\delta >0} such that all x ∈ X {\\displaystyle x\\in X} satisfying d X ( x , c ) < δ {\\displaystyle d_{X}(x,c)<\\delta } will also satisfy d Y ( f ( x ) , f ( c ) ) < ε . {\\displaystyle d_{Y}(f(x),f(c))<\\varepsilon .} As in the case of real functions above, this is equivalent to the condition that for every sequence ( x n ) {\\displaystyle \\left(x_{n}\\right)} in X {\\displaystyle X} with limit lim x n = c , {\\displaystyle \\lim x_{n}=c,} we have lim f ( x n ) = f ( c ) . {\\displaystyle \\lim f\\left(x_{n}\\right)=f(c).} The latter condition can be weakened as follows: f {\\displaystyle f} is continuous at the point c {\\displaystyle c} if and only if for every convergent sequence ( x n ) {\\displaystyle \\left(x_{n}\\right)} in X {\\displaystyle X} with limit c {\\displaystyle c} , the sequence ( f ( x n ) ) {\\displaystyle \\left(f\\left(x_{n}\\right)\\right)} is a Cauchy sequence, and c {\\displaystyle c} is in the domain of f {\\displaystyle f} .",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 91,
"text": "The set of points at which a function between metric spaces is continuous is a G δ {\\displaystyle G_{\\delta }} set – this follows from the ε − δ {\\displaystyle \\varepsilon -\\delta } definition of continuity.",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 92,
"text": "This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 93,
"text": "between normed vector spaces V {\\displaystyle V} and W {\\displaystyle W} (which are vector spaces equipped with a compatible norm, denoted ‖ x ‖ {\\displaystyle \\|x\\|} ) is continuous if and only if it is bounded, that is, there is a constant K {\\displaystyle K} such that",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 94,
"text": "for all x ∈ V . {\\displaystyle x\\in V.}",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 95,
"text": "The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ {\\displaystyle \\delta } depends on ε {\\displaystyle \\varepsilon } and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ {\\displaystyle \\delta } does not depend on the point c. More precisely, it is required that for every real number ε > 0 {\\displaystyle \\varepsilon >0} there exists δ > 0 {\\displaystyle \\delta >0} such that for every c , b ∈ X {\\displaystyle c,b\\in X} with d X ( b , c ) < δ , {\\displaystyle d_{X}(b,c)<\\delta ,} we have that d Y ( f ( b ) , f ( c ) ) < ε . {\\displaystyle d_{Y}(f(b),f(c))<\\varepsilon .} Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 96,
"text": "A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b , c ∈ X , {\\displaystyle b,c\\in X,} the inequality",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 97,
"text": "holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 {\\displaystyle \\alpha =1} is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality",
"title": "Continuous functions between metric spaces"
},
{
"paragraph_id": 98,
"text": "holds for any b , c ∈ X . {\\displaystyle b,c\\in X.} The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.",
"title": "Continuous functions between metric spaces"
},
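Two standard examples, supplied here for concreteness: sine is Lipschitz continuous with K = 1 (by the mean value theorem), while the square root is Hölder continuous on [0, ∞) with exponent α = 1/2 yet fails to be Lipschitz near 0:

```latex
|\sin b - \sin c| \le |b - c|,
\qquad
\left|\sqrt{b} - \sqrt{c}\right| \le \sqrt{|b - c|},
\qquad
\frac{\sqrt{b} - \sqrt{0}}{b - 0} = \frac{1}{\sqrt{b}}
\xrightarrow{\;b \to 0^{+}\;} \infty .
```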
{
"paragraph_id": 99,
"text": "Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 100,
"text": "A function",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 101,
"text": "between two topological spaces X and Y is continuous if for every open set V ⊆ Y , {\\displaystyle V\\subseteq Y,} the inverse image",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 102,
"text": "is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T X {\\displaystyle T_{X}} ), but the continuity of f depends on the topologies used on X and Y.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 103,
"text": "This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.",
"title": "Continuous functions between topological spaces"
},
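On finite spaces, this open-preimage criterion can be checked exhaustively. The sketch below uses a toy example (a three-point space mapping into a Sierpiński-like two-point space); the spaces, topologies, and map are illustrative choices.

```python
# Continuity between finite topological spaces via preimages of open sets.
X = {1, 2, 3}
TX = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)]

Y = {"a", "b"}
TY = [frozenset(), frozenset({"a"}), frozenset(Y)]   # Sierpinski-like

f = {1: "a", 2: "a", 3: "b"}

def preimage(f, V):
    return frozenset(x for x, y in f.items() if y in V)

def is_continuous(f, TX, TY):
    return all(preimage(f, V) in TX for V in TY)

print(is_continuous(f, TX, TY))  # True: preimage of {"a"} is {1, 2}, open in X
```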
{
"paragraph_id": 104,
"text": "An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 105,
"text": "to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T set is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 106,
"text": "The translation in the language of neighborhoods of the ( ε , δ ) {\\displaystyle (\\varepsilon ,\\delta )} -definition of continuity leads to the following definition of the continuity at a point:",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 107,
"text": "This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 108,
"text": "Also, as every set that contains a neighborhood is also a neighborhood, and f − 1 ( V ) {\\displaystyle f^{-1}(V)} is the largest subset U of X such that f ( U ) ⊆ V , {\\displaystyle f(U)\\subseteq V,} this definition may be simplified into:",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 109,
"text": "As an open set is a set that is a neighborhood of all its points, a function f : X → Y {\\displaystyle f:X\\to Y} is continuous at every point of X if and only if it is a continuous function.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 110,
"text": "If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε − δ {\\displaystyle \\varepsilon -\\delta } definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 111,
"text": "Given x ∈ X , {\\displaystyle x\\in X,} a map f : X → Y {\\displaystyle f:X\\to Y} is continuous at x {\\displaystyle x} if and only if whenever B {\\displaystyle {\\mathcal {B}}} is a filter on X {\\displaystyle X} that converges to x {\\displaystyle x} in X , {\\displaystyle X,} which is expressed by writing B → x , {\\displaystyle {\\mathcal {B}}\\to x,} then necessarily f ( B ) → f ( x ) {\\displaystyle f({\\mathcal {B}})\\to f(x)} in Y . {\\displaystyle Y.} If N ( x ) {\\displaystyle {\\mathcal {N}}(x)} denotes the neighborhood filter at x {\\displaystyle x} then f : X → Y {\\displaystyle f:X\\to Y} is continuous at x {\\displaystyle x} if and only if f ( N ( x ) ) → f ( x ) {\\displaystyle f({\\mathcal {N}}(x))\\to f(x)} in Y . {\\displaystyle Y.} Moreover, this happens if and only if the prefilter f ( N ( x ) ) {\\displaystyle f({\\mathcal {N}}(x))} is a filter base for the neighborhood filter of f ( x ) {\\displaystyle f(x)} in Y . {\\displaystyle Y.}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 112,
"text": "Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 113,
"text": "In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 114,
"text": "In detail, a function f : X → Y {\\displaystyle f:X\\to Y} is sequentially continuous if whenever a sequence ( x n ) {\\displaystyle \\left(x_{n}\\right)} in X {\\displaystyle X} converges to a limit x , {\\displaystyle x,} the sequence ( f ( x n ) ) {\\displaystyle \\left(f\\left(x_{n}\\right)\\right)} converges to f ( x ) . {\\displaystyle f(x).} Thus, sequentially continuous functions \"preserve sequential limits.\" Every continuous function is sequentially continuous. If X {\\displaystyle X} is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X {\\displaystyle X} is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.",
"title": "Continuous functions between topological spaces"
},
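As a small illustration (a standard example, not from the article), sequential continuity fails for the sign function at 0, and a single convergent sequence witnesses the failure:

```python
# Testing sequential continuity at a point: does x_n -> 0 force f(x_n) -> f(0)?
def sign(x):
    return -1.0 if x < 0 else (0.0 if x == 0 else 1.0)

x_seq = [(-1) ** n / n for n in range(1, 2001)]   # x_n -> 0
f_seq = [sign(x) for x in x_seq]

print(x_seq[:4])   # [-1.0, 0.5, -0.333..., 0.25]
print(f_seq[:4])   # [-1.0, 1.0, -1.0, 1.0]: does not converge to sign(0) = 0
```

Since R is a metric (hence first-countable) space, this failure of sequential continuity already shows that sign is discontinuous at 0.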
{
"paragraph_id": 115,
"text": "For instance, consider the case of real-valued functions of one real variable:",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 116,
"text": "Theorem — A function f : A ⊆ R → R {\\displaystyle f:A\\subseteq \\mathbb {R} \\to \\mathbb {R} } is continuous at x 0 {\\displaystyle x_{0}} if and only if it is sequentially continuous at that point.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 117,
"text": "In terms of the interior operator, a function f : X → Y {\\displaystyle f:X\\to Y} between topological spaces is continuous if and only if for every subset B ⊆ Y , {\\displaystyle B\\subseteq Y,}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 118,
"text": "In terms of the closure operator, f : X → Y {\\displaystyle f:X\\to Y} is continuous if and only if for every subset A ⊆ X , {\\displaystyle A\\subseteq X,}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 119,
"text": "That is to say, given any element x ∈ X {\\displaystyle x\\in X} that belongs to the closure of a subset A ⊆ X , {\\displaystyle A\\subseteq X,} f ( x ) {\\displaystyle f(x)} necessarily belongs to the closure of f ( A ) {\\displaystyle f(A)} in Y . {\\displaystyle Y.} If we declare that a point x {\\displaystyle x} is close to a subset A ⊆ X {\\displaystyle A\\subseteq X} if x ∈ cl X A , {\\displaystyle x\\in \\operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: f {\\displaystyle f} is continuous if and only if for every subset A ⊆ X , {\\displaystyle A\\subseteq X,} f {\\displaystyle f} maps points that are close to A {\\displaystyle A} to points that are close to f ( A ) . {\\displaystyle f(A).} Similarly, f {\\displaystyle f} is continuous at a fixed given point x ∈ X {\\displaystyle x\\in X} if and only if whenever x {\\displaystyle x} is close to a subset A ⊆ X , {\\displaystyle A\\subseteq X,} then f ( x ) {\\displaystyle f(x)} is close to f ( A ) . {\\displaystyle f(A).}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 120,
"text": "Instead of specifying topological spaces by their open subsets, any topology on X {\\displaystyle X} can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset A {\\displaystyle A} of a topological space X {\\displaystyle X} to its topological closure cl X A {\\displaystyle \\operatorname {cl} _{X}A} satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl A {\\displaystyle A\\mapsto \\operatorname {cl} A} there exists a unique topology τ {\\displaystyle \\tau } on X {\\displaystyle X} (specifically, τ := { X ∖ cl A : A ⊆ X } {\\displaystyle \\tau :=\\{X\\setminus \\operatorname {cl} A:A\\subseteq X\\}} ) such that for every subset A ⊆ X , {\\displaystyle A\\subseteq X,} cl A {\\displaystyle \\operatorname {cl} A} is equal to the topological closure cl ( X , τ ) A {\\displaystyle \\operatorname {cl} _{(X,\\tau )}A} of A {\\displaystyle A} in ( X , τ ) . {\\displaystyle (X,\\tau ).} If the sets X {\\displaystyle X} and Y {\\displaystyle Y} are each associated with closure operators (both denoted by cl {\\displaystyle \\operatorname {cl} } ) then a map f : X → Y {\\displaystyle f:X\\to Y} is continuous if and only if f ( cl A ) ⊆ cl ( f ( A ) ) {\\displaystyle f(\\operatorname {cl} A)\\subseteq \\operatorname {cl} (f(A))} for every subset A ⊆ X . {\\displaystyle A\\subseteq X.}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 121,
"text": "Similarly, the map that sends a subset A {\\displaystyle A} of X {\\displaystyle X} to its topological interior int X A {\\displaystyle \\operatorname {int} _{X}A} defines an interior operator. Conversely, any interior operator A ↦ int A {\\displaystyle A\\mapsto \\operatorname {int} A} induces a unique topology τ {\\displaystyle \\tau } on X {\\displaystyle X} (specifically, τ := { int A : A ⊆ X } {\\displaystyle \\tau :=\\{\\operatorname {int} A:A\\subseteq X\\}} ) such that for every A ⊆ X , {\\displaystyle A\\subseteq X,} int A {\\displaystyle \\operatorname {int} A} is equal to the topological interior int ( X , τ ) A {\\displaystyle \\operatorname {int} _{(X,\\tau )}A} of A {\\displaystyle A} in ( X , τ ) . {\\displaystyle (X,\\tau ).} If the sets X {\\displaystyle X} and Y {\\displaystyle Y} are each associated with interior operators (both denoted by int {\\displaystyle \\operatorname {int} } ) then a map f : X → Y {\\displaystyle f:X\\to Y} is continuous if and only if f − 1 ( int B ) ⊆ int ( f − 1 ( B ) ) {\\displaystyle f^{-1}(\\operatorname {int} B)\\subseteq \\operatorname {int} \\left(f^{-1}(B)\\right)} for every subset B ⊆ Y . {\\displaystyle B\\subseteq Y.}",
"title": "Continuous functions between topological spaces"
},
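For finite spaces, the closure-operator criterion f(cl A) ⊆ cl(f(A)) can be verified over all subsets. The sketch below reuses toy spaces and an illustrative map; the closure is computed as the intersection of closed supersets.

```python
from itertools import combinations

# Closure-operator test of continuity on finite topological spaces.
X = {1, 2, 3}
TX = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)]
Y = {"a", "b"}
TY = [frozenset(), frozenset({"a"}), frozenset(Y)]
f = {1: "a", 2: "a", 3: "b"}

def closure(A, space, topology):
    closed = [frozenset(space - U) for U in topology]  # complements of open sets
    return frozenset.intersection(*[C for C in closed if A <= C])

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

print(all(frozenset(f[x] for x in closure(A, X, TX))
          <= closure(frozenset(f[x] for x in A), Y, TY)
          for A in subsets(X)))   # True: f is continuous
```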
{
"paragraph_id": 122,
"text": "Continuity can also be characterized in terms of filters. A function f : X → Y {\\displaystyle f:X\\to Y} is continuous if and only if whenever a filter B {\\displaystyle {\\mathcal {B}}} on X {\\displaystyle X} converges in X {\\displaystyle X} to a point x ∈ X , {\\displaystyle x\\in X,} then the prefilter f ( B ) {\\displaystyle f({\\mathcal {B}})} converges in Y {\\displaystyle Y} to f ( x ) . {\\displaystyle f(x).} This characterization remains true if the word \"filter\" is replaced by \"prefilter.\"",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 123,
"text": "If f : X → Y {\\displaystyle f:X\\to Y} and g : Y → Z {\\displaystyle g:Y\\to Z} are continuous, then so is the composition g ∘ f : X → Z . {\\displaystyle g\\circ f:X\\to Z.} If f : X → Y {\\displaystyle f:X\\to Y} is continuous and",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 124,
"text": "The possible topologies on a fixed set X are partially ordered: a topology τ 1 {\\displaystyle \\tau _{1}} is said to be coarser than another topology τ 2 {\\displaystyle \\tau _{2}} (notation: τ 1 ⊆ τ 2 {\\displaystyle \\tau _{1}\\subseteq \\tau _{2}} ) if every open subset with respect to τ 1 {\\displaystyle \\tau _{1}} is also open with respect to τ 2 . {\\displaystyle \\tau _{2}.} Then, the identity map",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 125,
"text": "is continuous if and only if τ 1 ⊆ τ 2 {\\displaystyle \\tau _{1}\\subseteq \\tau _{2}} (see also comparison of topologies). More generally, a continuous function",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 126,
"text": "stays continuous if the topology τ Y {\\displaystyle \\tau _{Y}} is replaced by a coarser topology and/or τ X {\\displaystyle \\tau _{X}} is replaced by a finer topology.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 127,
"text": "Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f − 1 {\\displaystyle f^{-1}} need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 128,
"text": "If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 129,
"text": "Given a function",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 130,
"text": "where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f − 1 ( A ) {\\displaystyle f^{-1}(A)} is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 131,
"text": "Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f − 1 ( U ) {\\displaystyle A=f^{-1}(U)} for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 132,
"text": "A topology on a set S is uniquely determined by the class of all continuous functions S → X {\\displaystyle S\\to X} into all topological spaces X. Dually, a similar idea can be applied to maps X → S . {\\displaystyle X\\to S.}",
"title": "Continuous functions between topological spaces"
},
{
"paragraph_id": 133,
"text": "If f : S → Y {\\displaystyle f:S\\to Y} is a continuous function from some subset S {\\displaystyle S} of a topological space X {\\displaystyle X} then a continuous extension of f {\\displaystyle f} to X {\\displaystyle X} is any continuous function F : X → Y {\\displaystyle F:X\\to Y} such that F ( s ) = f ( s ) {\\displaystyle F(s)=f(s)} for every s ∈ S , {\\displaystyle s\\in S,} which is a condition that often written as f = F | S . {\\displaystyle f=F{\\big \\vert }_{S}.} In words, it is any continuous function F : X → Y {\\displaystyle F:X\\to Y} that restricts to f {\\displaystyle f} on S . {\\displaystyle S.} This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If f : S → Y {\\displaystyle f:S\\to Y} is not continuous, then it could not possibly have a continuous extension. If Y {\\displaystyle Y} is a Hausdorff space and S {\\displaystyle S} is a dense subset of X {\\displaystyle X} then a continuous extension of f : S → Y {\\displaystyle f:S\\to Y} to X , {\\displaystyle X,} if one exists, will be unique. The Blumberg theorem states that if f : R → R {\\displaystyle f:\\mathbb {R} \\to \\mathbb {R} } is an arbitrary function then there exists a dense subset D {\\displaystyle D} of R {\\displaystyle \\mathbb {R} } such that the restriction f | D : D → R {\\displaystyle f{\\big \\vert }_{D}:D\\to \\mathbb {R} } is continuous; in other words, every function R → R {\\displaystyle \\mathbb {R} \\to \\mathbb {R} } can be restricted to some dense subset on which it is continuous.",
"title": "Related notions"
},
{
"paragraph_id": 134,
"text": "Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y {\\displaystyle f:X\\to Y} between particular types of partially ordered sets X {\\displaystyle X} and Y {\\displaystyle Y} is continuous if for each directed subset A {\\displaystyle A} of X , {\\displaystyle X,} we have sup f ( A ) = f ( sup A ) . {\\displaystyle \\sup f(A)=f(\\sup A).} Here sup {\\displaystyle \\,\\sup \\,} is the supremum with respect to the orderings in X {\\displaystyle X} and Y , {\\displaystyle Y,} respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.",
"title": "Related notions"
},
{
"paragraph_id": 135,
"text": "In category theory, a functor",
"title": "Related notions"
},
{
"paragraph_id": 136,
"text": "between two categories is called continuous if it commutes with small limits. That is to say,",
"title": "Related notions"
},
{
"paragraph_id": 137,
"text": "for any small (that is, indexed by a set I , {\\displaystyle I,} as opposed to a class) diagram of objects in C {\\displaystyle {\\mathcal {C}}} .",
"title": "Related notions"
},
{
"paragraph_id": 138,
"text": "A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.",
"title": "Related notions"
}
]
| In mathematics, a continuous function is a function such that a continuous variation of the argument induces a continuous variation of the value of the function. This means there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology. A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity. As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn. | 2001-08-25T14:09:52Z | 2023-12-27T17:40:03Z | [
"Template:Springer",
"Template:Calculus",
"Template:Nowrap",
"Template:Anchor",
"Template:Main",
"Template:Citation",
"Template:Reflist",
"Template:Em",
"Template:Math",
"Template:Div col",
"Template:Commons category",
"Template:Div col end",
"Template:Cite web",
"Template:Cite book",
"Template:Short description",
"Template:Mvar",
"Template:Block indent",
"Template:Sfn",
"Template:Collapse top",
"Template:Calculus topics",
"Template:Authority control",
"Template:Cite journal",
"Template:Quote frame",
"Template:Dugundji Topology",
"Template:Analysis-footer",
"Template:Math theorem",
"Template:Collapse bottom"
]
| https://en.wikipedia.org/wiki/Continuous_function |
6,123 | Curl (mathematics) | In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field.
A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.
The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇ × F {\displaystyle \nabla \times \mathbf {F} } , which also reveals the relation between curl (rotor), divergence, and gradient operators.
Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation ∇ × {\displaystyle \nabla \times } for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.
The curl of a vector field F, denoted by curl F, or ∇ × F {\displaystyle \nabla \times \mathbf {F} } , or rot F, is an operator that maps C^k functions in R^3 to C^(k−1) functions in R^3, and in particular, it maps continuously differentiable functions R^3 → R^3 to continuous functions R^3 → R^3. It can be defined in several ways, to be mentioned below:
One way to define the curl of a vector field at a point is implicitly through its projections onto various axes passing through the point: if u ^ {\displaystyle \mathbf {\hat {u}} } is any unit vector, the projection of the curl of F onto u ^ {\displaystyle \mathbf {\hat {u}} } may be defined to be the limiting value of a closed line integral in a plane orthogonal to u ^ {\displaystyle \mathbf {\hat {u}} } divided by the area enclosed, as the path of integration is contracted indefinitely around the point.
More specifically, the curl is defined at a point p as
where the line integral is calculated along the boundary C of the area A in question, |A| being the magnitude of the area. This equation defines the projection of the curl of F onto u ^ {\displaystyle \mathbf {\hat {u}} } . The infinitesimal surfaces bounded by C have u ^ {\displaystyle \mathbf {\hat {u}} } as their normal. C is oriented via the right-hand rule.
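Written out (the standard form of this definition, supplied here for concreteness):

```latex
(\nabla \times \mathbf{F})(p) \cdot \mathbf{\hat{u}}
  \;=\; \lim_{A \to 0} \frac{1}{|A|} \oint_{C} \mathbf{F} \cdot d\mathbf{r}
```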
The above formula means that the projection of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field projected onto a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately.
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.
Another way one can define the curl vector of a function F at a point p is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing p divided by the volume enclosed, as the shell is contracted indefinitely around p.
More specifically, the curl may be defined by the vector formula
where the surface integral is calculated along the boundary S of the volume V, |V| being the magnitude of the volume, and n ^ {\displaystyle \mathbf {\hat {n}} } pointing outward from the surface S perpendicularly at every point in S.
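Explicitly (the standard form of this vector formula, supplied here for concreteness):

```latex
(\nabla \times \mathbf{F})(p)
  \;=\; \lim_{V \to 0} \frac{1}{|V|} \oint_{S} \mathbf{\hat{n}} \times \mathbf{F} \, dS
```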
In this formula, the cross product in the integrand measures the tangential component of F at each point on the surface S, together with the orientation of these tangential components with respect to the surface S. Thus, the surface integral measures the overall extent to which F circulates around S, together with the net orientation of this circulation in space. The curl of a vector field at a point is then the infinitesimal volume density of the net vector circulation (i.e., both magnitude and spatial orientation) of the field around the point.
To this definition fits naturally another global formula (similar to the Kelvin–Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume.
Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates:
The equation for each component (curl F)k can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).
If (x1, x2, x3) are the Cartesian coordinates and (u1, u2, u3) are the orthogonal coordinates, then
is the length of the coordinate vector corresponding to ui. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.
The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if ∇ is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.
Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), ∇ × F is, for F composed of [Fx, Fy, Fz] (where the subscripts indicate the components of the vector, not partial derivatives):
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:
Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection.
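As a quick concrete check of this expansion, the curl can be computed symbolically. The sketch below assumes sympy.vector as the library choice and an arbitrary example field F = (−y, x, 0):

```python
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')          # Cartesian frame with unit vectors i, j, k
F = -N.y * N.i + N.x * N.j   # sample field F = (-y, x, 0)

print(curl(F))               # 2*N.k: a uniform rotation about the z-axis
```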
In a general coordinate system, the curl is given by
where ε denotes the Levi-Civita tensor, ∇ the covariant derivative, g {\displaystyle g} is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative:
where Rk are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as:
Here ♭ and ♯ are the musical isomorphisms, and ★ is the Hodge star operator. This formula shows how to calculate the curl of F in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the centre of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The curl of the vector at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be clearly seen in the examples below.
The vector field
can be decomposed as
Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.
Calculating the curl:
The resulting vector field describing the curl would at all points be pointing in the negative z direction. The results of this equation align with what could have been predicted using the right-hand rule in a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.
For the vector field
the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line x = 3, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative z direction. Inversely, if placed on x = −3, the object would rotate counterclockwise and the right-hand rule would result in a positive z direction.
Calculating the curl:
The curl points in the negative z direction when x is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane x = 0.
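This qualitative behavior can be checked numerically. The sketch below assumes the representative field F = (0, −x², 0), which has this kind of one-sided shear; the step size h is an arbitrary choice:

```python
# Central-difference estimate of the z-component of curl F.
def Fx(x, y): return 0.0
def Fy(x, y): return -x ** 2

def curl_z(x, y, h=1e-5):
    dFy_dx = (Fy(x + h, y) - Fy(x - h, y)) / (2 * h)
    dFx_dy = (Fx(x, y + h) - Fx(x, y - h)) / (2 * h)
    return dFy_dx - dFx_dy

print(curl_z(3.0, 0.0))    # approx -6.0: negative z for x > 0
print(curl_z(-3.0, 0.0))   # approx +6.0: positive z for x < 0
```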
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields v and F can be shown to be
Interchanging the vector field v and the ∇ operator, we arrive at the cross product of a vector field with the curl of a vector field:
where ∇_F is the Feynman subscript notation, which considers only the variation due to the vector field F (i.e., in this case, v is treated as being constant in space).
Another example is the curl of a curl of a vector field. It can be shown that in general coordinates
and this identity defines the vector Laplacian of F, symbolized as ∇^2F.
The curl of the gradient of any scalar field φ is always the zero vector field
which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.
The divergence of the curl of any vector field is equal to zero:
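In symbols, ∇ · (∇ × F) = 0. Both this identity and the preceding one can be checked symbolically; here is a minimal sketch with sympy.vector, where the scalar field φ and vector field F are arbitrary smooth examples:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
phi = sp.sin(N.x) * N.y ** 2 + sp.exp(N.z)        # sample scalar field
F = N.x * N.y * N.i + N.z * N.j + N.y ** 2 * N.k  # sample vector field

print(curl(gradient(phi)))  # 0 (the zero vector): curl grad = 0
print(divergence(curl(F)))  # 0: div curl = 0
```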
If φ is a scalar valued function and F is a vector field, then
The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , these all being 3-dimensional spaces.
In 3 dimensions, a differential 0-form is a real-valued function f(x, y, z); a differential 1-form is the following expression, where the coefficients are functions:
a differential 2-form is the formal sum, again with function coefficients:
and a differential 3-form is defined by a single term with one function as coefficient:
(Here the a-coefficients are real functions of three variables; the "wedge products", e.g. dx ∧ dy, can be interpreted as some kind of oriented area elements, dx ∧ dy = −dy ∧ dx, etc.)
The exterior derivative of a k-form in R^3 is defined as the (k + 1)-form from above—and in R^n if, e.g.,
then the exterior derivative d leads to
The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,
and antisymmetry,
the twofold application of the exterior derivative yields 0 {\displaystyle 0} (the zero k + 2 {\displaystyle k+2} -form).
Thus, denoting the space of k-forms by Ω^k(R^3) and the exterior derivative by d, one gets a sequence:
Here Ω^k(R^3) is the space of sections of the exterior algebra Λ^k(R^3) vector bundle over R^3, whose dimension is the binomial coefficient (3 choose k); note that Ω^k(R^3) = 0 for k > 3 or k < 0. Writing only dimensions, one obtains a row of Pascal's triangle:
the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.
Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, k-forms can be identified with k-vector fields (k-forms are k-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between k-vectors and (n − k)-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange k-forms, k-vector fields, (n − k)-forms, and (n − k)-vector fields; this is known as Hodge duality. Concretely, on R^3 this is given by:
Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields:
On the other hand, the fact that d^2 = 0 corresponds to the identities
for any scalar field f, and
for any vector field v.
Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and n-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and (n − 1)-forms are always fiberwise n-dimensional and can be identified with vector fields.
Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are
so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to a 6-dimensional vector space, and so one has
which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (d^2 = 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.
However, one can define a curl of a vector field as a 2-vector field in general, as described below.
2-vectors correspond to the exterior power Λ^2V; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra s o {\displaystyle {\mathfrak {so}}} (V) of infinitesimal rotations. This has (n choose 2) = 1/2n(n − 1) dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have n = 1/2n(n − 1), which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra s o ( 4 ) {\displaystyle {\mathfrak {so}}(4)} .
The curl of a 3-dimensional vector field which only depends on 2 coordinates (say x and y) is simply a vertical vector field (in the z direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.
Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.
In the case where the divergence of a vector field V is zero, a vector field W exists such that V = curl(W). This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.
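For instance (a standard example, supplied here for concreteness), the constant field V = (0, 0, 1) has zero divergence and is the curl of the vector potential W = (−y/2, x/2, 0):

```latex
\nabla \times \left(-\tfrac{y}{2},\; \tfrac{x}{2},\; 0\right)
  = \left(0,\; 0,\; \tfrac{\partial}{\partial x}\tfrac{x}{2}
      - \tfrac{\partial}{\partial y}\left(-\tfrac{y}{2}\right)\right)
  = (0,\, 0,\, 1).
```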
If W is a vector field with curl(W) = V, then adding any gradient vector field grad(f) to W will result in another vector field W + grad(f) such that curl(W + grad(f)) = V as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law. | [
{
"paragraph_id": 0,
"text": "In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the \"rate of rotation\" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇ × F {\\displaystyle \\nabla \\times \\mathbf {F} } , which also reveals the relation between curl (rotor), divergence, and gradient operators.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation ∇ × {\\displaystyle \\nabla \\times } for the curl.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The name \"curl\" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The curl of a vector field F, denoted by curl F, or ∇ × F {\\displaystyle \\nabla \\times \\mathbf {F} } , or rot F, is an operator that maps C functions in R to C functions in R, and in particular, it maps continuously differentiable functions R → R to continuous functions R → R. It can be defined in several ways, to be mentioned below:",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "One way to define the curl of a vector field at a point is implicitly through its projections onto various axes passing through the point: if u ^ {\\displaystyle \\mathbf {\\hat {u}} } is any unit vector, the projection of the curl of F onto u ^ {\\displaystyle \\mathbf {\\hat {u}} } may be defined to be the limiting value of a closed line integral in a plane orthogonal to u ^ {\\displaystyle \\mathbf {\\hat {u}} } divided by the area enclosed, as the path of integration is contracted indefinitely around the point.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "More specifically, the curl is defined at a point p as",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "where the line integral is calculated along the boundary C of the area A in question, |A| being the magnitude of the area. This equation defines the projection of the curl of F onto u ^ {\\displaystyle \\mathbf {\\hat {u}} } . The infinitesimal surfaces bounded by C have u ^ {\\displaystyle \\mathbf {\\hat {u}} } as their normal. C is oriented via the right-hand rule.",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "The above formula means that the projection of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field projected onto a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately.",
"title": "Definition"
},
{
"paragraph_id": 10,
"text": "To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.",
"title": "Definition"
},
{
"paragraph_id": 11,
"text": "Another way one can define the curl vector of a function F at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing p divided by the volume enclosed, as the shell is contracted indefinitely around p.",
"title": "Definition"
},
{
"paragraph_id": 12,
"text": "More specifically, the curl may be defined by the vector formula",
"title": "Definition"
},
{
"paragraph_id": 13,
"text": "where the surface integral is calculated along the boundary S of the volume V, |V| being the magnitude of the volume, and n ^ {\\displaystyle \\mathbf {\\hat {n}} } pointing outward from the surface S perpendicularly at every point in S.",
"title": "Definition"
},
{
"paragraph_id": 14,
"text": "In this formula, the cross product in the integrand measures the tangential component of F at each point on the surface S, together with the orientation of these tangential components with respect to the surface S. Thus, the surface integral measures the overall extent to which F circulates around S, together with the net orientation of this circulation in space. The curl of a vector field at a point is then the infinitesimal volume density of the net vector circulation (i.e., both magnitude and spatial orientation) of the field around the point.",
"title": "Definition"
},
{
"paragraph_id": 15,
"text": "To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume.",
"title": "Definition"
},
{
"paragraph_id": 16,
"text": "Whereas the above two definitions of the curl are coordinate free, there is another \"easy to memorize\" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates:",
"title": "Definition"
},
{
"paragraph_id": 17,
"text": "The equation for each component (curl F)k can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).",
"title": "Definition"
},
{
"paragraph_id": 18,
"text": "If (x1, x2, x3) are the Cartesian coordinates and (u1, u2, u3) are the orthogonal coordinates, then",
"title": "Definition"
},
{
"paragraph_id": 19,
"text": "is the length of the coordinate vector corresponding to ui. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.",
"title": "Definition"
},
{
"paragraph_id": 20,
"text": "In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.",
"title": "Usage"
},
{
"paragraph_id": 21,
"text": "The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if ∇ is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.",
"title": "Usage"
},
{
"paragraph_id": 22,
"text": "Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations),∇ × F is, for F composed of [Fx, Fy, Fz] (where the subscripts indicate the components of the vector, not partial derivatives):",
"title": "Usage"
},
{
"paragraph_id": 23,
"text": "where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:",
"title": "Usage"
},
{
"paragraph_id": 24,
"text": "Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection.",
"title": "Usage"
},
{
"paragraph_id": 25,
"text": "In a general coordinate system, the curl is given by",
"title": "Usage"
},
{
"paragraph_id": 26,
"text": "where ε denotes the Levi-Civita tensor, ∇ the covariant derivative, g {\\displaystyle g} is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative:",
"title": "Usage"
},
{
"paragraph_id": 27,
"text": "where Rk are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as:",
"title": "Usage"
},
{
"paragraph_id": 28,
"text": "Here ♭ and ♯ are the musical isomorphisms, and ★ is the Hodge star operator. This formula shows how to calculate the curl of F in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.",
"title": "Usage"
},
{
"paragraph_id": 29,
"text": "Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the centre of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The curl of the vector at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be clearly seen in the examples below.",
"title": "Examples"
},
{
"paragraph_id": 30,
"text": "The vector field",
"title": "Examples"
},
{
"paragraph_id": 31,
"text": "can be decomposed as",
"title": "Examples"
},
{
"paragraph_id": 32,
"text": "Upon visual inspection, the field can be described as \"rotating\". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.",
"title": "Examples"
},
{
"paragraph_id": 33,
"text": "Calculating the curl:",
"title": "Examples"
},
{
"paragraph_id": 34,
"text": "The resulting vector field describing the curl would at all points be pointing in the negative z direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.",
"title": "Examples"
},
{
"paragraph_id": 35,
"text": "For the vector field",
"title": "Examples"
},
{
"paragraph_id": 36,
"text": "the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line x = 3, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative z direction. Inversely, if placed on x = −3, the object would rotate counterclockwise and the right-hand rule would result in a positive z direction.",
"title": "Examples"
},
{
"paragraph_id": 37,
"text": "Calculating the curl:",
"title": "Examples"
},
{
"paragraph_id": 38,
"text": "The curl points in the negative z direction when x is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane x = 0.",
"title": "Examples"
},
{
"paragraph_id": 39,
"text": "In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields v and F can be shown to be",
"title": "Identities"
},
{
"paragraph_id": 40,
"text": "Interchanging the vector field v and ∇ operator, we arrive at the cross product of a vector field with curl of a vector field:",
"title": "Identities"
},
{
"paragraph_id": 41,
"text": "where ∇F is the Feynman subscript notation, which considers only the variation due to the vector field F (i.e., in this case, v is treated as being constant in space).",
"title": "Identities"
},
{
"paragraph_id": 42,
"text": "Another example is the curl of a curl of a vector field. It can be shown that in general coordinates",
"title": "Identities"
},
{
"paragraph_id": 43,
"text": "and this identity defines the vector Laplacian of F, symbolized as ∇F.",
"title": "Identities"
},
{
"paragraph_id": 44,
"text": "The curl of the gradient of any scalar field φ is always the zero vector field",
"title": "Identities"
},
{
"paragraph_id": 45,
"text": "which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.",
"title": "Identities"
},
{
"paragraph_id": 46,
"text": "The divergence of the curl of any vector field is equal to zero:",
"title": "Identities"
},
{
"paragraph_id": 47,
"text": "If φ is a scalar valued function and F is a vector field, then",
"title": "Identities"
},
{
"paragraph_id": 48,
"text": "The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra s o ( 3 ) {\\displaystyle {\\mathfrak {so}}(3)} of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and s o ( 3 ) {\\displaystyle {\\mathfrak {so}}(3)} , these all being 3-dimensional spaces.",
"title": "Generalizations"
},
{
"paragraph_id": 49,
"text": "In 3 dimensions, a differential 0-form is a real-valued function f(x, y, z); a differential 1-form is the following expression, where the coefficients are functions:",
"title": "Generalizations"
},
{
"paragraph_id": 50,
"text": "a differential 2-form is the formal sum, again with function coefficients:",
"title": "Generalizations"
},
{
"paragraph_id": 51,
"text": "and a differential 3-form is defined by a single term with one function as coefficient:",
"title": "Generalizations"
},
{
"paragraph_id": 52,
"text": "(Here the a-coefficients are real functions of three variables; the \"wedge products\", e.g. dx ∧ dy, can be interpreted as some kind of oriented area elements, dx ∧ dy = −dy ∧ dx, etc.)",
"title": "Generalizations"
},
{
"paragraph_id": 53,
"text": "The exterior derivative of a k-form in R is defined as the (k + 1)-form from above—and in R if, e.g.,",
"title": "Generalizations"
},
{
"paragraph_id": 54,
"text": "then the exterior derivative d leads to",
"title": "Generalizations"
},
{
"paragraph_id": 55,
"text": "The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,",
"title": "Generalizations"
},
{
"paragraph_id": 56,
"text": "and antisymmetry,",
"title": "Generalizations"
},
{
"paragraph_id": 57,
"text": "the twofold application of the exterior derivative yields 0 {\\displaystyle 0} (the zero k + 2 {\\displaystyle k+2} -form).",
"title": "Generalizations"
},
{
"paragraph_id": 58,
"text": "Thus, denoting the space of k-forms by Ω(R) and the exterior derivative by d one gets a sequence:",
"title": "Generalizations"
},
{
"paragraph_id": 59,
"text": "Here Ω(R) is the space of sections of the exterior algebra Λ(R) vector bundle over R, whose dimension is the binomial coefficient (k); note that Ω(R) = 0 for k > 3 or k < 0. Writing only dimensions, one obtains a row of Pascal's triangle:",
"title": "Generalizations"
},
{
"paragraph_id": 60,
"text": "the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.",
"title": "Generalizations"
},
{
"paragraph_id": 61,
"text": "Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, k-forms can be identified with k-vector fields (k-forms are k-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between k-vectors and (n − k)-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange k-forms, k-vector fields, (n − k)-forms, and (n − k)-vector fields; this is known as Hodge duality. Concretely, on R this is given by:",
"title": "Generalizations"
},
{
"paragraph_id": 62,
"text": "Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields:",
"title": "Generalizations"
},
{
"paragraph_id": 63,
"text": "On the other hand, the fact that d = 0 corresponds to the identities",
"title": "Generalizations"
},
{
"paragraph_id": 64,
"text": "for any scalar field f, and",
"title": "Generalizations"
},
{
"paragraph_id": 65,
"text": "for any vector field v.",
"title": "Generalizations"
},
{
"paragraph_id": 66,
"text": "Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and n-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and (n − 1)-forms are always fiberwise n-dimensional and can be identified with vector fields.",
"title": "Generalizations"
},
{
"paragraph_id": 67,
"text": "Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are",
"title": "Generalizations"
},
{
"paragraph_id": 68,
"text": "so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to 6-dimensional vector space, and so one has",
"title": "Generalizations"
},
{
"paragraph_id": 69,
"text": "which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (d = 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.",
"title": "Generalizations"
},
{
"paragraph_id": 70,
"text": "However, one can define a curl of a vector field as a 2-vector field in general, as described below.",
"title": "Generalizations"
},
{
"paragraph_id": 71,
"text": "2-vectors correspond to the exterior power ΛV; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra s o {\\displaystyle {\\mathfrak {so}}} (V) of infinitesimal rotations. This has (2) = 1/2n(n − 1) dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) we have n = 1/2n(n − 1), which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra s o ( 4 ) {\\displaystyle {\\mathfrak {so}}(4)} .",
"title": "Generalizations"
},
{
"paragraph_id": 72,
"text": "The curl of a 3-dimensional vector field which only depends on 2 coordinates (say x and y) is simply a vertical vector field (in the z direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.",
"title": "Generalizations"
},
{
"paragraph_id": 73,
"text": "Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.",
"title": "Generalizations"
},
{
"paragraph_id": 74,
"text": "In the case where the divergence of a vector field V is zero, a vector field W exists such that V = curl(W). This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.",
"title": "Inverse"
},
{
"paragraph_id": 75,
"text": "If W is a vector field with curl(W) = V, then adding any gradient vector field grad(f) to W will result in another vector field W + grad(f) such that curl(W + grad(f)) = V as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law.",
"title": "Inverse"
}
]
| In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve. The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇ × F , which also reveals the relation between curl (rotor), divergence, and gradient operators. Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation ∇ × for the curl. The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839. | 2001-09-16T17:50:54Z | 2023-12-27T08:29:34Z | [
"Template:Cite web",
"Template:Cbignore",
"Template:Multiple image",
"Template:Mvar",
"Template:Block indent",
"Template:MathWorld",
"Template:ISBN",
"Template:Springer",
"Template:Short description",
"Template:Nowrap",
"Template:Main",
"Template:Cite book",
"Template:Calculus",
"Template:Clear",
"Template:Reflist",
"Template:Citation",
"Template:Calculus topics",
"Template:Redirect",
"Template:Math",
"Template:Music",
"Template:Citation needed",
"Template:Cite arXiv"
]
| https://en.wikipedia.org/wiki/Curl_(mathematics) |
6,125 | Carl Friedrich Gauss | Johann Carl Friedrich Gauss (German: Gauß [kaʁl ˈfʁiːdʁɪç ˈɡaʊs] ; Latin: Carolus Fridericus Gauss; 30 April 1777 – 23 February 1855) was a German mathematician, geodesist, and physicist who made significant contributions to many fields in mathematics and science. Gauss ranks among history's most influential mathematicians. He has been referred to as the "Prince of Mathematicians".
Gauss was a child prodigy in mathematics. While still a student at the University of Göttingen, he propounded several mathematical theorems. Gauss completed his masterpieces Disquisitiones Arithmeticae and Theoria motus corporum coelestium as a private scholar. Later he was director of the Göttingen Observatory and professor at the university for nearly half a century, from 1807 until his death in 1855.
Gauss published the second and third complete proofs of the fundamental theorem of algebra, made contributions to number theory and developed the theories of binary and ternary quadratic forms. He is credited with inventing the fast Fourier transform algorithm and was instrumental in the discovery of the dwarf planet Ceres. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he discovered before Adrien-Marie Legendre published on the method, and which is still used in all sciences to minimize measurement error. He also anticipated non-Euclidean geometry, and was the first to analyze it, even coining the term. He is considered one of its discoverers alongside Nikolai Lobachevsky and János Bolyai.
Gauss invented the heliotrope in 1821, a magnetometer in 1833 and, alongside Wilhelm Eduard Weber, invented the first electromagnetic telegraph in 1833.
Gauss was a careful author. He refused to publish incomplete work. Although he published extensively during his life, he left behind several works to be published posthumously.
Although Gauss was known to dislike teaching, some of his students became influential mathematicians. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment.
Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick (Braunschweig), in the Duchy of Brunswick-Wolfenbüttel (now part of Lower Saxony, Germany), to a family of lower social status. His father Gebhard Dietrich Gauss (1744–1808) worked in several jobs, as butcher, bricklayer, gardener, and as treasurer of a death-benefit fund. Gauss characterized his father as an honourable and respected man, but rough and dominating at home. He was experienced in writing and calculating, but his wife Dorothea (1743–1839), Carl Friedrich's mother, was nearly illiterate. Carl Friedrich was christened and confirmed in a church near the school that he attended as a child. He had one elder brother from his father's first marriage.
Gauss was a child prodigy in the field of mathematics. When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick, who sent him to the local Collegium Carolinum, which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers. Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the Hanoverian University of Göttingen until 1798. It is not known why Gauss went to Göttingen and not to the University of Helmstedt near his native Brunswick, but it is assumed that the large library of Göttingen, where students were allowed to borrow books and take them home, was the decisive reason. One of his professors in mathematics was Abraham Gotthelf Kästner, whom Gauss called "the leading mathematician among poets, and the leading poet among mathematicians" because of his epigrams. Gauss depicted him in a drawing showing a lecture scene where he produced errors in a simple calculation. Astronomy was taught by Karl Felix von Seyffer (1762–1822), with whom Gauss stayed in correspondence after graduation; Olbers and Gauss mocked him in their correspondence. On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure. Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes.
Though he was a registered student at the university, it is evident that he was self-taught in mathematics, since he independently rediscovered several theorems. He achieved a breakthrough in a geometrical problem that had occupied mathematicians since the days of the Ancient Greeks when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery was the subject of his first publication and ultimately led Gauss to choose mathematics instead of philology as a career. Gauss' mathematical diary shows that, in the same year, he was also productive in number theory. He made advanced discoveries in modular arithmetic, found the first proof of the quadratic reciprocity law, and dealt with the prime number theorem. Many ideas for his mathematical magnum opus Disquisitiones arithmeticae, published in 1801, date from this time.
Gauss graduated as a Doctor of Philosophy in 1799. He did not graduate from Göttingen, as is sometimes stated, but rather, at the Duke of Brunswick's special request, from the University of Helmstedt, the only state university of the duchy. There, Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss got the degree in absentia without the further oral examination that was usually requested. The Duke then granted him his cost of living as a private scholar in Brunswick. Gauss showed his gratitude and loyalty for this support when he refused several calls from the Russian Academy of Sciences in St. Petersburg and from Landshut University. Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans: the Duke was mortally wounded in the Battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped. He then followed a call to the University of Göttingen, an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory.
Studying the calculation of asteroid orbits, Gauss established contact with the astronomical community of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding and Friedrich Wilhelm Bessel, an informal group of astronomers known as the Celestial police. One of their aims was the discovery of further planets, and they assembled data on asteroids and comets as a basis for Gauss's research. Gauss was thereby able to develop new, powerful methods for the determination of orbits, which he later published in his astronomical magnum opus Theoria motus corporum coelestium (1809).
Gauss arrived at Göttingen in November 1807, and in the following years he was confronted with the demand for two thousand francs from the Westphalian government as a war contribution. Without having yet received his salary, he could not raise this enormous amount. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg, paid the sum.
Gauss took on the directorate of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower, with usable, but partly out-of-date instruments. The construction of a new observatory had been approved by Prince-elector George III in principle since 1802, and the Westphalian government continued the planning, but the building was not finished until October 1816. It contained new up-to-date instruments, for instance two meridian circles from Repsold and Reichenbach, and a heliometer from Fraunhofer.
The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: in the first two decades of the 19th century astronomy was the main focus, in the third decade geodesy, and in the fourth decade he occupied himself with physics, mainly magnetism.
Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. His last observation was the solar eclipse of 28 July 1851. On 23 February 1855, Gauss died of a heart attack in Göttingen; he is interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral.
The day after Gauss's death his brain was removed, preserved and studied by Rudolf Wagner, who found its mass to be slightly above average, at 1,492 grams (52.6 oz). The cerebral area was determined by Wagner's son Hermann in his doctoral thesis to be 219,588 square millimetres (340.362 sq in). Highly developed convolutions were also found, which in the early 20th century were suggested as the explanation for his genius. After various previous investigations, a magnetic resonance study of 1998, done at the Max Planck Institute for Biophysical Chemistry in Göttingen, gave no results which could be used to explain his mathematical abilities.
In 2013, a neurobiologist at the same institute discovered that Gauss's brain had been mixed up, due to mislabelling, with that of the physician Conrad Heinrich Fuchs, who died in Göttingen a few months after Gauss. A further investigation showed no remarkable anomalies in the brains of either person. Thus, all investigations on Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs.
Gauss married Johanna Osthoff (1780–1809) on 9 October 1805. They had two sons and a daughter: Joseph (1806–1873), Wilhelmina (1808–1840) and Louis (1809–1810). Johanna died on 11 October 1809 one month after the birth of Louis, who himself died a few months later.
Gauss remarried within a year, on 4 August 1810, to Wilhelmine (Minna) Waldeck (1788–1831), a friend of his first wife. They had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879) and Therese Staufenau (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade. Therese then took over the household and cared for Gauss for the rest of his life; after her father's death she married the actor Constantin Staufenau. Her sister Wilhelmina married the orientalist Heinrich Ewald. Gauss' mother Dorothea lived in his house from 1817 until her death in 1839.
The eldest son Joseph, whilst still a schoolboy, helped his father as an assistant during his survey campaign in summer 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network to the western parts of the kingdom. With his geodetical qualifications he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways. In 1836 he studied the railroad system in the US for some months.
Eugen left Göttingen in September 1830 and emigrated to the United States, where he joined the army for five years. He then worked for the American Fur Company in the Midwest, where he learned the Sioux language. Later, he moved to Missouri and became a successful businessman. Wilhelm married a niece of the astronomer Friedrich Bessel and also moved to Missouri in 1837, starting as a farmer and later becoming wealthy in the shoe business in St. Louis. Eugene and William have numerous descendants in America, but the descendants left in Germany all derive from Joseph, as the Gauss daughters had no children.
At the end of the 18th century, German academic mathematics was in a poor condition: the prolific mathematicians of that time worked in France and other European countries. The mathematical mainstream was oriented towards solving practical problems in mechanics, astronomy, geodesy, etc. In this scientific environment, Gauss can be seen, following Felix Klein, as typical of both 18th and 19th-century mathematicians. His interest in practical applicability, for example in geodesy and astronomy, qualified Gauss to be taken as a typical applied mathematician of the century of enlightenment. On the other hand, he began research in numerous parts of mathematics without defined links to practical purposes, and thus showed himself as a pioneer of what was later called "pure mathematics". In contrast to earlier mathematicians, such as Leonhard Euler—who let their readers take part in their reasoning as they developed new ideas, and included certain erroneous deviations from the correct path—Gauss developed a new style of direct and complete explanation that did not attempt to show the reader the author's train of thought.
But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai on 2 September 1808 as follows:
It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from it, in order to go into darkness again.
Gauss refused to publish work which he did not consider complete and above criticism. This perfectionism was in keeping with the motto of his personal seal Pauca sed Matura ("Few, but Ripe"). His personal diary indicates that he had made several mathematical discoveries years or decades before contemporaries published them. He put down new ideas in writing to his colleagues, who encouraged him to publish, and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself, claiming that the initial discovery of ideas was easy, but preparing a publishable elaboration was a demanding matter for him, for either lack of time or "serenity of mind". Nevertheless, he published many short communications of urgent content in various journals, but his "Collected Works" contain a considerable literary estate, too. Eric Temple Bell said that if Gauss had published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years.
On certain occasions, Gauss claimed that a finding published by another scholar had already been in his possession previously. Thus his concept of priority as "the first to discover, not the first to publish" differed from that of his scientific contemporaries. In contrast to his perfectionism in presenting mathematical ideas, he was criticized for his negligent way of quoting. He justified himself with a very special view of correct quoting: if he gave references, then only in a quite complete way, with respect to the previous authors of importance, which no one should ignore; but quoting in this way needed knowledge of the history of science and more time than he wished to spend.
Though Gauss is seen as a master of axiomatic presentation, it became obvious from his posthumously published papers, his diary, and short glosses in his own textbooks, that he worked to a great extent in an empirical way. Gauss was a busy and enthusiastic calculator throughout his life. He coped with the enormous workload by using skillful tools. Gauss used a lot of mathematical tables, examined their qualities, and constructed new tables on various matters for personal use. He developed new tools for effective calculation, for example Gaussian elimination. It has been taken as a curious feature of his working style that he carried out calculations with a high degree of precision, much more than required. Very likely, this method gave him a lot of material which he used in finding theorems in number theory.
It was well known to his close colleagues that Gauss disliked giving academic lectures. He first stated this to Olbers in 1802, so this aversion was not the result of bad experience. Thus he refused to accept any academic position with teaching duties during his years as a private scholar. But from the start of his academic career at Göttingen in 1807, he continuously gave lectures until 1854. He often complained about the efforts of teaching, feeling that it was a waste of his time, but on the other hand he occasionally described one or other student as talented. In all these 47 years of teaching he gave only three lectures on subjects of pure mathematics, whereas most of his lectures dealt with astronomy, geodesy, and applied mathematics. However, many of Gauss' students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould, Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, Seeber, von Staudt, Stern, Ursin; as geoscientists Sartorius von Waltershausen and Wappäus.
Gauss wrote no textbooks, and (unlike his friends Bessel, Humboldt, and Olbers) he disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter and the essay Erdmagnetismus und Magnetometer of 1836.
Gauss published his papers and books exclusively in Latin or in German.
At Göttingen University, Gauss was accompanied by a staff of other lecturers in his disciplines, who completed the educational program: for instance the brilliant Thibaut in mathematics, in physics Weber and Mayer, well known for his successful textbooks, and Harding, who took the main part of lectures in astronomy. When the observatory was completed, Gauss took his living accommodation in the western wing of the new observatory and Harding in the eastern one. Once they had been on friendly terms with one another, but in the course of time they became alienated, possibly – as some biographers presume – because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer. The years since 1820 were evaluated as a "period of lower astronomical activity". The new, well-equipped observatory did not work as effectively as others; Gauss' astronomical research had the character of a one-man enterprise, and the university established a place for an assistant only after Harding's death in 1834. Nevertheless, Gauss twice refused the opportunity to solve the problem by accepting offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy, with no great lecturing duties, as well as offers from Leipzig University in 1810 and from Vienna University in 1842. Perhaps the reason was the difficult situation of his family. In his later years, Gauss was one of the best-paid professors of the university.
When his friend Friedrich Wilhelm Bessel, who was in trouble at Königsberg University because of his lack of an academic title, asked him for help in 1810, Gauss provided a doctorate honoris causa for Bessel from the Philosophy Faculty of Göttingen in March 1811. Gauss gave another recommendation for an honorary degree for Sophie Germain, but only shortly before her death, so she never received it. He also gave successful support to the talented mathematician Gotthold Eisenstein in Berlin.
After King William IV's death in 1837, the personal union between the kingdoms of Great Britain and Ireland and Hanover ceased. In the same year, the new Hanoverian king Ernest Augustus annulled the constitution given to the state by his brother in 1833. Seven prominent professors, later known as the "Göttingen Seven", protested against this, among them Gauss' friend and collaborator Wilhelm Weber and Gauss' son-in-law Heinrich Ewald. All of them were dismissed, and three of them were expelled, but Ewald and Weber could stay in Göttingen. Ewald took a position at the University of Tübingen in 1838, where Gauss' daughter Wilhelmina died soon afterwards in 1840, and Weber went to Leipzig in 1843; but both of them returned to their Göttingen positions in 1849 as the only ones of the Göttingen Seven to do so. Gauss was deeply affected by this quarrel, but saw no possibility to help them.
Gauss took part in academic administration: three times he was elected as dean of the Philosophy Faculty. Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He served as director of the Royal Academy of Sciences in Göttingen for nine years in total, including his last year of life.
Soon after Gauss' death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw Gauss as a serene and forward-striving man with childlike modesty, but also of "iron character" with an unshakeable strength of mind. He was noted for a sense of justice and religious tolerance. Apart from his closer circle, others regarded him as reserved and unapproachable, "like an Olympian sitting enthroned on the summit of science". His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by grumpy behaviour, but a short time later his mood could change, and he became a charming, open-minded host.
Gauss' life was overshadowed by severe problems in his family. When his first wife Johanna suddenly died shortly after the death of their third child, he plunged into a depression from which he never fully recovered. Soon after her death he wrote a last letter to her in the style of an ancient threnody, the most personal surviving document of Gauss'. The situation worsened when his second wife Minna contracted tuberculosis, which destroyed her health over the course of 13 years; both his daughters later suffered from the same disease. Both younger sons were educated for some years in Celle, far from Göttingen. Gauss himself gave only slight hints of his personal distress: in a letter to Bessel dated December 1831 he described himself as "the victim of the worst domestic sufferings".
Gauss grew to dominate his children and eventually had conflicts with his sons, because he did not want any of them to enter mathematics or science for "fear of lowering the family name", as he believed none of them would surpass his own achievements. The military career of his elder son Joseph ended after more than two decades with the rank of a poorly paid first lieutenant, although he had acquired a considerable knowledge of geodesy. He needed financial support from his father even after he was married. The second son Eugen shared a good measure of Gauss' talent in computation and languages, but had a vivacious and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public, he suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He quickly spent the little money he had taken with him to start out, after which his father refused further financial support. The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulty obtaining an appropriate education, and emigrated as well. Only Gauss' youngest daughter Therese accompanied him in his last years of life.
Collecting numerical data on very different things, useful or useless, became a habit in his later years, for example the number of paths from his home to certain places in Göttingen, or the number of days people had lived; he congratulated Humboldt in December 1851 when Humboldt had reached the same age, counted in days, as Isaac Newton at his death.
Gauss had a good knowledge of Latin as well as of modern languages. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry. Gauss read both classical and modern literature, the English and French works in the original languages. His favorite English author was Walter Scott, his favorite German Jean Paul. Gauss liked singing and went to concerts. He was an avid newspaper reader, and in his last years he used to visit an academic press salon of the university every noon. Gauss did not care much for philosophy, and mocked the "splitting hairs of the so-called metaphysicians", by which he meant proponents of the contemporary school of Naturphilosophie.
Gauss' religious beliefs have been a subject of speculation by some of his biographers. He sometimes said: "God is calculating." Gauss was a member of the Lutheran church, like most of the population in northern Germany, but it seems that he did not believe all dogmas or understand the Holy Bible to be true quite literally. Sartorius mentioned Gauss' religious tolerance, and saw his "insatiable thirst for truth" and his sense of justice as motivated by religious convictions.
Gauss had an "aristocratic and through and through conservative nature", with little respect for people's intelligence and morals, in accordance with the motto "mundus vult decipi". As far as the political system is concerned, he had a low estimation of the constitutional system; he criticized parliamentarians of his time for a lack of knowledge and logical errors. Gauss was loyal to the House of Hanover, disliked Napoleon and his system, and all kind of violence and revolution caused horror to him. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as the idea of a unified Germany.
Gauss was a successful investor and accumulated considerable wealth with stocks and securities, but he disapproved of the idea of paper money. After his death a great sum of money was found hidden in his rooms.
In his doctoral thesis from 1799 Gauss proved the fundamental theorem of algebra which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss' dissertation contains a critique of d'Alembert's work. He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts clarified the concept of complex numbers considerably along the way.
The entries in Gauss' Mathematical diary indicate that he was busy with the subject of number theory at least since 1796. A detailed study of previous researches showed him that some of his findings had already been made by other scholars. In the years 1798 and 1799 Gauss wrote a voluminous compilation of all these results in the famous Disquisitiones Arithmeticae, published in 1801, that was fundamental in consolidating number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces, among other things, the triple bar symbol (≡) for congruence and uses it in a clean presentation of modular arithmetic. It deals with the unique factorization theorem and primitive roots modulo n. In the main chapters, Gauss presents the first two proofs of the law of quadratic reciprocity, which allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic, and develops the theories of binary and ternary quadratic forms.
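The reciprocity law mentioned above lends itself to a quick numerical check. The following sketch (an illustration added here, not drawn from Gauss's text) verifies the law for small odd primes, using Euler's criterion to compute Legendre symbols:

```python
# Numerical check of quadratic reciprocity. legendre(a, p) uses
# Euler's criterion: a^((p-1)/2) mod p is 1 for quadratic residues
# and p-1 for non-residues (p an odd prime not dividing a).

def legendre(a: int, p: int) -> int:
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def reciprocity_holds(p: int, q: int) -> bool:
    # Gauss's law for distinct odd primes p, q:
    # (p|q)(q|p) = (-1)^(((p-1)/2) * ((q-1)/2))
    return legendre(p, q) * legendre(q, p) == (-1) ** (((p - 1) // 2) * ((q - 1) // 2))

primes = [3, 5, 7, 11, 13, 17, 19, 23]
assert all(reciprocity_holds(p, q) for p in primes for q in primes if p != q)
```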
Highlights of these theories include the remarkable Gauss composition law for binary quadratic forms, as well as his enumeration of the number of representations of an integer as sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for n = 3. From several remarkable analytic results on class numbers that Gauss gives without proof towards the end of the fifth chapter, it appears that Gauss already knew the class number formula in 1801.
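The triangular-number corollary mentioned above is easy to test by brute force; a minimal sketch, assuming nothing beyond the statement that every natural number is a sum of at most three triangular numbers:

```python
# Brute-force check of the n = 3 case of the polygonal number theorem:
# every natural number is a sum of at most three triangular numbers
# T_k = k(k+1)/2 (including T_0 = 0 allows sums of fewer than three).

tri = [k * (k + 1) // 2 for k in range(50)]   # T_49 = 1225 covers n < 1000
tset = set(tri)

def sum_of_three_triangulars(n: int) -> bool:
    return any(n - a - b in tset for a in tri for b in tri if a + b <= n)

assert all(sum_of_three_triangulars(n) for n in range(1000))
```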
In the last chapter Gauss gives his proof for the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one. He shows that a regular polygon is constructible if the number of its sides is a product of distinct Fermat primes and a power of 2. In the same chapter, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve. Some 150 years later, André Weil remarked that this particular result, together with some other unpublished results of Gauss, led him to formulate what are now called the Weil conjectures.
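The constructibility criterion translates directly into a small test; the function name and the restriction to the five known Fermat primes are choices of this sketch:

```python
# Sketch of Gauss's criterion: a regular n-gon is constructible with
# compass and straightedge iff n is a power of 2 times a product of
# distinct Fermat primes.

FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only Fermat primes known

def is_constructible(n: int) -> bool:
    if n < 3:
        return False
    while n % 2 == 0:        # strip the power-of-2 factor
        n //= 2
    for p in FERMAT_PRIMES:  # each Fermat prime may appear at most once
        if n % p == 0:
            n //= p
    return n == 1

assert is_constructible(17) and is_constructible(257) and is_constructible(15)
assert not is_constructible(7) and not is_constructible(9)   # 9 = 3*3 fails
```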
Gauss intended to include an eighth chapter that would treat the topic of higher congruences modulo a prime number in its full generality, but the unfinished chapter was found among his papers only after his death, consisting of work done during the years 1797–1799. It contains a systematic theory of finite fields, among other things; one particular result of special importance is a counting formula for the number of irreducible polynomials of a given degree over a finite field. He also makes use of the powerful tool of the Frobenius automorphism to explore the subfields of finite fields. Gauss finishes the chapter by indicating possible generalizations of his investigations, and proves an early version of "Hensel's lemma", which enables one to lift modular properties with respect to a prime p into ever-growing powers of the same prime.
It is unclear how much Gauss was aware of the importance of the last result, but there are indications he was aware of some sort of a p-adic method, such as his motive to prove his lemma on polynomials or his method of deriving Hensel's lemma. In the beginning of the 20th century, Kurt Hensel introduced p-adic numbers, and in this way shed light on these investigations and brought them to conceptual maturity.
In 1831, Ludwig August Seeber published a book on the theory of reduction of positive ternary quadratic forms, in accordance with the program outlined in Gauss's Disquisitiones. However, he did not prove a central theorem of his theory, so it remained a mere conjecture. In his review of Seeber's book, Gauss simplified many of Seeber's lengthy arguments, proved this central conjecture, and remarked that this theorem is equivalent to the Kepler conjecture for regular arrangements.
Gauss proved Fermat's Last Theorem for n = 3 and sketched a proof for n = 5 in his unpublished writings. The particular case of n = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case.
Among his published number-theoretical works, his two papers on biquadratic residues (published in 1828 and 1832) are considered second in importance only to the Disquisitiones Arithmeticae. In these papers Gauss introduces the ring of Gaussian integers Z[i], and shows that this ring is a unique factorization domain. Furthermore, he generalizes into this ring many key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws.
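One concrete consequence of the arithmetic of Z[i] is that an odd rational prime p splits there exactly when p ≡ 1 (mod 4), equivalently when p is a sum of two squares. A brute-force illustration (added here as a sketch, not drawn from Gauss's papers):

```python
# An odd prime p with p = a^2 + b^2 factors in Z[i] as (a+bi)(a-bi);
# primes p ≡ 3 (mod 4) admit no such decomposition and stay prime.

from math import isqrt

def two_square_decomposition(p: int):
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return a, b
    return None

for p in [5, 13, 17, 29]:        # primes ≡ 1 (mod 4): split in Z[i]
    a, b = two_square_decomposition(p)
    assert a * a + b * b == p
for p in [3, 7, 11, 19]:         # primes ≡ 3 (mod 4): inert in Z[i]
    assert two_square_decomposition(p) is None
```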
In the second paper, he states the general law of biquadratic reciprocity and proves several special cases of it, but proof of the general theorem is lacking, despite Gauss's statements that he found such a proof around 1814. He promised a third paper with a general proof, but this never appeared. In an earlier publication from 1818 containing his fifth and sixth proofs of quadratic reciprocity, he claims the techniques of these proofs (Gauss sums) can be applied to prove higher reciprocity laws. In his posthumous papers, two proofs of the general case were found: one is believed to be not original to Gauss but rather based in its principles on Gotthold Eisenstein's proof, while the other was a highly original proof based on geometrical considerations involving counting lattice points in certain geometric figures. Despite its originality, the geometric proof is very long and cumbersome, and this may be the reason why he withheld its publication after he saw Eisenstein's much more direct proof.
Gauss's publications on biquadratic residues opened the way for boundless enlargement of the theory of numbers, and are memorable for the wealth of investigations in "higher arithmetic" that they led to.
One of Gauss's first independent discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers; his systematic investigations of the AGM led him to discover an unusually rich mathematical landscape, and to obtain plenty of new results associated with it. He discovered its relation to elliptic integrals in the years 1798–1799 through Landen's transformation, and in a diary entry recorded his discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that, Gauss noted, "will surely open a new area of analysis". He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the so-called "fundamental theorem of complex analysis" (Cauchy's integral theorem) and understood the notion of complex residues when integrating around poles.
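The AGM iteration converges quadratically and, as Gauss discovered, evaluates elliptic integrals: the complete elliptic integral of the first kind satisfies K(k) = π / (2·agm(1, √(1 − k²))), and 1/agm(1, √2) is Gauss's constant. A minimal sketch:

```python
from math import pi, sqrt

def agm(a: float, b: float, tol: float = 1e-15) -> float:
    # Replace (a, b) by their arithmetic and geometric means until
    # the two values coincide; convergence is quadratic.
    while abs(a - b) > tol:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

print(1 / agm(1.0, sqrt(2.0)))               # Gauss's constant, ~0.8346268
k = 0.5
print(pi / (2 * agm(1.0, sqrt(1 - k * k))))  # K(0.5), ~1.6857504
```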
Another source of inspiration for Gauss's early work in analysis was his acquaintance with Euler's pentagonal numbers theorem. This theorem together with his other researches on the AGM and lemniscatic functions led him to plenty of results on Jacobi theta functions, work which culminated in his discovery in 1808 of the very general Jacobi triple product identity, which includes Euler's theorem as a special case. In his publication from 1811 on the determination of the sign of the quadratic Gauss sum, Gauss solved the problem by introducing Gaussian binomial coefficients and by using a line of reasoning that somehow "hides" its origin in theta function theory, as later mathematicians have shown. All this work was done several decades before the publication of Jacobi's "Fundamenta nova" in 1829; however, Gauss never found the time to systematically write and organize all his thoughts and theorems of this kind, and his contemporaries never knew the scope of his work.
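For reference, in one common modern normalization the triple product identity reads:

```latex
\[
\prod_{m=1}^{\infty}\left(1-x^{2m}\right)\left(1+x^{2m-1}y^{2}\right)\left(1+x^{2m-1}y^{-2}\right)
  \;=\;\sum_{n=-\infty}^{\infty}x^{n^{2}}y^{2n}.
\]
```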
Several mathematical fragments in his Nachlass indicate that he knew parts of the modern theory of modular forms of Felix Klein and Robert Fricke quite well. In his work on the multivalued AGM of two complex numbers, he discovered a very deep connection between the infinitely many values of the AGM and its two "simplest values". His unpublished writings include several drawings that show he was quite aware of the geometric side of the theory; in the context of his work on the complex AGM he recognized and made a sketch of the key concept of fundamental domain for the modular group. Perhaps the most remarkable of Gauss's sketches of this kind was his drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to π/4.
In his lifetime Gauss published almost nothing about those more modern theories of elliptic functions, but he did publish most of his results on the related theme of the hypergeometric function. In his work "Disquisitiones generales circa series infinitam..." (1812), he provided the first systematic treatment of the general hypergeometric function F(α, β, γ, x), and showed that many of the functions known to science at the time, such as the elementary functions and some special functions, are special cases of the hypergeometric function. This work was the first in the history of mathematics with an exact inquiry into the convergence of infinite series. Furthermore, it dealt with infinite continued fractions arising as ratios of hypergeometric functions.
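In modern notation, the series Gauss studied is (with the rising factorial (a)ₙ):

```latex
\[
F(\alpha,\beta,\gamma,x)=\sum_{n=0}^{\infty}\frac{(\alpha)_{n}(\beta)_{n}}{(\gamma)_{n}}\,\frac{x^{n}}{n!},
\qquad (a)_{n}=a(a+1)\cdots(a+n-1),
\]
```

which converges for |x| < 1.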
In 1822 Gauss published his prize-winning essay on conformal mappings, which contains several developments that pertain to the field of complex analysis. In this essay, Gauss made explicit the insight that angle-preserving mappings in the complex plane must be complex analytic functions, and used the so-called Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces. The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution. In addition, in unpublished fragments from the years 1834–1839 he investigated and solved the more difficult task of explicitly constructing a conformal mapping from the interior of an ellipse to the unit disk. His solution, which combined his early work on elliptic functions and his later ideas on potential theory, reveals his mastery of the theory of logarithmic potential, and his final results corresponded to the formula found by Hermann Schwarz in 1870.
Gauss often deduced theorems inductively from numerical data he had collected in an empirical way. As such, the use of efficient algorithms to facilitate calculations was vital to his researches, and he made many contributions to numerical analysis. In 1815, he published an article on numerical integration, in which he described his method of Gaussian quadrature, which greatly improved existing methods and inspired much of the work done by later mathematicians.
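The key property of the method is that n nodes integrate polynomials up to degree 2n − 1 exactly. A sketch using numpy's Gauss–Legendre nodes (numpy is an assumption of this illustration, not part of the original text):

```python
# Gaussian quadrature on [-1, 1]: 5 nodes (roots of the Legendre
# polynomial P_5) integrate any polynomial of degree <= 9 exactly.

import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(5)
f = lambda x: x**8 + 3 * x**2 + 1          # degree 8 <= 2*5 - 1
approx = np.sum(weights * f(nodes))
exact = 2 / 9 + 2 + 2                       # analytic integral over [-1, 1]
assert abs(approx - exact) < 1e-12
```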
In a private letter to Gerling from 1823, he described a solution of a certain 4×4 system of linear equations by using the Gauss-Seidel method – an "indirect" iterative method for the solution of linear systems, which in some cases converges very rapidly to the exact solution. Gauss recommended it over the usual method (the so-called "direct elimination") for systems of more than two equations, stating that it can be done "while half asleep, or while thinking about other things". As such, it was an early contribution to numerical linear algebra.
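A minimal sketch of the iteration on a diagonally dominant 4×4 system (the matrix here is a common textbook example, not the system Gauss sent to Gerling):

```python
# Gauss-Seidel: each sweep updates one unknown at a time, immediately
# reusing the freshest values of the components already updated.

import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

x = np.zeros(4)
for _ in range(50):                       # sweeps
    for i in range(4):
        s = A[i] @ x - A[i, i] * x[i]     # sum over j != i with current x
        x[i] = (b[i] - s) / A[i, i]

assert np.allclose(A @ x, b, atol=1e-8)   # converges to (1, 2, -1, 1)
```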
Gauss invented an algorithm for calculating discrete Fourier transforms, sometimes called "the most important numerical algorithm of our lifetime", when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey published their similar Cooley–Tukey FFT algorithm. He developed it as a trigonometric interpolation method, but his paper Theoria Interpolationis Methodo Nova Tractata was published only posthumously in 1866, preceded by the first presentation by Joseph Fourier on the subject in 1807.
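The connection to trigonometric interpolation can be seen in a few lines: sample a trigonometric polynomial and recover its coefficients with a modern FFT. Here numpy stands in for the algorithm, and numpy's convention X_k = Σ x_n e^(−2πikn/N) is assumed:

```python
import numpy as np

N = 16
t = np.arange(N) * 2 * np.pi / N
signal = 2.0 * np.sin(3 * t) + 0.5 * np.cos(5 * t)
coeffs = np.fft.fft(signal) / N

# Each real amplitude splits between the +k and -k frequency bins:
assert np.isclose(coeffs[5].real, 0.25)   # 0.5 / 2 from the cosine term
assert np.isclose(coeffs[3].imag, -1.0)   # -2.0 / 2 from the sine term
```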
The first publication following the doctoral thesis dealt with the determination of the date of Easter (1800), a very elementary matter of mathematics. Gauss aimed to present a most convenient algorithm for people without any knowledge of ecclesiastical or even astronomical chronology, and thus avoided the usually required terms of golden number, epact, solar cycle, and dominical letter, and any religious connotations. Biographers have speculated on the reason why Gauss dealt with this matter, but it is likely comprehensible from the historical background. The replacement of the Julian calendar by the Gregorian calendar had caused great confusion among the hundreds of states of the Holy Roman Empire since the 16th century, and was not finished in Germany until the year 1700, when the difference of eleven days was removed. Even then, the difference in calculating the date of Easter remained between Protestant and Catholic territories. A further agreement of 1776 harmonized the confessional ways of counting; thus, in Protestant states like the Duchy of Brunswick, the Easter of 1777, five weeks before Gauss' birth, was the first calculated in the new manner. The public difficulties of the changeover may be the historical background for the confusion on this matter in the Gauss family (see chapter: Anecdotes). Being connected with the Easter regulations, an essay on the date of Pesach followed soon after, in 1802.
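Gauss's Gregorian Easter algorithm, as it is usually stated today with the two exception rules he added later, fits in a short function; this sketch follows the conventional modern presentation, not Gauss's original notation:

```python
def gauss_easter(year: int) -> str:
    # Gauss's Easter computation (Gregorian calendar), modern form.
    a, b, c = year % 19, year % 4, year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    M = (15 - p + k - q) % 30
    N = (4 + k - q) % 7
    d = (19 * a + M) % 30
    e = (2 * b + 4 * c + 6 * d + N) % 7
    day = 22 + d + e
    if day <= 31:
        return f"{year}-03-{day:02d}"
    day = d + e - 9
    if d == 29 and e == 6:
        day = 19          # exception: 26 April becomes 19 April
    elif d == 28 and e == 6 and (11 * M + 11) % 30 < 19:
        day = 18          # exception: 25 April becomes 18 April
    return f"{year}-04-{day:02d}"

assert gauss_easter(1777) == "1777-03-30"   # the Easter before Gauss's birth
assert gauss_easter(2024) == "2024-03-31"
```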
On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered the dwarf planet Ceres. Piazzi could track Ceres for only somewhat more than a month, following it for three degrees across the night sky, less than 1% of the total orbit, until it disappeared temporarily behind the glare of the Sun. Several months later, when Ceres should have reappeared, Piazzi could not locate it: the mathematical tools of the time were not able to extrapolate a position from such a scant amount of data. Gauss tackled the problem within three months of intense work, and predicted a position for Ceres in December 1801. This turned out to be accurate within a half-degree when it was rediscovered by Franz Xaver von Zach on 7/31 December at Gotha, and independently by Heinrich Olbers on 1/2 January in Bremen. This confirmation eventually led to the classification of Ceres as the minor planet with designation 1 Ceres; it was taken to be the planet predicted between Mars and Jupiter by the rather speculative Titius–Bode law.
Gauss's method involved determining a conic section in space, given one focus (the Sun) and the conic's intersection with three given lines (lines of sight from the Earth, which is itself moving on an ellipse, to the planet) and given the time it takes the planet to traverse the arcs determined by these lines (from which the lengths of the arcs can be calculated by Kepler's Second Law). This problem leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose. Zach noted that "without the intelligent work and calculations of Doctor Gauss we might not have found Ceres again".
The discovery of Ceres led Gauss to his work on a theory of the motion of planetoids disturbed by large planets, eventually published in 1809 as Theoria motus corporum coelestium in sectionibus conicis solem ambientum. In the process, he so streamlined the cumbersome mathematics of 18th-century orbital prediction that his work remains a cornerstone of astronomical computation. It introduced the Gaussian gravitational constant.
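One elementary ingredient of such orbital computation is solving Kepler's equation M = E − e·sin E for the eccentric anomaly. The Newton iteration below is a modern illustration of that step, not Gauss's own procedure:

```python
from math import sin, cos, radians

def eccentric_anomaly(M: float, e: float, tol: float = 1e-12) -> float:
    # Newton's method on f(E) = E - e*sin(E) - M; converges quickly
    # for the small eccentricities typical of the first asteroids.
    E = M
    while True:
        dE = (E - e * sin(E) - M) / (1 - e * cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

E = eccentric_anomaly(radians(30.0), 0.0785)   # Ceres-like eccentricity
print(E)   # slightly above the mean anomaly of 30 degrees, in radians
```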
After the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. First he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, for which Laplace's method did not work. Gauss used his own tools: the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation. He found an orbital resonance with Jupiter in the proportion 18:7 in 1812; Gauss published this result in cipher, and gave the explicit meaning only in letters to Olbers and Bessel. However, after long years of work, he finished in 1816 without a result that seemed sufficient to him. This also marked the end of his activities in theoretical astronomy.
One fruit of Gauss's research on Pallas perturbations was his article Determinatio Attractionis... (1818) on a method of theoretical astronomy that later became known as the "elliptic ring method". This method introduced a useful averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time the planet takes to follow the corresponding orbital arcs. Gauss presents his method of evaluating the gravitational attraction of such an elliptic ring, which includes several complicated steps; one such step involves a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate an elliptic integral. In the late 19th century Gauss's method was adapted by American astronomer George William Hill, who applied it directly to the problem of secular perturbation induced by Venus on Mercury's orbit.
It is likely that Gauss used the method of least squares for calculating the orbit of Ceres to minimize the impact of measurement error. The method was published first by Adrien-Marie Legendre in 1805, but Gauss claimed in Theoria motus (1809) that he had been using it since 1794 or 1795. In the history of statistics, this disagreement is called the "priority dispute over the discovery of the method of least squares". Gauss proved the method under the assumption of normally distributed errors (Gauss–Markov theorem) in his paper Theoria combinationis observationum erroribus minimis obnoxiae from 1821.
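A minimal sketch of the estimator in its simplest modern form, an ordinary least-squares line fit (numpy is assumed here, and the data are synthetic):

```python
# Least squares: choose the coefficients minimizing the sum of squared
# residuals, the estimator whose optimality under normally distributed
# errors Gauss established.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)   # noisy observations

A = np.column_stack([x, np.ones_like(x)])             # design matrix
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(slope, intercept)   # close to the true slope 2.0 and intercept 1.0
```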
In this paper, which was relatively little known in the English-speaking world in the first century after its publication, he stated and proved Gauss's inequality (a Chebyshev-type inequality) for unimodal distributions, and stated without proof another inequality for moments of the fourth order (a special case of the Gauss-Winckler inequality). He derived lower and upper bounds for the variance of the sample variance. In a supplement to this paper Gauss described recursive least squares methods that went unnoticed until 1950, when his work was rediscovered as a consequence of the growing demand for quick estimation in various new technologies. Gauss's work on the theory of errors was extended in several directions by the geodesist Friedrich Robert Helmert, and the Gauss-Helmert theory is considered today the "classical" theory of errors.
Gauss made several striking contributions to problems in probability theory that are not directly concerned with the theory of errors, but offer a glimpse into his broad-minded view on the applicability of probabilistic thinking. One remarkable example appears as a note in his diary and is concerned with a very unusual problem that came to his mind: to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in (0,1). He derived this distribution, now known as the Gauss-Kuzmin distribution, as a by-product of his discovery of the ergodicity of the Gauss map for continued fractions. Gauss's solution is the first ever result in the metrical theory of continued fractions.
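The law is easy to probe empirically by iterating the Gauss map x ↦ 1/x − ⌊1/x⌋ on random starting points and tallying digit frequencies; this simulation sketch only approximates the true orbits, since floating-point iteration drifts after many steps:

```python
# Gauss-Kuzmin law: digit k appears in the continued fraction of
# almost every x with asymptotic frequency -log2(1 - 1/(k+1)^2).

from math import log2
from random import random, seed

seed(1)
counts, n = {}, 0
for _ in range(2000):                    # random starting points in (0, 1)
    x = random()
    for _ in range(50):                  # first 50 continued-fraction digits
        x = 1.0 / x
        k = int(x)
        counts[k] = counts.get(k, 0) + 1
        x -= k
        n += 1
        if x == 0.0:
            break

for k in (1, 2, 3):
    print(k, counts[k] / n, -log2(1 - 1 / (k + 1) ** 2))
# digit 1: observed ~0.415 versus the predicted 0.4150...
```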
Gauss had been busy with geodetic problems since 1799, when he helped Karl Ludwig von Lecoq with calculations during his survey in Westphalia. Later, from 1804, he taught himself some geodetic practice with a sextant in Brunswick and Göttingen.
From 1816, his former student Heinrich Christian Schumacher, then a professor in Copenhagen but living in Altona (Holstein) near Hamburg, carried out a triangulation of the Jutland peninsula from Skagen in the north to Lauenburg in the south. The aim was not only the foundation of map production, but also the determination of the geodetic arc over that distance. Schumacher asked Gauss to continue this work further to the south, saying that he could find support for the project directly from the government of Hanover. Finally, in May 1820, King George IV gave the order to Gauss.
Gauss and Schumacher had already determined some angles between Lüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818. During the summers of 1821 to 1825 Gauss directed the triangulation personally; it reached from Thuringia in the south to the river Elbe in the north. The triangle between Hoher Hagen, Großer Inselsberg in the Thuringian Forest, and Brocken in the Harz mountains was the largest Gauss ever measured, with a maximum side of 107 km (66.5 miles). In the thinly populated Lüneburg Heath, with no significant natural summits or artificial buildings, he had great difficulty finding suitable triangulation points; sometimes it was necessary to cut lanes through the vegetation or even to erect signal towers.
To point signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflects sunbeams towards the triangulation points, and named it the heliotrope. Another suitable construction for the same purpose was a sextant with an additional mirror, which he named the vice heliotrope. Gauss was assisted by soldiers of the Hanoverian army, among them his eldest son Joseph. Gauss took part in the baseline measurement (Braak Base Line) of Schumacher in the village of Braak near Hamburg in 1820, and used the result for the evaluation of his triangulation.
The arc measurement required a precise astronomical determination of two points in the network. Gauss and Schumacher took advantage of the fact that the two observatories, in Göttingen and in Altona (in the garden of Schumacher's house), lay nearly on the same longitude. The latitude was measured both with their own instruments and with a Ramsden zenith sector that was transported to both observatories.
An additional result was a better value for the flattening of the approximating Earth ellipsoid. Gauss developed the universal transverse Mercator projection of the ellipsoidally shaped Earth (which he called a conformal projection) for representing geodetic data on plane charts.
When the arc measurement was finished, Gauss intended to enlarge the triangulation westward to obtain a survey of the whole Kingdom of Hanover. The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete evaluation of the data lay in the hands of Carl Friedrich Gauss, who applied his mathematical inventions, such as the method of least squares and his elimination method, to it. The project was finished in 1844, but Gauss did not publish a final report of the project and his method of projection; this work was not done until 1866.
In 1828, when studying differences in latitude, Gauss first defined a physical approximation for the figure of the Earth as the surface everywhere perpendicular to the direction of gravity; later his doctoral student Johann Benedict Listing called this the geoid.
The geodetic survey of Hanover fueled Gauss' interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a memoir that marks the birth of the modern differential geometry of surfaces: it departed from the traditional way of treating surfaces as Cartesian graphs of functions of two variables and instead pioneered a revolutionary approach that initiated the exploration of surfaces from the "inner" point of view of a two-dimensional being constrained to move on them. Its crowning result, the Theorema Egregium (remarkable theorem), established that Gaussian curvature is an intrinsic invariant. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface itself; curvature does not depend on how the surface is embedded in three-dimensional space.
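In modern terms: if the metric of the surface is written as $ds^2 = E\,du^2 + 2F\,du\,dv + G\,dv^2$, the theorem says that $K$ is computable from $E$, $F$, $G$ and their derivatives alone. In the special case of orthogonal coordinates ($F = 0$), one standard form of the resulting formula is

$$K \;=\; -\frac{1}{2\sqrt{EG}}\left[\frac{\partial}{\partial u}\!\left(\frac{G_u}{\sqrt{EG}}\right) + \frac{\partial}{\partial v}\!\left(\frac{E_v}{\sqrt{EG}}\right)\right],$$

which makes no reference to the ambient space at all.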
The Theorema Egregium leads to the abstraction of surfaces as doubly extended manifolds: it makes clear the distinction between the intrinsic properties of the manifold (the metric) and its physical realization (the embedding) in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. In practice, this means that a sphere or an ellipsoid cannot be transformed into a plane without distortion, which causes a fundamental problem in designing projections for geographical maps.
An additional significant portion of his essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss-Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a "sufficiently small" geodesic triangle deviate from those of a planar triangle with the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle, regardless of the behaviour of the surface in the interior of the triangle.
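In modern form, the local result for a geodesic triangle $T$ with interior angles $\alpha$, $\beta$, $\gamma$ reads

$$\alpha + \beta + \gamma - \pi \;=\; \iint_T K \, dA,$$

so on a sphere ($K > 0$) the angle sum exceeds $\pi$, and on a surface of negative curvature it falls short, in each case by exactly the total curvature enclosed.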
One key differential-geometric concept was missing from Gauss's memoir: that of geodesic curvature. His posthumous papers show, however, that the notion did not escape his mind: in the years of composing his memoir he also wrote up a manuscript in which he introduced it under the name "side curvature" (in German: "Seitenkrümmung"). More importantly, he proved its invariance under isometric transformations, a result later obtained by Ferdinand Minding. Based on this evidence and on the announcement in his memoir of further investigations on the curvature integral, it is very likely that he knew the more general version of the Gauss-Bonnet theorem, proved by Pierre Ossian Bonnet in 1848, which is closer in spirit to the global version of the theorem.
Gauss was undoubtedly the first to discover and analyze non-Euclidean geometries, despite never publishing on the subject; he also coined the term "non-Euclidean geometry". This discovery was a major paradigm shift in mathematics, as it freed mathematicians from the mistaken belief that Euclid's axioms were the only way to make geometry consistent and non-contradictory. Research on these geometries led to, among other things, Einstein's theory of general relativity, which describes the universe as non-Euclidean.
Gauss' friend Farkas Bolyai, with whom he had sworn "brotherhood and the banner of truth" as a student, had tried in vain for many years to prove the parallel postulate from Euclid's other axioms of geometry. Bolyai's son Janos discovered non-Euclidean geometry in 1829 and published his work in 1832. After seeing it, Gauss wrote to Farkas Bolyai: "To praise it would amount to praising myself. For the entire content of the work ... coincides almost exactly with my own meditations which have occupied my mind for the past thirty or thirty-five years." This statement put a strain on his relationship with Janos Bolyai, who thought that Gauss was stealing his idea.
Letters from Gauss years before 1829 reveal him obscurely discussing the problem of parallel lines. Dunnington argues that Gauss was in fact in full possession of non-Euclidean geometry long before it was published by Bolyai, but that he refused to publish any of it because of his fear of controversy.
In 1854, Gauss selected the topic for Bernhard Riemann's inaugural lecture Über die Hypothesen, welche der Geometrie zu Grunde liegen from three proposals. On the way home from Riemann's lecture, Weber reported that Gauss was full of praise and excitement.
One of the lesser known aspects of Gauss's work is that he was also an early pioneer of topology, or, as it was called in his lifetime, Geometria Situs. His first proof of the fundamental theorem of algebra contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of the theorem (1849).
His earliest "serious" encounter with topological notions occurred to him in the course of his astronomical work, and in a small article from 1804 he determined the limits of the region on the celestial sphere in which comets and asteroids might appear, region which he termed "Zodiacus". He determined this region, and observed that if the Earth's and comet's orbits are linked, then by topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid 7 Iris, he published another short article in which he further elaborated the qualitative discussion of the Zodiacus.
From Gauss's letters of the period 1820–1830, one can learn that he thought intensively about topics with close affinity to Geometria Situs, and gradually became conscious of the semantic difficulties in this field. Fragments from this period reveal that he tried to classify tract figures ("Tractfiguren"), closed plane curves with a finite number of transverse self-intersections, which may also be planar projections of knots. To do so he devised a symbolic scheme, the so-called Gauss code, that in a sense captures the characteristic features of tract figures. He attempted, unsuccessfully, to find a method for determining which tract figures actually represent knot projections.
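One necessary condition on such codes, usually credited to Gauss's notes, is that in a realizable sequence the two occurrences of each crossing label must enclose an even number of intermediate symbols. A minimal Python sketch of this parity test follows; the function name and list encoding are mine, and the condition is necessary but not sufficient.

```python
def satisfies_gauss_parity(code):
    """Check the evenness condition on a Gauss code, given as a list in
    which each crossing label appears exactly twice."""
    for label in set(code):
        i, j = [pos for pos, c in enumerate(code) if c == label]
        if (j - i - 1) % 2 != 0:   # odd number of symbols in between
            return False
    return True

print(satisfies_gauss_parity([1, 2, 3, 1, 2, 3]))  # True: a trefoil projection
print(satisfies_gauss_parity([1, 2, 1, 2]))        # False: not realizable in the plane
```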
In a fragment from 1833, Gauss defined the linking number of two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. In the same note he lamented the little progress made in Geometria Situs, and remarked that one of its central problems would be "to count the intertwinings of two closed or infinite curves". His notebooks from that period reveal that he was also thinking about other topological objects such as braids and tangles.
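In modern vector notation, Gauss's double integral for two disjoint closed curves $\gamma_1$, $\gamma_2$ is (up to a sign convention)

$$\operatorname{lk}(\gamma_1, \gamma_2) \;=\; \frac{1}{4\pi} \oint_{\gamma_1}\!\oint_{\gamma_2} \frac{(\mathbf r_1 - \mathbf r_2)\cdot(d\mathbf r_1 \times d\mathbf r_2)}{|\mathbf r_1 - \mathbf r_2|^{3}},$$

an integer that is unchanged by any deformation keeping the two curves disjoint.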
In his later years Gauss held the emerging field of topology in very high esteem and expected great future developments for it, but since so little written material by Gauss survives from this period, his influence was exerted mainly through occasional remarks and oral communications. For example, an indirect report by Möbius refers to a surface constructed by Gauss, which Gauss called a "double ring", and to remarks Gauss made about its connectivity properties. This report is consistent with a fragment of Gauss, written around 1840, which sketches a theory of the order of connectivity of surfaces. Finally, it is worth mentioning that in the introduction to his book "Vorstudien zur Topologie" (1847), Listing expressed his indebtedness to Gauss's influence.
Gauss's work not only initiated significant mathematical theories; he was also the author of many little "gems" of mathematics, especially in elementary geometry and algebra. In this way, he helped spread the new mathematical ideas of his time by demonstrating how they illuminate and shorten the solution of small mathematical problems.
For example, he was a lively spirit in applying complex numbers to various problems, and used them in his work on perspective and projective geometry: in a short 1836 note on "Projections of the Cube", he stated the fundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy via complex numbers. In an unpublished 1819 note entitled "the Sphere", he conceived of the complex plane extended by a point at infinity as the stereographic projection of a sphere (the Riemann sphere), and described rotations of this sphere as the action of certain linear fractional transformations on the extended complex plane.
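In modern complex-number form, the statement for orthogonal projection can be paraphrased as follows: identify the drawing plane with $\mathbb{C}$ and let $z_1, z_2, z_3$ be the images of three mutually perpendicular unit edges of the cube; then

$$z_1^2 + z_2^2 + z_3^2 = 0,$$

and conversely any such nonzero triple arises, up to scale, from an orthogonal projection of the cube.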
In the context of such extended algebraic systems, it should be mentioned that there is solid evidence that Gauss anticipated the algebraic system of quaternions, the later discovery of William Rowan Hamilton. In 1819, Gauss drafted an unpublished short treatise on "Rotations of Space", in which he elaborated on the use of quadruples of real numbers (which he called "scales") to describe 3D rotations.
In elementary geometry, he contributed his solution to the problem of constructing the largest-area ellipse inscribed in a given quadrilateral, which was published in 1810 as an addition to Schumacher's translation of Lazare Carnot's treatise Géométrie de position. He discovered a surprising result about the computation of the area of pentagons. He also made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars.
One of his most remarkable investigations concerned John Napier's "Pentagramma mirificum", a certain spherical pentagram whose properties intrigued and occupied Gauss's mind for several decades. In his studies of the Pentagramma he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic, and analytic aspects. In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons, and Poncelet pentagons in the plane.
Gauss' interest in magnetism was evident from the first decade of the 19th century. From 1826, when Alexander von Humboldt visited him in Göttingen, the two scientists carried out intensive research on geomagnetism, partly independently and partly in productive cooperation. In 1828, Gauss was Humboldt's personal guest during the conference of the Society of German Natural Scientists and Physicians in Berlin, where he became acquainted with the physicist Wilhelm Weber.
When Weber, on Gauss' recommendation, obtained the chair of physics in Göttingen in 1831 as successor of Johann Tobias Mayer, the two started a fruitful collaboration, leading to new knowledge of magnetism, including a representation of the unit of magnetism in terms of mass, length, and time. They founded the Magnetic Association (German: "Magnetischer Verein"), an international working group of several observatories, which supported measurements of Earth's magnetic field in many regions of the world with uniform methods at agreed dates in the years 1836 to 1841. In 1836, Humboldt helped organize the worldwide expansion of the network of observatories, including in the British dominions, with a letter to the Duke of Sussex, then president of the Royal Society, in which he asked for support for a program of global research based on Gauss' methods. Together with other initiatives, this led to a global program known as the "Magnetical crusade" under the direction of Edward Sabine. The dates, times, and intervals of the observations were determined in advance, with Göttingen mean time used as the standard. In the end, 61 stations participated in this global program. Gauss and Weber founded a series for the publication of the results; six volumes were edited between 1837 and 1843. Weber's departure for Leipzig in 1843, a late effect of the Göttingen Seven affair, marked the end of the Magnetic Association's activity.
Following Humboldt's example, Gauss had a magnetic observatory built in the garden of his observatory, but the two scientists differed over instrumental equipment: Gauss preferred stationary instruments, which he thought gave more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, but refined Humboldt's concept of magnetic intensity by distinguishing "horizontal" and "vertical" intensity. Together with Weber, he developed methods of measuring the components of the intensity of the magnetic field, and constructed a suitable magnetometer to measure absolute values of the strength of the Earth's magnetic field, rather than relative values that depended on the apparatus. The precision of the magnetometer was about ten times higher than that of previous instruments. With this work, Gauss was the first to derive a non-mechanical quantity from basic mechanical quantities.
Gauss produced a "General Theory of Terrestrial Magnetism" (1839), in which he believed he had described the nature of magnetic force; according to Felix Klein, this work is actually a presentation of observations by use of spherical harmonics rather than a physical theory. The theory predicted the existence of exactly two magnetic poles on the Earth, making Hansteen's idea of four magnetic poles obsolete, and the data allowed the poles' locations to be determined with rather good precision. In his "General theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances" (1840), Gauss laid the foundations of a theory of the magnetic potential, based on Lagrange, Laplace, and Poisson; it seems rather unlikely that he knew of the previous work of George Green on this subject. However, Gauss could never give an explanation of the causes of magnetism, nor a theory of magnetism similar to Newton's work on gravitation that would enable scientists to predict geomagnetic effects in the future.
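In the modern notation that descends from this work, the scalar potential of the Earth's field is expanded in spherical harmonics with the "Gauss coefficients" $g_n^m$, $h_n^m$; the form below is the present-day statement rather than Gauss's own:

$$V(r,\theta,\phi) \;=\; a \sum_{n=1}^{\infty} \left(\frac{a}{r}\right)^{n+1} \sum_{m=0}^{n} \left(g_n^m \cos m\phi + h_n^m \sin m\phi\right) P_n^m(\cos\theta), \qquad \mathbf{B} = -\nabla V,$$

where $a$ is a reference radius of the Earth and $P_n^m$ are associated Legendre functions. Fitting the coefficients to station data is what allowed the pole positions to be determined.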
Gauss had a remarkable influence on the beginnings of geophysics in Russia, when Adolph Theodor Kupffer, one of his former students, founded a magnetic observatory in St. Petersburg, following the example of the observatory in Göttingen, as did Ivan Simonov in Kazan.
The discoveries of Hans Christian Ørsted on electromagnetism and of Michael Faraday on electromagnetic induction drew Gauss' attention to these matters. Gauss and Weber found the rules for branched electric circuits, later named Kirchhoff's circuit laws, and carried out investigations of electromagnetism. They constructed the first electromechanical telegraph in 1833, and Weber connected the observatory with the institute for physics in the town centre of Göttingen, but they did not pursue any further development of this invention for commercial purposes.
Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitative laws governing electromagnetic induction. In his notebooks from these years, he recorded several innovative formulations; he discovered the idea of the vector potential function (independently rediscovered by Franz Ernst Neumann in 1845), and in January 1835 he wrote down an "induction law" equivalent to Faraday's, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function.
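In modern vector notation this amounts to the statement that the induced electric field is the negative time derivative of the vector potential, a transcription rather than Gauss's own symbols:

$$\mathbf{E}_{\text{ind}} \;=\; -\,\frac{\partial \mathbf{A}}{\partial t};$$

taking the curl and using $\mathbf{B} = \nabla \times \mathbf{A}$ gives $\nabla \times \mathbf{E} = -\partial \mathbf{B}/\partial t$, the differential form of Faraday's law, which is the sense in which the two formulations are equivalent.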
In the same year, Gauss had an insightful speculative thought: that the electromagnetic interaction between two electric charges propagates in space at a finite speed, in a manner similar to light, and that the magnitude of this interaction might depend on their relative velocity. In this way, he rejected the notion of immediate action at a distance.
His unpublished insights in these directions eventually merged into the so-called Weber electrodynamics, a theory that is obsolete today because of essential difficulties in reconciling it with Maxwell's now-undisputed theory. In retrospect, despite its incorrectness, the Gauss-Weber theory contained some of the germs of later ideas, such as the existence of an electromagnetic field that is in some sense independent of its point sources (Faraday's view), as well as the notion of the retarded potential.
The instrument maker Johann Georg Repsold in Hamburg asked Gauss in 1807 for help in constructing an achromatic lens system. Based on Gauss' calculations, Repsold succeeded with a new objective in 1810. A main problem, among other difficulties, was the imprecise knowledge of the refractive index and dispersion of the glass types used. In a short article from 1817, Gauss dealt with the problem of removing chromatic aberration in double lenses, and calculated the adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the optician Carl August von Steinheil, who in 1860 introduced the achromatic Steinheil doublet, based in part on Gauss's calculations. Many results in geometrical optics are scattered through Gauss's correspondence and hand notes.
In his influential Dioptrical Investigations (1840), Gauss gave the first systematic analysis of the formation of images under a paraxial approximation (Gaussian optics). He demonstrated that under the paraxial approximation an optical system can be characterized by its cardinal points, and he derived the Gaussian lens formula, applicable without restriction with respect to the thickness of the lenses.
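With object distance $s$ and image distance $s'$ measured from the front and rear principal planes (real-is-positive convention; sign conventions vary between texts), the relation takes the familiar form

$$\frac{1}{s} + \frac{1}{s'} \;=\; \frac{1}{f}.$$

Because the distances are referred to the principal planes rather than to the lens surfaces, the formula holds for thick lenses and compound systems alike, which is the sense in which it applies without restriction on lens thickness.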
Gauss' first and last business in mechanics concerned the Earth's rotation. When his university friend Benzenberg carried out experiments in 1802 to determine the deviation of falling masses from the perpendicular, known today as an effect of the Coriolis force, he asked Gauss for a theory-based calculation of the values for comparison with the experimental ones. Gauss elaborated a system of fundamental equations for the motion, and his results agreed sufficiently well with Benzenberg's data; Benzenberg published Gauss' considerations as an appendix to his book on falling experiments.
After Foucault had demonstrated his pendulum in public in 1851, Gerling asked Gauss for further explanations. This prompted Gauss to design a new demonstration apparatus with a much shorter pendulum than Foucault's. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened to the pendulum; the period of oscillation was 3.1 seconds. The apparatus is described in the Gauss–Gerling correspondence, and Weber made some experiments with this evidently working apparatus in 1853, but no data were published.
Gauss's principle of least constraint of 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics; it combines D'Alembert's principle with Lagrange's principle of virtual work, and shows analogies to the method of least squares.
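In modern notation the principle selects, among all accelerations compatible with the constraints, those minimizing the "constraint"

$$Z \;=\; \sum_j m_j \left|\ddot{\mathbf r}_j - \frac{\mathbf F_j}{m_j}\right|^{2};$$

for unconstrained motion the minimum $Z = 0$ recovers Newton's second law, and the weighted sum of squares makes the kinship with least squares evident.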
In 1828, Gauss was appointed head of a board for weights and measures of the Kingdom of Hanover. He arranged for the creation of standards of length and weight, took care of the time-consuming measurements himself, and gave detailed orders for the mechanical preparation. In his correspondence with Schumacher, who was also working on this matter, he described new ideas for high-precision scales. He gave his final reports on the Hanoverian foot and pound to the government in 1841. This work acquired more than regional importance through a law of 1836 that linked the Hanoverian measures with the English ones.
Several stories of his early genius have been reported. Carl Friedrich Gauss' mother had never recorded the date of his birth, remembering only that he had been born on a Wednesday, eight days before the Feast of the Ascension, which occurs 39 days after Easter. Gauss later solved this puzzle about his birthdate in the context of finding the date of Easter, deriving methods to compute the date in both past and future years. He felt sorry for his newborn daughter Wilhelmine, because she was born on the leap day in 1808 and thus would celebrate her birthday only every four years.
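Gauss's Easter rule is commonly stated as the following arithmetic recipe; the Python sketch below follows that common statement, including the two standard exception rules, rather than Gauss's original presentation.

```python
def gauss_easter(year):
    """Gregorian Easter Sunday by Gauss's computus (common modern form)."""
    a, b, c = year % 19, year % 4, year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    M = (15 - p + k - q) % 30
    N = (4 + k - q) % 7
    d = (19 * a + M) % 30
    e = (2 * b + 4 * c + 6 * d + N) % 7
    day, month = 22 + d + e, 3                 # provisional date in March
    if day > 31:
        day, month = d + e - 9, 4              # date falls in April
    if month == 4 and day == 26:               # first exception rule
        day = 19
    elif month == 4 and day == 25 and d == 28 and e == 6 and (11 * M + 11) % 30 < 19:
        day = 18                               # second exception rule
    return month, day

# Easter 1777 falls on 30 March; Ascension is 39 days later (8 May),
# and eight days before that is 30 April -- the birthday Gauss derived.
print(gauss_easter(1777))   # (3, 30)
```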
In his memorial on Gauss, Wolfgang Sartorius von Waltershausen tells a story about the three-year-old Gauss, who corrected a mathematical error his father had made. The most popular story, also told by Sartorius, concerns a school exercise: the teacher, J.G. Büttner, and his assistant, Martin Bartels, ordered the students to add an arithmetic series. Out of about a hundred pupils, Gauss was the first to solve the problem correctly, and by a significant margin. Although (or because) Sartorius gave no details, many versions of this story have been created in the course of time, with more and more details regarding the nature of the series – the most frequent being the classical problem of adding together all the integers from 1 to 100 – and the circumstances in the classroom.
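For the record, the classical version has a closed form that the pairing trick often attached to the story makes immediate:

$$1 + 2 + \cdots + 100 \;=\; \frac{100 \cdot 101}{2} \;=\; 5050, \qquad \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.$$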
Gauss' favorite English author was Walter Scott, but he was greatly amused whenever he came upon Scott's astronomically impossible line "the moon rises broad in the nord west".
Gauss referred to mathematics as "the queen of sciences" and arithmetic as "the queen of mathematics", and supposedly once espoused the view that an immediate understanding of Euler's identity is a benchmark for becoming a first-class mathematician.
Gauss's first membership of a scientific society was conferred in 1802 by the Russian Academy of Sciences. Further memberships (corresponding, foreign or full) were from the Academy of Sciences in Göttingen (1802/1807), the French Academy of Sciences (1804/1820), the Royal Society of London (1804), the Royal Prussian Academy in Berlin (1810), the National Academy of Science in Verona (1810), the Royal Society of Edinburgh (1820), the Bavarian Academy of Sciences of Munich (1820), the Royal Danish Academy in Copenhagen (1821), the Royal Astronomical Society in London (1821), the Royal Swedish Academy of Sciences (1821), the American Academy of Arts and Sciences in Boston (1822), the Royal Bohemian Society of Sciences in Prague (1833), the Royal Academy of Science, Letters and Fine Arts of Belgium (1841/1845), the Royal Society of Sciences in Uppsala (1843), the Royal Irish Academy in Dublin (1843), the Royal Institute of the Netherlands (1845/1851), the Spanish Royal Academy of Sciences in Madrid (1850), the Russian Geographical Society (1851), the Imperial Academy of Sciences in Vienna (1848), the American Philosophical Society (1853), the Cambridge Philosophical Society, and the Royal Hollandish Society of Sciences in Haarlem.
Gauss was an honorary member of the University of Kazan and of the Philosophy Faculty of the University of Prague from 1849.
Gauss received the Lalande Prize from the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations, the Danish Academy of Science prize in 1823 for "his study of angle-preserving maps", and the Copley Medal from the Royal Society in 1838 for "his inventions and mathematical researches in magnetism".
Gauss was appointed Knight of the French Legion of Honour in 1837 and was one of the first members of the Prussian Order Pour le Merite (Civil class) when it was established in 1842. Furthermore, he received the Order of the Crown of Westphalia (1810), the Danish Order of the Dannebrog (1817), the Hanoverian Royal Guelphic Order (1815), the Swedish Order of the Polar Star (1844), the Order of Henry the Lion (1849), and the Bavarian Maximilian Order for Science and Art (1853).
The Kings of Hanover appointed him to the honorary titles "Hofrath" (1816) and "Geheimer Hofrath" (1845). On the occasion of the golden jubilee of his doctorate in 1849, he received the honorary citizenship of both Brunswick and Göttingen. Soon after his death, a medal was issued by order of King George V of Hanover with the reverse inscription GEORGIVS V REX HANNOVERAE MATHEMATICORVM PRINCIPI and the circumscription ACADEMIAE SVAE GEORGIAE AVGVSTAE DECORI AETERNO.
The ″Gauss-Gesellschaft Göttingen″ (Gauss Society) was founded in 1964 for research on the life and work of Carl Friedrich Gauss and related persons; it edits the ″Mitteilungen der Gauss-Gesellschaft″ (Communications of the Gauss Society).
The Göttingen Academy of Sciences and Humanities provides a complete collection of the currently known letters from and to Carl Friedrich Gauss, accessible online. The written estate of Carl Friedrich Gauss and of family members can also be found in the municipal archive of Brunswick. | [
{
"paragraph_id": 0,
"text": "Johann Carl Friedrich Gauss (German: Gauß [kaʁl ˈfʁiːdʁɪç ˈɡaʊs] ; Latin: Carolus Fridericus Gauss; 30 April 1777 – 23 February 1855) was a German mathematician, geodesist, and physicist who made significant contributions to many fields in mathematics and science. Gauss ranks among history's most influential mathematicians. He has been referred to as the \"Prince of Mathematicians\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "Gauss was a child prodigy in mathematics. While still a student at the University of Göttingen, he propounded several mathematical theorems. Gauss completed his masterpieces Disquisitiones Arithmeticae and Theoria motus corporum coelestium as a private scholar. Later he was director of the Göttingen Observatory and professor at the university for nearly half a century, from 1807 until his death in 1855.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Gauss published the second and third complete proofs of the fundamental theorem of algebra, made contributions to number theory and developed the theories of binary and ternary quadratic forms. He is credited with inventing the fast Fourier transform algorithm and was instrumental in the discovery of the dwarf planet Ceres. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he discovered before Adrien-Marie Legendre published on the method, and which is still used in all sciences to minimize measurement error. He also anticipated non-Euclidean geometry, and was the first to analyze it, even coining the term. He is considered one of its discoverers alongside Nikolai Lobachevsky and János Bolyai.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Gauss invented the heliotrope in 1821, a magnetometer in 1833 and, alongside Wilhelm Eduard Weber, invented the first electromagnetic telegraph in 1833.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Gauss was a careful author. He refused to publish incomplete work. Although he published extensively during his life, he left behind several works to be published posthumously.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Although Gauss was known to dislike teaching, some of his students became influential mathematicians. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick (Braunschweig), in the Duchy of Brunswick-Wolfenbüttel (now part of Lower Saxony, Germany), to a family of lower social status. His father Gebhard Dietrich Gauss (1744–1808) worked in several jobs, as butcher, bricklayer, gardener, and as treasurer of a death-benefit fund. Gauss characterized his father as an honourable and respected man, but rough and dominating at home. He was experienced in writing and calculating, but his wife Dorothea (1743–1839), Carl Friedrich's mother, was nearly illiterate. Carl Friedrich was christened and confirmed in a church near the school that he attended as a child. He had one elder brother from his father's first marriage.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Gauss was a child prodigy in the field of mathematics. When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick, who sent him to the local Collegium Carolinum, which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers. Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the Hanoverian University of Göttingen until 1798. It is not known why Gauss went to Göttingen and not to the University of Helmstedt near his native Brunswick, but it is assumed that the large library of Göttingen, where students were allowed to borrow books and take them home, was the decisive reason. One of his professors in mathematics was Abraham Gotthelf Kästner, whom Gauss called \"the leading mathematician among poets, and the leading poet among mathematicians\" because of his epigrams. Gauss depicted him in a drawing showing a lecture scene where he produced errors in a simple calculation. Astronomy was taught by Karl Felix von Seyffer (1762–1822), with whom Gauss stayed in correspondence after graduation; Olbers and Gauss mocked him in their correspondence. On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure. Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "Though he was a registered student at the university, it is evident that he was self-taught in mathematics, since he independently rediscovered several theorems. He succeeded with a breakthrough in a geometrical problem that had occupied mathematicians since the days of the Ancient Greeks when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery was the subject of his first publication and ultimately led Gauss to choose mathematics instead of philology as a career. Gauss' mathematical diary shows that, in the same year, he was also productive in number theory. He made advanced discoveries in modular arithmetic, found the first proof of the quadratic reciprocity law, and dealt with the prime number theorem. Many ideas for his mathematical magnum opus Disquisitiones arithmeticae, published in 1801, date from this time.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "Gauss graduated as a Doctor of Philosophy in 1799. He did not graduate from Göttingen, as is sometimes stated, but rather, at the Duke of Brunswick's special request, from the University of Helmstedt, the only state university of the duchy. There, Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss got the degree in absentia without the further oral examination that was usually requested. The Duke then granted him his cost of living as a private scholar in Brunswick. Gauss showed his gratitude and loyalty for this bequest when he refused several calls from the Russian Academy of Sciences in St. Petersburg and from Landshut University. Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans: the Duke was mortally wounded in the battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped. He then followed a call to the University of Göttingen, an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "Studying the calculation of asteroid orbits, Gauss established contact with the astronomical community of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding and Friedrich Wilhelm Bessel, an informal group of astronomers known as the Celestial police. One of their aims was the discovery of further planets, and they assembled data on asteroids and comets as a basis for Gauss's research. Gauss was thereby able to develop new, powerful methods for the determination of orbits, which he later published in his astronomical magnum opus Theoria motus corporum coelestium (1809).",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "Gauss arrived at Göttingen in November 1807, and in the following years he was confronted with the demand for two thousand francs from the Westphalian government as a war contribution. Without having yet received his salary, he could not raise this enormous amount. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg, paid the sum.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "Gauss took on the directorate of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower, with usable, but partly out-of-date instruments. The construction of a new observatory had been approved by Prince-elector George III in principle since 1802, and the Westphalian government continued the planning, but the building was not finished until October 1816. It contained new up-to-date instruments, for instance two meridian circles from Repsold and Reichenbach, and a heliometer from Fraunhofer.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: in the first two decades of the 19th century astronomy was the main focus, in the third decade geodesy, and in the fourth decade he occupied himself with physics, mainly magnetism.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. His last observation was the solar eclipse of July 28, 1851. On 23 February 1855, Gauss died of a heart attack in Göttingen; he is interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "The day after Gauss's death his brain was removed, preserved and studied by Rudolf Wagner, who found its mass to be slightly above average, at 1,492 grams (52.6 oz). The cerebral area was determined by Wagner's son Hermann in his doctoral thesis to be 219,588 square millimetres (340.362 sq in). Highly developed convolutions were also found, which in the early 20th century were suggested as the explanation for his genius. After various previous investigations, a magnetic resonance study of 1998, done at the Max Planck Institute for Biophysical Chemistry in Göttingen, gave no results which could be used to explain his mathematical abilities.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "In 2013, a neurobiologist at the same institute discovered that Gauss's brain had been mixed up, due to mislabelling, with that of the physician Conrad Heinrich Fuchs [de], who died in Göttingen a few months after Gauss. A further investigation showed no remarkable anomalies in the brains of either person. Thus, all investigations on Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs.",
"title": "Biography"
},
{
"paragraph_id": 17,
"text": "Gauss married Johanna Osthoff (1780–1809) on 9 October 1805. They had two sons and a daughter: Joseph (1806–1873), Wilhelmina (1808–1840) and Louis (1809–1810). Johanna died on 11 October 1809 one month after the birth of Louis, who himself died a few months later.",
"title": "Biography"
},
{
"paragraph_id": 18,
"text": "Gauss remarried within a year, on 4 August 1810, to Wilhelmine (Minna) Waldeck (1788–1831), a friend of his first wife. They had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879) and Therese Staufenau [de] (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade. Therese then took over the household and cared for Gauss for the rest of his life; after her father's death she married the actor Constantin Staufenau. Her sister Wilhelmina married the orientalist Heinrich Ewald. Gauss' mother Dorothea lived in his house from 1817 until her death in 1839.",
"title": "Biography"
},
{
"paragraph_id": 19,
"text": "The eldest son Joseph, whilst still a schoolboy, helped his father as an assistant during his survey campaign in summer 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network to the western parts of the kingdom. With his geodetical qualifications he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways. In 1836 he studied the railroad system in the US for some months.",
"title": "Biography"
},
{
"paragraph_id": 20,
"text": "Eugen left Göttingen in September 1830 and emigrated to the United States, where he joined the army for five years. He then worked for the American Fur Company in the Midwest, where he learned the Sioux language. Later, he moved to Missouri and became a successful businessman. Wilhelm married a niece of the astronomer Friedrich Bessel and also moved to Missouri in 1837, starting as a farmer and later becoming wealthy in the shoe business in St. Louis. Eugene and William have numerous descendants in America, but the descendants left in Germany all derive from Joseph, as the Gauss daughters had no children.",
"title": "Biography"
},
{
"paragraph_id": 21,
"text": "At the end of the 18th century, German academic mathematics was in a poor condition: the prolific mathematicians of that time worked in France and other European countries. The mathematical mainstream was orientated at solving practical problems in mechanics, astronomy, geodesy, etc. In this scientific environment, Gauss can be seen, following Felix Klein, as typical of both 18th and 19th-century mathematicians. His interest in practical applicability, for example in geodesy and astronomy, qualified Gauss to be taken as a typical applied mathematician of the century of enlightenment. On the other hand, he began research in numerous parts of mathematics without defined links to practical purposes, and thus showed himself as a pioneer of what was later called \"pure mathematics\". In contrast to earlier mathematicians, such as Leonhard Euler—who let their readers take part in their reasoning as they developed new ideas, and included certain erroneous deviations from the correct path—Gauss developed a new style of direct and complete explanation that did not attempt to show the reader the author's train of thought.",
"title": "Biography"
},
{
"paragraph_id": 22,
"text": "But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai on 2 September 1808 as follows:",
"title": "Biography"
},
{
"paragraph_id": 23,
"text": "It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from it, in order to go into darkness again.",
"title": "Biography"
},
{
"paragraph_id": 24,
"text": "Gauss refused to publish work which he did not consider complete and above criticism. This perfectionism was in keeping with the motto of his personal seal Pauca sed Matura (\"Few, but Ripe\"). His personal diary indicates that he had made several mathematical discoveries years or decades before contemporaries published them. He put down new ideas in writing to his colleagues, who encouraged him to publish, and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself, claiming that the initial discovery of ideas was easy, but preparing a publishable elaboration was a demanding matter for him, for either lack of time or \"serenity of mind\". Nevertheless, he published many short communications of urgent content in various journals, but his \"Collected Works\" contain a considerable literary estate, too. Eric Temple Bell said that if Gauss had published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years.",
"title": "Biography"
},
{
"paragraph_id": 25,
"text": "On certain occasions, Gauss claimed that a finding published by another scholar had already been in his possession previously. Thus his concept of priority as \"the first to discover, not the first to publish\" differed from that of his scientific contemporaries. In contrast to his perfectionism in presenting mathematical ideas, he was criticized for his negligent way of quoting. He justified himself with a very special view of correct quoting: if he gave references, then only in a quite complete way, with respect to the previous authors of importance, which no one should ignore; but quoting in this way needed knowledge of the history of science and more time than he wished to spend.",
"title": "Biography"
},
{
"paragraph_id": 26,
"text": "Though Gauss is seen as a master of axiomatic presentation, it became obvious from his posthumously published papers, his diary, and short glosses in his own textbooks, that he worked to a great extent in an empirical way. Gauss was a lifelong busy and enthusiastic calculator. He coped with the enormous workload by using skillful tools. Gauss used a lot of mathematical tables, examined their qualities, and constructed new tables on various matters for personal use. He developed new tools for effective calculation, for example the Gaussian elimination. It has been taken as a curious feature of his working style that he carried out calculations with a high degree of precision, much more than required. Very likely, this method gave him a lot of material which he used in finding theorems in number theory.",
"title": "Biography"
},
{
"paragraph_id": 27,
"text": "It was well known to his close colleagues that Gauss disliked giving academic lectures. He first stated this to Olbers in 1802, so this aversion was not the result of bad experience. Thus he refused to accept any academic position with teaching duties during his years as a private scholar. But from the start of his academic career at Göttingen in 1807, he continuously gave lectures until 1854. He often complained about the efforts of teaching, feeling that it was a waste of his time, but on the other hand he occasionally described one or other student as talented. In all these 47 years of teaching he gave only three lectures on subjects of pure mathematics, whereas most of his lectures dealt with astronomy, geodesy, and applied mathematics. However, many of Gauss' students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould, Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, Seeber, von Staudt, Stern, Ursin; as geoscientists Sartorius von Waltershausen and Wappäus.",
"title": "Biography"
},
{
"paragraph_id": 28,
"text": "Gauss wrote no textbooks, and (unlike his friends Bessel, Humboldt, and Olbers) he disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter and the essay Erdmagnetismus und Magnetometer of 1836.",
"title": "Biography"
},
{
"paragraph_id": 29,
"text": "Gauss published his papers and books exclusively in Latin or in German.",
"title": "Biography"
},
{
"paragraph_id": 30,
"text": "At Göttingen University, Gauss was accompanied by a staff of other lecturers in his disciplines, who completed the educational program: for instance the brilliant Thibaut in mathematics, in physics Weber and Mayer, well known for his successful textbooks, and Harding, who took the main part of lectures in astronomy. When the observatory was completed, Gauss took his living accommodation in the western wing of the new observatory and Harding in the eastern one. At first they were on friendly terms with one another, but in the course of time they became alienated, possibly – as some biographers presume – because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer. The years after 1820 have been described as a \"period of lower astronomical activity\". The new, well-equipped observatory did not work as effectively as others; Gauss' astronomical research had the character of a one-man enterprise, and the university established a place for an assistant only after Harding's death in 1834. Nevertheless, Gauss twice refused the opportunity to solve the problem, declining offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy without great lecturing duties, as well as offers from Leipzig University in 1810 and from Vienna University in 1842. Perhaps the reason was the difficult situation of his family. In his later years, Gauss was one of the best-paid professors of the university.",
"title": "Biography"
},
{
"paragraph_id": 31,
"text": "When his friend Friedrich Wilhelm Bessel, who was in trouble at Königsberg University because of his lack of an academic title, asked him for help in 1810, Gauss provided a doctorate honoris causa for Bessel from the Philosophy Faculty of Göttingen in March 1811. Gauss gave another recommendation for an honorary degree for Sophie Germain, but only shortly before her death, so she never received it. He also gave successful support to the talented mathematician Gotthold Eisenstein in Berlin.",
"title": "Biography"
},
{
"paragraph_id": 32,
"text": "After King William IV's death in 1837, the personal union between the kingdoms of Great Britain and Ireland and Hanover ceased. In the same year, the new Hanoverian king Ernest Augustus annulled the constitution given to the state by his brother in 1833. Seven prominent professors, later known as the \"Göttingen Seven\", protested against this, among them Gauss' friend and collaborator Wilhelm Weber and Gauss' son-in-law Heinrich Ewald. All of them were dismissed, three of them were expelled, but Ewald and Weber could stay in Göttingen. Ewald took a position at the University of Tübingen in 1838, where Gauss' daughter Wilhelmina died soon afterwards in 1840, and Weber went to Leipzig in 1843; but both of them returned to their Göttingen positions in 1849, the only ones of the Göttingen Seven to do so. Gauss was deeply affected by this quarrel, but saw no possibility of helping them.",
"title": "Biography"
},
{
"paragraph_id": 33,
"text": "Gauss took part in academic administration: three times he was elected as dean of the Philosophy Faculty. Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He was appointed director of the Royal Academy of Sciences in Göttingen for nine years, even in his last year of life.",
"title": "Biography"
},
{
"paragraph_id": 34,
"text": "Soon after Gauss' death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw Gauss as a serene and forward-striving man with childlike modesty, but also of \"iron character\" with an unshakeable strength of mind. He was noted for a sense of justice and religious tolerance. Apart from his closer circle, others regarded him as reserved and unapproachable, \"like an Olympian sitting enthroned on the summit of science\". His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by grumpy behaviour, but a short time later his mood could change, and he became a charming, open-minded host.",
"title": "Biography"
},
{
"paragraph_id": 35,
"text": "Gauss' life was overshadowed by severe problems in his family. When his first wife Johanna suddenly died shortly after the death of their third child, he plunged into a depression from which he never fully recovered. Soon after her death he wrote a last letter to her in the style of an ancient threnody, the most personal surviving document of Gauss'. The situation worsened when tuberculosis afflicted, and ultimately destroyed the health of, his second wife Minna over 13 years; both his daughters later suffered from the same disease. Both younger sons were educated for some years in Celle far from Göttingen. Gauss himself gave only slight hints of his personal distress: in a letter to Bessel dated December 1831 he described himself as \"the victim of the worst domestic sufferings\".",
"title": "Biography"
},
{
"paragraph_id": 36,
"text": "Gauss grew to dominate his children and eventually had conflicts with his sons, because he did not want any of them to enter mathematics or science for \"fear of lowering the family name\", as he believed none of them would surpass his own achievements. The military career of his elder son Joseph ended after more than two decades with the rank of a poorly paid first lieutenant, although he had acquired a considerable knowledge of geodesy. He needed financial support from his father even after he was married. The second son Eugen shared a good measure of Gauss' talent in computation and languages, but had a vivacious and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public, he suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He wasted the little money he had taken to start out with, after which his father refused further financial support. The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulty getting an appropriate education, and emigrated as well. Only Gauss' youngest daughter Therese accompanied him in his last years of life.",
"title": "Biography"
},
{
"paragraph_id": 37,
"text": "Collecting numerical data on very different things, useful or useless, became a habit in his later years, for example the number of paths from his home to certain places in Göttingen, or the numbers of living days of persons; he congratulated Humboldt in December 1851, when he had reached the same age as Isaac Newton at his death, calculated in days.",
"title": "Biography"
},
{
"paragraph_id": 38,
"text": "Gauss had a good knowledge of Latin as well as of modern languages. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry. Gauss read both classical and modern literature, the English and French in the original languages. His favorite English author was Walter Scott, his favorite German Jean Paul. Gauss liked singing and went to concerts. He was a busy newspaper reader, and in his last years he used to visit an academic press salon of the university every noon. Gauss did not care much for philosophy, and mocked the \"splitting hairs of the so-called metaphysicians\", by which he meant proponents of the contemporary school of Naturphilosophie.",
"title": "Biography"
},
{
"paragraph_id": 39,
"text": "Gauss' religious beliefs have been a subject of speculation by some of his biographers. He sometimes said: \"God is calculating.\" Gauss was a member of the Lutheran church, like most of the population in northern Germany, but it seems that he did not believe all dogmas or understand the Holy Bible to be true quite literally. Sartorius mentioned Gauss' religious tolerance, and estimated his \"insatiable thirst for truth\" and his sense of justice as motivated by religious convictions.",
"title": "Biography"
},
{
"paragraph_id": 40,
"text": "Gauss had an \"aristocratic and through and through conservative nature\", with little respect for people's intelligence and morals, in accordance with the motto \"mundus vult decipi\". As far as the political system is concerned, he had a low estimation of the constitutional system; he criticized parliamentarians of his time for a lack of knowledge and logical errors. Gauss was loyal to the House of Hanover, disliked Napoleon and his system, and all kinds of violence and revolution horrified him. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as the idea of a unified Germany.",
"title": "Biography"
},
{
"paragraph_id": 41,
"text": "Gauss was a successful investor and accumulated considerable wealth with stocks and securities, but he disapproved of the idea of paper money. After his death a great sum of money was found hidden in his rooms.",
"title": "Biography"
},
{
"paragraph_id": 42,
"text": "In his doctoral thesis from 1799 Gauss proved the fundamental theorem of algebra which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss' dissertation contains a critique of d'Alembert's work. He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts clarified the concept of complex numbers considerably along the way.",
"title": "Scientific work"
},
{
"paragraph_id": 43,
"text": "The entries in Gauss' Mathematical diary indicate that he was busy with the subject of number theory at least since 1796. A detailed study of previous research showed him that some of his findings had already been made by other scholars. In the years 1798 and 1799 Gauss wrote a voluminous compilation of all these results in the famous Disquisitiones Arithmeticae, published in 1801, that was fundamental in consolidating number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces, among other things, the triple bar symbol (≡) for congruence and uses it in a clean presentation of modular arithmetic. It deals with the unique factorization theorem and primitive roots modulo n. In the main chapters, Gauss presents the first two proofs of the law of quadratic reciprocity, which allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic, and develops the theories of binary and ternary quadratic forms.",
"title": "Scientific work"
},
{
"paragraph_id": 44,
"text": "Highlights of these theories include the remarkable Gauss composition law for binary quadratic forms, as well as his enumeration of the number of representations of an integer as sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for n = 3. From several remarkable analytic results on class numbers that Gauss gives without proof towards the end of the fifth chapter, it appears that Gauss already knew the class number formula in 1801.",
"title": "Scientific work"
},
{
"paragraph_id": 45,
"text": "In the last chapter Gauss gives his proof for the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one. He shows that a regular polygon is constructible if the number of its sides is a product of distinct Fermat primes and a power of 2. In the same chapter, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve. Some 150 years later, Andre Weil remarked that this particular result, together with some other unpublished results of Gauss, led him to formulate what are now called the Weil conjectures.",
"title": "Scientific work"
},
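The constructibility criterion stated above lends itself to a small test; a hedged sketch (the function and list names are ours):
```python
# A regular n-gon is constructible iff n is a power of 2 times a product of
# distinct Fermat primes; only five Fermat primes are currently known.
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def is_constructible(n: int) -> bool:
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p                       # each Fermat prime may appear once
    return n > 0 and (n & (n - 1)) == 0   # remainder must be a power of 2

assert is_constructible(17)       # Gauss's heptadecagon
assert not is_constructible(7)    # the regular heptagon is not constructible
```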
{
"paragraph_id": 46,
"text": "Gauss intended to include an eighth chapter that would treat the topic of higher congruences modulo a prime number in its full generality, but the unfinished chapter was found among his papers only after his death, consisting of work done during the years 1797–1799. It contains a systematic theory of finite fields, among other things; one particular result of special importance is a counting formula for the number of irreducible polynomials of a given degree over a finite field. He also makes use of the powerful tool of the Frobenius automorphism to explore the subfields of finite fields. Gauss finishes the chapter by indicating possible generalizations of his investigations, and proves an early version of \"Hensel lemma\", which enables to lift modular properties with respect to a prime p into ever growing powers of the same prime.",
"title": "Scientific work"
},
{
"paragraph_id": 47,
"text": "It is unclear how much Gauss was aware of the importance of the last result, but there are indications he was aware of some sort of a p-adic method, such as his motive to prove his lemma on polynomials or his method of deriving Hensel's lemma. In the beginning of the 20th century, Kurt Hensel introduced p-adic numbers, and in this way shed light on these investigations and brought them to conceptual maturity.",
"title": "Scientific work"
},
{
"paragraph_id": 48,
"text": "In 1831, Ludwig August Seeber published a book on the theory of reduction of positive ternary quadratic forms, with accordance with the program outlined in Gauss's Disquisitiones. However, he did not prove a central theorem of his theory, so it remained a mere conjecture. In his review of Seeber's book, Gauss simplified many of Seeber's lengthy arguments, proved this central conjecture, and remarked that this theorem is equivalent to Kepler conjecture for regular arrangements.",
"title": "Scientific work"
},
{
"paragraph_id": 49,
"text": "Gauss proved Fermat's Last Theorem for n = 3 and sketchingly proved it for n = 5 in his unpublished writings. The particular case of n = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case.",
"title": "Scientific work"
},
{
"paragraph_id": 50,
"text": "Among his published number theoretical works, his two papers on biquadratic residues (published in 1828 and 1832) are considered second in importance only to Disquisitions Arithmeticae. In these papers Gauss introduces the ring of Gaussian integers Z [ i ] {\\displaystyle \\mathbb {Z} [i]} , and shows that this ring is a unique factorization domain. Furthermore, he generalizes into this ring many key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws.",
"title": "Scientific work"
},
{
"paragraph_id": 51,
"text": "In the second paper, he states the general law of biquadratic reciprocity and proves several special cases of it, but proof of the general theorem is lacking, despite Gauss's statements that he found such a proof around 1814. He promised a third paper with a general proof, but this never appeared. In an earlier publication from 1818 containing his fifth and sixth proofs of quadratic reciprocity, he claims the techniques of these proofs (Gauss sums) can be applied to prove higher reciprocity laws. In his posthumous papers, two proofs of the general case were found: one is believed to be not original of Gauss but rather based in its principles on Gotthold Eisenstein's proof, while the other was a highly original proof based on geometrical considerations involving counting lattice points in certain geometric figures. Despite its originality, the geometric proof is very long and cumbersome, and this may be the reason why he withheld its publication after he saw Eisenstein's much more direct proof.",
"title": "Scientific work"
},
{
"paragraph_id": 52,
"text": "Gauss's publications on biquadratic residues opened the way for boundless enlargement of the theory of numbers, and are memorable for the wealth of investigations in \"higher arithmetic\" that they led to.",
"title": "Scientific work"
},
{
"paragraph_id": 53,
"text": "One of Gauss's first independent discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers; his systematic investigations on the AGM led him to discover an unusually rich mathematical landscape, and to obtain plenty of new results associated with it. He discovered its relation to elliptic integrals in the years 1798-1799 through the so-called Landen's transformation, and in a diary entry recorded his discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that Gauss stated that \"will surely open a new area of analysis\". He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the so-called \"fundamental theorem of complex analysis\" - Cauchy's integral theorem - and understood the notion of complex residues when integrating around poles.",
"title": "Scientific work"
},
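The AGM iteration itself is short; a minimal sketch in modern form (not Gauss's notation):
```python
# Arithmetic–geometric mean: iterate arithmetic and geometric means until
# the two values agree; convergence is quadratic.
import math

def agm(a: float, b: float, tol: float = 1e-15) -> float:
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Gauss's 1799 diary observation relates agm(1, sqrt(2)) ≈ 1.19814
# to the lemniscate constant.
print(agm(1.0, math.sqrt(2.0)))
```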
{
"paragraph_id": 54,
"text": "Another source of inspiration for Gauss's early work in analysis was his acquaintance with Euler's pentagonal numbers theorem. This theorem together with his other researches on the AGM and lemniscatic functions led him to plenty of results on Jacobi theta functions, work which culminated with his discovery in 1808 of the very general Jacobi triple product identity, which includes Euler's theorem as a special case. In his publication from 1811 on the determination of the sign of quadratic Gauss sum, Gauss solved the problem by introducing Gaussian binomial coefficients and by using a line of reasoning that somehow \"hides\" its origin in theta function theory, as later mathematicians have shown. All this work was done several decades before the publication of Jacobi's \"Fundamenta nova\" in 1829; however, Gauss never found the time to systematically write and organize all his thoughts and theorems of this kind, and his contemporaries never knew the scope of his work.",
"title": "Scientific work"
},
{
"paragraph_id": 55,
"text": "Several mathematical fragments in his Nachlass indicate that he knew quite well parts of the modern theory of modular forms of Felix Klein and Robert Fricke. In his work on the multivalued AGM of two complex numbers, he discovered a very deep connection between the infinitely many values of the AGM to its two \"simplest values\". His unpublished writings include several drawings that show he was quite aware of the geometric side of the theory; in the context of his work on the complex AGM he recognized and made a sketch of the key concept of fundamental domain for the modular group. Perhaps the most remarkable of Gauss's sketches of this kind was his drawing of a tessellation of the unit disk by \"equilateral\" hyperbolic triangles with all angles equal to π / 4 {\\displaystyle \\pi /4} .",
"title": "Scientific work"
},
{
"paragraph_id": 56,
"text": "In his lifetime Gauss published almost nothing about those more modern theories of elliptic functions, but he did publish most of his results on the related theme of the hypergeometric function. In his work \"Disquisitiones generales circa series infinitam...\" (1812), he provided the first systematic treatment of the general hypergeometric function F ( α , β , γ , x ) {\\displaystyle F(\\alpha ,\\beta ,\\gamma ,x)} , and showed that many of the functions known to science at the time, such as the elementary functions and some special functions, are a special case of the hypergeometric function. This work was the first one with an exact inquiry of convergence of infinite series in the history of mathematics. Furthermore, it dealt with infinite continued fractions arising as ratios of hypergeometric functions.",
"title": "Scientific work"
},
{
"paragraph_id": 57,
"text": "In 1822 Gauss published his prize winning essay on conformal mappings, which contains several developments that pertain to the field of complex analysis. In this essay, Gauss made explicit the insight that angle-preserving mappings in the complex plane must be complex analytic functions, and used the so-called Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces. The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution. In addition, in unpublished fragments from the years 1834-1839 he investigated and solved the more difficult task of explicitly constructing a conformal mapping from the interior of an ellipse to the unit disk. His solution, which combined his early work on elliptic functions and his later ideas on potential theory, reveals his mastery of the theory of logarithmic potential, and his final results corresponded to the formula found by Hermann Schwarz in 1870.",
"title": "Scientific work"
},
{
"paragraph_id": 58,
"text": "Gauss often deduced theorems inductively from numerical data he had collected in an empirical way. As such, the use of efficient algorithms to facilitate calculations was vital to his researches, and he made many contributions to numeric analysis. In 1815, he published an article on numeric integration, in which he described his method of Gaussian quadrature, that greatly improved existing methods and inspired much of the work made by later mathematicians.",
"title": "Scientific work"
},
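A minimal modern illustration of Gaussian quadrature using NumPy (the library call is standard; the example itself is ours):
```python
# An n-point Gauss–Legendre rule integrates polynomials of degree
# up to 2n - 1 exactly on [-1, 1].
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(3)  # 3-point rule
integral = np.sum(weights * nodes**4)                # integral of x^4 over [-1, 1] is 2/5
print(integral)                                      # ≈ 0.4, exact here
```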
{
"paragraph_id": 59,
"text": "In a private letter to Gerling from 1823, he described a solution of a certain 4X4 system of linear equations by using Gauss-Seidel method – an \"indirect\" iterative method for the solution of linear systems, that in some cases converges very rapidly to the exact solution. Gauss recommended it over the usual method (the so-called \"direct elimination\") for systems of more than 2 equations, stating that it can be done \"while half asleep, or while thinking about other things\". As such, it was an early contribution to numerical linear algebra.",
"title": "Scientific work"
},
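A sketch of the Gauss–Seidel iteration for a linear system Ax = b (our own minimal implementation, assuming a diagonally dominant matrix):
```python
import numpy as np

def gauss_seidel(A, b, iterations=25):
    # Sweep through the equations, using each freshly updated component
    # immediately: the "indirect" iteration described in the letter.
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # diagonally dominant example
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))                 # ≈ [0.1, 0.6], the exact solution
```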
{
"paragraph_id": 60,
"text": "Gauss invented an algorithm for calculating discrete Fourier transforms, sometimes called \"the most important numerical algorithm of our lifetime\", when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey published their similar Cooley–Tukey FFT algorithm. He developed it as a trigonometric interpolation method, but his paper Theoria Interpolationis Methodo Nova Tractata was published only posthumously in 1866, preceded by the first presentation by Joseph Fourier on the subject in 1807.",
"title": "Scientific work"
},
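For illustration, a recursive radix-2 Cooley–Tukey FFT of the kind Gauss anticipated (a sketch assuming the input length is a power of two):
```python
import cmath

def fft(x):
    # Split into even- and odd-indexed halves, transform each recursively,
    # then combine with the "twiddle" factors exp(-2*pi*i*k/n).
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

print(fft([1, 2, 3, 4]))   # [10, -2+2j, -2, -2-2j], matching numpy.fft.fft
```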
{
"paragraph_id": 61,
"text": "The first publication following the doctoral thesis dealt with the determination of the date of Easter (1800), a very elementary matter of mathematics. Gauss aimed to present a most convenient algorithm for people without any knowledge in ecclesiastical or even astronomical chronology, and thus avoided the usually required terms of golden number, epact, solar cycle, and domenical letter, and any religious connotations. Biographers speculated on the reason why Gauss dealt with this matter, but it is likely comprehensible by the historical background. The replacement of the Julian calendar by the Gregorian calendar had caused great confusion to the hundreds of states of the Holy Roman Empire since the 16th century, and was finished in Germany not until the year 1700, when the difference of eleven days was deleted, but the difference in calculating the date of Easter remained between Protestant and Catholic territories. A further agreement of 1776 equalized the confessional way of counting, thus in the Protestant states like the Duchy of Brunswick the Easter of 1777, five weeks before Gauss' birth, was the first one calculated in the new manner. The public difficulties of replacement may be the historical background for the confusion on this matter in the Gauss family (see chapter: Anecdotes). For being connected with the Easter regulations, an essay on the date of Pesach followed soon in 1802.",
"title": "Scientific work"
},
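Gauss's Easter computation survives as a short arithmetic recipe; a sketch of the Gregorian version (the exception rules follow the standard statement of the algorithm):
```python
def gauss_easter(year: int):
    # Returns (month, day) of Easter Sunday in the Gregorian calendar.
    a, b, c = year % 19, year % 4, year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    M = (15 - p + k - q) % 30
    N = (4 + k - q) % 7
    d = (19 * a + M) % 30
    e = (2 * b + 4 * c + 6 * d + N) % 7
    if d == 29 and e == 6:                              # exception rules
        return (4, 19)
    if d == 28 and e == 6 and (11 * M + 11) % 30 < 19:
        return (4, 18)
    day = 22 + d + e                                    # counted from 1 March
    return (3, day) if day <= 31 else (4, day - 31)

print(gauss_easter(2024))   # (3, 31): Easter Sunday was 31 March 2024
```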
{
"paragraph_id": 62,
"text": "On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered the dwarf planet Ceres. Piazzi could track Ceres for only somewhat more than a month, following it for three degrees across the night sky, less than 1% of the total orbit, until it disappeared temporarily behind the glare of the Sun. Several months later, when Ceres should have reappeared, Piazzi could not locate it: the mathematical tools of the time were not able to extrapolate a position from such a scant amount of data. Gauss tackled the problem within three months of intense work, and predicted a position for Ceres in December 1801. This turned out to be accurate within a half-degree when it was rediscovered by Franz Xaver von Zach on 7/31 December at Gotha, and independently by Heinrich Olbers on 1/2 January in Bremen. This confirmation eventually led to the classification of Ceres as minor-planet designation 1 Ceres; that was taken as the predicted planet between Mars and Jupiter by the most speculative Titius–Bode law.",
"title": "Scientific work"
},
{
"paragraph_id": 63,
"text": "Gauss's method involved determining a conic section in space, given one focus (the Sun) and the conic's intersection with three given lines (lines of sight from the Earth, which is itself moving on an ellipse, to the planet) and given the time it takes the planet to traverse the arcs determined by these lines (from which the lengths of the arcs can be calculated by Kepler's Second Law). This problem leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose. Zach noted that \"without the intelligent work and calculations of Doctor Gauss we might not have found Ceres again\".",
"title": "Scientific work"
},
{
"paragraph_id": 64,
"text": "The discovery of Ceres led Gauss to his work on a theory of the motion of planetoids disturbed by large planets, eventually published in 1809 as Theoria motus corporum coelestium in sectionibus conicis solem ambientum. In the process, he so streamlined the cumbersome mathematics of 18th-century orbital prediction that his work remains a cornerstone of astronomical computation. It introduced the Gaussian gravitational constant.",
"title": "Scientific work"
},
{
"paragraph_id": 65,
"text": "Since the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. Firstly he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, whereby Laplace's method did not work. Gauss used his own tools : the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation. He found an orbital resonance with Jupiter in proportion 18 : 7 in 1812; Gauss published this result as cipher, and gave the explicit meaning only in letters to Olbers and Bessel. However, after long years he finished his work in 1816 without a result that seemed sufficient to him. This marked the end of his activities in theoretical astronomy, too.",
"title": "Scientific work"
},
{
"paragraph_id": 66,
"text": "One fruit of Gauss's research on Pallas perturbations was his article Determinatio Attractionis... (1818) on a method of theoretical astronomy that later became known as the \"elliptic ring method\". This method introduced a useful averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time taking the planet to follow the corresponding orbital arcs. Gauss presents his method of evaluating the gravitational attraction of such an elliptic ring, which includes several complicated steps; one such step involves a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate an elliptic integral. In the late 19th century Gauss's method was adapted by American astronomer George William Hill, who applied it directly to the problem of secular perturbation induced by Venus on Mercury orbit.",
"title": "Scientific work"
},
{
"paragraph_id": 67,
"text": "It is likely that Gauss used the method of least squares for calculating the orbit of Ceres to minimize the impact of measurement error. The method was published first by Adrien-Marie Legendre in 1805, but Gauss claimed in Theoria motus (1809) that he had been using it since 1794 or 1795. In the history of statistics, this disagreement is called the \"priority dispute over the discovery of the method of least squares\". Gauss proved the method under the assumption of normally distributed errors (Gauss–Markov theorem) in his paper Theoria combinationis observationum erroribus minimis obnoxiae from 1821.",
"title": "Scientific work"
},
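A minimal least-squares sketch in modern form: fitting a line by solving the normal equations (the data here are invented for illustration):
```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.8])         # noisy observations of y ≈ x
X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]

# Solve (X^T X) beta = X^T y for intercept and slope.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)                                # ≈ [0.09, 0.94]
```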
{
"paragraph_id": 68,
"text": "In this paper, which was relatively little known in the English speaking world in the first century after its publication, he stated and proved Gauss's inequality (a Chebyshev-type inequality) for unimodal distributions, and stated without proof another inequality for moments of the fourth order (a special case of Gauss-Winckler inequality). He derived lower and upper bounds for the variance of sample variance. In a supplement to this paper Gauss described recursive least squares methods that went unnoticed until 1950, when his work was rediscovered as a consequence of the growing demand of quick estimation for various new technologies. Gauss's work on the theory of errors was extended in several directions by the geodesist Friedrich Robert Helmert, and the Gauss-Helmert theory is considered today as the \"classical\" theory of errors.",
"title": "Scientific work"
},
{
"paragraph_id": 69,
"text": "Gauss made several striking contributions to problems in probability theory that are not directly concerned with the theory of errors, but offer a glimpse into his broad minded view on the applicability of probabilistic thinking. One remarkable example appears as a note in his diary and is concerned with a very unusual problem that came to his mind: to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in (0,1). He derived this distribution, now known as the Gauss-Kuzmin distribution, as a by-product of his discovery of the ergodicity of the Gauss map for continued fractions. Gauss's solution is the first ever result in the metrical theory of continued fractions.",
"title": "Scientific work"
},
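An empirical check of the Gauss–Kuzmin limit law (our own sketch; the limiting probability of digit k is -log2(1 - 1/(k+1)^2)):
```python
import math, random

def cf_digits(x, count):
    # First `count` continued-fraction digits of x in (0, 1).
    digits = []
    for _ in range(count):
        if x == 0:
            break
        x = 1.0 / x
        digits.append(int(x))
        x -= int(x)
    return digits

random.seed(1)
sample = [d for _ in range(2000) for d in cf_digits(random.random(), 10)]
for k in (1, 2, 3):
    empirical = sample.count(k) / len(sample)
    theory = -math.log2(1 - 1 / (k + 1) ** 2)
    print(k, round(empirical, 3), round(theory, 3))  # theory ≈ 0.415, 0.170, 0.093
```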
{
"paragraph_id": 70,
"text": "Gauss was busy with geodetic problems since 1799, when he helped Karl Ludwig von Lecoq with calculations during his survey in Westphalia. Later since 1804, he taught himself some geodetic practise with a sextant in Brunswick, and Göttingen.",
"title": "Scientific work"
},
{
"paragraph_id": 71,
"text": "Since 1816, his former student Heinrich Christian Schumacher, then professor in Copenhagen, but living in Altona (Holstein) near Hamburg, made a triangulation of the Jutland peninsula from Skagen in the north to Lauenburg in the south. The aim was not only the foundation of map production, but also the determination of the geodetic arc of that distance. Schumacher asked Gauss to continue this work further to the south and said he could find support for this project directly from the government of Hanover. Finally in May 1820, King George IV gave the order to Gauss.",
"title": "Scientific work"
},
{
"paragraph_id": 72,
"text": "Gauss and Schumacher had yet determined some angles between Lüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818. During the summers of 1821 until 1825 Gauss directed the triangulation personally, that reached from Thuringia in the south to the river Elbe in the north. The triangel between Hoher Hagen, Großer Inselsberg in the Thuringian Forest, and Brocken in the Harz mountains was the largest one Gauss had ever measured with a maximum side of 107 km (66.5 miles). In the thin populated Lüneburg Heath, without significant natural summits or artificial buildings, he had great difficulties to find suitable triangulation points, sometimes cutting lanes through the vegetation was necessary or even the erection of signal towers.",
"title": "Scientific work"
},
{
"paragraph_id": 73,
"text": "For pointing signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflects the sunbeams to the triangulation points, and named it heliotrope. Another suitable construction for the same purpose was a sextant with an additional mirror which he named vice heliotrope. Gauss got assistance by soldiers of the Hanoveran army, among them his eldest son Joseph. Gauss took part in the baseline measurement (Braak Base Line) of Schumacher in the village Braak near Hamburg in 1820, and used the result for the evaluation of his triangulation.",
"title": "Scientific work"
},
{
"paragraph_id": 74,
"text": "The arc measurement needed a precise astronomical determination of two points in the network. Gauss and Schumacher used the favourite occasion that both observatories in Göttingen and in Altona, in the garden of Schumacher's house, laid nearly in the same longitude. The latitude was measured with both their own instruments and a zenith sector of Ramsden that was transported to both observatories.",
"title": "Scientific work"
},
{
"paragraph_id": 75,
"text": "An additional result was a better value of flattening of the approximative earth ellipsoid. Gauss developed the universal transverse Mercator projection of the ellipsoidal shaped earth (what he named conform projection) for representing geodetical data in plane charts.",
"title": "Scientific work"
},
{
"paragraph_id": 76,
"text": "When the arc measurement was finished, Gauss intended the enlargement of the triangulation to the west to get a survey of the whole Kingdom of Hanover. The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete data evaluation laid in the hands of Carl Friedrich Gauss, who applied his mathematical inventions as the method of least squares and his elimination method to it. The project was finished in 1844, but Gauss did not publish a final report of the project and his method of projection; this work was not done until 1866.",
"title": "Scientific work"
},
{
"paragraph_id": 77,
"text": "In 1828, when studying differences in latitude, Gauss first defined a physical approximation for the figure of the Earth as the surface everywhere perpendicular to the direction of gravity; later his doctoral student Johann Benedict Listing called this the geoid.",
"title": "Scientific work"
},
{
"paragraph_id": 78,
"text": "The geodetic survey of Hanover fueled Gauss' interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a memoir that marks the birth of modern differential geometry of surfaces, as it departed from the traditional ways of treating surfaces as cartesian graphs of functions of two variables, and instead pioneered a revolutionary approach that initiated the exploration of surfaces from the \"inner\" point of view of a two-dimensional being constrained to move on it. Its crowning result, the Theorema Egregium (remarkable theorem), established a property of the notion of Gaussian curvature. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface. That is, curvature does not depend on how the surface might be embedded in 3-dimensional space or 2-dimensional space.",
"title": "Scientific work"
},
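In modern notation (a standard formulation, not Gauss's own), the Gaussian curvature K is defined extrinsically as the product of the principal curvatures, yet by the theorem it is computable from measurements within the surface alone; the plane and the sphere illustrate the two extremes:
```latex
K = \kappa_1 \kappa_2, \qquad
K_{\text{plane}} = 0, \qquad
K_{\text{sphere of radius } R} = \frac{1}{R^{2}}
```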
{
"paragraph_id": 79,
"text": "The Theorema Egregium leads to the abstraction of surfaces as doubly-extended manifolds - it makes clear the distinction between the intrinsic properties of the manifold (the metric) and its physical realization (the embedding) in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. This means practically that a sphere or an ellipsoid cannot be transformed to a plane without distortion, what causes a fundamental problem in designing projections for geographical maps.",
"title": "Scientific work"
},
{
"paragraph_id": 80,
"text": "An additional significant portion of his essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss-Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a \"sufficiently small\" geodesic triangle deviate from that of a planar triangle of the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle - regardless of the behaviour of the surface in the triangle interior.",
"title": "Scientific work"
},
{
"paragraph_id": 81,
"text": "One key differential geometric conception was lacking from Gauss's memoir, that of geodesic curvature. However, his posthumous papers show that this notion did not escape his mind, and in the years of composing his memoir he also wrote up a manuscript in which he introduced it and referred to it as \"side curvature\" (in German: \"Seitenkrümmung\"). More importantly, he proved its invariance under isometric transformations, a result later obtained by Ferdinand Minding. Based on this evidence and the announcement in his memoir of further investigations on the curvature integral, it is very likely that he knew the more general version of the Gauss-Bonnet theorem proved by Pierre Ossian Bonnet in 1848, which is closer in spirit to the global version of this theorem.",
"title": "Scientific work"
},
{
"paragraph_id": 82,
"text": "Gauss was undoubtedly the first to discover and analyze non-Euclidean geometries, despite never publishing. He is the one who coined the term \"non-Euclidean geometry\". This discovery was a major paradigm shift in mathematics, as it freed mathematicians from the mistaken belief that Euclid's axioms were the only way to make geometry consistent and non-contradictory. Research on these geometries led to, among other things, Einstein's theory of general relativity, which describes the universe as non-Euclidean.",
"title": "Scientific work"
},
{
"paragraph_id": 83,
"text": "Gauss' friend Farkas Bolyai with whom he had sworn \"brotherhood and the banner of truth\" as a student, had tried in vain for many years to prove the parallel postulate from Euclid's other axioms of geometry. Bolyai's son Janos discovered non-Euclidean geometry in 1829 and published his work in 1832. After seeing it, Gauss wrote to Farkas Bolyai: \"To praise it would amount to praising myself. For the entire content of the work ... coincides almost exactly with my own meditations which have occupied my mind for the past thirty or thirty-five years.\" This statement put a strain on his relationship with Janos Bolyai who thought that Gauss was stealing his idea.",
"title": "Scientific work"
},
{
"paragraph_id": 84,
"text": "Letters from Gauss years before 1829 reveal him obscurely discussing the problem of parallel lines. Dunnington argues that Gauss was in fact in full possession of non-Euclidean geometry long before it was published by Bolyai, but that he refused to publish any of it because of his fear of controversy.",
"title": "Scientific work"
},
{
"paragraph_id": 85,
"text": "In 1854, Gauss selected the topic for Bernhard Riemann's inaugural lecture Über die Hypothesen, welche der Geometrie zu Grunde liegen from three proposals. On the way home from Riemann's lecture, Weber reported that Gauss was full of praise and excitement.",
"title": "Scientific work"
},
{
"paragraph_id": 86,
"text": "One of the lesser known aspects of Gauss's work is that he was also an early pioneer of topology, or as it was called in his lifetime, Geometria Situs. His first proof of the fundamental theorem of the algebra contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of this theorem (in 1849).",
"title": "Scientific work"
},
{
"paragraph_id": 87,
"text": "His earliest \"serious\" encounter with topological notions occurred to him in the course of his astronomical work, and in a small article from 1804 he determined the limits of the region on the celestial sphere in which comets and asteroids might appear, region which he termed \"Zodiacus\". He determined this region, and observed that if the Earth's and comet's orbits are linked, then by topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid 7 Iris, he published another short article in which he further elaborated the qualitative discussion of the Zodiacus.",
"title": "Scientific work"
},
{
"paragraph_id": 88,
"text": "From Gauss's letters during the period of 1820–1830, one can learn that he thought intensively on topics with close affinity to Geometria Situs, and became gradually conscious of semantic difficulty in this field. Fragments from this period reveal that he tried to classify \"Tractfigurens\", which are closed plane curves with a finite number of transverse self-intersections, that may also be planar projections of knots. To do so he devised a symbolical scheme, the so-called Gauss code, that in a sense captured the characteristic features of tract figures. He unsuccessfully attempted to find a method that enables to determine which tract figures actually represent knot projections.",
"title": "Scientific work"
},
{
"paragraph_id": 89,
"text": "In a fragment from 1833, Gauss defined the linking number of two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. In the same note, he lamented on the little progress made in Geometria Situs, and remarked that one of its central problems will be \"to count the intertwinings of two closed or infinite curves\". His notebooks from that period reveal that he was also thinking about other topological objects such as braids and tangles.",
"title": "Scientific work"
},
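The double integral referred to here is what is now called the Gauss linking integral; in modern vector notation (a standard formulation, not a quotation from Gauss):
```latex
\operatorname{Lk}(\gamma_1,\gamma_2)
  = \frac{1}{4\pi}\oint_{\gamma_1}\!\oint_{\gamma_2}
    \frac{(\mathbf{r}_1-\mathbf{r}_2)\cdot(d\mathbf{r}_1\times d\mathbf{r}_2)}
         {\lvert \mathbf{r}_1-\mathbf{r}_2\rvert^{3}}
```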
{
"paragraph_id": 90,
"text": "In his later years Gauss held the emerging field of topology in a very high esteem and expected great future developments for it, but since there is so few written material by Gauss from this period, his influence was made mainly through occasional remarks and oral communications. For example, an indirect report by Mobius referred to a surface constructed by Gauss, which Gauss called \"double ring\" and sayed something about its connectivity properties. This report is consistent with a fragment of Gauss, written around 1840, which sketched a theory of the order of connectivity of surfaces. Finally, it is worth mentioning that in the introduction to Listing's book \"Vorstudien zur Topologie\" (1847), Listing expressed his indebtness to Gauss's influence.",
"title": "Scientific work"
},
{
"paragraph_id": 91,
"text": "Gauss's work did not only initiate significant mathematical theories, as he was also the author of many little \"gems\" in mathematics, especially in elementary geometry and algebra. In his way, he helped spread the new mathematical ideas of his time by demonstrating how they illuminate and shorten the solution of small mathematical problems.",
"title": "Scientific work"
},
{
"paragraph_id": 92,
"text": "For example, he was a vivid spirit in applying complex numbers to various problems, and used them in his work on perspective and projective geometry: in a short 1836 note on \"Projections of the Cube\", he stated the fundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy, via complex numbers. In an unpublished 1819 note entitled \"the Sphere\", he conceived of the complex plane extended by a point at infinity as the stereographic projection of a sphere (the Riemann sphere), and described rotations of this sphere as the action of certain linear fractional transformations on the extended complex plane.",
"title": "Scientific work"
},
{
"paragraph_id": 93,
"text": "Under the context of Gauss's heading of extended algebraic systems, it must be mentioned that there is solid evidence that he had in his foresight the algebraic system of quaternions, the discovery of the great William Rowan Hamilton. In 1819, Gauss drafted an unpublished short treatise on \"Rotations of Space\", in which he elaborated on the use of quadruples of real numbers (of which he called \"scales\") to describe 3D rotations.",
"title": "Scientific work"
},
{
"paragraph_id": 94,
"text": "In elementary geometry, he contributed his solution to the problem of constructing the largest-area ellipse that can be inscribed in a given quadrilateral, which was published in 1810 as an addition to Schumacher's translation of Lazare Carnot's treatise Géométrie de position. He discovered a surprising result about the computation of area of pentagons. He made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars.",
"title": "Scientific work"
},
{
"paragraph_id": 95,
"text": "One of his most remarkable investigations was concerned with John Napier's \"Pentagramma mirificum\" - a certain spherical pentagram whose properties intrigued and occupied Gauss's mind for several decades. In his studies of the Pentagramma he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic and analytic aspects. In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons and Poncelet pentagons in the plane.",
"title": "Scientific work"
},
{
"paragraph_id": 96,
"text": "Gauss' interest in magnetism is obvious since the first decennium of the 19th century. Since 1826, when Alexander von Humboldt visited him in Göttingen, both scientists began intensive research on geomagnetism, partly independent, partly in productive cooperation. In 1828, Gauss was Humboldt's personal guest during the conference of the Society of German Natural Scientists and Physicians in Berlin, where he got acquaintance with the physicist Wilhelm Weber.",
"title": "Scientific work"
},
{
"paragraph_id": 97,
"text": "When Weber got the chair for physics in Göttingen as successor of Johann Tobias Mayer by Gauss' recommendation in 1831, both of them started a fruitful collaboration, leading to a new knowledge of magnetism with a representation for the unit of magnetism in terms of mass, charge, and time. They founded the Magnetic Association (German: \"Magnetischer Verein\"), an international working group of several observatories, which supported measurements of Earth's magnetic field in many regions of the world with equal methods at arranged dates in the years 1836 to 1841. In 1836, Humboldt was helpful to organize the worldwide spread of observatories including the British dominions with a letter to the Duke of Sussex, then president of the Royal Society, wherein he asked for support for a program of global research based on Gauss' methods. Together with other instigators, this led to a global programm known as \"Magnetical crusade\" under directory of Edward Sabine. The dates, times, and intervalls of observations were determined in advance, the Göttingen mean time was used as standard. Finally 61 stations participated in this global program. Gauss and Weber founded a series for the publication of the results, six volumes were edited between 1837 and 1843. Weber's departure to Leipzig in 1843 as late effect of the Göttingen Seven affair marked the end of Magnetic Assiciation activity.",
"title": "Scientific work"
},
{
"paragraph_id": 98,
"text": "Following Humboldt's example, Gauss ordered a magnetic observatory to be built in the garden of his observatory, but both scientists differed over instrumental equipment; Gauss preferred stationary instruments, which he thought to give more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, but discriminated Humboldt's concept of magnetic intensity to the terms of \"horizontal\" and \"vertical\" intensity. Together with Weber, he developed methods of measuring the components of intensity of the magnetic field, and constructed a suitable magnetometer to measure absolute values of the strength of the Earth's magnetic field, not more relative ones that depended on the apparatus. The precision of the magnetometer was about ten times higher than of previous instruments. With this work, Gauss was the first one who derived a non-mechanical quantity by basic mechanical quantities.",
"title": "Scientific work"
},
{
"paragraph_id": 99,
"text": "Gauss carried out a \"General Theory of Terrestrial Magnetism\" (1839), in what he believed to describe the nature of magnetic Force; following Felix Klein, this work is actually a presentation of observations by use of spherical harmonics rather than a physical theory. The theory predicted the existence of exactly two magnetic poles on the earth, thus Hansteen's idea of four magnetic poles became obsolete, and the data allowed to determine their location with rather good precision. In his \"General theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances\" (1840) Gauss gave the baseline of a theory of the magnetic potential, based on Lagrange, Laplace, and Poisson; it seems rather unlikely that he had knowledge of the previous works of George Green on this subject. However, Gauss could never give any reasons for magnetism, nor a theory of magnetism similar to Newton's work on gravitation, that enabled scientists to predict geomagnetic effects in the future.",
"title": "Scientific work"
},
{
"paragraph_id": 100,
"text": "Gauss got a remarkable influence on the begin of geophysics in Russia, when Adolph Theodor Kupffer, one of his former students, founded a magnetic observatory in St. Petersburg, following the eample of the observatory in Götttingen, and similar Ivan Simonov in Kazan.",
"title": "Scientific work"
},
{
"paragraph_id": 101,
"text": "The discoveries of Hans Christian Ørsted on electromagnetism and Michael Faraday on electromagnetic induction drew Gauss' attention to these matters. Gauss and Weber found the rules for branched electric circuits, later benamed as Kirchhoff's circuit laws, and made inquiries on electromagnetism. They constructed the first electromechanical telegraph in 1833, and Weber himself connected the observatory with the institute for physics in the town centre of Göttingen, but they did not care for any further development of this invention with regard to commercial purposes.",
"title": "Scientific work"
},
{
"paragraph_id": 102,
"text": "Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitive laws governing electromagnetic induction. In his notebooks from these years, he recorded several innovative formulations; he discovered the idea of vector potential function (independently rediscovered by Franz Ernst Neumann in 1845), and in January 1835 he wrote down an \"induction law\" equivalent to Faraday's law, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function.",
"title": "Scientific work"
},
{
"paragraph_id": 103,
"text": "In the same year Gauss had an insightful speculative thought, according to which electromagnetic interaction between two electric charges propagates in space in finite speed, in a manner similar to light, and that the magnitude of this interaction might depend on their relative velocity. In this way, he refuted the notion of immediate action at a distance. In unpublished fragments and in an 1845 letter to Weber, Gauss attempted to unite electricity and magnetism by forming a single expression for the interaction between two charges in relative motion, from which both Coulomb's law and the effects of magnetism could be derived.",
"title": "Scientific work"
},
{
"paragraph_id": 104,
"text": "His unpublished insights in these directions eventually merged into the so-called Weber electrodynamics, a theory that became obsolete today due to some essential difficulties to reconcile it with the undisputed Maxwell's theory. In retrospect, despite its incorrectness, the Gauss-Weber theory contained some of the germs of later ideas, such as the existence of an electromagnetic field that is in some sense independent of its point sources (Faraday's view), as well as the notion of retarded potential.",
"title": "Scientific work"
},
{
"paragraph_id": 105,
"text": "Instrument maker Johann Georg Repsold in Hamburg asked Gauss in 1807 for help to construct an achromatic lens system. Based on Gauss' calculations, Repsold succeeded with a new objective in 1810. A main problem, among other difficulties, was the non precise knowledge of the refractive index and dispersion of the used glass types. In a short article from 1817 Gauss dealt with the problem of removal of chromatic aberration in double lenses, and made calculations about adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the optician Carl August von Steinheil, who in 1860 indroduced the achromatic Steinheil doublet, based in part on Gauss's calculations. Many results in geometrical optics are scattered in Gauss's correspondences and handnotes.",
"title": "Scientific work"
},
{
"paragraph_id": 106,
"text": "In his influential Dioptrical Investigations (1840), Gauss gave the first systematic analysis on the formation of images under a paraxial approximation (Gaussian optics). Gauss demonstrated, that under a paraxial approximation an optical system can be characterized by its cardinal points, and he derived the Gaussian lens formula, applicable without restrictions in respect to the thickness of the lenses.",
"title": "Scientific work"
},
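In its common textbook form (sign conventions vary; this is a standard modern statement, not quoted from the 1840 work), the Gaussian lens formula relates object distance s_o, image distance s_i and focal length f:
```latex
\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}
```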
{
"paragraph_id": 107,
"text": "Gauss' first and last business in mechanics concerned the earth's rotation. When his university friend Benzenberg carried out experiments to determine the deviation of falling masses from the perpendicular in 1802, what today is known as an effect of the Coriolis force, he asked Gauss for a theory based calculation of the values for comparison with the experimental ones. Gauss elaborated a system of fundamental equations for the motion, and his results correspondent sufficiently with Benzenberg's data, who published Gauss' considerations as appendix to his book on falling experiments.",
"title": "Scientific work"
},
{
"paragraph_id": 108,
"text": "After Foucault had demonstrated his pendulum in public in 1851, Gerling questioned Gauss for further explanations. This instigated Gauss to design a new apparatus for demonstration with a much shorter length of pendulum than Foucault's one. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened at the pendulum; the time of oscillation was 3.1 seconds. It is described in the Gauss–Gerling correspondence, and Weber made some experiments with this obviously working apparatus in 1853, but no data were published.",
"title": "Scientific work"
},
{
"paragraph_id": 109,
"text": "Gauss's principle of least constraint of 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics, combining D'Alembert's principle with Lagrange's principle of Virtual Work, and showing analogies to the method of least squares.",
"title": "Scientific work"
},
{
"paragraph_id": 110,
"text": "In 1828, Gauss was appointed to head of a Board for weights and measures of the Kingdom of Hanover. He provided the creation of standards of length and measures. Gauss himself took care of the time-consuming measures and gave detailed orders for the mechanical preparation. In his correspondence with Schumacher, who was also working on this matter, he described new ideas for scales of high precision. He gave his final reports on the Hanoveran foot and pound to the government in 1841. This work got more than regional importance by the order of a law of 1836, that connected the Hanoveran measures with the English ones.",
"title": "Scientific work"
},
{
"paragraph_id": 111,
"text": "Several stories of his early genius have been reported. Carl Friedrich Gauss' mother had never recorded the date of his birth, remembering only that he had been born on a Wednesday, eight days before the Feast of the Ascension, which occurs 39 days after Easter. Gauss later solved this puzzle about his birthdate in the context of finding the date of Easter, deriving methods to compute the date in both past and future years. Gauss felt sorry for his new born daughter Wilhelmine, because she was born on the leap day in 1808 and thus would celebrate her birthday only every four years.",
"title": "Anecdotes"
},
{
"paragraph_id": 112,
"text": "In his memorial on Gauss, Wolfgang Sartorius von Waltershausen tells a story about the three-years-aged Gauss, who corrected a math error his father made. The most popular story, also told by Sartorius, tells of a school exercise: the teacher, J.G. Büttner, and his assistant, Martin Bartels, ordered students to add an arithmetic series. Out of about a hundred pupils, Gauss was the first to solve the problem correctly by a significant margin. Although (or because) Sartorius gave no details, in the course of time many versions of this story have been created, with more and more details regarding the nature of the series – the most frequent being the classical problem of adding together all the integers from 1 to 100 – and the circumstances in the classroom.",
"title": "Anecdotes"
},
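In the classical version of the anecdote, the pairing trick yields the closed form of the arithmetic series at once:
```latex
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad
\sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050
```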
{
"paragraph_id": 113,
"text": "Gauss' favorite English author was Walter Scott, but when he sometimes read the words \"the moon rises broad in the nord west\", he was very amused.",
"title": "Anecdotes"
},
{
"paragraph_id": 114,
"text": "Gauss referred to mathematics as \"the queen of sciences\" and arithmetics as \"the queen of mathematics\", and supposedly once espoused a belief in the necessity of immediately understanding Euler's identity as a benchmark pursuant to becoming a first-class mathematician.",
"title": "Anecdotes"
},
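Euler's identity, for reference:
```latex
e^{i\pi} + 1 = 0
```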
{
"paragraph_id": 115,
"text": "The first membership of a scientific society was given to Gauss in 1802 by the Russian Academy of Sciences. Further memberships (corresponding, foreign or full) were from the Academy of Sciences in Göttingen (1802/ 1807), the French Academy of Sciences (1804/ 1820), the Royal Society of London (1804), the Royal Prussian Academy in Berlin (1810), the National Academy of Science in Verona (1810), the Royal Society of Edinburgh (1820), the Bavarian Academy of Sciences of Munich (1820), the Royal Danish Academy in Copenhagen (1821), the Royal Astronomical Society in London (1821), the Royal Swedish Academy of Sciences (1821), the American Academy of Arts and Sciences in Boston (1822), the Royal Bohemian Society of Sciences in Prague (1833), the Royal Academy of Science, Letters and Fine Arts of Belgium (1841/ 1845), the Royal Society of Sciences in Uppsala (1843), the Royal Irish Academy in Dublin (1843), the Royal Institute of the Netherlands (1845/ 1851), the Spanish Royal Academy of Sciences in Madrid (1850), the Russian Geographical Society (1851), the Imperial Academy of Sciences in Vienna (1848), the American Philosophical Society (1853), the Cambridge Philosophical Society, and the Royal Hollandish Society of Sciences in Haarlem.",
"title": "Honours and awards"
},
{
"paragraph_id": 116,
"text": "Gauss was an honorary member of the University of Kazan and of the Philosophy Faculty of the University of Prague since 1849.",
"title": "Honours and awards"
},
{
"paragraph_id": 117,
"text": "Gauss received the Lalande Prize from the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations, the Danish Academy of Science prize in 1823 for \"his study of angle-preserving maps\", and the Copley Medal from the Royal Society in 1838 for \"his inventions and mathematical researches in magnetism\".",
"title": "Honours and awards"
},
{
"paragraph_id": 118,
"text": "Gauss was appointed Knight of the French Legion of Honour in 1837 and was one of the first members of the Prussian Order Pour le Merite (Civil class) when it was established in 1842. Furthermore, he received the Order of the Crown of Westphalia (1810), the Danish Order of the Dannebrog (1817), the Hanoverian Royal Guelphic Order (1815), the Swedish Order of the Polar Star (1844), the Order of Henry the Lion (1849), and the Bavarian Maximilian Order for Science and Art (1853).",
"title": "Honours and awards"
},
{
"paragraph_id": 119,
"text": "The Kings of Hanover appointed him the honorary titles \"Hofrath\" (1816) and \"Geheimer Hofrath\" (1845). On the occasion of his golden doctor degree jubilee he got the honorary citizenship of both towns of Brunswick and Göttingen in 1849. Soon after his death a medal was issued by order of King George V of Hanover with the back side inscription : GEORGIVS V REX HANNOVERAE MATHEMATICORVM PRINCIPI and the circumscription : ACADEMIAE SVAE GEORGIAE AVGVSTAE DECORI AETERNO.",
"title": "Honours and awards"
},
{
"paragraph_id": 120,
"text": "The ″Gauss-Gesellschaft Göttingen″ (Gauss Society) was founded in 1964 for researches on life and work of Carl Friedrich Gauss and related persons and edits the ″Mitteilungen der Gauss-Gesellschaft″ (Communications of the Gauss Society).",
"title": "Honours and awards"
},
{
"paragraph_id": 121,
"text": "The Göttingen Academy of Sciences and Humanities provides a complete collection of the yet known letters from and to Carl Friedrich Gauss that is accessible online. Written estate from Carl Friedrich Gauss and family members can also be found in the municipal archive of Brunswick.",
"title": "Writings"
}
]
| Johann Carl Friedrich Gauss was a German mathematician, geodesist, and physicist who made significant contributions to many fields in mathematics and science. Gauss ranks among history's most influential mathematicians. He has been referred to as the "Prince of Mathematicians". Gauss was a child prodigy in mathematics. While still a student at the University of Göttingen, he propounded several mathematical theorems. Gauss completed his masterpieces Disquisitiones Arithmeticae and Theoria motus corporum coelestium as a private scholar. Later he was director of the Göttingen Observatory and professor at the university for nearly half a century, from 1807 until his death in 1855. Gauss published the second and third complete proofs of the fundamental theorem of algebra, made contributions to number theory and developed the theories of binary and ternary quadratic forms. He is credited with inventing the fast Fourier transform algorithm and was instrumental in the discovery of the dwarf planet Ceres. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he discovered before Adrien-Marie Legendre published on the method, and which is still used in all sciences to minimize measurement error. He also anticipated non-Euclidean geometry, and was the first to analyze it, even coining the term. He is considered one of its discoverers alongside Nikolai Lobachevsky and János Bolyai. Gauss invented the heliotrope in 1821, a magnetometer in 1833 and, alongside Wilhelm Eduard Weber, invented the first electromagnetic telegraph in 1833. Gauss was a careful author. He refused to publish incomplete work. Although he published extensively during his life, he left behind several works to be published posthumously. Although Gauss was known to dislike teaching, some of his students became influential mathematicians. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment. | 2001-10-05T23:04:42Z | 2023-12-31T23:20:39Z | [
"Template:Copley Medallists 1801–1850",
"Template:Redirect",
"Template:Spaced ndash",
"Template:Ill",
"Template:Cite web",
"Template:Refbegin",
"Template:Wikiquote",
"Template:Reflist",
"Template:Authority control",
"Template:Infobox scientist",
"Template:Lang-de",
"Template:Efn",
"Template:Interlanguage link",
"Template:Clarify",
"Template:Main",
"Template:Math",
"Template:MacTutor Biography",
"Template:Refend",
"Template:Commons",
"Template:Age of Enlightenment",
"Template:Pp-pc",
"Template:IPA-de",
"Template:Sfn",
"Template:Snd",
"Template:Citation",
"Template:Cite journal",
"Template:Cite arXiv",
"Template:Wikisource",
"Template:Scientists whose names are used as non SI units",
"Template:Portal bar",
"Template:Lang-la",
"Template:Blockquote",
"Template:Internet Archive author",
"Template:Carl Friedrich Gauss",
"Template:Short description",
"Template:Lang",
"Template:Cite book",
"Template:Notelist",
"Template:Use dmy dates",
"Template:Convert",
"Template:Who",
"Template:Library resources box"
]
| https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss |
6,130 | Cornish language | Cornish (Standard Written Form: Kernewek or Kernowek; [kəɾˈnuːək]) is a Southwestern Brittonic language of the Celtic language family. It is a revived language, having become extinct as a living community language in Cornwall at the end of the 18th century. However, knowledge of Cornish, including speaking ability to a certain extent, continued to be passed on within families and by individuals, and a revival began in the early 20th century. The language has a growing number of second-language speakers, and a very small number of families now raise children to speak revived Cornish as a first language. Cornish is currently recognised under the European Charter for Regional or Minority Languages, and the language is often described as an important part of Cornish identity, culture and heritage.
Along with Welsh and Breton, Cornish is descended from the Common Brittonic language spoken throughout much of Great Britain before the English language came to dominate. For centuries, until it was pushed westwards by English, it was the main language of Cornwall, maintaining close links with its sister language Breton, with which it was mutually intelligible, perhaps even as long as Cornish continued to be spoken as a vernacular. Cornish continued to function as a common community language in parts of Cornwall until the mid 18th century. There is some evidence of knowledge of the language persisting into the 19th century, possibly almost overlapping the beginning of revival efforts.
A process to revive the language began in the early 20th century, and in 2010, UNESCO announced that its former classification of the language as "extinct" was "no longer accurate." Since the revival of the language, some Cornish textbooks and works of literature have been published, and an increasing number of people are studying the language. Recent developments include Cornish music, independent films, and children's books. A small number of people in Cornwall have been brought up to be bilingual native speakers, and the language is taught in schools and appears on road signs. The first Cornish-language day care opened in 2010.
Cornish is a Southwestern Brittonic language, a branch of the Insular Celtic section of the Celtic language family, which is a sub-family of the Indo-European language family. Brittonic also includes Welsh, Breton, Cumbric and possibly Pictish, the last two of which are extinct. Scottish Gaelic, Irish and Manx are part of the separate Goidelic branch of Insular Celtic.
Joseph Loth viewed Cornish and Breton as being two dialects of the same language, claiming that "Middle Cornish is without doubt closer to Breton as a whole than the modern Breton dialect of Quiberon [Kiberen] is to that of Saint-Pol-de-Léon [Kastell-Paol]." Also, Kenneth Jackson argued that it is almost certain that Cornish and Breton would have been mutually intelligible as long as Cornish was a living language, and that Cornish and Breton are especially closely related to each other and less closely related to Welsh.
Cornish evolved from the Common Brittonic spoken throughout Britain south of the Firth of Forth during the British Iron Age and Roman period. As a result of westward Anglo-Saxon expansion, the Britons of the southwest were separated from those in modern-day Wales and Cumbria, which Jackson links to the defeat of the Britons at the Battle of Deorham in about 577. The western dialects eventually evolved into modern Welsh and the now extinct Cumbric, while Southwestern Brittonic developed into Cornish and Breton, the latter as a result of emigration to parts of the continent, known as Brittany over the following centuries.
The area controlled by the southwestern Britons was progressively reduced by the expansion of Wessex over the next few centuries. During the Old Cornish (Kernewek Koth) period (800–1200), the Cornish-speaking area was largely coterminous with modern-day Cornwall, after the Saxons had taken over Devon in their south-westward advance, which probably was facilitated by a second migration wave to Brittany that resulted in the partial depopulation of Devon.
The earliest written record of the Cornish language comes from this period: a 9th-century gloss in a Latin manuscript of De Consolatione Philosophiae by Boethius, which used the words ud rocashaas. The phrase may mean "it [the mind] hated the gloomy places", or alternatively, as Andrew Breeze suggests, "she hated the land". Other sources from this period include the Saints' List, a list of almost fifty Cornish saints, the Bodmin manumissions, which is a list of manumittors and slaves, the latter with mostly Cornish names, and, more substantially, a Latin-Cornish glossary (the Vocabularium Cornicum or Cottonian Vocabulary), a Cornish translation of Ælfric of Eynsham's Latin-Old English Glossary, which is thematically arranged into several groups, such as the Genesis creation narrative, anatomy, church hierarchy, the family, names for various kinds of artisans and their tools, flora, fauna, and household items. The manuscript was widely thought to be in Old Welsh until the 18th century when it was identified as Cornish by Edward Lhuyd. Some Brittonic glosses in the 9th-century colloquy De raris fabulis were once identified as Old Cornish, but they are more likely Old Welsh, possibly influenced by a Cornish scribe. No single phonological feature distinguishes Cornish from both Welsh and Breton until the beginning of the assibilation of dental stops in Cornish, which is not found before the second half of the eleventh century, and it is not always possible to distinguish Old Cornish, Old Breton, and Old Welsh orthographically.
The Cornish language continued to flourish well through the Middle Cornish (Kernewek Kres) period (1200–1600), reaching a peak of about 39,000 speakers in the 13th century, after which the number started to decline. This period provided the bulk of traditional Cornish literature, which was later used to reconstruct the language during its revival. Most important is the Ordinalia, a cycle of three mystery plays, Origo Mundi, Passio Christi and Resurrexio Domini. Together these provide about 8,734 lines of text. The three plays exhibit a mixture of English and Brittonic influences, and, like other Cornish literature, may have been written at Glasney College near Penryn. From this period also are the hagiographical dramas Beunans Meriasek (The Life of Meriasek) and Bewnans Ke (The Life of Ke), both of which feature as an antagonist the villainous and tyrannical King Tewdar (or Teudar), a historical medieval king in Armorica and Cornwall, who, in these plays, has been interpreted as a lampoon of either of the Tudor kings Henry VII or Henry VIII.
Two other texts, the Charter Fragment, the earliest known continuous text in the Cornish language and apparently part of a play about a medieval marriage, and Pascon agan Arluth (The Passion of Our Lord), a poem probably intended for personal worship, were also written during this period, probably in the second half of the 14th century. Another important text, the Tregear Homilies, was realised to be Cornish in 1949, having previously been incorrectly classified as Welsh. It is the longest text in the traditional Cornish language, consisting of around 30,000 words of continuous prose. This text is a late 16th century translation of twelve of Bishop Bonner's thirteen homilies by a certain John Tregear, tentatively identified as a vicar of St Allen from Crowan, and has an additional catena, Sacrament an Alter, added later by his fellow priest, Thomas Stephyn. In the reign of Henry VIII, an account was given by Andrew Boorde in his 1542 Boke of the Introduction of Knowledge. He states, "In Cornwall is two speches, the one is naughty Englysshe, and the other is Cornysshe speche. And there be many men and women the which cannot speake one worde of Englysshe, but all Cornyshe."
When Parliament passed the Act of Uniformity 1549, which established the 1549 edition of the English Book of Common Prayer as the sole legal form of worship in England, including Cornwall, people in many areas of Cornwall did not speak or understand English. The passing of this Act was one of the causes of the Prayer Book Rebellion (which may also have been influenced by the retaliation of the English after the failed Cornish Rebellion of 1497), with "the commoners of Devonshyre and Cornwall" producing a manifesto that demanded a return to the old religious services and included an article concluding, "and so we the Cornyshe men (whereof certen of us understande no Englysh) utterly refuse thys newe Englysh." In response to their articles, the government spokesman (either Philip Nichols or Nicholas Udall) wondered why they did not just ask the king for a version of the liturgy in their own language. Archbishop Thomas Cranmer asked why the Cornishmen should be offended by holding the service in English, when they had before held it in Latin, which even fewer of them could understand. Anthony Fletcher points out that this rebellion was primarily motivated by religious and economic, rather than linguistic, concerns. The rebellion prompted a heavy-handed response from the government, and 5,500 people died during the fighting and the rebellion's aftermath. Government officials then directed troops under the command of Sir Anthony Kingston to carry out pacification operations throughout the West Country. Kingston subsequently ordered the executions of numerous individuals suspected of involvement with the rebellion as part of the post-rebellion reprisals.
The rebellion eventually proved a turning-point for the Cornish language, as the authorities came to associate it with sedition and "backwardness". This proved to be one of the reasons why the Book of Common Prayer was never translated into Cornish (unlike Welsh), as proposals to do so were suppressed in the rebellion's aftermath. The failure to translate the Book of Common Prayer into Cornish led to the language's rapid decline during the 16th and 17th centuries. Peter Berresford Ellis cites the years 1550–1650 as a century of immense damage for the language, and its decline can be traced to this period. In 1680 William Scawen wrote an essay describing 16 reasons for the decline of Cornish, among them the lack of a distinctive Cornish alphabet, the loss of contact between Cornwall and Brittany, the cessation of the miracle plays, loss of records in the Civil War, lack of a Cornish Bible and immigration to Cornwall. Mark Stoyle, however, has argued that the 'glotticide' of the Cornish language was mainly a result of the Cornish gentry adopting English to dissociate themselves from the reputation for disloyalty and rebellion associated with the Cornish language since the 1497 uprising.
By the middle of the 17th century, the language had retreated to Penwith and Kerrier, and transmission of the language to new generations had almost entirely ceased. In his Survey of Cornwall, published in 1602, Richard Carew writes:
[M]ost of the inhabitants can speak no word of Cornish, but very few are ignorant of the English; and yet some so affect their own, as to a stranger they will not speak it; for if meeting them by chance, you inquire the way, or any such matter, your answer shall be, "Meea navidna caw zasawzneck," "I [will] speak no Saxonage."
The Late Cornish (Kernewek Diwedhes) period from 1600 to about 1800 has a less substantial body of literature than the Middle Cornish period, but the sources are more varied in nature, including songs, poems about fishing and curing pilchards, and various translations of verses from the Bible, the Ten Commandments, the Lord's Prayer and the Creed. Edward Lhuyd's Archaeologia Britannica, which was mainly recorded in the field from native speakers in the early 1700s, and his unpublished field notebook are seen as important sources of Cornish vocabulary, some of which is not found in any other source. Archaeologia Britannica also features a complete version of a traditional folk tale, John of Chyanhor, a short story about a man from St Levan who goes far to the east seeking work, eventually returning home after three years to find that his wife has borne him a child during his absence.
In 1776, William Bodinar, who describes himself as having learned Cornish from old fishermen when he was a boy, wrote a letter to Daines Barrington in Cornish, with an English translation, which was probably the last prose written in the traditional language. In his letter, he describes the sociolinguistics of the Cornish language at the time, stating that there are no more than four or five old people in his village who can still speak Cornish, concluding with the remark that Cornish is no longer known by young people. However, the last recorded traditional Cornish literature may have been the Cranken Rhyme, a corrupted version of a verse or song published in the late 19th century by John Hobson Matthews, recorded orally by John Davey (or Davy) of Boswednack, of uncertain date but probably originally composed during the last years of the traditional language. Davey had traditional knowledge of at least some Cornish. John Kelynack (1796–1885), a fisherman of Newlyn, was sought by philologists for old Cornish words and technical phrases in the 19th century.
It is difficult to state with certainty when Cornish ceased to be spoken, because its last speakers were of relatively low social class and because the definition of what constitutes "a living language" is not clear-cut. Peter Pool argues that by 1800 nobody was using Cornish as a daily language and no evidence exists of anyone capable of conversing in the language at that date. However, passive speakers, semi-speakers and rememberers, who retain some competence in the language despite neither being fluent nor using the language in daily life, generally survive even longer.
The traditional view that Dolly Pentreath (1692–1777) was the last native speaker of Cornish has been challenged, and in the 18th and 19th centuries there was academic interest in the language and in attempting to find the last speaker of Cornish. It has been suggested that, whereas Pentreath was probably the last monolingual speaker, the last native speaker may have been John Davey of Zennor, who died in 1891. However, although it is clear Davey possessed some traditional knowledge in addition to having read books on Cornish, accounts differ of his competence in the language. Some contemporaries stated he was able to converse on certain topics in Cornish whereas others affirmed they had never heard him claim to be able to do so. Robert Morton Nance, who reworked and translated Davey's Cranken Rhyme, remarked, "There can be no doubt, after the evidence of this rhyme, of what there was to lose by neglecting John Davey."
The search for the last speaker is hampered by a lack of transcriptions or audio recordings, so that it is impossible to tell from this distance whether the language these people were reported to be speaking was Cornish, or English with a heavy Cornish substratum, nor what their level of fluency was. Nevertheless this academic interest, along with the beginning of the Celtic Revival in the late 19th century, provided the groundwork for a Cornish language revival movement.
Notwithstanding the uncertainty over who was the last speaker of Cornish, researchers have posited estimates for the prevalence of the language between 1050 and 1800.
In 1904, the Celtic language scholar and Cornish cultural activist Henry Jenner published A Handbook of the Cornish Language. The publication of this book is often considered to be the point at which the revival movement started. Jenner wrote about the Cornish language in 1905, "one may fairly say that most of what there was of it has been preserved, and that it has been continuously preserved, for there has never been a time when there were not some Cornishmen who knew some Cornish."
The revival focused on reconstructing and standardising the language, including coining new words for modern concepts, and creating educational material in order to teach Cornish to others. In 1929 Robert Morton Nance published his Unified Cornish (Kernewek Unys) system, based on the Middle Cornish literature while extending the attested vocabulary with neologisms and forms based on Celtic roots also found in Breton and Welsh, publishing a dictionary in 1938. Nance's work became the basis of revived Cornish (Kernewek Dasserghys) for most of the 20th century. During the 1970s, criticism of Nance's system, both for its inconsistent orthography and unpredictable correspondence between spelling and pronunciation and on other grounds, such as the archaic basis of Unified and a lack of emphasis on the spoken language, resulted in the creation of several rival systems. In the 1980s, Ken George published a new system, Kernewek Kemmyn ('Common Cornish'), based on a reconstruction of the phonological system of Middle Cornish, but with an approximately morphophonemic orthography. It was subsequently adopted by the Cornish Language Board and was the written form used by a reported 54.5% of all Cornish language users according to a survey in 2008, but was heavily criticised for a variety of reasons by Jon Mills and Nicholas Williams, including making phonological distinctions that they state were not made in the traditional language c. 1500, failing to make distinctions that they believe were made in the traditional language at this time, and the use of an orthography that deviated too far from the traditional texts and Unified Cornish. Also during this period, Richard Gendall created his Modern Cornish system (also known as Revived Late Cornish), which used Late Cornish as a basis, and Nicholas Williams published a revised version of Unified; however, neither of these systems gained the popularity of Unified or Kemmyn.
The revival entered a period of factionalism and public disputes, with each orthography attempting to push the others aside. By the time that Cornish was recognised by the UK government under the European Charter for Regional or Minority Languages in 2002, it had become clear that the existence of multiple orthographies was unsustainable with regard to using the language in education and public life, as none had achieved a wide consensus. A process of unification was begun, which resulted in the creation of the public-body Cornish Language Partnership in 2005 and agreement on a Standard Written Form in 2008. In 2010 a new milestone was reached when UNESCO altered its classification of Cornish, stating that its previous label of "extinct" was no longer accurate.
Speakers of Cornish reside primarily in Cornwall, which has a population of 563,600 (2017 estimate). There are also some speakers living outside Cornwall, particularly in the countries of the Cornish diaspora, as well as in other Celtic nations. Estimates of the number of Cornish speakers vary according to the definition of a speaker, and the number is difficult to determine accurately due to the individualised nature of language take-up. Nevertheless, there is recognition that the number of Cornish speakers is growing. From before the 1980s to the end of the 20th century there was a sixfold increase in the number of speakers to around 300. One figure for the number of people who know a few basic words, such as knowing that "Kernow" means "Cornwall", was 300,000; the same survey gave the number of people able to have simple conversations as 3,000.
The Cornish Language Strategy project commissioned research to provide quantitative and qualitative evidence for the number of Cornish speakers: due to the success of the revival project it was estimated that 2,000 people were fluent (surveyed in spring 2008), an increase from the estimated 300 people who spoke Cornish fluently suggested in a study by Kenneth MacKinnon in 2000.
Jenefer Lowe of the Cornish Language Partnership said in an interview with the BBC in 2010 that there were around 300 fluent speakers. Bert Biscoe, a councillor and bard, in a statement to the Western Morning News in 2014 said there were "several hundred fluent speakers". Cornwall Council estimated in 2015 that there were 300–400 fluent speakers who used the language regularly, with 5,000 people having a basic conversational ability in the language.
A report on the 2011 Census published in 2013 by the Office for National Statistics placed the number of speakers at somewhere between 325 and 625. In 2017 the ONS released data based on the 2011 Census that placed the number of speakers at 557 people in England and Wales who declared Cornish to be their main language, 464 of whom lived in Cornwall. The 2021 census listed the number of Cornish speakers at 563.
A study that appeared in 2018 established the number of people in Cornwall with at least minimal skills in Cornish, such as the use of some words and phrases, to be more than 3,000, including around 500 estimated to be fluent.
The Institute of Cornish Studies at the University of Exeter is working with the Cornish Language Partnership to study the Cornish language revival of the 20th century, including the growth in number of speakers.
In 2002, Cornish was recognized by the UK government under Part II of the European Charter for Regional or Minority Languages. UNESCO's Atlas of World Languages classifies Cornish as "critically endangered". UNESCO has said that a previous classification of 'extinct' "does not reflect the current situation for Cornish" and is "no longer accurate".
Cornwall Council's policy is to support the language, in line with the European Charter. A motion was passed in November 2009 in which the council promoted the inclusion of Cornish, as appropriate and where possible, in council publications and on signs. This plan has drawn some criticism. In October 2015, Cornwall Council announced that staff would be encouraged to use "basic words and phrases" in Cornish when dealing with the public. In 2021 Cornwall Council prohibited a marriage ceremony from being conducted in Cornish as the Marriage Act 1949 only allowed for marriage ceremonies in English or Welsh.
In 2014, the Cornish people were recognised by the UK Government as a national minority under the Framework Convention for the Protection of National Minorities. The FCNM provides certain rights and protections to a national minority with regard to their minority language.
In 2016, British government funding for the Cornish language ceased, and responsibility transferred to Cornwall Council.
Old Cornish
Until around the middle of the 11th century, Old Cornish scribes used a traditional spelling system shared with Old Breton and Old Welsh, based on the pronunciation of British Latin. By the time of the Vocabularium Cornicum, usually dated to around 1100, Old English spelling conventions, such as the use of thorn (Þ, þ) and eth (Ð, ð) for dental fricatives, and wynn (Ƿ, ƿ) for /w/, had come into use, allowing documents written at this time to be distinguished from Old Welsh, which rarely uses these characters, and Old Breton, which does not use them at all. Old Cornish features include using initial ⟨ch⟩, ⟨c⟩, or ⟨k⟩ for /k/, and, in internal and final position, ⟨p⟩, ⟨t⟩, ⟨c⟩, ⟨b⟩, ⟨d⟩, and ⟨g⟩ are generally used for the phonemes /b/, /d/, /ɡ/, /β/, /ð/, and /ɣ/ respectively, meaning that the results of Brittonic lenition are not usually apparent from the orthography at this time.
Middle Cornish
Middle Cornish orthography has a significant level of variation, and shows influence from Middle English spelling practices. Yogh (Ȝ ȝ) is used in certain Middle Cornish texts to represent a variety of sounds, including the dental fricatives /θ/ and /ð/, a usage which is unique to Middle Cornish and is never found in Middle English. Middle Cornish scribes tend to use ⟨c⟩ for /k/ before back vowels, and ⟨k⟩ for /k/ before front vowels, though this is not always true, and this rule is less consistent in certain texts. Middle Cornish scribes almost universally use ⟨wh⟩ to represent /ʍ/ (or /hw/), as in Middle English. Middle Cornish, especially towards the end of this period, tends to use orthographic ⟨g⟩ and ⟨b⟩ in word-final position in stressed monosyllables, and ⟨k⟩ and ⟨p⟩ in word-final position in unstressed final syllables, to represent the reflexes of late Brittonic /ɡ/ and /b/, respectively.
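The ⟨c⟩/⟨k⟩ distribution described above is essentially a rule keyed on the following vowel. The minimal sketch below encodes it, assuming the usual classification of a, o, u as back vowels and e, i, y as front vowels (the text does not enumerate them), and ignoring the scribal inconsistencies the paragraph notes.

```python
# A sketch of the Middle Cornish scribal tendency described above:
# /k/ is written <c> before back vowels and <k> before front vowels.
# The vowel classes are the standard ones (an assumption, not listed
# in the text), and the many scribal exceptions are ignored.

BACK_VOWELS = set("aou")
FRONT_VOWELS = set("eiy")

def letter_for_k(next_vowel: str) -> str:
    """Choose the letter a Middle Cornish scribe would tend to use for /k/."""
    if next_vowel in BACK_VOWELS:
        return "c"
    if next_vowel in FRONT_VOWELS:
        return "k"
    raise ValueError("expected a vowel")

print(letter_for_k("a"))  # c
print(letter_for_k("e"))  # k
```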
Late Cornish
Written sources from this period are often spelled following English spelling conventions since many of the writers of the time had not been exposed to Middle Cornish texts or the Cornish orthography within them. Around 1700, Edward Lhuyd visited Cornwall, introducing his own partly phonetic orthography that he used in his Archaeologia Britannica, which was adopted by some local writers, leading to the use of some Lhuydian features such as the use of circumflexes to denote long vowels, ⟨k⟩ before front vowels, word-final ⟨i⟩, and the use of ⟨dh⟩ to represent the voiced dental fricative /ð/.
Revived Cornish
After the publication of Jenner's Handbook of the Cornish Language, the earliest revivalists used Jenner's orthography, which was influenced by Lhuyd's system. This system was abandoned following the development by Nance of a "unified spelling", later known as Unified Cornish, a system based on a standardization of the orthography of the early Middle Cornish texts. Nance's system was used by almost all Revived Cornish speakers and writers until the 1970s. Criticism of Nance's system, particularly the relationship of spelling to sounds and the phonological basis of Unified Cornish, resulted in rival orthographies appearing by the early 1980s, including Gendall's Modern Cornish, based on Late Cornish native writers and Lhuyd, and Ken George's Kernewek Kemmyn, a mainly morphophonemic orthography based on George's reconstruction of Middle Cornish c. 1500, which features a number of orthographic and phonological distinctions not found in Unified Cornish. Kernewek Kemmyn is characterised by the use of universal ⟨k⟩ for /k/ (instead of ⟨c⟩ before back vowels as in Unified); ⟨hw⟩ for /hw/, instead of ⟨wh⟩ as in Unified; and ⟨y⟩, ⟨oe⟩, and ⟨eu⟩ to represent the phonemes /ɪ/, /o/, and /œ/ respectively, which are not found in Unified Cornish. Criticism of all of these systems, especially Kernewek Kemmyn, by Nicholas Williams resulted in the creation of Unified Cornish Revised, a modified version of Nance's orthography, featuring: an additional phoneme not distinguished by Nance, "ö in German schön", represented in the UCR orthography by ⟨ue⟩; replacement of ⟨y⟩ with ⟨e⟩ in many words; internal ⟨h⟩ rather than ⟨gh⟩; and use of final ⟨b⟩, ⟨g⟩, and ⟨dh⟩ in stressed monosyllables. A Standard Written Form, intended as a compromise orthography for official and educational purposes, was introduced in 2008, although a number of previous orthographic systems remain in use and, in response to the publication of the SWF, another new orthography, Kernowek Standard, was created, mainly by Nicholas Williams and Michael Everson, which is proposed as an amended version of the Standard Written Form.
The phonological system of Old Cornish, inherited from Proto-Southwestern Brittonic and originally differing little from Old Breton and Old Welsh, underwent various changes during its Middle and Late phases, eventually resulting in several characteristics not found in the other Brittonic languages. The first sound change to distinguish Cornish from both Breton and Welsh, the assibilation of the dental stops /t/ and /d/ in medial and final position, had begun by the time of the Vocabularium Cornicum, c. 1100 or earlier. This change, and the subsequent, or perhaps dialectal, palatalization (or occasional rhotacization in a few words) of these sounds, results in orthographic forms such as Middle Cornish tas 'father', Late Cornish tâz (Welsh tad), Middle Cornish cresy 'believe', Late Cornish cregy (Welsh credu), and Middle Cornish gasa 'leave', Late Cornish gara (Welsh gadael). A further characteristic sound change, pre-occlusion, occurred during the sixteenth century, resulting in the nasals /nn/ and /mm/ being realised as [ᵈn] and [ᵇm] respectively in stressed syllables, and giving Late Cornish forms such as pedn 'head' (Welsh pen) and kabm 'crooked' (Welsh cam), as illustrated in the sketch below.
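Because pre-occlusion is a regular correspondence, a minimal sketch can make the mapping concrete. This is illustrative only: the inputs penn and kamm stand for the earlier forms with geminate nasals (cf. Welsh pen, cam) and are assumptions here, stress placement is ignored, and only the two attested outputs are covered.

```python
# A toy model of Late Cornish pre-occlusion as described above: in
# stressed syllables the geminate nasals /nn/ and /mm/ came to be
# realised as [dn] and [bm]. The inputs "penn" and "kamm" are assumed
# pre-change forms; the outputs match the attested Late Cornish words.

def preocclude(word: str) -> str:
    # Naive string substitution; real pre-occlusion applied only in
    # stressed syllables, which this sketch does not model.
    return word.replace("nn", "dn").replace("mm", "bm")

print(preocclude("penn"))  # pedn 'head'
print(preocclude("kamm"))  # kabm 'crooked'
```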
As a revitalised language, the phonology of contemporary spoken Cornish is based on a number of sources, including various reconstructions of the sound system of middle and early modern Cornish based on an analysis of internal evidence such as the orthography and rhyme used in the historical texts, comparison with the other Brittonic languages Breton and Welsh, and the work of the linguist Edward Lhuyd, who visited Cornwall in 1700 and recorded the language in a partly phonetic orthography.
Cornish is a Celtic language, and the majority of its vocabulary, when usage frequency is taken into account, at every documented stage of its history is inherited directly from Proto-Celtic, either through the ancestral Proto-Indo-European language, or through vocabulary borrowed from unknown substrate language(s) at some point in the development of the Celtic proto-language from PIE. Examples of the PIE > PCelt. development are various terms related to kinship and people, including mam 'mother', modereb 'aunt, mother's sister', huir 'sister', mab 'son', gur 'man', den 'person, human', and tus 'people', and words for parts of the body, including lof 'hand' and dans 'tooth'. Inherited adjectives with an Indo-European etymology include newyth 'new', ledan 'broad, wide', rud 'red', hen 'old', iouenc 'young', and byw 'alive, living'.
Several Celtic or Brittonic words cannot be reconstructed to Proto-Indo-European, and are suggested to have been borrowed from unknown substrate language(s) at an early stage, such as Proto-Celtic or Proto-Brittonic. Proposed examples in Cornish include coruf 'beer' and broch 'badger'.
Other words in Cornish inherited directly from Proto-Celtic include a number of toponyms, for example bre 'hill', din 'fort', and bro 'land', and a variety of animal names such as logoden 'mouse', mols 'wether', mogh 'pigs', and tarow 'bull'.
During the Roman occupation of Britain a large number (around 800) of Latin loan words entered the vocabulary of Common Brittonic, which subsequently developed in a similar way to the inherited lexicon. These include brech 'arm' (from British Latin bracc(h)ium), ruid 'net' (from retia), and cos 'cheese' (from caseus).
A substantial number of loan words from English and to a lesser extent French entered the Cornish language throughout its history. Whereas only 5% of the vocabulary of the Old Cornish Vocabularium Cornicum is thought to be borrowed from English, and only 10% of the lexicon of the early modern Cornish writer William Rowe, around 42% of the vocabulary of the whole Cornish corpus is estimated to be English loan words, without taking frequency into account. (However, when frequency is taken into account, this figure for the entire corpus drops to 8%.) The many English loanwords, some of which were sufficiently well assimilated to acquire native Cornish verbal or plural suffixes or be affected by the mutation system, include redya 'to read', onderstondya 'to understand', ford 'way', hos 'boot' and creft 'art'.
Many Cornish words, such as mining and fishing terms, are specific to the culture of Cornwall. Examples include atal 'mine waste' and beetia 'to mend fishing nets'. Foogan and hogan are different types of pastries. Troyl is a 'traditional Cornish dance get-together' and Furry is a specific kind of ceremonial dance that takes place in Cornwall. Certain Cornish words may have several translation equivalents in English, so for instance lyver may be translated into English as either 'book' or 'volume' and dorn can mean either 'hand' or 'fist'. As in other Celtic languages, Cornish lacks a number of verbs commonly found in other languages, including modals and psych-verbs; examples are 'have', 'like', 'hate', 'prefer', 'must/have to' and 'make/compel to'. These functions are instead fulfilled by periphrastic constructions involving a verb and various prepositional phrases.
The grammar of Cornish shares with other Celtic languages a number of features which, while not unique, are unusual in an Indo-European context. The grammatical features most unfamiliar to English speakers of the language are the initial consonant mutations, the verb–subject–object word order, inflected prepositions, fronting of emphasised syntactic elements and the use of two different forms for 'to be'.
Cornish has initial consonant mutation: the first sound of a Cornish word may change according to grammatical context. As in Breton, there are four types of mutation in Cornish (compared with three in Welsh, two in Irish and Manx and one in Scottish Gaelic). These changes apply only to certain sounds in particular grammatical contexts; two of them are illustrated in the sketch below.
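Since the full mutation tables are not reproduced here, the following minimal sketch shows just two soft-mutation correspondences that can be read off the glossed examples later in this article (an gath 'the cat', benyn vas 'a good woman'). The unmutated citation forms kath 'cat' and mas 'good' are assumptions for illustration.

```python
# A minimal sketch of Cornish initial consonant mutation, assuming a
# two-entry fragment of the soft-mutation table; real Cornish has four
# mutation types and many more correspondences.

SOFT_MUTATION = {
    "k": "g",  # assumed kath 'cat' -> gath, as in "an gath" 'the cat'
    "m": "v",  # assumed mas 'good' -> vas, as in "benyn vas" 'a good woman'
}

def soft_mutate(word: str) -> str:
    """Apply soft mutation to a word's initial consonant, if listed."""
    head = word[0].lower()
    if head in SOFT_MUTATION:
        return SOFT_MUTATION[head] + word[1:]
    return word  # initials outside the fragment are left unchanged

print(soft_mutate("kath"))  # gath
print(soft_mutate("mas"))   # vas
```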
Cornish has no indefinite article. Porth can either mean 'harbour' or 'a harbour'. In certain contexts, unn can be used, with the meaning 'a certain, a particular', e.g. unn porth 'a certain harbour'. There is, however, a definite article an 'the', which is used for all nouns regardless of their gender or number, e.g. an porth 'the harbour'.
Cornish nouns belong to one of two grammatical genders, masculine and feminine, but are not inflected for case. Nouns may be singular or plural, and plurals can be formed in various ways, depending on the noun.
Some nouns are collective or mass nouns. Singulatives can be formed from collective nouns by the addition of the suffix ⫽-enn⫽ (SWF -en), as in the sketch below.
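As the article's own examples of this suffixation are not reproduced here, the stem in the following sketch is a hypothetical illustration (per 'pears', on the model of the other Brittonic languages), not a form taken from the text.

```python
# A minimal sketch of singulative formation: collective noun + -en (the
# SWF spelling of the suffix given above). The stem "per" 'pears' is a
# hypothetical example, not cited in this article.

def singulative(collective: str) -> str:
    """Derive a singulative by suffixing -en to a collective noun (SWF)."""
    return collective + "en"

print(singulative("per"))  # peren -- 'a single pear', on this assumption
```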
Verbs are conjugated for person, number, tense and mood. For example, the verbal noun gweles 'see' has derived forms such as 1st person singular present indicative gwelav 'I see', 3rd person plural imperfect indicative gwelens 'they saw', and 2nd person singular imperative gwel 'see!' Grammatical categories can be indicated either by inflection of the main verb, or by the use of auxiliary verbs such as bos 'be' or gul 'do'.
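One hedged way to picture such a paradigm is as a lookup keyed by grammatical category. The sketch below lists only the four forms of gweles cited in the paragraph above and deliberately leaves every other cell unlisted rather than reconstructed.

```python
# A partial paradigm for gweles 'see', restricted to the forms cited in
# the text; no other forms are reconstructed here.

GWELES = {
    "verbal_noun": "gweles",   # 'see, seeing'
    "pres_ind_1sg": "gwelav",  # 'I see'
    "impf_ind_3pl": "gwelens", # 'they saw'
    "imper_2sg": "gwel",       # 'see!'
}

print(GWELES["pres_ind_1sg"])  # gwelav
```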
Cornish uses inflected (or conjugated) prepositions: Prepositions are inflected for person and number. For example, gans (with, by) has derived forms such as genev 'with me', ganso 'with him', and genowgh 'with you (plural)'.
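The same lookup idea works for an inflected preposition. This sketch covers only the three forms of gans quoted above; missing person/number combinations raise an error rather than being guessed.

```python
# Inflected preposition gans 'with, by': only the forms quoted in the
# text are included in this sketch.

GANS = {
    "1sg": "genev",    # 'with me'
    "3sg_m": "ganso",  # 'with him'
    "2pl": "genowgh",  # 'with you (plural)'
}

def inflect_gans(person_number: str) -> str:
    """Return the inflected form of gans for a person/number key."""
    if person_number not in GANS:
        raise KeyError(f"form for {person_number!r} not listed in this sketch")
    return GANS[person_number]

print(inflect_gans("1sg"))  # genev
print(inflect_gans("2pl"))  # genowgh
```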
Word order in Cornish is somewhat fluid and varies depending on several factors such as the intended element to be emphasised and whether a statement is negative or affirmative. In a study on Cornish word order in the play Bewnans Meriasek (c. 1500), Ken George has argued that the most common word order in main clauses in Middle Cornish was, in affirmative statements, SVO, with the verb in the third person singular:
My a wel an gath
1SG PTCL see-PRES.3SG DEF cat
'I see the cat.'
When affirmative statements are in the less common VSO order, they usually begin with an adverb or other element, followed by an affirmative particle, with the verb inflected for person and tense:
Ev a grys y hwelav an gath
3SG.M PTCL believe-PRES.3SG PTCL see-PRES.1SG DEF cat
'He believes that I see the cat.'
In negative statements, the order was usually VSO, with an initial negative particle and the verb conjugated for person and tense:
Ny welav an gath
NEG see-PRES.1SG DEF cat
'I do not see the cat.'
A similar structure is used for questions:
a glewsyugh why?
PTCL hear-PLUPERF.2PL 2PL
'Did you hear?'
Elements can be fronted for emphasis:
an gath my a wel
DEF cat 1SG PTCL see-PRES.3SG
'I see the cat.'
Sentences can also be constructed periphrastically using auxiliary verbs such as bos 'be, exist':
Yma ow kelwel ely
be-PRES-AFF.3SG PTCL call-VN Ely
'(He) is calling Ely.'
As Cornish lacks verbs such as 'to have', possession can also be indicated in this way:
'ma 'gen ehaz nyi dhen
be-PRES-AFF.3SG 1PL health 1PL to+us
'We have our health.'
Enquiring about possession is similar, using a different interrogative form of bos:
Hostes, ues boues dewhy?
Hostess be-PRES-INTERR-INDEF.3SG food to+you
'Hostess, have you [any] food?'
Nouns usually precede the adjective, unlike in English:
Benyn vas
woman good
'[A] good woman.'
Some adjectives usually precede the noun, however:
Drog den
evil man
'[An] evil man.'
The Celtic Congress and Celtic League are groups that advocate cooperation amongst the Celtic Nations in order to protect and promote Celtic languages and cultures, thus working in the interests of the Cornish language.
There have been films such as Hwerow Hweg, some televised, made entirely, or significantly, in the language. Some businesses use Cornish names.
Cornish has significantly and durably affected Cornwall's place-names as well as Cornish surnames, and knowledge of the language helps the understanding of these ancient meanings. Cornish names are adopted for children, pets, houses and boats.
There is Cornish literature, including spoken poetry and song, as well as traditional Cornish chants historically performed in marketplaces during religious holidays and public festivals and gatherings.
There are periodicals solely in the language, such as the monthly An Gannas, An Gowsva and An Garrick. BBC Radio Cornwall has a news broadcast in Cornish and sometimes has other programmes and features for learners and enthusiasts. Local newspapers such as the Western Morning News have articles in Cornish, and newspapers such as The Packet, The West Briton, and The Cornishman have also been known to have Cornish features. There is an online radio and TV service in Cornish called Radyo an Gernewegva, publishing a one-hour podcast each week, based on a magazine format. It includes music in Cornish as well as interviews and features.
The language has financial sponsorship from sources including the Millennium Commission. A number of language organisations exist in Cornwall: Agan Tavas (Our Language), the Cornish sub-group of the European Bureau for Lesser-Used Languages, Gorsedh Kernow, Kesva an Taves Kernewek (the Cornish Language Board) and Kowethas an Yeth Kernewek (the Cornish Language Fellowship).
There are ceremonies, some ancient, some modern, that use the language or are entirely in the language.
Though estimates of the number of Cornish speakers vary, there are thought to be around five hundred today. Currently Cornish is spoken at home, outside the home, in the workplace and at ritual ceremonies. Cornish is also being used in the arts.
Cornwall has had cultural events associated with the language, including the international Celtic Media Festival, hosted in St Ives in 1997. The Old Cornwall Society has promoted the use of the language at events and meetings. Two examples of ceremonies that are performed in both the English and Cornish languages are Crying the Neck and the annual mid-summer bonfires.
Since 1969, there have been three full performances of the Ordinalia, originally written in the Cornish language, the most recent of which took place at the plen-an-gwary in St Just in September 2021. While significantly adapted from the original, as well as using mostly English-speaking actors, the plays used sizable amounts of Cornish, including a character who spoke only in Cornish and another who spoke both English and Cornish. The event drew thousands over two weeks, also serving as a celebration of Celtic culture. The next production, scheduled for 2024, could, in theory, be entirely in Cornish, without English, if assisted by a professional linguist.
Outside of Cornwall, efforts to revive the Cornish language and culture through community events are occurring in Australia. A biennial festival, Kernewek Lowender, takes place in South Australia, where both cultural displays and language lessons are offered.
Cornish is taught in some schools; it was previously taught at degree level at the University of Wales, though the only existing course in the language at university level is as part of a course in Cornish studies at the University of Exeter. In March 2008 a course in the language was started as part of the Celtic Studies curriculum at the University of Vienna, Austria. The University of Cambridge offers courses in Cornish through its John Trim Resources Centre, which is part of the university's Language Centre. In addition, the Department of Anglo-Saxon, Norse and Celtic (which is part of the Faculty of English) also carries out research into the Cornish language.
In 2015 a university-level course aiming at encouraging and supporting practitioners working with young children to introduce the Cornish language into their settings was launched. The Cornish Language Practice Project (Early Years) is a level 4 course approved by Plymouth University and run at Cornwall College. The course is not a Cornish-language course but students will be assessed on their ability to use the Cornish language constructively in their work with young children. The course will cover such topics as Understanding Bilingualism, Creating Resources and Integrating Language and Play, but the focus of the language provision will be on Cornish. A non-accredited specialist Cornish-language course has been developed to run alongside the level 4 course for those who prefer tutor support to learn the language or develop their skills for use with young children.
Cornwall's first Cornish-language crèche, Skol dy'Sadorn Kernewek, was established in 2010 at Cornwall College, Camborne. The nursery teaches children aged between two and five years alongside their parents to ensure the language is also spoken in the home.
A number of dictionaries are available in the various orthographies, including A Learners' Cornish Dictionary in the Standard Written Form by Steve Harris (ed.), An Gerlyver Meur by Ken George, Gerlyver Sawsnek–Kernowek by Nicholas Williams and A Practical Dictionary of Modern Cornish by Richard Gendall. Course books include the three-part Skeul an Yeth series, Clappya Kernowek, Tavas a Ragadazow and Skeul an Tavas, as well as the more recent Bora Brav and Desky Kernowek. Several online dictionaries are now available, including one organised by An Akademi Kernewek in SWF.
Classes and conversation groups for adults are available at several locations in Cornwall as well as in London, Cardiff and Bristol. Since the onset of the COVID-19 pandemic a number of conversation groups entitled Yeth an Werin Warlinen have been held online, advertised through Facebook and other media. A surge in interest, not just from people in Cornwall but from all over the world, has meant that extra classes have been organised.
William Scawen produced a manuscript on the declining Cornish language, which he continually revised until he died in 1689, aged 89. He was one of the first to realise the language was dying out, and wrote detailed manuscripts which he started working on when he was 78. The only version that was ever published was a short first draft, but the final version, which he worked on until his death, is a few hundred pages long. At the same time a group of scholars led by John Keigwin (nephew of William Scawen) of Mousehole tried to preserve and further the Cornish language and chose to write in Cornish. One of their number, Nicholas Boson, tells how he had been discouraged from using Cornish to servants by his mother. This group left behind a large number of translations of parts of the Bible, proverbs and songs. They were contacted by the Welsh linguist Edward Lhuyd, who came to Cornwall to study the language.
Early Modern Cornish was the subject of a study published by Lhuyd in 1707, and differs from the medieval language in having a considerably simpler structure and grammar. Such differences included sound changes and more frequent use of auxiliary verbs. The medieval language also possessed two additional tenses for expressing past events and an extended set of possessive suffixes.
John Whitaker, the Manchester-born rector of Ruan Lanihorne, studied the decline of the Cornish language. In his 1804 work the Ancient Cathedral of Cornwall he concluded that: "[T]he English Liturgy, was not desired by the Cornish, but forced upon them by the tyranny of England, at a time when the English language was yet unknown in Cornwall. This act of tyranny was at once gross barbarity to the Cornish people, and a death blow to the Cornish language."
Robert Williams published the first comprehensive Cornish dictionary in 1865, the Lexicon Cornu-Britannicum. As a result of the discovery of additional ancient Cornish manuscripts, 2000 new words were added to the vocabulary by Whitley Stokes in A Cornish Glossary. William C. Borlase published Proverbs and Rhymes in Cornish in 1866 while A Glossary of Cornish Names was produced by John Bannister in the same year. Frederick Jago published his English–Cornish Dictionary in 1882.
In 2002, the Cornish language gained new recognition because of the European Charter for Regional and Minority Languages. At the same time, government provision came with the governmental basis of "New Public Management", which measures quantifiable results as a means of determining effectiveness. This put enormous pressure on finding a single orthography that could be used in unison. The revival of Cornish required extensive rebuilding, and the reconstructed Cornish orthographies may be considered versions of Cornish because they are not traditional sociolinguistic variations. In the middle-to-late twentieth century, the debate over Cornish orthographies grew more heated because several language groups received public funding, which led other groups to sense that favouritism was playing a role in the debate.
New Public Management (NPM), a governmental policymaking structure, has helped the Cornish language by managing the public life of the Cornish language and people. In 2007, the Cornish Language Partnership MAGA, representing separate divisions of government, was established to further the Cornish Language Developmental Plan. MAGA set up an Ad-Hoc Group, which resulted in three orthographies being presented. The Ad-Hoc Group's remit was to obtain consensus among the three orthographies and then develop a "single written form". The result was a new form of Cornish, which had to be natural for both new learners and skilled speakers.
In 1981, the Breton publishing house Preder edited Passyon agan arluth (Passion of our lord), a 15th-century Cornish poem. The first complete translation of the Bible into Cornish, translated from English, was published in 2011. Another Bible translation project, translating from the original languages, is underway. The New Testament and Psalms were posted online on YouVersion (Bible.com) and Bibles.org in July 2014 by the Bible Society.
A few small publishers produce books in Cornish which are stocked in some local bookshops, as well as in Cornish branches of Waterstones and WH Smith, although publications are becoming increasingly available on the Internet. Printed copies of these may also be found from Amazon. The Truro Waterstones hosts the annual Holyer an Gof literary awards, established by Gorsedh Kernow to recognise publications relating to Cornwall or in the Cornish language. In recent years, a number of Cornish translations of literature have been published, including Alice's Adventures in Wonderland (2009), Around the World in Eighty Days (2009), Treasure Island (2010), The Railway Children (2012), Hound of the Baskervilles (2012), The War of the Worlds (2012), The Wind in the Willows (2013), Three Men in a Boat (2013), Alice in Wonderland and Through the Looking-Glass (2014), and A Christmas Carol (which won the 2012 Holyer an Gof award for Cornish Language books), as well as original Cornish literature such as Jowal Lethesow (The Lyonesse Stone) by Craig Weatherhill. Literature aimed at children is also available, such as Ple'ma Spot? (Where's Spot?), Best Goon Brèn (The Beast of Bodmin Moor), three Topsy and Tim titles, two Tintin titles and Briallen ha'n Alyon (Briallen and the Alien), which won the 2015 Holyer an Gof award for Cornish Language books for children. In 2014 An Hobys, Nicholas Williams's translation of J. R. R. Tolkien's The Hobbit, was published.
An Gannas is a monthly magazine published entirely in the Cornish language. Members contribute articles on various subjects. The magazine is produced by Graham Sandercock who has been its editor since 1976.
In 1983 BBC Radio Cornwall started broadcasting around two minutes of Cornish every week. In 1987, however, they gave over 15 minutes of airtime on Sunday mornings for a programme called Kroeder Kroghen ('Holdall'), presented by John King, running until the early 1990s. It was eventually replaced with a five-minute news bulletin called An Nowodhow ('The News'). The bulletin was presented every Sunday evening for many years by Rod Lyon, then Elizabeth Stewart, and currently a team presents in rotation. Pirate FM ran short bulletins on Saturday lunchtimes from 1998 to 1999. In 2006, Matthew Clarke who had presented the Pirate FM bulletin, launched a web-streamed news bulletin called Nowodhow an Seythen ('Weekly News'), which in 2008 was merged into a new weekly magazine podcast Radyo an Gernewegva (RanG).
Cornish television shows have included a 1982 series by Westward Television, with each episode containing a three-minute lesson in Cornish, and An Canker-Seth, an eight-episode series produced by Television South West and broadcast between June and July 1984, later on S4C from May to July 1985, and as a schools programme in 1986. Also by Television South West were two bilingual programmes on Cornish culture called Nosweyth Lowen. In 2016 Kelly's Ice Cream of Bodmin introduced a light-hearted television commercial in the Cornish language, and this was repeated in 2017.
The first episode from the third season of the US television program Deadwood features a conversation between miners, purportedly in the Cornish language, but really in Irish. One of the miners is then shot by thugs working for businessman George Hearst who justify the murder by saying, "He come at me with his foreign gibberish."
A number of Cornish language films have been made, including Hwerow Hweg, a 2002 drama film written and directed by Hungarian film-maker Antal Kovacs and Trengellick Rising, a short film written and directed by Guy Potter.
Screen Cornwall works with Cornwall Council to commission a short film in the Cornish language each year, through their FylmK competition. Their website states: "FylmK is an annual contemporary Cornish language short film competition, producing an imaginative and engaging film, in any genre, from distinctive and exciting filmmakers".
A monthly half-hour online TV show began in 2017 called An Mis (The Month). It contained news items about cultural events and more mainstream news stories, all delivered in Cornish. It also ran a cookery segment called "Kegin Esther" ('Esther's Kitchen'). The programme has been out of production since March 2023.
English composer Peter Warlock wrote a Christmas carol in Cornish (setting words by Henry Jenner). The Cornish electronic musician Aphex Twin has used Cornish names for track titles, most notably on his Drukqs album.
Several traditional Cornish folk songs have been collected and can be sung to various tunes. These include "An Awhesyth", "Bro Goth agan Tasow", and "Delkiow Sivy".
In 2018, the singer Gwenno Saunders released an album in Cornish, entitled Le Kov, saying: "I speak Cornish with my son: if you're comfortable expressing yourself in a language, you want to share it."
The Cornish language features in the toponymy of Cornwall, with a significant contrast between English place-names prevalent in eastern Cornwall and Cornish place-names to the west of the Camel-Fowey river valleys, where English place-names are much less common. Hundreds of Cornish family names have an etymology in the Cornish language, the majority of which are derived from Cornish place-names. Long before the agreement of the Standard Written Form of Cornish in the 21st century, Late Cornish orthography in the Early Modern period usually followed Welsh to English transliteration, phonetically rendering C for K, I for Y, U for W, and Z for S. This meant that place names were adopted into English with spellings such as 'Porthcurno' and 'Penzance'; they are written Porth Kernow and Pen Sans in the Standard Written Form of Cornish, agreed upon in 2008. Likewise words such as Enys ('island') can be found spelled as Ince as at Ince Castle. These apparent mistransliterations can, however, reveal an insight into how names and places were actually pronounced, explaining, for example, how anglicised Launceston is still pronounced [ˈlansǝn] with emphasis on the first element, perhaps from Cornish Lann Stefan, though the Concise Oxford Dictionary of English Place-Names considers this unlikely.
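As a rough illustration of the letter correspondences just described (C for K, I for Y, U for W, Z for S), the sketch below applies them as a naive character map. Real anglicised spellings involved more than letter substitution, as the article's examples show, so the outputs are only approximations to compare against the attested forms.

```python
# A naive character map for the Late Cornish anglicising respellings
# described above: K -> C, Y -> I, W -> U, S -> Z. Vowel changes and
# other adaptations are ignored, so outputs only approximate the
# attested anglicised names.

ANGLICISED = {"k": "c", "y": "i", "w": "u", "s": "z"}

def anglicise(cornish: str) -> str:
    """Respell a Cornish name with the anglicising letter substitutions."""
    return "".join(ANGLICISED.get(ch, ch) for ch in cornish.lower())

print(anglicise("Kernow"))  # cernou -- cf. the -curno of Porthcurno
print(anglicise("Sans"))    # zanz   -- cf. the -zance of Penzance
print(anglicise("Enys"))    # eniz   -- cf. Ince, as at Ince Castle
```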
Examples of Cornish place names and surnames and their anglicised versions include those given above, such as Porth Kernow (Porthcurno), Pen Sans (Penzance) and Enys (Ince).
From the Universal Declaration of Human Rights:
From Bro Goth agan Tasow, the Cornish anthem: | [
{
"paragraph_id": 0,
"text": "Cornish (Standard Written Form: Kernewek or Kernowek; [kəɾˈnuːək]) is a Southwestern Brittonic language of the Celtic language family. It is a revived language, having become extinct as a living community language in Cornwall at the end of the 18th century. However, knowledge of Cornish, including speaking ability to a certain extent, continued to be passed on within families and by individuals, and a revival began in the early 20th century. The language has a growing number of second-language speakers, and a very small number of families now raise children to speak revived Cornish as a first language. Cornish is currently recognised under the European Charter for Regional or Minority Languages, and the language is often described as an important part of Cornish identity, culture and heritage.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Along with Welsh and Breton, Cornish is descended from the Common Brittonic language spoken throughout much of Great Britain before the English language came to dominate. For centuries, until it was pushed westwards by English, it was the main language of Cornwall, maintaining close links with its sister language Breton, with which it was mutually intelligible, perhaps even as long as Cornish continued to be spoken as a vernacular. Cornish continued to function as a common community language in parts of Cornwall until the mid 18th century. There is some evidence of knowledge of the language persisting into the 19th century, possibly almost overlapping the beginning of revival efforts.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A process to revive the language began in the early 20th century, and in 2010, UNESCO announced that its former classification of the language as \"extinct\" was \"no longer accurate.\" Since the revival of the language, some Cornish textbooks and works of literature have been published, and an increasing number of people are studying the language. Recent developments include Cornish music, independent films, and children's books. A small number of people in Cornwall have been brought up to be bilingual native speakers, and the language is taught in schools and appears on road signs. The first Cornish-language day care opened in 2010.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cornish is a Southwestern Brittonic language, a branch of the Insular Celtic section of the Celtic language family, which is a sub-family of the Indo-European language family. Brittonic also includes Welsh, Breton, Cumbric and possibly Pictish, the last two of which are extinct. Scottish Gaelic, Irish and Manx are part of the separate Goidelic branch of Insular Celtic.",
"title": "Classification"
},
{
"paragraph_id": 4,
"text": "Joseph Loth viewed Cornish and Breton as being two dialects of the same language, claiming that \"Middle Cornish is without doubt closer to Breton as a whole than the modern Breton dialect of Quiberon [Kiberen] is to that of Saint-Pol-de-Léon [Kastell-Paol].\" Also, Kenneth Jackson argued that it is almost certain that Cornish and Breton would have been mutually intelligible as long as Cornish was a living language, and that Cornish and Breton are especially closely related to each other and less closely related to Welsh.",
"title": "Classification"
},
{
"paragraph_id": 5,
"text": "Cornish evolved from the Common Brittonic spoken throughout Britain south of the Firth of Forth during the British Iron Age and Roman period. As a result of westward Anglo-Saxon expansion, the Britons of the southwest were separated from those in modern-day Wales and Cumbria, which Jackson links to the defeat of the Britons at the Battle of Deorham in about 577. The western dialects eventually evolved into modern Welsh and the now extinct Cumbric, while Southwestern Brittonic developed into Cornish and Breton, the latter as a result of emigration to parts of the continent, known as Brittany over the following centuries.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The area controlled by the southwestern Britons was progressively reduced by the expansion of Wessex over the next few centuries. During the Old Cornish (Kernewek Koth) period (800–1200), the Cornish-speaking area was largely coterminous with modern-day Cornwall, after the Saxons had taken over Devon in their south-westward advance, which probably was facilitated by a second migration wave to Brittany that resulted in the partial depopulation of Devon.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The earliest written record of the Cornish language comes from this period: a 9th-century gloss in a Latin manuscript of De Consolatione Philosophiae by Boethius, which used the words ud rocashaas. The phrase may mean \"it [the mind] hated the gloomy places\", or alternatively, as Andrew Breeze suggests, \"she hated the land\". Other sources from this period include the Saints' List, a list of almost fifty Cornish saints, the Bodmin manumissions, which is a list of manumittors and slaves, the latter with mostly Cornish names, and, more substantially, a Latin-Cornish glossary (the Vocabularium Cornicum or Cottonian Vocabulary), a Cornish translation of Ælfric of Eynsham's Latin-Old English Glossary, which is thematically arranged into several groups, such as the Genesis creation narrative, anatomy, church hierarchy, the family, names for various kinds of artisans and their tools, flora, fauna, and household items. The manuscript was widely thought to be in Old Welsh until the 18th century when it was identified as Cornish by Edward Lhuyd. Some Brittonic glosses in the 9th-century colloquy De raris fabulis were once identified as Old Cornish, but they are more likely Old Welsh, possibly influenced by a Cornish scribe. No single phonological feature distinguishes Cornish from both Welsh and Breton until the beginning of the assibilation of dental stops in Cornish, which is not found before the second half of the eleventh century, and it is not always possible to distinguish Old Cornish, Old Breton, and Old Welsh orthographically.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Cornish language continued to flourish well through the Middle Cornish (Kernewek Kres) period (1200–1600), reaching a peak of about 39,000 speakers in the 13th century, after which the number started to decline. This period provided the bulk of traditional Cornish literature, and was used to reconstruct the language during its revival. Most important is the Ordinalia, a cycle of three mystery plays, Origo Mundi, Passio Christi and Resurrexio Domini. Together these provide about 8,734 lines of text. The three plays exhibit a mixture of English and Brittonic influences, and, like other Cornish literature, may have been written at Glasney College near Penryn. From this period also are the hagiographical dramas Beunans Meriasek (The Life of Meriasek) and Bewnans Ke (The Life of Ke), both of which feature as an antagonist the villainous and tyrannical King Tewdar (or Teudar), a historical medieval king in Armorica and Cornwall, who, in these plays, has been interpreted as a lampoon of either of the Tudor kings Henry VII or Henry VIII.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Others are the Charter Fragment, the earliest known continuous text in the Cornish language, apparently part of a play about a medieval marriage, and Pascon agan Arluth (The Passion of Our Lord), a poem probably intended for personal worship, were written during this period, probably in the second half of the 14th century. Another important text, the Tregear Homilies, was realized to be Cornish in 1949, having previously been incorrectly classified as Welsh. It is the longest text in the traditional Cornish language, consisting of around 30,000 words of continuous prose. This text is a late 16th century translation of twelve of Bishop Bonner's thirteen homilies by a certain John Tregear, tentatively identified as a vicar of St Allen from Crowan, and has an additional catena, Sacrament an Alter, added later by his fellow priest, Thomas Stephyn. In the reign of Henry VIII, an account was given by Andrew Boorde in his 1542 Boke of the Introduction of Knowledge. He states, \"In Cornwall is two speches, the one is naughty Englysshe, and the other is Cornysshe speche. And there be many men and women the which cannot speake one worde of Englysshe, but all Cornyshe.\"",
"title": "History"
},
{
"paragraph_id": 10,
"text": "When Parliament passed the Act of Uniformity 1549, which established the 1549 edition of the English Book of Common Prayer as the sole legal form of worship in England, including Cornwall, people in many areas of Cornwall did not speak or understand English. The passing of this Act was one of the causes of the Prayer Book Rebellion (which may also have been influenced by the retaliation of the English after the failed Cornish Rebellion of 1497), with \"the commoners of Devonshyre and Cornwall\" producing a manifesto demanding a return to the old religious services and included an article that concluded, \"and so we the Cornyshe men (whereof certen of us understande no Englysh) utterly refuse thys newe Englysh.\" In response to their articles, the government spokesman (either Philip Nichols or Nicholas Udall) wondered why they did not just ask the king for a version of the liturgy in their own language. Archbishop Thomas Cranmer asked why the Cornishmen should be offended by holding the service in English, when they had before held it in Latin, which even fewer of them could understand. Anthony Fletcher points out that this rebellion was primarily motivated by religious and economic, rather than linguistic, concerns. The rebellion prompted a heavy-handed response from the government, and 5,500 people died during the fighting and the rebellion's aftermath. Government officials then directed troops under the command of Sir Anthony Kingston to carry out pacification operations throughout the West Country. Kingston subsequently ordered the executions of numerous individuals suspected of involvement with the rebellion as part of the post-rebellion reprisals.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The rebellion eventually proved a turning-point for the Cornish language, as the authorities came to associate it with sedition and \"backwardness\". This proved to be one of the reasons why the Book of Common Prayer was never translated into Cornish (unlike Welsh), as proposals to do so were suppressed in the rebellion's aftermath. The failure to translate the Book of Common Prayer into Cornish led to the language's rapid decline during the 16th and 17th centuries. Peter Berresford Ellis cites the years 1550–1650 as a century of immense damage for the language, and its decline can be traced to this period. In 1680 William Scawen wrote an essay describing 16 reasons for the decline of Cornish, among them the lack of a distinctive Cornish alphabet, the loss of contact between Cornwall and Brittany, the cessation of the miracle plays, loss of records in the Civil War, lack of a Cornish Bible and immigration to Cornwall. Mark Stoyle, however, has argued that the 'glotticide' of the Cornish language was mainly a result of the Cornish gentry adopting English to dissociate themselves from the reputation for disloyalty and rebellion associated with the Cornish language since the 1497 uprising.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "By the middle of the 17th century, the language had retreated to Penwith and Kerrier, and transmission of the language to new generations had almost entirely ceased. In his Survey of Cornwall, published in 1602, Richard Carew writes:",
"title": "History"
},
{
"paragraph_id": 13,
"text": "[M]ost of the inhabitants can speak no word of Cornish, but very few are ignorant of the English; and yet some so affect their own, as to a stranger they will not speak it; for if meeting them by chance, you inquire the way, or any such matter, your answer shall be, \"Meea navidna caw zasawzneck,\" \"I [will] speak no Saxonage.\"",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Late Cornish (Kernewek Diwedhes) period from 1600 to about 1800 has a less substantial body of literature than the Middle Cornish period, but the sources are more varied in nature, including songs, poems about fishing and curing pilchards, and various translations of verses from the Bible, the Ten Commandments, the Lord's Prayer and the Creed. Edward Lhuyd's Archaeologia Britannica, which was mainly recorded in the field from native speakers in the early 1700s, and his unpublished field notebook are seen as important sources of Cornish vocabulary, some of which are not found in any other source. Archaeologia Britannica also features a complete version of a traditional folk tale, John of Chyanhor, a short story about a man from St Levan who goes far to the east seeking work, eventually returning home after three years to find that his wife has borne him a child during his absence.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1776, William Bodinar, who describes himself as having learned Cornish from old fishermen when he was a boy, wrote a letter to Daines Barrington in Cornish, with an English translation, which was probably the last prose written in the traditional language. In his letter, he describes the sociolinguistics of the Cornish language at the time, stating that there are no more than four or five old people in his village who can still speak Cornish, concluding with the remark that Cornish is no longer known by young people. However, the last recorded traditional Cornish literature may have been the Cranken Rhyme, a corrupted version of a verse or song published in the late 19th century by John Hobson Matthews, recorded orally by John Davey (or Davy) of Boswednack, of uncertain date but probably originally composed during the last years of the traditional language. Davey had traditional knowledge of at least some Cornish. John Kelynack (1796–1885), a fisherman of Newlyn, was sought by philologists for old Cornish words and technical phrases in the 19th century.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "It is difficult to state with certainty when Cornish ceased to be spoken, due to the fact that its last speakers were of relatively low social class and that the definition of what constitutes \"a living language\" is not clear cut. Peter Pool argues that by 1800 nobody was using Cornish as a daily language and no evidence exists of anyone capable of conversing in the language at that date. However, passive speakers, semi-speakers and rememberers, who retain some competence in the language despite not being fluent nor using the language in daily life, generally survive even longer.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The traditional view that Dolly Pentreath (1692–1777) was the last native speaker of Cornish has been challenged, and in the 18th and 19th centuries there was academic interest in the language and in attempting to find the last speaker of Cornish. It has been suggested that, whereas Pentreath was probably the last monolingual speaker, the last native speaker may have been John Davey of Zennor, who died in 1891. However, although it is clear Davey possessed some traditional knowledge in addition to having read books on Cornish, accounts differ of his competence in the language. Some contemporaries stated he was able to converse on certain topics in Cornish whereas others affirmed they had never heard him claim to be able to do so. Robert Morton Nance, who reworked and translated Davey's Cranken Rhyme, remarked, \"There can be no doubt, after the evidence of this rhyme, of what there was to lose by neglecting John Davey.\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The search for the last speaker is hampered by a lack of transcriptions or audio recordings, so that it is impossible to tell from this distance whether the language these people were reported to be speaking was Cornish, or English with a heavy Cornish substratum, nor what their level of fluency was. Nevertheless this academic interest, along with the beginning of the Celtic Revival in the late 19th century, provided the groundwork for a Cornish language revival movement.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Notwithstanding the uncertainty over who was the last speaker of Cornish, researchers have posited the following numbers for the prevalence of the language between 1050 and 1800.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1904, the Celtic language scholar and Cornish cultural activist Henry Jenner published A Handbook of the Cornish Language. The publication of this book is often considered to be the point at which the revival movement started. Jenner wrote about the Cornish language in 1905, \"one may fairly say that most of what there was of it has been preserved, and that it has been continuously preserved, for there has never been a time when there were not some Cornishmen who knew some Cornish.\"",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The revival focused on reconstructing and standardising the language, including coining new words for modern concepts, and creating educational material in order to teach Cornish to others. In 1929 Robert Morton Nance published his Unified Cornish (Kernewek Unys) system, based on the Middle Cornish literature while extending the attested vocabulary with neologisms and forms based on Celtic roots also found in Breton and Welsh, publishing a dictionary in 1938. Nance's work became the basis of revived Cornish (Kernewek Dasserghys) for most of the 20th century. During the 1970s, criticism of Nance's system, including the inconsistent orthography and unpredictable correspondence between spelling and pronunciation, as well as on other grounds such as the archaic basis of Unified and a lack of emphasis on the spoken language, resulted in the creation of several rival systems. In the 1980s, Ken George published a new system, Kernewek Kemmyn ('Common Cornish'), based on a reconstruction of the phonological system of Middle Cornish, but with an approximately morphophonemic orthography. It was subsequently adopted by the Cornish Language Board and was the written form used by a reported 54.5% of all Cornish language users according to a survey in 2008, but was heavily criticised for a variety of reasons by Jon Mills and Nicholas Williams, including making phonological distinctions that they state were not made in the traditional language c. 1500, failing to make distinctions that they believe were made in the traditional language at this time, and the use of an orthography that deviated too far from the traditional texts and Unified Cornish. Also during this period, Richard Gendall created his Modern Cornish system (also known as Revived Late Cornish), which used Late Cornish as a basis, and Nicholas Williams published a revised version of Unified; however neither of these systems gained the popularity of Unified or Kemmyn.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The revival entered a period of factionalism and public disputes, with each orthography attempting to push the others aside. By the time that Cornish was recognised by the UK government under the European Charter for Regional or Minority Languages in 2002, it had become recognised that the existence of multiple orthographies was unsustainable with regards to using the language in education and public life, as none had achieved a wide consensus. A process of unification was set about which resulted in the creation of the public-body Cornish Language Partnership in 2005 and agreement on a Standard Written Form in 2008. In 2010 a new milestone was reached when UNESCO altered its classification of Cornish, stating that its previous label of \"extinct\" was no longer accurate.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Speakers of Cornish reside primarily in Cornwall, which has a population of 563,600 (2017 estimate). There are also some speakers living outside Cornwall, particularly in the countries of the Cornish diaspora, as well as in other Celtic nations. Estimates of the number of Cornish speakers vary according to the definition of a speaker, and is difficult to determine accurately due to the individualised nature of language take-up. Nevertheless, there is recognition that the number of Cornish speakers is growing. From before the 1980s to the end of the 20th century there was a sixfold increase in the number of speakers to around 300. One figure for the number of people who know a few basic words, such as knowing that \"Kernow\" means \"Cornwall\", was 300,000; the same survey gave the number of people able to have simple conversations as 3,000.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 24,
"text": "The Cornish Language Strategy project commissioned research to provide quantitative and qualitative evidence for the number of Cornish speakers: due to the success of the revival project it was estimated that 2,000 people were fluent (surveyed in spring 2008), an increase from the estimated 300 people who spoke Cornish fluently suggested in a study by Kenneth MacKinnon in 2000.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 25,
"text": "Jenefer Lowe of the Cornish Language Partnership said in an interview with the BBC in 2010 that there were around 300 fluent speakers. Bert Biscoe, a councillor and bard, in a statement to the Western Morning News in 2014 said there were \"several hundred fluent speakers\". Cornwall Council estimated in 2015 that there were 300–400 fluent speakers who used the language regularly, with 5,000 people having a basic conversational ability in the language.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 26,
"text": "A report on the 2011 Census published in 2013 by the Office for National Statistics placed the number of speakers at somewhere between 325 and 625. In 2017 the ONS released data based on the 2011 Census that placed the number of speakers at 557 people in England and Wales who declared Cornish to be their main language, 464 of whom lived in Cornwall. The 2021 census listed the number of Cornish speakers at 563.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 27,
"text": "A study that appeared in 2018 established the number of people in Cornwall with at least minimal skills in Cornish, such as the use of some words and phrases, to be more than 3,000, including around 500 estimated to be fluent.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 28,
"text": "The Institute of Cornish Studies at the University of Exeter is working with the Cornish Language Partnership to study the Cornish language revival of the 20th century, including the growth in number of speakers.",
"title": "Geographic distribution and number of speakers"
},
{
"paragraph_id": 29,
"text": "In 2002, Cornish was recognized by the UK government under Part II of the European Charter for Regional or Minority Languages. UNESCO's Atlas of World Languages classifies Cornish as \"critically endangered\". UNESCO has said that a previous classification of 'extinct' \"does not reflect the current situation for Cornish\" and is \"no longer accurate\".",
"title": "Legal status and recognition"
},
{
"paragraph_id": 30,
"text": "Cornwall Council's policy is to support the language, in line with the European Charter. A motion was passed in November 2009 in which the council promoted the inclusion of Cornish, as appropriate and where possible, in council publications and on signs. This plan has drawn some criticism. In October 2015, Cornwall Council announced that staff would be encouraged to use \"basic words and phrases\" in Cornish when dealing with the public. In 2021 Cornwall Council prohibited a marriage ceremony from being conducted in Cornish as the Marriage Act 1949 only allowed for marriage ceremonies in English or Welsh.",
"title": "Legal status and recognition"
},
{
"paragraph_id": 31,
"text": "In 2014, the Cornish people were recognised by the UK Government as a national minority under the Framework Convention for the Protection of National Minorities. The FCNM provides certain rights and protections to a national minority with regard to their minority language.",
"title": "Legal status and recognition"
},
{
"paragraph_id": 32,
"text": "In 2016, British government funding for the Cornish language ceased, and responsibility transferred to Cornwall Council.",
"title": "Legal status and recognition"
},
{
"paragraph_id": 33,
"text": "Old Cornish",
"title": "Orthography"
},
{
"paragraph_id": 34,
"text": "Until around the middle of the 11th century, Old Cornish scribes used a traditional spelling system shared with Old Breton and Old Welsh, based on the pronunciation of British Latin. By the time of the Vocabularium Cornicum, usually dated to around 1100, Old English spelling conventions, such as the use of thorn (Þ, þ) and eth (Ð, ð) for dental fricatives, and wynn (Ƿ, ƿ) for /w/, had come into use, allowing documents written at this time to be distinguished from Old Welsh, which rarely uses these characters, and Old Breton, which does not use them at all. Old Cornish features include using initial ⟨ch⟩, ⟨c⟩, or ⟨k⟩ for /k/, and, in internal and final position, ⟨p⟩, ⟨t⟩, ⟨c⟩, ⟨b⟩, ⟨d⟩, and ⟨g⟩ are generally used for the phonemes /b/, /d/, /ɡ/, /β/, /ð/, and /ɣ/ respectively, meaning that the results of Brittonic lenition are not usually apparent from the orthography at this time.",
"title": "Orthography"
},
{
"paragraph_id": 35,
"text": "Middle Cornish",
"title": "Orthography"
},
{
"paragraph_id": 36,
"text": "Middle Cornish orthography has a significant level of variation, and shows influence from Middle English spelling practices. Yogh (Ȝ ȝ) is used in certain Middle Cornish texts, where it is used to represent a variety of sounds, including the dental fricatives /θ/ and /ð/, a usage which is unique to Middle Cornish and is never found in Middle English. Middle Cornish scribes tend to use ⟨c⟩ for /k/ before back vowels, and ⟨k⟩ for /k/ before front vowels, though this is not always true, and this rule is less consistent in certain texts. Middle Cornish scribes almost universally use ⟨wh⟩ to represent /ʍ/ (or /hw/), as in Middle English. Middle Cornish, especially towards the end of this period, tends to use orthographic ⟨g⟩ and ⟨b⟩ in word-final position in stressed monosyllables, and ⟨k⟩ and ⟨p⟩ in word-final position in unstressed final syllables, to represent the reflexes of late Brittonic /ɡ/ and /b/, respectively.",
"title": "Orthography"
},
{
"paragraph_id": 37,
"text": "Late Cornish",
"title": "Orthography"
},
{
"paragraph_id": 38,
"text": "Written sources from this period are often spelled following English spelling conventions since many of the writers of the time had not been exposed to Middle Cornish texts or the Cornish orthography within them. Around 1700, Edward Lhuyd visited Cornwall, introducing his own partly phonetic orthography that he used in his Archaeologia Britannica, which was adopted by some local writers, leading to the use of some Lhuydian features such as the use of circumflexes to denote long vowels, ⟨k⟩ before front vowels, word-final ⟨i⟩, and the use of ⟨dh⟩ to represent the voiced dental fricative /ð/.",
"title": "Orthography"
},
{
"paragraph_id": 39,
"text": "Revived Cornish",
"title": "Orthography"
},
{
"paragraph_id": 40,
"text": "After the publication of Jenner's Handbook of the Cornish Language, the earliest revivalists used Jenner's orthography, which was influenced by Lhuyd's system. This system was abandoned following the development by Nance of a \"unified spelling\", later known as Unified Cornish, a system based on a standardization of the orthography of the early Middle Cornish texts. Nance's system was used by almost all Revived Cornish speakers and writers until the 1970s. Criticism of Nance's system, particularly the relationship of spelling to sounds and the phonological basis of Unified Cornish, resulted in rival orthographies appearing by the early 1980s, including Gendal's Modern Cornish, based on Late Cornish native writers and Lhuyd, and Ken George's Kernewek Kemmyn, a mainly morphophonemic orthography based on George's reconstruction of Middle Cornish c. 1500, which features a number of orthographic, and phonological, distinctions not found in Unified Cornish. Kernewek Kemmyn is characterised by the use of universal ⟨k⟩ for /k/ (instead of ⟨c⟩ before back vowels as in Unified); ⟨hw⟩ for /hw/, instead of ⟨wh⟩ as in Unified; and ⟨y⟩, ⟨oe⟩, and ⟨eu⟩ to represent the phonemes /ɪ/, /o/, and /œ/ respectively, which are not found in Unified Cornish. Criticism of all of these systems, especially Kernewek Kemmyn, by Nicolas Williams, resulted in the creation of Unified Cornish Revised, a modified version of Nance's orthography, featuring: an additional phoneme not distinguished by Nance, \"ö in German schön\", represented in the UCR orthography by ⟨ue⟩; replacement of ⟨y⟩ with ⟨e⟩ in many words; internal ⟨h⟩ rather than ⟨gh⟩; and use of final ⟨b⟩, ⟨g⟩, and ⟨dh⟩ in stressed monosyllables. A Standard Written Form, intended as a compromise orthography for official and educational purposes, was introduced in 2008, although a number of previous orthographic systems remain in use and, in response to the publication of the SWF, another new orthography, Kernowek Standard, was created, mainly by Nicholas Williams and Michael Everson, which is proposed as an amended version of the Standard Written Form.",
"title": "Orthography"
},
{
"paragraph_id": 41,
"text": "The phonological system of Old Cornish, inherited from Proto-Southwestern Brittonic and originally differing little from Old Breton and Old Welsh, underwent various changes during its Middle and Late phases, eventually resulting in several characteristics not found in the other Brittonic languages. The first sound change to distinguish Cornish from both Breton and Welsh, the assibilation of the dental stops /t/ and /d/ in medial and final position, had begun by the time of the Vocabularium Cornicum, c. 1100 or earlier. This change, and the subsequent, or perhaps dialectical, palatalization (or occasional rhotacization in a few words) of these sounds, results in orthographic forms such as Middle Cornish tas 'father', Late Cornish tâz (Welsh tad), Middle Cornish cresy 'believe', Late Cornish cregy (Welsh credu), and Middle Cornish gasa 'leave', Late Cornish gara (Welsh gadael). A further characteristic sound change, pre-occlusion, occurred during the sixteenth century, resulting in the nasals /nn/ and /mm/ being realised as [ᵈn] and [ᵇm] respectively in stressed syllables, and giving Late Cornish forms such as pedn 'head' (Welsh pen) and kabm 'crooked' (Welsh cam).",
"title": "Phonology"
},
{
"paragraph_id": 42,
"text": "As a revitalised language, the phonology of contemporary spoken Cornish is based on a number of sources, including various reconstructions of the sound system of middle and early modern Cornish based on an analysis of internal evidence such as the orthography and rhyme used in the historical texts, comparison with the other Brittonic languages Breton and Welsh, and the work of the linguist Edward Lhuyd, who visited Cornwall in 1700 and recorded the language in a partly phonetic orthography.",
"title": "Phonology"
},
{
"paragraph_id": 43,
"text": "Cornish is a Celtic language, and the majority of its vocabulary, when usage frequency is taken into account, at every documented stage of its history is inherited direct from Proto-Celtic, either through the ancestral Proto-Indo-European language, or through vocabulary borrowed from unknown substrate language(s) at some point in the development of the Celtic proto-language from PIE. Examples of the PIE > PCelt. development are various terms related to kinship and people, including mam 'mother', modereb 'aunt, mother's sister', huir 'sister', mab 'son', gur 'man', den 'person, human', and tus 'people', and words for parts of the body, including lof 'hand' and dans 'tooth'. Inherited adjectives with an Indo-European etymology include newyth 'new', ledan 'broad, wide', rud 'red', hen 'old', iouenc 'young', and byw 'alive, living'.",
"title": "Vocabulary"
},
{
"paragraph_id": 44,
"text": "Several Celtic or Brittonic words cannot be reconstructed to Proto-Indo-European, and are suggested to have been borrowed from unknown substrate language(s) at an early stage, such as Proto-Celtic or Proto-Brittonic. Proposed examples in Cornish include coruf 'beer' and broch 'badger'.",
"title": "Vocabulary"
},
{
"paragraph_id": 45,
"text": "Other words in Cornish inherited direct from Proto-Celtic include a number of toponyms, for example bre 'hill', din 'fort', and bro 'land', and a variety of animal names such as logoden 'mouse', mols 'wether', mogh 'pigs', and tarow 'bull'.",
"title": "Vocabulary"
},
{
"paragraph_id": 46,
"text": "During the Roman occupation of Britain a large number (around 800) of Latin loan words entered the vocabulary of Common Brittonic, which subsequently developed in a similar way to the inherited lexicon. These include brech 'arm' (from British Latin bracc(h)ium), ruid 'net' (from retia), and cos 'cheese' (from caseus).",
"title": "Vocabulary"
},
{
"paragraph_id": 47,
"text": "A substantial number of loan words from English and to a lesser extent French entered the Cornish language throughout its history. Whereas only 5% of the vocabulary of the Old Cornish Vocabularium Cornicum is thought to be borrowed from English, and only 10% of the lexicon of the early modern Cornish writer William Rowe, around 42% of the vocabulary of the whole Cornish corpus is estimated to be English loan words, without taking frequency into account. (However when frequency is taken into account this figure for the entire corpus drops to 8%.) The many English loanwords, some of which were sufficiently well assimilated to acquire native Cornish verbal or plural suffixes or be affected by the mutation system, include redya 'to read', onderstondya 'to understand', ford 'way', hos 'boot' and creft 'art'.",
"title": "Vocabulary"
},
{
"paragraph_id": 48,
"text": "Many Cornish words, such as mining and fishing terms, are specific to the culture of Cornwall. Examples include atal 'mine waste' and beetia 'to mend fishing nets'. Foogan and hogan are different types of pastries. Troyl is a 'traditional Cornish dance get-together' and Furry is a specific kind of ceremonial dance that takes place in Cornwall. Certain Cornish words may have several translation equivalents in English, so for instance lyver may be translated into English as either 'book' or 'volume' and dorn can mean either 'hand' or 'fist'. As in other Celtic languages, Cornish lacks a number of verbs commonly found in other languages, including modals and psych-verbs; examples are 'have', 'like', 'hate', 'prefer', 'must/have to' and 'make/compel to'. These functions are instead fulfilled by periphrastic constructions involving a verb and various prepositional phrases.",
"title": "Vocabulary"
},
{
"paragraph_id": 49,
"text": "The grammar of Cornish shares with other Celtic languages a number of features which, while not unique, are unusual in an Indo-European context. The grammatical features most unfamiliar to English speakers of the language are the initial consonant mutations, the verb–subject–object word order, inflected prepositions, fronting of emphasised syntactic elements and the use of two different forms for 'to be'.",
"title": "Grammar"
},
{
"paragraph_id": 50,
"text": "Cornish has initial consonant mutation: The first sound of a Cornish word may change according to grammatical context. As in Breton, there are four types of mutation in Cornish (compared with three in Welsh, two in Irish and Manx and one in Scottish Gaelic). These changes apply to only certain letters (sounds) in particular grammatical contexts, some of which are given below:",
"title": "Grammar"
},
{
"paragraph_id": 51,
"text": "Cornish has no indefinite article. Porth can either mean 'harbour' or 'a harbour'. In certain contexts, unn can be used, with the meaning 'a certain, a particular', e.g. unn porth 'a certain harbour'. There is, however, a definite article an 'the', which is used for all nouns regardless of their gender or number, e.g. an porth 'the harbour'.",
"title": "Grammar"
},
{
"paragraph_id": 52,
"text": "Cornish nouns belong to one of two grammatical genders, masculine and feminine, but are not inflected for case. Nouns may be singular or plural. Plurals can be formed in various ways, depending on the noun:",
"title": "Grammar"
},
{
"paragraph_id": 53,
"text": "Some nouns are collective or mass nouns. Singulatives can be formed from collective nouns by the addition of the suffix ⫽-enn⫽ (SWF -en):",
"title": "Grammar"
},
{
"paragraph_id": 54,
"text": "Verbs are conjugated for person, number, tense and mood. For example, the verbal noun gweles 'see' has derived forms such as 1st person singular present indicative gwelav 'I see', 3rd person plural imperfect indicative gwelens 'they saw', and 2nd person singular imperative gwel 'see!' Grammatical categories can be indicated either by inflection of the main verb, or by the use of auxiliary verbs such as bos 'be' or gul 'do'.",
"title": "Grammar"
},
{
"paragraph_id": 55,
"text": "Cornish uses inflected (or conjugated) prepositions: Prepositions are inflected for person and number. For example, gans (with, by) has derived forms such as genev 'with me', ganso 'with him', and genowgh 'with you (plural)'.",
"title": "Grammar"
},
{
"paragraph_id": 56,
"text": "Word order in Cornish is somewhat fluid and varies depending on several factors such as the intended element to be emphasised and whether a statement is negative or affirmative. In a study on Cornish word order in the play Bewnans Meriasek (c. 1500), Ken George has argued that the most common word order in main clauses in Middle Cornish was, in affirmative statements, SVO, with the verb in the third person singular:",
"title": "Grammar"
},
{
"paragraph_id": 57,
"text": "My a wel an gath\n1SG PTCL see-PRES.3SG DEF cat\n'I see the cat.'",
"title": "Grammar"
},
{
"paragraph_id": 70,
"text": "When affirmative statements are in the less common VSO order, they usually begin with an adverb or other element, followed by an affirmative particle, with the verb inflected for person and tense:",
"title": "Grammar"
},
{
"paragraph_id": 71,
"text": "Ev a grys y hwelav an gath\n3SG.M PTCL believe-PRES.3SG PTCL see-PRES.1SG DEF cat\n'He believes that I see the cat.'",
"title": "Grammar"
},
{
"paragraph_id": 88,
"text": "In negative statements, the order was usually VSO, with an initial negative particle and the verb conjugated for person and tense:",
"title": "Grammar"
},
{
"paragraph_id": 89,
"text": "Ny welav an gath\nNEG see-PRES.1SG DEF cat\n'I do not see the cat.'",
"title": "Grammar"
},
{
"paragraph_id": 100,
"text": "A similar structure is used for questions:",
"title": "Grammar"
},
{
"paragraph_id": 101,
"text": "a glewsyugh why?\nPTCL hear-PLUPERF.2PL 2PL\n'Did you hear?'",
"title": "Grammar"
},
{
"paragraph_id": 110,
"text": "Elements can be fronted for emphasis:",
"title": "Grammar"
},
{
"paragraph_id": 111,
"text": "an gath my a wel\nDEF cat 1SG PTCL see-PRES.3SG\n'I see the cat.'",
"title": "Grammar"
},
{
"paragraph_id": 124,
"text": "Sentences can also be constructed periphrastically using auxiliary verbs such as bos 'be, exist':",
"title": "Grammar"
},
{
"paragraph_id": 125,
"text": "Yma ow kelwel ely\nbe-PRES-AFF.3SG PTCL call-VN Ely\n'(He) is calling Ely.'",
"title": "Grammar"
},
{
"paragraph_id": 136,
"text": "As Cornish lacks verbs such as 'to have', possession can also be indicated in this way:",
"title": "Grammar"
},
{
"paragraph_id": 137,
"text": "'ma 'gen ehaz nyi dhen\nbe-PRES-AFF.3SG 1PL health 1PL to+us\n'We have our health.'",
"title": "Grammar"
},
{
"paragraph_id": 150,
"text": "Enquiring about possession is similar, using a different interrogative form of bos:",
"title": "Grammar"
},
{
"paragraph_id": 151,
"text": "Hostes, ues boues dewhy?\nHostess be-PRES-INTERR-INDEF.3SG food to+you\n'Hostess, have you [any] food?'",
"title": "Grammar"
},
{
"paragraph_id": 162,
"text": "Nouns usually precede the adjective, unlike in English:",
"title": "Grammar"
},
{
"paragraph_id": 163,
"text": "Benyn vas\nwoman good\n'[A] good woman.'",
"title": "Grammar"
},
{
"paragraph_id": 170,
"text": "Some adjectives usually precede the noun, however:",
"title": "Grammar"
},
{
"paragraph_id": 171,
"text": "Drog den\nevil man\n'[An] evil man.'",
"title": "Grammar"
},
{
"paragraph_id": 178,
"text": "The Celtic Congress and Celtic League are groups that advocate cooperation amongst the Celtic Nations in order to protect and promote Celtic languages and cultures, thus working in the interests of the Cornish language.",
"title": "Culture"
},
{
"paragraph_id": 179,
"text": "There have been films such as Hwerow Hweg, some televised, made entirely, or significantly, in the language. Some businesses use Cornish names.",
"title": "Culture"
},
{
"paragraph_id": 180,
"text": "Cornish has significantly and durably affected Cornwall's place-names as well as Cornish surnames and knowledge of the language helps the understanding of these ancient meanings. Cornish names are adopted for children, pets, houses and boats.",
"title": "Culture"
},
{
"paragraph_id": 181,
"text": "There is Cornish literature, including spoken poetry and song, as well as traditional Cornish chants historically performed in marketplaces during religious holidays and public festivals and gatherings.",
"title": "Culture"
},
{
"paragraph_id": 182,
"text": "There are periodicals solely in the language, such as the monthly An Gannas, An Gowsva and An Garrick. BBC Radio Cornwall has a news broadcast in Cornish and sometimes has other programmes and features for learners and enthusiasts. Local newspapers such as the Western Morning News have articles in Cornish, and newspapers such as The Packet, The West Briton, and The Cornishman have also been known to have Cornish features. There is an online radio and TV service in Cornish called Radyo an Gernewegva, publishing a one-hour podcast each week, based on a magazine format. It includes music in Cornish as well as interviews and features.",
"title": "Culture"
},
{
"paragraph_id": 183,
"text": "The language has financial sponsorship from sources including the Millennium Commission. A number of language organisations exist in Cornwall: Agan Tavas (Our Language), the Cornish sub-group of the European Bureau for Lesser-Used Languages, Gorsedh Kernow, Kesva an Taves Kernewek (the Cornish Language Board) and Kowethas an Yeth Kernewek (the Cornish Language Fellowship).",
"title": "Culture"
},
{
"paragraph_id": 184,
"text": "There are ceremonies, some ancient, some modern, that use the language or are entirely in the language.",
"title": "Culture"
},
{
"paragraph_id": 185,
"text": "Though estimates of the number of Cornish speakers vary, there are thought to be around five hundred today. Currently Cornish is spoken at home, outside the home, in the workplace and at ritual ceremonies. Cornish is also being used in the arts.",
"title": "Culture"
},
{
"paragraph_id": 186,
"text": "Cornwall has had cultural events associated with the language, including the international Celtic Media Festival, hosted in St Ives in 1997. The Old Cornwall Society has promoted the use of the language at events and meetings. Two examples of ceremonies that are performed in both the English and Cornish languages are Crying the Neck and the annual mid-summer bonfires.",
"title": "Culture"
},
{
"paragraph_id": 187,
"text": "Since 1969, there have been three full performances of the Ordinalia, originally written in the Cornish language, the most recent of which took place at the plen-an-gwary in St Just in September 2021. While significantly adapted from the original, as well as using mostly English-speaking actors, the plays used sizable amounts of Cornish, including a character who spoke only in Cornish and another who spoke both English and Cornish. The event drew thousands over two weeks, also serving as a celebration of Celtic culture. The next production, scheduled for 2024, could, in theory, be entirely in Cornish, without English, if assisted by a professional linguist.",
"title": "Culture"
},
{
"paragraph_id": 188,
"text": "Outside of Cornwall, efforts to revive the Cornish language and culture through community events are occurring in Australia. A biennial festival, Kernewek Lowender, takes place in South Australia, where both cultural displays and language lessons are offered.",
"title": "Culture"
},
{
"paragraph_id": 189,
"text": "Cornish is taught in some schools; it was previously taught at degree level at the University of Wales, though the only existing course in the language at university level is as part of a course in Cornish studies at the University of Exeter. In March 2008 a course in the language was started as part of the Celtic Studies curriculum at the University of Vienna, Austria. The University of Cambridge offers courses in Cornish through its John Trim Resources Centre, which is part of the university's Language Centre. In addition, the Department of Anglo-Saxon, Norse and Celtic (which is part of the Faculty of English) also carries out research into the Cornish language.",
"title": "Culture"
},
{
"paragraph_id": 190,
"text": "In 2015 a university-level course aiming at encouraging and supporting practitioners working with young children to introduce the Cornish language into their settings was launched. The Cornish Language Practice Project (Early Years) is a level 4 course approved by Plymouth University and run at Cornwall College. The course is not a Cornish-language course but students will be assessed on their ability to use the Cornish language constructively in their work with young children. The course will cover such topics as Understanding Bilingualism, Creating Resources and Integrating Language and Play, but the focus of the language provision will be on Cornish. A non-accredited specialist Cornish-language course has been developed to run alongside the level 4 course for those who prefer tutor support to learn the language or develop their skills for use with young children.",
"title": "Culture"
},
{
"paragraph_id": 191,
"text": "Cornwall's first Cornish-language crèche, Skol dy'Sadorn Kernewek, was established in 2010 at Cornwall College, Camborne. The nursery teaches children aged between two and five years alongside their parents to ensure the language is also spoken in the home.",
"title": "Culture"
},
{
"paragraph_id": 192,
"text": "A number of dictionaries are available in the various orthographies, including A Learners' Cornish Dictionary in the Standard Written Form by Steve Harris (ed.), An Gerlyver Meur by Ken George, Gerlyver Sawsnek–Kernowek by Nicholas Williams and A Practical Dictionary of Modern Cornish by Richard Gendall. Course books include the three-part Skeul an Yeth series, Clappya Kernowek, Tavas a Ragadazow and Skeul an Tavas, as well as the more recent Bora Brav and Desky Kernowek. Several online dictionaries are now available, including one organised by An Akademi Kernewek in SWF.",
"title": "Culture"
},
{
"paragraph_id": 193,
"text": "Classes and conversation groups for adults are available at several locations in Cornwall as well as in London, Cardiff and Bristol. Since the onset of the COVID-19 pandemic a number of conversation groups entitled Yeth an Werin Warlinen have been held online, advertised through Facebook and other media. A surge in interest, not just from people in Cornwall but from all over the world, has meant that extra classes have been organised.",
"title": "Culture"
},
{
"paragraph_id": 194,
"text": "William Scawen produced a manuscript on the declining Cornish language that continually evolved until he died in 1689, aged 89. He was one of the first to realise the language was dying out and wrote detailed manuscripts which he started working on when he was 78. The only version that was ever published was a short first draft but the final version, which he worked on until his death, is a few hundred pages long. At the same time a group of scholars led by John Keigwin (nephew of William Scawen) of Mousehole tried to preserve and further the Cornish language and chose to write in Cornish. One of their number, Nicholas Boson, tells how he had been discouraged from using Cornish to servants by his mother. This group left behind a large number of translations of parts of the Bible, proverbs and songs. They were contacted by the Welsh linguist Edward Lhuyd, who came to Cornwall to study the language.",
"title": "Culture"
},
{
"paragraph_id": 195,
"text": "Early Modern Cornish was the subject of a study published by Lhuyd in 1707, and differs from the medieval language in having a considerably simpler structure and grammar. Such differences included sound changes and more frequent use of auxiliary verbs. The medieval language also possessed two additional tenses for expressing past events and an extended set of possessive suffixes.",
"title": "Culture"
},
{
"paragraph_id": 196,
"text": "John Whitaker, the Manchester-born rector of Ruan Lanihorne, studied the decline of the Cornish language. In his 1804 work the Ancient Cathedral of Cornwall he concluded that: \"[T]he English Liturgy, was not desired by the Cornish, but forced upon them by the tyranny of England, at a time when the English language was yet unknown in Cornwall. This act of tyranny was at once gross barbarity to the Cornish people, and a death blow to the Cornish language.\"",
"title": "Culture"
},
{
"paragraph_id": 197,
"text": "Robert Williams published the first comprehensive Cornish dictionary in 1865, the Lexicon Cornu-Britannicum. As a result of the discovery of additional ancient Cornish manuscripts, 2000 new words were added to the vocabulary by Whitley Stokes in A Cornish Glossary. William C. Borlase published Proverbs and Rhymes in Cornish in 1866 while A Glossary of Cornish Names was produced by John Bannister in the same year. Frederick Jago published his English–Cornish Dictionary in 1882.",
"title": "Culture"
},
{
"paragraph_id": 198,
"text": "In 2002, the Cornish language gained new recognition because of the European Charter for Regional and Minority Languages. Conversely, along with government provision was the governmental basis of \"New Public Management\", measuring quantifiable results as means of determining effectiveness. This put enormous pressure on finding a single orthography that could be used in unison. The revival of Cornish required extensive rebuilding. The Cornish orthographies that were reconstructed may be considered versions of Cornish because they are not traditional sociolinguistic variations. In the middle-to-late twentieth century, the debate over Cornish orthographies angered more people because several language groups received public funding. This caused other groups to sense favouritism as playing a role in the debate.",
"title": "Culture"
},
{
"paragraph_id": 199,
"text": "A governmental policymaking structure called New Public Management (NPM) has helped the Cornish language by managing public life of the Cornish language and people. In 2007, the Cornish Language Partnership MAGA represents separate divisions of government and their purpose is to further enhance the Cornish Language Developmental Plan. MAGA established an Ad-Hoc Group, which resulted in three orthographies being presented. The relations for the Ad-Hoc Group were to obtain consensus among the three orthographies and then develop a \"single written form\". The result was creating a new form of Cornish, which had to be natural for both new learners and skilled speakers.",
"title": "Culture"
},
{
"paragraph_id": 200,
"text": "In 1981, the Breton library Preder edited Passyon agan arluth (Passion of our lord), a 15th-century Cornish poem. The first complete translation of the Bible into Cornish, translated from English, was published in 2011. Another Bible translation project translating from original languages is underway. The New Testament and Psalms were posted on-line on YouVersion (Bible.com) and Bibles.org in July 2014 by the Bible Society.",
"title": "Culture"
},
{
"paragraph_id": 201,
"text": "A few small publishers produce books in Cornish which are stocked in some local bookshops, as well as in Cornish branches of Waterstones and WH Smith, although publications are becoming increasingly available on the Internet. Printed copies of these may also be found from Amazon. The Truro Waterstones hosts the annual Holyer an Gof literary awards, established by Gorsedh Kernow to recognise publications relating to Cornwall or in the Cornish language. In recent years, a number of Cornish translations of literature have been published, including Alice's Adventures in Wonderland (2009), Around the World in Eighty Days (2009), Treasure Island (2010), The Railway Children (2012), Hound of the Baskervilles (2012), The War of the Worlds (2012), The Wind in the Willows (2013), Three Men in a Boat (2013), Alice in Wonderland and Through the Looking-Glass (2014), and A Christmas Carol (which won the 2012 Holyer an Gof award for Cornish Language books), as well as original Cornish literature such as Jowal Lethesow (The Lyonesse Stone) by Craig Weatherhill. Literature aimed at children is also available, such as Ple'ma Spot? (Where's Spot?), Best Goon Brèn (The Beast of Bodmin Moor), three Topsy and Tim titles, two Tintin titles and Briallen ha'n Alyon (Briallen and the Alien), which won the 2015 Holyer an Gof award for Cornish Language books for children. In 2014 An Hobys, Nicholas Williams's translation of J. R. R. Tolkien's The Hobbit, was published.",
"title": "Culture"
},
{
"paragraph_id": 202,
"text": "An Gannas is a monthly magazine published entirely in the Cornish language. Members contribute articles on various subjects. The magazine is produced by Graham Sandercock who has been its editor since 1976.",
"title": "Culture"
},
{
"paragraph_id": 203,
"text": "In 1983 BBC Radio Cornwall started broadcasting around two minutes of Cornish every week. In 1987, however, they gave over 15 minutes of airtime on Sunday mornings for a programme called Kroeder Kroghen ('Holdall'), presented by John King, running until the early 1990s. It was eventually replaced with a five-minute news bulletin called An Nowodhow ('The News'). The bulletin was presented every Sunday evening for many years by Rod Lyon, then Elizabeth Stewart, and currently a team presents in rotation. Pirate FM ran short bulletins on Saturday lunchtimes from 1998 to 1999. In 2006, Matthew Clarke who had presented the Pirate FM bulletin, launched a web-streamed news bulletin called Nowodhow an Seythen ('Weekly News'), which in 2008 was merged into a new weekly magazine podcast Radyo an Gernewegva (RanG).",
"title": "Culture"
},
{
"paragraph_id": 204,
"text": "Cornish television shows have included a 1982 series by Westward Television with each episode containing a three-minute lesson in Cornish. An Canker-Seth, an eight-episode series produced by Television South West and broadcast between June and July 1984, later on S4C from May to July 1985, and as a schools programme in 1986. Also by Television South West were two bilingual programmes on Cornish Culture called Nosweyth Lowen. In 2016 Kelly's Ice Cream of Bodmin introduced a light hearted television commercial in the Cornish language and this was repeated in 2017.",
"title": "Culture"
},
{
"paragraph_id": 205,
"text": "The first episode from the third season of the US television program Deadwood features a conversation between miners, purportedly in the Cornish language, but really in Irish. One of the miners is then shot by thugs working for businessman George Hearst who justify the murder by saying, \"He come at me with his foreign gibberish.\"",
"title": "Culture"
},
{
"paragraph_id": 206,
"text": "A number of Cornish language films have been made, including Hwerow Hweg, a 2002 drama film written and directed by Hungarian film-maker Antal Kovacs and Trengellick Rising, a short film written and directed by Guy Potter.",
"title": "Culture"
},
{
"paragraph_id": 207,
"text": "Screen Cornwall works with Cornwall Council to commission a short film in the Cornish language each year, with their FilmK competition. Their website states \"FylmK is an annual contemporary Cornish language short film competition, producing an imaginative and engaging film, in any genre, from distinctive and exciting filmmakers\".",
"title": "Culture"
},
{
"paragraph_id": 208,
"text": "A monthly half-hour online TV show began in 2017 called An Mis (The Month). It contained news items about cultural events and more mainstream news stories all through Cornish. It also ran a cookery segment called \"Kegin Esther\" ('Esther's Kitchen'). The program has been out of production since March 2023.",
"title": "Culture"
},
{
"paragraph_id": 209,
"text": "English composer Peter Warlock wrote a Christmas carol in Cornish (setting words by Henry Jenner). The Cornish electronic musician Aphex Twin has used Cornish names for track titles, most notably on his Drukqs album.",
"title": "Culture"
},
{
"paragraph_id": 210,
"text": "Several traditional Cornish folk songs have been collected and can be sung to various tunes. These include \"An Awhesyth\", \"Bro Goth agan Tasow\", and \"Delkiow Sivy\".",
"title": "Culture"
},
{
"paragraph_id": 211,
"text": "In 2018, the singer Gwenno Saunders released an album in Cornish, entitled Le Kov, saying: \"I speak Cornish with my son: if you're comfortable expressing yourself in a language, you want to share it.\"",
"title": "Culture"
},
{
"paragraph_id": 212,
"text": "The Cornish language features in the toponymy of Cornwall, with a significant contrast between English place-names prevalent in eastern Cornwall and Cornish place-names to the west of the Camel-Fowey river valleys, where English place-names are much less common. Hundreds of Cornish family names have an etymology in the Cornish language, the majority of which are derived from Cornish place-names. Long before the agreement of the Standard Written Form of Cornish in the 21st century, Late Cornish orthography in the Early Modern period usually followed Welsh to English transliteration, phonetically rendering C for K, I for Y, U for W, and Z for S. This meant that place names were adopted into English with spellings such as 'Porthcurno' and 'Penzance'; they are written Porth Kernow and Pen Sans in the Standard Written Form of Cornish, agreed upon in 2008. Likewise words such as Enys ('island') can be found spelled as Ince as at Ince Castle. These apparent mistransliterations can, however, reveal an insight into how names and places were actually pronounced, explaining, for example, how anglicised Launceston is still pronounced [ˈlansǝn] with emphasis on the first element, perhaps from Cornish Lann Stefan, though the Concise Oxford Dictionary of English Place-Names considers this unlikely.",
"title": "Culture"
},
{
"paragraph_id": 213,
"text": "The following tables present some examples of Cornish place names and surnames and their anglicised versions:",
"title": "Culture"
},
{
"paragraph_id": 214,
"text": "From the Universal Declaration of Human Rights:",
"title": "Samples"
},
{
"paragraph_id": 215,
"text": "From Bro Goth agan Tasow, the Cornish anthem:",
"title": "Samples"
}
]
| Cornish is a Southwestern Brittonic language of the Celtic language family. It is a revived language, having become extinct as a living community language in Cornwall at the end of the 18th century. However, knowledge of Cornish, including speaking ability to a certain extent, continued to be passed on within families and by individuals, and a revival began in the early 20th century. The language has a growing number of second-language speakers, and a very small number of families now raise children to speak revived Cornish as a first language. Cornish is currently recognised under the European Charter for Regional or Minority Languages, and the language is often described as an important part of Cornish identity, culture and heritage. Along with Welsh and Breton, Cornish is descended from the Common Brittonic language spoken throughout much of Great Britain before the English language came to dominate. For centuries, until it was pushed westwards by English, it was the main language of Cornwall, maintaining close links with its sister language Breton, with which it was mutually intelligible, perhaps even as long as Cornish continued to be spoken as a vernacular. Cornish continued to function as a common community language in parts of Cornwall until the mid 18th century. There is some evidence of knowledge of the language persisting into the 19th century, possibly almost overlapping the beginning of revival efforts. A process to revive the language began in the early 20th century, and in 2010, UNESCO announced that its former classification of the language as "extinct" was "no longer accurate." Since the revival of the language, some Cornish textbooks and works of literature have been published, and an increasing number of people are studying the language. Recent developments include Cornish music, independent films, and children's books. A small number of people in Cornwall have been brought up to be bilingual native speakers, and the language is taught in schools and appears on road signs. The first Cornish-language day care opened in 2010. | 2001-10-24T09:07:31Z | 2023-12-28T13:57:01Z | [
"Template:Navboxes",
"Template:Reflist",
"Template:Cite journal",
"Template:Portal",
"Template:Cite conference",
"Template:Wiktionary category",
"Template:For",
"Template:Lang",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Cite ODNB",
"Template:Commons category",
"Template:Authority control",
"Template:IPA",
"Template:Col-end",
"Template:Cite book",
"Template:Citation",
"Template:InterWiki",
"Template:Sfn",
"Template:Rp",
"Template:Infobox language",
"Template:See also",
"Template:Further",
"Template:ISBN",
"Template:Short description",
"Template:Use dmy dates",
"Template:E18",
"Template:Cbignore",
"Template:Circa",
"Template:Col-2",
"Template:Col-begin",
"Template:Cite web",
"Template:Main",
"Template:Interlinear"
]
| https://en.wikipedia.org/wiki/Cornish_language |
6,132 | Complexity theory | Complexity theory may refer to: | [
{
"paragraph_id": 0,
"text": "Complexity theory may refer to:",
"title": ""
}
]
| Complexity theory may refer to: | 2023-03-17T15:39:50Z | [
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Complexity_theory |
|
6,134 | Charybdis | Charybdis (/kəˈrɪbdɪs/; Ancient Greek: Χάρυβδις, romanized: Khárybdis, Attic Greek: [kʰá.ryb.dis̠]; Latin: Charybdis, Classical Latin: [kʰäˈrʏb.d̪ɪs]) is a sea monster in Greek mythology. She, with the sea monster Scylla, appears as a challenge to epic characters such as Odysseus, Jason, and Aeneas. Scholarship locates her in the Strait of Messina.
The idiom "between Scylla and Charybdis" has come to mean being forced to choose between two similarly dangerous situations.
The sea monster Charybdis was believed to live under a small rock on one side of a narrow channel. Opposite her was Scylla, another sea monster, which lived inside a much larger rock. The sides of the strait were within an arrow-shot of each other, and sailors attempting to avoid one of them would come within reach of the other. To be "between Scylla and Charybdis" therefore means to be presented with two opposite dangers, the task being to find a route that avoids both. Three times a day, Charybdis swallowed a huge amount of water, before belching it back out again, creating large whirlpools capable of dragging a ship underwater. In some variations of the story, Charybdis was simply a large whirlpool instead of a sea monster.
Through the descriptions of Greek mythical chroniclers and Greek historians such as Thucydides, modern scholars generally agree that Charybdis was said to have been located in the Strait of Messina, off the coast of Sicily and opposite a rock on the mainland identified with Scylla. A whirlpool does exist there, caused by currents meeting, but it is dangerous only to small craft in extreme conditions.
Another myth makes Charybdis the daughter of Poseidon and Gaia, living as a loyal servant to her father.
Charybdis aided her father Poseidon in his feud with her paternal uncle Zeus and, as such, helped him engulf lands and islands in water. Zeus, angry over the land she stole from him, captured and chained her to the sea-bed. Charybdis was then cursed by the god and transformed into a hideous bladder of a monster, with flippers for arms and legs, and an uncontrollable thirst for the sea. As such, she drank the water from the sea thrice a day to quench it, which created whirlpools. She lingered on a rock with Scylla facing her directly on another rock, making a strait.
In some myths, Charybdis was a voracious woman who stole oxen from Heracles, and was hurled by the thunderbolt of Zeus into the sea, where she retained her voracious nature.
Odysseus faced both Charybdis and Scylla while rowing through a narrow channel. He ordered his men to avoid Charybdis, thus forcing them to pass near Scylla, which resulted in the deaths of six of his men. Later, stranded on a raft, Odysseus was swept back through the strait and passed near Charybdis. His raft was sucked into her maw, but he survived by clinging to a fig tree growing on a rock over her lair. On the next outflow of water, when his raft was expelled, Odysseus recovered it and paddled away safely.
The Argonauts were able to avoid both dangers because Hera ordered the Nereid Thetis to guide them through the perilous passage.
In the Aeneid, the Trojans are warned by Helenus of Scylla and Charybdis, and are advised to avoid them by sailing around Pachynus point (Cape Passero) rather than risk the strait. Later, however, they find themselves passing Etna, and have to row for their lives to escape Charybdis.
Aristotle mentions in his Meteorologica that Aesop once teased a ferryman by telling him a myth concerning Charybdis. With one gulp of the sea, she brought the mountains to view; islands appeared after the next. A third gulp was yet to come and would dry the sea altogether, thus depriving the ferryman of his livelihood. | [
{
"paragraph_id": 0,
"text": "Charybdis (/kəˈrɪbdɪs/; Ancient Greek: Χάρυβδις, romanized: Khárybdis, Attic Greek: [kʰá.ryb.dis̠]; Latin: Charybdis, Classical Latin: [kʰäˈrʏb.d̪ɪs]) is a sea monster in Greek mythology. She, with the sea monster Scylla, appears as a challenge to epic characters such as Odysseus, Jason, and Aeneas. Scholarship locates her in the Strait of Messina.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The idiom \"between Scylla and Charybdis\" has come to mean being forced to choose between two similarly dangerous situations.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The sea monster Charybdis was believed to live under a small rock on one side of a narrow channel. Opposite her was Scylla, another sea monster, that lived inside a much larger rock. The sides of the strait were within an arrow-shot of each other, and sailors attempting to avoid one of them would come in reach of the other. To be \"between Scylla and Charybdis\" therefore means to be presented with two opposite dangers, the task being to find a route that avoids both. Three times a day, Charybdis swallowed a huge amount of water, before belching it back out again, creating large whirlpools capable of dragging a ship underwater. In some variations of the story, Charybdis was simply a large whirlpool instead of a sea monster.",
"title": "Description"
},
{
"paragraph_id": 3,
"text": "Through the descriptions of Greek mythical chroniclers and Greek historians such as Thucydides, modern scholars generally agree that Charybdis was said to have been located in the Strait of Messina, off the coast of Sicily and opposite a rock on the mainland identified with Scylla. A whirlpool does exist there, caused by currents meeting, but it is dangerous only to small craft in extreme conditions.",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "Another myth makes Charybdis the daughter of Poseidon and Gaia and living as a loyal servant to her father.",
"title": "Family"
},
{
"paragraph_id": 5,
"text": "Charybdis aided her father Poseidon in his feud with her paternal uncle Zeus and, as such, helped him engulf lands and islands in water. Zeus, angry over the land she stole from him, captured and chained her to the sea-bed. Charybdis was then cursed by the god and transformed into a hideous bladder of a monster, with flippers for arms and legs, and an uncontrollable thirst for the sea. As such, she drank the water from the sea thrice a day to quench it, which created whirlpools. She lingered on a rock with Scylla facing her directly on another rock, making a strait.",
"title": "Mythology"
},
{
"paragraph_id": 6,
"text": "In some myths, Charybdis was a voracious woman who stole oxen from Heracles, and was hurled by the thunderbolt of Zeus into the sea, where she retained her voracious nature.",
"title": "Mythology"
},
{
"paragraph_id": 7,
"text": "Odysseus faced both Charybdis and Scylla while rowing through a narrow channel. He ordered his men to avoid Charybdis, thus forcing them to pass near Scylla, which resulted in the deaths of six of his men. Later, stranded on a raft, Odysseus was swept back through the strait and passed near Charybdis. His raft was sucked into her maw, but he survived by clinging to a fig tree growing on a rock over her lair. On the next outflow of water, when his raft was expelled, Odysseus recovered it and paddled away safely.",
"title": "Mythology"
},
{
"paragraph_id": 8,
"text": "The Argonauts were able to avoid both dangers because Hera ordered the Nereid Thetis to guide them through the perilous passage.",
"title": "Mythology"
},
{
"paragraph_id": 9,
"text": "In the Aeneid, the Trojans are warned by Helenus of Scylla and Charybdis, and are advised to avoid them by sailing around Pachynus point (Cape Passero) rather than risk the strait. Later, however, they find themselves passing Etna, and have to row for their lives to escape Charybdis.",
"title": "Mythology"
},
{
"paragraph_id": 10,
"text": "Aristotle mentions in his Meteorologica that Aesop once teased a ferryman by telling him a myth concerning Charybdis. With one gulp of the sea, she brought the mountains to view; islands appeared after the next. The third is yet to come and will dry the sea altogether, thus depriving the ferryman of his livelihood.",
"title": "Mythology"
}
]
| Charybdis is a sea monster in Greek mythology. She, with the sea monster Scylla, appears as a challenge to epic characters such as Odysseus, Jason, and Aeneas. Scholarship locates her in the Strait of Messina. The idiom "between Scylla and Charybdis" has come to mean being forced to choose between two similarly dangerous situations. | 2001-08-20T09:06:55Z | 2023-11-24T13:25:51Z | [
"Template:IPA",
"Template:Lang-la",
"Template:ISBN",
"Template:Cite EB1911",
"Template:Metamorphoses in Greco-Roman mythology",
"Template:Short description",
"Template:Other uses",
"Template:IPAc-en",
"Template:Lang-grc",
"Template:Clear",
"Template:Cite web",
"Template:Authority control",
"Template:Reflist",
"Template:Cite news",
"Template:Commonscatinline"
]
| https://en.wikipedia.org/wiki/Charybdis |
6,136 | Carbon monoxide | Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry.
The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change.
Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic resulting in carbon monoxide poisoning. It is isoelectronic with the cyanide anion CN⁻.
Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide poisoning upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind with carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals.
Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning.
Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s.
Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later in 1776, the French chemist de Lassone produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similarly inconclusive experiments in 1777. The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800.
Thomas Beddoes and James Watt recognized carbon monoxide (as hydrocarbonate) to brighten venous blood in 1793. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and in 1796 Beddoes and Watt likewise suggested that hydrocarbonate has a greater affinity for animal fiber than oxygen. In 1854, Adrien Chenot similarly suggested that carbon monoxide removes oxygen from the blood and is then oxidized by the body to carbon dioxide. The mechanism for carbon monoxide poisoning is widely credited to Claude Bernard, whose memoirs, begun in 1846 and published in 1857, phrased it as "prevents arterial blood from becoming venous". Felix Hoppe-Seyler independently published similar conclusions the following year.
Carbon monoxide gained recognition as an essential reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes.
Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply-bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0 g/mol, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8 g/mol.
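As a back-of-envelope check of that density comparison, the sketch below evaluates the ideal-gas density ρ = PM/(RT) for both molar masses; the choice of 25 °C and 1 atm is an illustrative assumption, not a value from the text.

# Minimal sketch: ideal-gas densities of CO and air from the molar
# masses quoted above (28.0 and 28.8 g/mol); T and P are assumed.
R = 8.314     # J/(mol*K), gas constant
T = 298.15    # K, assumed 25 deg C
P = 101325.0  # Pa, assumed 1 atm

def ideal_gas_density(molar_mass_g_mol):
    """Density in g/L (numerically equal to kg/m^3) from rho = P*M/(R*T)."""
    return P * (molar_mass_g_mol / 1000.0) / (R * T)

print(f"CO:  {ideal_gas_density(28.0):.3f} g/L")   # ~1.145 g/L
print(f"air: {ideal_gas_density(28.8):.3f} g/L")   # ~1.177 g/L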
The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known.
The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons.
Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ⁻ remains at the carbon end and the molecule has a small dipole moment of 0.122 D.
The molecule is therefore asymmetric: oxygen has more electron density than carbon, yet it is the carbon end that carries the slight negative charge and the oxygen end the slight positive charge. By contrast, the isoelectronic dinitrogen molecule has no dipole moment.
Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, ⁻C≡O⁺ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme.
If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex. See also the section "Coordination chemistry" below.
Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds.
The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule compared to four in the free atom.
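Written out as a worked tally, the counting rule just described gives:

\[
\mathrm{OS}(\mathrm{C}) = \underbrace{4}_{\text{valence } e^{-} \text{ of free C}} - \underbrace{2}_{\text{nonbonding } e^{-} \text{ kept by C}} - \underbrace{0}_{\text{bonding } e^{-} \text{ assigned to C}} = +2,
\qquad
\mathrm{OS}(\mathrm{O}) = 6 - (2 + 6) = -2.
\]

The two oxidation states sum to zero, as they must for a neutral molecule.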
Carbon monoxide occurs in various natural and artificial environments. Photochemical degradation of plant matter, for example, generates an estimated 60 million tons/year. Typical concentrations in parts per million are as follows:
Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. Most of the rest comes from chemical reactions with organic compounds emitted by human activities and natural origins due to photochemical reactions in the troposphere that generate about 5 × 10¹² kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion such as fossil fuels. Small amounts are also emitted from the ocean, and from geological activity because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle. Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas.
Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months), and spatially variable in concentration.
Due to its comparatively long lifetime in the mid-troposphere, carbon monoxide is also used as a tracer for pollutant plumes.
Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash).
Large CO pollution events can be observed from space over cities.
Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with hydroxyl radical (OH) to produce a radical intermediate HOCO, which rapidly transfers its radical hydrogen to O2 to form peroxy radical (HO2) and carbon dioxide (CO2). Peroxy radical subsequently reacts with nitrogen oxide (NO) to form nitrogen dioxide (NO2) and hydroxyl radical. NO2 gives O(³P) via photolysis, thereby forming O3 following reaction with O2. Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is:
CO + 2 O2 + hν → CO2 + O3
(where hν refers to the photon of light absorbed by the NO2 molecule in the sequence)
Although the creation of NO2 is the critical step leading to low level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone.
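As a sanity check on that balance, the sketch below encodes the five elementary steps described above as plain species lists (a minimal representation chosen for illustration; hν is not tracked as a species) and sums them, recovering the net reaction CO + 2 O2 + hν → CO2 + O3.

from collections import Counter

# Elementary steps of the CO-driven ozone cycle, as (reactants, products).
steps = [
    (["CO", "OH"], ["HOCO"]),          # CO + OH -> HOCO
    (["HOCO", "O2"], ["HO2", "CO2"]),  # HOCO + O2 -> HO2 + CO2
    (["HO2", "NO"], ["OH", "NO2"]),    # HO2 + NO -> OH + NO2
    (["NO2"], ["NO", "O"]),            # NO2 + hv -> NO + O(3P)
    (["O", "O2"], ["O3"]),             # O + O2 -> O3
]

net = Counter()
for reactants, products in steps:
    net.subtract(Counter(reactants))  # consumed species count down
    net.update(Counter(products))     # produced species count up

consumed = {s: -n for s, n in net.items() if n < 0}
produced = {s: n for s, n in net.items() if n > 0}
print("consumed:", consumed)  # {'CO': 1, 'O2': 2}
print("produced:", produced)  # {'CO2': 1, 'O3': 1}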
In closed environments, the concentration of carbon monoxide can rise to lethal levels. On average, 170 people in the United States die every year from carbon monoxide produced by non-automotive consumer products. These products include malfunctioning fuel-burning appliances such as furnaces, ranges, water heaters, and gas and kerosene room heaters; engine-powered equipment such as portable generators (and cars left running in attached garages); fireplaces; and charcoal that is burned in homes and other enclosed areas. Many deaths have occurred during power outages due to severe weather such as Hurricane Katrina and the 2021 Texas power crisis.
Miners refer to carbon monoxide as "whitedamp" or the "silent killer". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom "canary in the coal mine" derives from the use of canaries as an early warning of carbon monoxide in mines.
Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form.
Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star.
In the atmosphere of Venus carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton.
Solid carbon monoxide is a component of comets. The volatile or "ice" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction) and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto.
Carbon monoxide can react with water to form carbon dioxide and hydrogen:
CO + H2O → CO2 + H2
This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution. If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed:
CO + H2O → HCOOH
These reactions can take place in a few million years even at temperatures such as those found on Pluto.
Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, cation and anion chemistries.
Most metals form coordination complexes containing covalently attached carbon monoxide. Only metals in lower oxidation states will complex with carbon monoxide ligands. This is because there must be sufficient electron density to facilitate back-donation from the metal dxz orbital to the π* molecular orbital of CO. The lone pair on the carbon atom in CO also donates electron density to the dx²−y² orbital on the metal to form a sigma bond. This electron donation is also exhibited with the cis effect, or the labilization of CO ligands in the cis position. Nickel carbonyl, for example, forms by the direct combination of carbon monoxide and nickel metal:
Ni + 4 CO → Ni(CO)4
For this reason, nickel in any tubing or part must not come into prolonged contact with carbon monoxide. Nickel carbonyl decomposes readily back to Ni and CO upon contact with hot surfaces, and this method is used for the industrial purification of nickel in the Mond process.
In nickel carbonyl and other carbonyls, the electron pair on the carbon interacts with the metal; the carbon monoxide donates the electron pair to the metal. In these situations, carbon monoxide is called the carbonyl ligand. One of the most important metal carbonyls is iron pentacarbonyl, Fe(CO)5:
Fe + 5 CO → Fe(CO)5
Many metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford IrCl(CO)(PPh3)2.
Metal carbonyls in coordination chemistry are usually studied using infrared spectroscopy.
In the presence of strong acids and water, carbon monoxide reacts with alkenes to form carboxylic acids in a process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of CO, AlCl3 and HCl. Organolithium compounds (e.g. butyl lithium) react with carbon monoxide, but these reactions have little scientific use.
Although CO reacts with carbocations and carbanions, it is relatively nonreactive toward organic compounds without the intervention of metal catalysts.
With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane, CO forms the adduct H3BCO, which is isoelectronic with the acetylium cation [H3CCO]⁺. CO reacts with sodium to give products resulting from C−C coupling such as sodium acetylenediolate 2Na⁺·C2O2²⁻. It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate 2K⁺·C2O2²⁻, potassium benzenehexolate 6K⁺·C6O6⁶⁻, and potassium rhodizonate 2K⁺·C6O6²⁻.
The compounds cyclohexanehexone or triquinoyl (C6O6) and cyclopentanepentone or leuconic acid (C5O5), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive.
Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide:
Zn + CaCO3 → ZnO + CaO + CO
Silver nitrate and iodoform also afford carbon monoxide.
Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct; with sodium oxalate, for example:
Na2C2O4 → Na2CO3 + CO
Thermal combustion is the most common source for carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (CO2), such as when operating a stove or an internal combustion engine in an enclosed space.
A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified.
Many methods have been developed for carbon monoxide production.
A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The initially produced CO2 equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product:
CO2 + C → 2 CO
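The temperature dependence follows from ΔG = ΔH − TΔS. The sketch below estimates the crossover temperature using approximate textbook values for the Boudouard reaction; both ΔH and ΔS here are assumptions for illustration, not figures from the text.

# Back-of-envelope estimate for CO2 + C <-> 2 CO.
# dH ~ +172 kJ/mol and dS ~ +176 J/(mol*K) are assumed approximate
# textbook values (endothermic; entropy rises: 1 gas mol -> 2 gas mols).
dH = 172_000.0  # J/mol
dS = 176.0      # J/(mol*K)

# dG = dH - T*dS turns negative above T = dH/dS, favoring CO.
T_cross = dH / dS
print(f"dG < 0 above ~{T_cross:.0f} K (~{T_cross - 273.15:.0f} deg C)")
# ~977 K (~704 deg C), consistent with CO predominating above 800 deg C.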
Another source is "water gas", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon:
Other similar "synthesis gases" can be obtained from natural gas and other fuels.
Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst.
Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows:
MO + C → M + CO
Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air.
Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over CO2 in high temperatures.
Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents.
Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst. World production of this compound was estimated to be 2.74 million tonnes in 1989.
Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel.
In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid.
Carbon monoxide is a strong reducing agent and has been used in pyrometallurgy to reduce metals from ores since ancient times. Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal at high temperatures, forming carbon dioxide in the process. Carbon monoxide is not usually supplied as a gas to the reactor; rather, it is formed at high temperature in the presence of an oxygen-carrying ore and a carboniferous agent such as coke. The blast furnace process is a typical example of a process of reduction of metal from ore with carbon monoxide.
Likewise, blast furnace gas collected at the top of a blast furnace still contains some 10% to 30% carbon monoxide, and is used as fuel in Cowper stoves and in Siemens-Martin furnaces in open-hearth steelmaking.
Carbon monoxide has also been used as a lasing medium in high-powered infrared lasers.
Carbon monoxide has been proposed for use as a fuel on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel.
Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report that carbon monoxide is a normal neurotransmitter in 1993, carbon monoxide has received significant clinical attention as a biological regulator.
Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerations, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide acts as an anti-inflammatory and vasodilator and encourages neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide as a pharmaceutical agent and clinical standard of care.
Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide.
Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown.
The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea which reduce it to methane using hydrogen.
Carbon monoxide has certain antimicrobial properties which have been studied for the treatment of infectious diseases.
Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish to keep them looking fresh. The benefit is two-fold: carbon monoxide protects against microbial spoilage and it enhances the meat color for consumer appeal. The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%.
The technology was first given "generally recognized as safe" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union.
Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. The Centers for Disease Control and Prevention estimates that several thousand people go to hospital emergency rooms every year to be treated for carbon monoxide poisoning. According to the Florida Department of Health, "every year more than 500 Americans die from accidental exposure to carbon monoxide and thousands more across the U.S. require emergency medical care for non-fatal carbon monoxide poisoning." The American Association of Poison Control Centers (AAPCC) reported 15,769 cases of carbon monoxide poisoning resulting in 39 deaths in 2007. In 2005, the CPSC reported 94 generator-related carbon monoxide poisoning deaths.
Carbon monoxide is colorless, odorless, and tasteless. As such, it is relatively undetectable. It readily combines with hemoglobin to produce carboxyhemoglobin, which potentially affects gas exchange; therefore exposure can be highly toxic. Concentrations as low as 667 ppm may cause up to 50% of the body's hemoglobin to convert to carboxyhemoglobin. A level of 50% carboxyhemoglobin may result in seizure, coma, and fatality. In the United States, OSHA limits long-term workplace exposure to 50 ppm.
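The ~50% figure can be rationalized with the Haldane relation COHb/O2Hb = M·pCO/pO2, sketched below; the Haldane constant M ≈ 240 is an assumed, commonly quoted value rather than one given in the text.

# Sketch of the Haldane relation at equilibrium: COHb/O2Hb = M * pCO/pO2.
# M ~ 240 (CO's binding advantage over O2 on hemoglobin) is an assumption.
M = 240.0
pCO_ppm = 667.0      # exposure level quoted above
pO2_ppm = 209_500.0  # O2 content of dry air, ~20.95%

ratio = M * pCO_ppm / pO2_ppm          # COHb : O2Hb at equilibrium
cohb_fraction = ratio / (1.0 + ratio)  # fraction of hemoglobin bound as COHb
print(f"equilibrium COHb ~ {cohb_fraction:.0%}")  # ~43%, near the ~50% quoted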
In addition to affecting oxygen delivery, carbon monoxide also binds to other hemoproteins such as myoglobin and mitochondrial cytochrome oxidase, and to metallic and non-metallic cellular targets, affecting many cell operations.
In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War.
Carbon monoxide had been used for genocide during the Holocaust at some extermination camps, the most notable by gas vans in Chełmno, and in the Action T4 "euthanasia" program. | [
{
"paragraph_id": 0,
"text": "Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic resulting in carbon monoxide poisoning. It is isoelectronic with cyanide anion CN.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide poisoning upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind from carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later in 1776, the French chemist de Lassone [fr] produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similar inconclusive experiments to Lassone in 1777. The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Thomas Beddoes and James Watt recognized carbon monoxide (as hydrocarbonate) to brighten venous blood in 1793. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and Beddoes and Watt likewise suggested hydrocarbonate has a greater affinity for animal fiber than oxygen in 1796. In 1854, Adrien Chenot similarly suggested carbon monoxide to remove the oxygen from blood and then be oxidized by the body to carbon dioxide. The mechanism for carbon monoxide poisoning is widely credited to Claude Bernard whose memoirs beginning in 1846 and published in 1857 phrased, \"prevents arterials blood from becoming venous\". Felix Hoppe-Seyler independently published similar conclusions in the following year.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Carbon monoxide gained recognition as an essential reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply-bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 10,
"text": "The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 11,
"text": "The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 12,
"text": "Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ remains at the carbon end and the molecule has a small dipole moment of 0.122 D.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 13,
"text": "The molecule is therefore asymmetric: oxygen has more electron density than carbon and is also slightly positively charged compared to carbon being negative. By contrast, the isoelectronic dinitrogen molecule has no dipole moment.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 14,
"text": "Carbon monoxide has a computed fractional bond order of 2.6, indicating that the \"third\" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, C≡O is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 15,
"text": "If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex. See also the section \"Coordination chemistry\" below.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 16,
"text": "Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 17,
"text": "The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule compared to four in the free atom.",
"title": "Physical and chemical properties"
},
{
"paragraph_id": 18,
"text": "Carbon monoxide occurs in various natural and artificial environments. Photochemical degradation of plant matter for example generates an estimated 60 million tons/year. Typical concentrations in parts per million are as follows:",
"title": "Occurrence"
},
{
"paragraph_id": 19,
"text": "Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. Most of the rest comes from chemical reactions with organic compounds emitted by human activities and natural origins due to photochemical reactions in the troposphere that generate about 5 × 10 kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion such as fossil fuels. Small amounts are also emitted from the ocean, and from geological activity because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle. Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas.",
"title": "Occurrence"
},
{
"paragraph_id": 20,
"text": "Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months), and spatially variable in concentration.",
"title": "Occurrence"
},
{
"paragraph_id": 21,
"text": "Due to its long lifetime in the mid-troposphere, carbon monoxide is also used as a tracer for pollutant plumes.",
"title": "Occurrence"
},
{
"paragraph_id": 22,
"text": "Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash).",
"title": "Occurrence"
},
{
"paragraph_id": 23,
"text": "Large CO pollution events can be observed from space over cities.",
"title": "Occurrence"
},
{
"paragraph_id": 24,
"text": "Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with hydroxyl radical (OH) to produce a radical intermediate HOCO, which transfers rapidly its radical hydrogen to O2 to form peroxy radical (HO2) and carbon dioxide (CO2). Peroxy radical subsequently reacts with nitrogen oxide (NO) to form nitrogen dioxide (NO2) and hydroxyl radical. NO2 gives O(P) via photolysis, thereby forming O3 following reaction with O2. Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is:",
"title": "Occurrence"
},
{
"paragraph_id": 25,
"text": "(where hν refers to the photon of light absorbed by the NO2 molecule in the sequence)",
"title": "Occurrence"
},
{
"paragraph_id": 26,
"text": "Although the creation of NO2 is the critical step leading to low level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone.",
"title": "Occurrence"
},
{
"paragraph_id": 27,
"text": "In closed environments, the concentration of carbon monoxide can rise to lethal levels. On average, 170 people in the United States die every year from carbon monoxide produced by non-automotive consumer products. These products include malfunctioning fuel-burning appliances such as furnaces, ranges, water heaters, and gas and kerosene room heaters; engine-powered equipment such as portable generators (and cars left running in attached garages); fireplaces; and charcoal that is burned in homes and other enclosed areas. Many deaths have occurred during power outages due to severe weather such as Hurricane Katrina and the 2021 Texas power crisis.",
"title": "Occurrence"
},
{
"paragraph_id": 28,
"text": "Miners refer to carbon monoxide as \"whitedamp\" or the \"silent killer\". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom \"Canary in the coal mine\" pertained to an early warning of a carbon monoxide presence.",
"title": "Occurrence"
},
{
"paragraph_id": 29,
"text": "Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form.",
"title": "Occurrence"
},
{
"paragraph_id": 30,
"text": "Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star.",
"title": "Occurrence"
},
{
"paragraph_id": 31,
"text": "In the atmosphere of Venus carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton.",
"title": "Occurrence"
},
{
"paragraph_id": 32,
"text": "Solid carbon monoxide is a component of comets. The volatile or \"ice\" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction) and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto.",
"title": "Occurrence"
},
{
"paragraph_id": 33,
"text": "Carbon monoxide can react with water to form carbon dioxide and hydrogen:",
"title": "Occurrence"
},
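The equation introduced by the colon above is the familiar water-gas shift; a minimal LaTeX rendering:

```latex
\mathrm{CO} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{CO_2} + \mathrm{H_2}
```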
{
"paragraph_id": 34,
"text": "This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution. If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed:",
"title": "Occurrence"
},
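The formic acid formation referred to above is the hydration of CO; a sketch of the equation:

```latex
\mathrm{CO} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{HCOOH}
```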
{
"paragraph_id": 35,
"text": "These reactions can take place in a few million years even at temperatures such as found on Pluto.",
"title": "Occurrence"
},
{
"paragraph_id": 36,
"text": "Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, cation and anion chemistries.",
"title": "Chemistry"
},
{
"paragraph_id": 37,
"text": "Most metals form coordination complexes containing covalently attached carbon monoxide. Only metals in lower oxidation states will complex with carbon monoxide ligands. This is because there must be sufficient electron density to facilitate back-donation from the metal dxz-orbital, to the π* molecular orbital from CO. The lone pair on the carbon atom in CO also donates electron density to the dx−y on the metal to form a sigma bond. This electron donation is also exhibited with the cis effect, or the labilization of CO ligands in the cis position. Nickel carbonyl, for example, forms by the direct combination of carbon monoxide and nickel metal:",
"title": "Chemistry"
},
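The direct combination named above is conventionally balanced as follows; the mild conditions (around 50–60 °C is commonly cited) are what make nickel tubing vulnerable:

```latex
\mathrm{Ni} + 4\,\mathrm{CO} \;\longrightarrow\; \mathrm{Ni(CO)_4}
```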
{
"paragraph_id": 38,
"text": "For this reason, nickel in any tubing or part must not come into prolonged contact with carbon monoxide. Nickel carbonyl decomposes readily back to Ni and CO upon contact with hot surfaces, and this method is used for the industrial purification of nickel in the Mond process.",
"title": "Chemistry"
},
{
"paragraph_id": 39,
"text": "In nickel carbonyl and other carbonyls, the electron pair on the carbon interacts with the metal; the carbon monoxide donates the electron pair to the metal. In these situations, carbon monoxide is called the carbonyl ligand. One of the most important metal carbonyls is iron pentacarbonyl, Fe(CO)5:",
"title": "Chemistry"
},
{
"paragraph_id": 40,
"text": "",
"title": "Chemistry"
},
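The empty paragraph above most likely held the displayed equation for iron pentacarbonyl formation; a reconstruction (elevated temperature and CO pressure are typically required):

```latex
\mathrm{Fe} + 5\,\mathrm{CO} \;\longrightarrow\; \mathrm{Fe(CO)_5}
```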
{
"paragraph_id": 41,
"text": "Many metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford IrCl(CO)(PPh3)2.",
"title": "Chemistry"
},
{
"paragraph_id": 42,
"text": "Metal carbonyls in coordination chemistry are usually studied using infrared spectroscopy.",
"title": "Chemistry"
},
{
"paragraph_id": 43,
"text": "In the presence of strong acids and water, carbon monoxide reacts with alkenes to form carboxylic acids in a process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of CO, AlCl3 and HCl. Organolithium compounds (e.g. butyl lithium) react with carbon monoxide, but these reactions have little scientific use.",
"title": "Chemistry"
},
{
"paragraph_id": 44,
"text": "Although CO reacts with carbocations and carbanions, it is relatively nonreactive toward organic compounds without the intervention of metal catalysts.",
"title": "Chemistry"
},
{
"paragraph_id": 45,
"text": "With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane CO forms the adduct H3BCO, which is isoelectronic with the acetylium cation [H3CCO]. CO reacts with sodium to give products resulting from C−C coupling such as sodium acetylenediolate 2Na·C2O2. It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate 2K·C2O2, potassium benzenehexolate 6KC6O6, and potassium rhodizonate 2K·C6O6.",
"title": "Chemistry"
},
{
"paragraph_id": 46,
"text": "The compounds cyclohexanehexone or triquinoyl (C6O6) and cyclopentanepentone or leuconic acid (C5O5), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive.",
"title": "Chemistry"
},
{
"paragraph_id": 47,
"text": "Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide:",
"title": "Chemistry"
},
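Sketches of the two laboratory routes just described, assuming dehydration of formic acid by concentrated sulfuric acid and the zinc/calcium carbonate mixture named in the text:

```latex
\mathrm{HCOOH} \;\xrightarrow{\mathrm{H_2SO_4}}\; \mathrm{H_2O} + \mathrm{CO}
\qquad
\mathrm{Zn} + \mathrm{CaCO_3} \;\longrightarrow\; \mathrm{ZnO} + \mathrm{CaO} + \mathrm{CO}
```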
{
"paragraph_id": 48,
"text": "Silver nitrate and iodoform also afford carbon monoxide:",
"title": "Chemistry"
},
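The silver nitrate/iodoform route is usually balanced as follows (a reconstruction; the silver leaves as silver iodide):

```latex
\mathrm{CHI_3} + 3\,\mathrm{AgNO_3} + \mathrm{H_2O} \;\longrightarrow\; 3\,\mathrm{HNO_3} + \mathrm{CO} + 3\,\mathrm{AgI}
```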
{
"paragraph_id": 49,
"text": "Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct:",
"title": "Chemistry"
},
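Taking sodium oxalate as a representative example (the specific salt is illustrative, not from the original text):

```latex
\mathrm{Na_2C_2O_4} \;\xrightarrow{\Delta}\; \mathrm{Na_2CO_3} + \mathrm{CO}
```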
{
"paragraph_id": 50,
"text": "Thermal combustion is the most common source for carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (CO2), such as when operating a stove or an internal combustion engine in an enclosed space.",
"title": "Production"
},
{
"paragraph_id": 51,
"text": "A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified.",
"title": "Production"
},
{
"paragraph_id": 52,
"text": "Many methods have been developed for carbon monoxide production.",
"title": "Production"
},
{
"paragraph_id": 53,
"text": "A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The initially produced CO2 equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product:",
"title": "Production"
},
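The Boudouard equilibrium named above, which shifts toward CO above roughly 800 °C:

```latex
\mathrm{CO_2} + \mathrm{C} \;\rightleftharpoons\; 2\,\mathrm{CO}
```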
{
"paragraph_id": 54,
"text": "Another source is \"water gas\", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon:",
"title": "Production"
},
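The endothermic steam–carbon reaction producing water gas; the enthalpy shown is the commonly cited textbook value, not taken from this text:

```latex
\mathrm{C} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{CO} + \mathrm{H_2} \qquad \Delta H \approx +131\ \mathrm{kJ/mol}
```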
{
"paragraph_id": 55,
"text": "Other similar \"synthesis gases\" can be obtained from natural gas and other fuels.",
"title": "Production"
},
{
"paragraph_id": 56,
"text": "Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst.",
"title": "Production"
},
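The overall cell reaction for high-temperature electrolysis of carbon dioxide (oxygen is evolved at the anode):

```latex
2\,\mathrm{CO_2} \;\longrightarrow\; 2\,\mathrm{CO} + \mathrm{O_2}
```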
{
"paragraph_id": 57,
"text": "Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows:",
"title": "Production"
},
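The simplified form referred to in the text, with MO standing for a generic metal oxide:

```latex
\mathrm{MO} + \mathrm{C} \;\longrightarrow\; \mathrm{M} + \mathrm{CO}
```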
{
"paragraph_id": 58,
"text": "Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air.",
"title": "Production"
},
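The corresponding partial-oxidation equation in a limited oxygen supply:

```latex
2\,\mathrm{C} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CO}
```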
{
"paragraph_id": 59,
"text": "Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over CO2 in high temperatures.",
"title": "Production"
},
{
"paragraph_id": 60,
"text": "Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents.",
"title": "Use"
},
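A generic hydroformylation equation for a terminal alkene (R is an arbitrary alkyl group; the branched aldehyde isomer also forms):

```latex
\mathrm{RCH{=}CH_2} + \mathrm{CO} + \mathrm{H_2} \;\longrightarrow\; \mathrm{RCH_2CH_2CHO}
```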
{
"paragraph_id": 61,
"text": "Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst. World production of this compound was estimated to be 2.74 million tonnes in 1989.",
"title": "Use"
},
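The phosgene synthesis described above, with the activated carbon acting as catalyst:

```latex
\mathrm{CO} + \mathrm{Cl_2} \;\xrightarrow{\text{activated C}}\; \mathrm{COCl_2}
```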
{
"paragraph_id": 62,
"text": "Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel.",
"title": "Use"
},
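Sketches of the two hydrogenations mentioned: methanol synthesis, and the generic Fischer–Tropsch balance for alkanes (n is the chain length):

```latex
\mathrm{CO} + 2\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3OH}
\qquad
n\,\mathrm{CO} + (2n{+}1)\,\mathrm{H_2} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
```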
{
"paragraph_id": 63,
"text": "In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid.",
"title": "Use"
},
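The net carbonylation carried out in the Cativa process (the iridium/hydroiodic acid catalytic cycle is omitted):

```latex
\mathrm{CH_3OH} + \mathrm{CO} \;\longrightarrow\; \mathrm{CH_3COOH}
```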
{
"paragraph_id": 64,
"text": "Carbon monoxide is a strong reductive agent and has been used in pyrometallurgy to reduce metals from ores since ancient times. Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal in high temperatures, forming carbon dioxide in the process. Carbon monoxide is not usually supplied as is, in the gaseous phase, in the reactor, but rather it is formed in high temperature in presence of oxygen-carrying ore, or a carboniferous agent such as coke, and high temperature. The blast furnace process is a typical example of a process of reduction of metal from ore with carbon monoxide.",
"title": "Use"
},
{
"paragraph_id": 65,
"text": "Likewise, blast furnace gas collected at the top of blast furnace, still contains some 10% to 30% of carbon monoxide, and is used as fuel on Cowper stoves and on Siemens-Martin furnaces on open hearth steelmaking.",
"title": "Use"
},
{
"paragraph_id": 66,
"text": "Carbon monoxide has also been used as a lasing medium in high-powered infrared lasers.",
"title": "Use"
},
{
"paragraph_id": 67,
"text": "Carbon monoxide has been proposed for use as a fuel on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel.",
"title": "Use"
},
{
"paragraph_id": 68,
"text": "Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report that carbon monoxide is a normal neurotransmitter in 1993, carbon monoxide has received significant clinical attention as a biological regulator.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 69,
"text": "Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerations, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide acts as anti-inflammatory, vasodilatory, and encouragers of neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide becoming pharmaceutical agent and clinical standard of care.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 70,
"text": "Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 71,
"text": "Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 72,
"text": "The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea which reduce it to methane using hydrogen.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 73,
"text": "Carbon monoxide has certain antimicrobial properties which have been studied to treat against infectious diseases.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 74,
"text": "Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish to keep them looking fresh. The benefit is two-fold, carbon monoxide protects against microbial spoilage and it enhances the meat color for consumer appeal. The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 75,
"text": "The technology was first given \"generally recognized as safe\" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 76,
"text": "Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. The Centers for Disease Control and Prevention estimates that several thousand people go to hospital emergency rooms every year to be treated for carbon monoxide poisoning. According to the Florida Department of Health, \"every year more than 500 Americans die from accidental exposure to carbon monoxide and thousands more across the U.S. require emergency medical care for non-fatal carbon monoxide poisoning.\" The American Association of Poison Control Centers (AAPCC) reported 15,769 cases of carbon monoxide poisoning resulting in 39 deaths in 2007. In 2005, the CPSC reported 94 generator-related carbon monoxide poisoning deaths.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 77,
"text": "Carbon monoxide is colorless, odorless, and tasteless. As such, it is relatively undetectable. It readily combines with hemoglobin to produce carboxyhemoglobin which potentially affects gas exchange; therefore exposure can be highly toxic. Concentrations as low as 667 ppm may cause up to 50% of the body's hemoglobin to convert to carboxyhemoglobin. A level of 50% carboxyhemoglobin may result in seizure, coma, and fatality. In the United States, the OSHA limits long-term workplace exposure levels above 50 ppm.",
"title": "Biological and physiological properties"
},
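The 667 ppm figure can be sanity-checked with Haldane's equilibrium relation; the affinity ratio M ≈ 240 and the ambient oxygen level of about 21% are commonly cited values, not taken from this text:

```latex
\frac{[\mathrm{HbCO}]}{[\mathrm{HbO_2}]} = M\,\frac{p_{\mathrm{CO}}}{p_{\mathrm{O_2}}} \approx 240 \cdot \frac{667}{210{,}000} \approx 0.76
```

which corresponds to a carboxyhemoglobin fraction of roughly 0.76/1.76 ≈ 43%, consistent with the "up to 50%" stated above.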
{
"paragraph_id": 78,
"text": "In addition to affecting oxygen delivery, carbon monoxide also binds to other hemoproteins such as myoglobin and mitochondrial cytochrome oxidase, metallic and non-metallic cellular targets to affect many cell operations.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 79,
"text": "In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War.",
"title": "Biological and physiological properties"
},
{
"paragraph_id": 80,
"text": "Carbon monoxide had been used for genocide during the Holocaust at some extermination camps, the most notable by gas vans in Chełmno, and in the Action T4 \"euthanasia\" program.",
"title": "Biological and physiological properties"
}
]
| Carbon monoxide is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry. The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change. Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic resulting in carbon monoxide poisoning. It is isoelectronic with cyanide anion CN−. | 2001-08-21T10:23:11Z | 2023-12-30T13:02:48Z | [
"Template:Efn",
"Template:Chem",
"Template:Commons category",
"Template:Cite book",
"Template:Webarchive",
"Template:Oxides",
"Template:Reflist",
"Template:Cite journal",
"Template:Ullmann's",
"Template:Ill",
"Template:Main",
"Template:Doi",
"Template:Molecules detected in outer space",
"Template:Carbon compounds",
"Template:See also",
"Template:Oxides of carbon",
"Template:Authority control",
"Template:Chembox",
"Template:Citation needed",
"Template:Noteslist",
"Template:Cite web",
"Template:Cite news",
"Template:Neurotransmitters",
"Template:Short description",
"Template:CO2",
"Template:CRC91",
"Template:OrgSynth",
"Template:Gas lasers",
"Template:Annotated link",
"Template:Ullmann",
"Template:ISBN",
"Template:Oxygen compounds"
]
| https://en.wikipedia.org/wiki/Carbon_monoxide |
6,138 | Conjecture | In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis (still a conjecture) or Fermat's Last Theorem (a conjecture until proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10^12 (over a trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
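A minimal Python sketch of the kind of exhaustive check described above; the function name and the small bound are illustrative, not drawn from the original verification effort:

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map (n -> n/2 if even, else 3n+1) until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Exhaustively confirm that every start value below a small bound reaches 1.
for start in range(1, 100_000):
    collatz_steps(start)  # terminates for each start, i.e. no counterexample found

print(collatz_steps(27))  # 111 -- a famously long trajectory for a small start value
```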
Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.
Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
In this case, if a proof uses this statement, researchers will often look for a new proof that doesn't require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.
These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems".
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any such counterexample would have to contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain.
The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze.
This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.
The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.
In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.
A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with q^m elements containing that field. The generating function has coefficients derived from the numbers Nk of points over the (essentially unique) field with q^k elements.
Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974).
In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that:
Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.
An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture. | [
{
"paragraph_id": 0,
"text": "In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis (still a conjecture) or Fermat's Last Theorem (a conjecture until proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10 (over a trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 2,
"text": "Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 3,
"text": "A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 4,
"text": "One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as \"brute force\": in this approach, all possible cases are considered and shown not to give counterexamples. In some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 5,
"text": "When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 6,
"text": "Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 7,
"text": "Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 8,
"text": "In this case, if a proof uses this statement, researchers will often look for a new proof that doesn't require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.",
"title": "Resolution of conjectures"
},
{
"paragraph_id": 9,
"text": "Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.",
"title": "Conditional proofs"
},
{
"paragraph_id": 10,
"text": "These \"proofs\", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.",
"title": "Conditional proofs"
},
{
"paragraph_id": 11,
"text": "In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a {\\displaystyle a} , b {\\displaystyle b} , and c {\\displaystyle c} can satisfy the equation a n + b n = c n {\\displaystyle a^{n}+b^{n}=c^{n}} for any integer value of n {\\displaystyle n} greater than two.",
"title": "Important examples"
},
{
"paragraph_id": 12,
"text": "This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for \"most difficult mathematical problems\".",
"title": "Important examples"
},
{
"paragraph_id": 13,
"text": "In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.",
"title": "Important examples"
},
{
"paragraph_id": 14,
"text": "Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.",
"title": "Important examples"
},
{
"paragraph_id": 15,
"text": "The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any must contain, yet do not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by mathematicians at all because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain.",
"title": "Important examples"
},
{
"paragraph_id": 16,
"text": "The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze.",
"title": "Important examples"
},
{
"paragraph_id": 17,
"text": "This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.",
"title": "Important examples"
},
{
"paragraph_id": 18,
"text": "The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.",
"title": "Important examples"
},
{
"paragraph_id": 19,
"text": "In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.",
"title": "Important examples"
},
{
"paragraph_id": 20,
"text": "A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with q elements containing that field. The generating function has coefficients derived from the numbers Nk of points over the (essentially unique) field with q elements.",
"title": "Important examples"
},
{
"paragraph_id": 21,
"text": "Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974).",
"title": "Important examples"
},
{
"paragraph_id": 22,
"text": "In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that:",
"title": "Important examples"
},
{
"paragraph_id": 23,
"text": "Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.",
"title": "Important examples"
},
{
"paragraph_id": 24,
"text": "An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it.",
"title": "Important examples"
},
{
"paragraph_id": 25,
"text": "Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.",
"title": "Important examples"
},
{
"paragraph_id": 26,
"text": "After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method \"converged\" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.",
"title": "Important examples"
},
{
"paragraph_id": 27,
"text": "The Poincaré conjecture, before being proven, was one of the most important open questions in topology.",
"title": "Important examples"
},
{
"paragraph_id": 28,
"text": "In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.",
"title": "Important examples"
},
{
"paragraph_id": 29,
"text": "The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.",
"title": "Important examples"
},
{
"paragraph_id": 30,
"text": "The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper \"The complexity of theorem proving procedures\" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.",
"title": "Important examples"
},
{
"paragraph_id": 31,
"text": "Karl Popper pioneered the use of the term \"conjecture\" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.",
"title": "In other sciences"
}
]
| In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's Last Theorem, have shaped much of mathematical history as new areas of mathematics are developed in order to prove them. | 2001-10-11T18:50:37Z | 2023-12-21T00:46:33Z | [
"Template:Cite journal",
"Template:Wiktionary",
"Template:Portal bar",
"Template:For",
"Template:Nowrap",
"Template:Cite book",
"Template:Commonscatinline",
"Template:Harvtxt",
"Template:Blockquote",
"Template:Cite web",
"Template:Doi",
"Template:Short description",
"Template:Main",
"Template:Harvs",
"Template:Reflist",
"Template:Citation"
]
| https://en.wikipedia.org/wiki/Conjecture |
6,139 | Christoph Ludwig Agricola | Christoph Ludwig Agricola (5 November 1665 – 8 August 1724) was a German landscape painter and etcher. He was born and died at Regensburg (Ratisbon).
Christoph Ludwig Agricola was born on 5 November 1665 at Regensburg in Germany. He trained, as many painters of the period did, by studying nature.
He spent a great part of his life in travel, visiting England, the Netherlands and France, and residing for a considerable period at Naples, where he may have been influenced by Nicolas Poussin. He also stayed for some years circa 1712 in Venice, where he painted many works for the patron Zaccaria Sagredo.
He died in Regensburg in 1724.
Although he primarily worked in gouache and oils, documentary sources reveal that he also produced a small number of etchings. He was a good draughtsman, used warm lighting and exhibited a warm, masterly brushstroke.
His numerous landscapes, chiefly cabinet pictures, are remarkable for fidelity to nature, and especially for their skilful representation of varied phases of climate, particularly nocturnal scenes and weather anomalies such as thunderstorms. In composition his style shows the influence of Nicolas Poussin and his work often displays the idealistic scenes associated with Poussin. In light and colour he imitates Claude Lorrain. His compositions include ruins of ancient buildings in the foreground, but his favourite figures for the foreground were men dressed in Oriental attire. He also produced a series of etchings of birds.
His pictures can be found in Dresden, Braunschweig, Vienna, Florence, Naples and many other towns of both Germany and Italy.
He probably tutored the artist, Johann Theile, and had an enormous influence on him. Art historians have also noted that the work of the landscape painter, Christian Johann Bendeler (1699–1728), was also influenced by Agricola. | [
{
"paragraph_id": 0,
"text": "Christoph Ludwig Agricola (5 November 1665 – 8 August 1724) was a German landscape painter and etcher. He was born and died at Regensburg (Ratisbon).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Christoph Ludwig Agricola was born on 5 November 1665 at Regensburg in Germany. He trained, as many painters of the period did, by studying nature.",
"title": "Life and career"
},
{
"paragraph_id": 2,
"text": "He spent a great part of his life in travel, visiting England, the Netherlands and France, and residing for a considerable period at Naples, where he may have been influenced by Nicolas Poussin. He also stayed for some years circa 1712 in Venice, where he painted many works for the patron Zaccaria Sagredo.",
"title": "Life and career"
},
{
"paragraph_id": 3,
"text": "He died in Regensburg in 1724.",
"title": "Life and career"
},
{
"paragraph_id": 4,
"text": "Although he primarily worked in gouache and oils, documentary sources reveal that he also produced a small number of etchings. He was a good draughtsman, used warm lighting and exhibited a warm, masterly brushstroke.",
"title": "Work"
},
{
"paragraph_id": 5,
"text": "His numerous landscapes, chiefly cabinet pictures, are remarkable for fidelity to nature, and especially for their skilful representation of varied phases of climate, especially nocturnal scenes and weather anomalies such as thunderstorms. In composition his style shows the influence of Nicolas Poussin and his work often displays the idealistic scenes associated with Poussin. In light and colour he imitates Claude Lorrain. His compositions include ruins of ancient buildings in the foreground, but his favourite figure for the foreground was men dressed in Oriental attire. He also produced a series of etchings of birds.",
"title": "Work"
},
{
"paragraph_id": 6,
"text": "His pictures can be found in Dresden, Braunschweig, Vienna, Florence, Naples and many other towns of both Germany and Italy.",
"title": "Work"
},
{
"paragraph_id": 7,
"text": "He probably tutored the artist, Johann Theile, and had an enormous influence on him. Art historians have also noted that the work of the landscape painter, Christian Johann Bendeler (1699–1728), was also influenced by Agricola.",
"title": "Legacy"
}
]
| Christoph Ludwig Agricola was a German landscape painter and etcher. He was born and died at Regensburg (Ratisbon). | 2022-11-14T23:13:26Z | [
"Template:Use dmy dates",
"Template:NDB",
"Template:Commons category",
"Template:Authority control",
"Template:Short description",
"Template:Infobox artist",
"Template:Reflist",
"Template:EB1911",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Christoph_Ludwig_Agricola |
|
6,140 | Claudius | Tiberius Claudius Caesar Augustus Germanicus (/ˈklɔːdiəs/; Latin: [tɪˈbɛriʊs ˈklau̯diʊs ˈkae̯sar au̯ˈɡʊstʊs gɛrˈmaːnɪkʊs]; 1 August 10 BC – 13 October AD 54) was Roman emperor, ruling from AD 41 to 54. A member of the Julio-Claudian dynasty, Claudius was born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate. He was the first Roman emperor to be born outside Italy.
As he had a limp and slight deafness due to sickness at a young age, he was ostracized by his family and was excluded from public office until his consulship (which was shared with his nephew, Caligula, in 37). Claudius's infirmity probably saved him from the fate of many other nobles during the purges throughout the reigns of Tiberius and Caligula, as potential enemies did not see him as a serious threat. His survival led to his being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last adult male of his family.
Despite his lack of experience, Claudius was an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped restore the empire's finances after the excesses of Caligula's reign. He was also an ambitious builder, constructing new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain. Having a personal interest in law, he presided at public trials, and issued edicts daily. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position, which resulted in the deaths of many senators. Those events damaged his reputation among the ancient writers, though more recent historians have revised that opinion. Many authors contend that he was murdered by his own wife, Agrippina the Younger. After his death at the age of 63, his grand-nephew and legally adopted step-son, Nero, succeeded him as emperor.
Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia Minor, may have had two other children who died young. Claudius's maternal grandparents were Mark Antony and Octavia Minor, Augustus's sister, and he was therefore the great-great-grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus's third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumor that his father Nero Claudius Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius's paternal grandfather.
In 9 BC, Claudius's father Drusus died on campaign in Germania from a fall from a horse. Claudius was then raised by his mother, who never remarried. When his disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years.
Livia was a little kinder, but nevertheless sent Claudius short, angry letters of reproof. He was put under the care of a former mule-driver to keep him disciplined, under the logic that his condition was due to laziness and a lack of willpower. However, by the time he reached his teenage years, his symptoms apparently waned and his family began to take some notice of his scholarly interests. In AD 7, Livy was hired to tutor Claudius in history, with the assistance of Sulpicius Flavus. He spent a lot of his time with the latter, as well as the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius's oratory.
Claudius's work as a historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, he began work on a history of the Civil Wars that was either too truthful or too critical of Octavian, then reigning as Caesar Augustus. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office, since he could not be trusted to toe the existing party line.
When Claudius returned to the narrative later in life, he skipped over the wars of the Second Triumvirate altogether; but the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honor the Imperial clan in AD 8, Claudius's name (now Tiberius Claudius Nero Germanicus after his elevation to pater familias of the Claudii Nerones on the adoption of his brother) was inscribed on the edge, past the deceased princes, Gaius and Lucius, and Germanicus's children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all.
When Augustus died in AD 14, Claudius – then aged 23 – appealed to his uncle Tiberius to allow him to begin the cursus honorum. Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life.
Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus's death, the equites, or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained.
During the period immediately after the death of Tiberius's son, Drusus, Claudius was pushed by some quarters as a potential heir to the throne. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility. After the death of Tiberius, the new emperor Caligula (the son of Claudius's brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 to emphasize the memory of Caligula's deceased father Germanicus.
Despite this, Caligula tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio, Claudius became sickly and thin by the end of Caligula's reign, most likely due to stress. A possible surviving portrait of Claudius from this period may support this.
On 24 January 41, Caligula was assassinated in a conspiracy involving Cassius Chaerea – a military tribune in the Praetorian Guard – and several senators. There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot – particularly since he left the scene of the crime shortly before his nephew was murdered. However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family.
In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly declared him princeps. Claudius was spirited away to the Praetorian camp and put under their protection.
The Senate met and debated a change of government, but this devolved into an argument over which of them would be the new princeps. When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus, claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role so it remains uncertain. Eventually the Senate was forced to give in. In return, Claudius granted a general amnesty, although he executed a few junior officers involved in the conspiracy. The actual assassins, including Cassius Chaerea and Julius Lupus, the murderer of Caligula's wife and daughter, were put to death to ensure Claudius's own safety and as a future deterrent.
Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name "Caesar" as a cognomen, as the name still carried great weight with the populace. To do so, he dropped the cognomen "Nero", which he had adopted as pater familias of the Claudii Nerones when his brother Germanicus was adopted. As Pharaoh of Egypt, Claudius adopted the royal titulary Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet ("Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon").
While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus's sister Octavia, and so he felt that he had the right of family. He also adopted the name "Augustus" as the two previous emperors had done at their accessions. He kept the honorific "Germanicus" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term "filius Drusi" (son of Drusus) in his titles, to remind the people of his legendary father and lay claim to his reputation.
Since Claudius was the first emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, he was the first emperor who resorted to bribery as a means to secure army loyalty and rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces. Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, issuing coins with tributes to the Praetorians in the early part of his reign.
Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, "... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor."
Claudius restored the status of the peaceful Imperial Roman provinces of Macedonia and Achaea as senatorial provinces.
Under Claudius, the Empire underwent its first major expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, as well as the official division of the former client kingdom into two Imperial provinces. The most far-reaching conquest was that of Britannia.
In 43, Claudius sent Aulus Plautius with four legions to Britain (Britannia) after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its material wealth: mines and the potential of slave labor, as well as being a haven for Gallic rebels. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The Roman colonia of Colonia Claudia Victricensis was established as the provincial capital of the newly established province of Britannia at Camulodunum, where a large temple was dedicated in his honour.
He left Britain after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific "Britannicus" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander.
Claudius conducted a census in 48 that found 5,984,072 (adult male) Roman citizens (women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus's death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible.
Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law. He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system. He extended the summer court session, as well as the winter term, by shortening the traditional breaks. Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 to ensure a more experienced jury pool.
Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria each sent him embassies after riots broke out between the two communities. This resulted in the famous "Letter to the Alexandrians", which reaffirmed Jewish rights in the city but forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire.
One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens. The Emperor issued a declaration, contained in the Tabula clesiana, that they would be allowed to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished the false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were sold back into slavery.
Numerous edicts were issued throughout Claudius's reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite. Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health. One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder.
Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built or finished two aqueducts, the Aqua Claudia, begun by Caligula, and the Aqua Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo.
He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth, reducing flooding in Rome.
The port at Ostia was part of Claudius's solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine.
The last part of Claudius's plan to avoid famine was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine Lake, which would also make the nearby river navigable year-round. A serious famine during Claudius's reign is mentioned in the book of Acts; it had been prophesied by a Christian called Agabus while visiting Antioch.
A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by the Prince Torlonia in the 19th century, producing over 160,000 acres (650 km²) of new arable land. Torlonia expanded the Claudian tunnel to three times its original size.
Because of the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune (the Emperor could not officially serve as a Tribune of the Plebs since he was a patrician, but this was a power assumed by previous rulers, which he continued). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also put the Imperial provinces of Macedonia and Achaea back under Senate control.
Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech:
If you accept these proposals, Conscript Fathers, say so at once and simply, in accordance with your convictions. If you do not accept them, find alternatives, but do so here and now; or if you wish to take time for consideration, take it, provided you do not forget that you must be ready to pronounce your opinion whenever you may be summoned to meet. It ill befits the dignity of the Senate that the consul designate should repeat the phrases of the consuls word for word as his opinion, and that every one else should merely say 'I approve', and that then, after leaving, the assembly should announce 'We debated'.
In 47, he assumed the office of censor with Lucius Vitellius, which had been allowed to lapse for some time. He struck out the names of many senators and equites who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit to the senate eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even joked about how the Senate had admitted members born beyond Gallia Narbonensis – such as himself, a native of Lugdunum (Lyon). He also increased the number of patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar.
Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor.
Several coup attempts were made during Claudius's reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius's reign under questionable circumstances. Shortly after this, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus – governor of Dalmatia – and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus's troops, which led to the suicide of the main conspirators.
Many other senators tried different conspiracies and were condemned. Claudius's son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo.
In 46, Asinius Gallus, grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius's own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. Ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious.
Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with Titus Statilius Taurus Corvinus. Most of these conspiracies took place before Claudius's term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in book 11 of Tacitus's Annals. This section of Tacitus's history narrates the alleged conspiracy of Claudius's third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius's reign. The responses to these conspiracies can hardly have helped Senate–emperor relations.
Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He was, however, forced to increase their role as the powers of the princeps became more centralized and the burden of running the government became larger. Claudius did not want free-born magistrates to serve under him as if they were not peers.
The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius's stead before the conquest of Britain.
Since these were important positions, the senators were aghast at their being placed in the hands of former slaves and "well-known eunuchs". If freedmen had total control of money, letters and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by ancient sources. However, these same sources admit that the freedmen were loyal to Claudius.
He was similarly appreciative of them and gave them due credit for policies where he had used their advice. However, if they showed treasonous inclinations, the Emperor punished them with just force, as in the case of Polybius and Pallas's brother, Felix. There is no evidence that the character of Claudius's policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout.
Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder notes that several of them were richer than Crassus, the richest man of the Republican era.
Claudius, as the author of a treatise on Augustus's religious reforms, felt himself in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods. He restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He re-instituted old observances and archaic language.
Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian Mysteries, which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities.
According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters. Claudius also presided over many new and original events. Soon after coming into power, Claudius instituted games to be held in honor of his father on the latter's birthday. Annual games were also held in honour of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor.
Claudius organised a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus's excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning. Claudius also presented staged naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows.
At Ostia, in front of a crowd of spectators, Claudius fought an orca which was trapped in the harbour. The event was witnessed by Pliny the Elder:
A killer whale was actually seen in the harbour of Ostia in battle with the Emperor Claudius; it had come at the time when he was engaged in completing the structure of the harbour, being tempted by the wreck of a cargo of hides imported from Gaul, and in glutting itself for a number of days had furrowed a hollow in the shallow bottom and had been banked up with sand by the waves so high that it was quite unable to turn round, and while it was pursuing its food which was driven forward to the shore by the waves its back projected far above the water like a capsized boat. Caesar gave orders for a barrier of nets to be stretched between the mouths of the harbour and setting out in person with the praetorian cohorts afforded a show to the Roman public, the soldiery hurling lances from the vessels against the creatures when they leapt up alongside, and we saw one of the boats sunk from being filled with water owing to a beast's snorting.
Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track. Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators. He rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication, which he observed from a special platform in the orchestra box.
Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer.
Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day.
Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, daughter of Sejanus.
Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. Later, this action made him the target of criticism by his enemies.
Soon after (possibly in 28), Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability. One version suggests that it may have been due to emotional and mental abuse by Paetina.
Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed (Claudius's grandmother, Octavia the Younger, was Valeria's great-grandmother on both her mother's and her father's side) and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius's accession.
This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius – Tacitus states she went so far as to compete with a prostitute to see who could have more sexual partners in a night – and manipulated his policies to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia.
Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage. Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining her rank and protecting her children. The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point, after which she was executed.
Claudius married once more. Ancient sources tell that his freedmen put forward three candidates, Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles. She gradually seized power from Claudius and successfully conspired to eliminate his son's rivals, opening the way for her son to become emperor.
The truth is probably more political. The attempted coup d'état by Silius and Messalina probably made Claudius realize the weakness of his position as a member of the Claudian (but not the Julian) family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy. Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Nero) was one of the last males of the Imperial family. Coup attempts might rally around the pair, and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage as an attempt to end the feud between the Julian and Claudian branches.
This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions that Tiberius had punished. In any case, Claudius accepted Agrippina and later adopted the mature Ahenobarbus as his son, renaming him as 'Nero Claudius Caesar'.
Nero was married to Claudius's daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs, and Tiberius had named Caligula as his joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome when a suitable natural adult heir was unavailable, as was the case during Britannicus's minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign.
Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family to secure his right to be Emperor (although that did not stop others from making him the object of a coup attempt against Nero a few years later). He was, moreover, the half-brother of Valeria Messalina, which told against him. Nero was more popular with the general public as both the grandson of Germanicus and the direct descendant of Augustus.
The historian Suetonius describes the physical manifestations of Claudius's condition in relatively good detail. His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his Apocolocyntosis that Claudius's voice belonged to no land animal, and that his hands were weak as well.
However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of dignitas. When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne. Claudius himself claimed that he had exaggerated his ailments to save his life.
Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves's Claudius novels, first published in the 1930s. The New York Times wrote in 1934 that Claudius suffered from infantile paralysis (which led to his limp state) and measles (which made him deaf) at seven months of age, among several other ailments. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause. Tourette syndrome has also been considered a possibility.
As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians. They also paint him as bloodthirsty and cruel, over-fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper. According to the ancient historians he was also excessively trusting, and easily manipulated by his wives and freedmen, but at the same time they portray him as paranoid and apathetic, dull and easily confused.
Claudius's extant works present a different view, painting a picture of an intelligent, scholarly, well-read, and conscientious administrator with an eye to detail and justice. Thus, Claudius becomes an enigma. Since the discovery of his "Letter to the Alexandrians", much work has been done to rehabilitate Claudius and determine the truth.
Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius, which covers the peak of Claudius's literary career, it became impolitic to speak of republican Rome. The trend among the young historians was either to write about the new empire or about obscure antiquarian topics. Claudius was the rare scholar who covered both.
Besides his history of Augustus's reign that caused him so much grief, his major works included Tyrrhenika, a twenty-book Etruscan history, and Carchedonica, an eight-volume history of Carthage, as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the topic of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history.
He proposed a reform of the Latin alphabet by the addition of three new letters; he officially instituted the change during his censorship, but the letters did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste. Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches.
None of these works survives, but references to them in other sources provide material for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius's autobiography once and must have used it as a source numerous times. Tacitus uses Claudius's arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's Natural History.
The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. The speech is meticulous in detail, a common mark of all his extant works, and he goes into long digressions on related matters. This indicates a deep knowledge of a variety of historical subjects, which he readily shared. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies.
His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect; also, his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the power of the censorship office to introduce the letter "R" and so used his own term to introduce his new letters.
Ancient historians agree that Claudius was murdered by poison – possibly contained in mushrooms or on a feather – and died in the early hours of 13 October 54.
Nearly all implicate his final and powerful wife, Agrippina, as the instigator. Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives and began to comment on Britannicus's approaching manhood with an eye towards restoring his status within the imperial family. Agrippina had a motive for ensuring the succession of Nero before Britannicus could gain power.
Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance. Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again. Among his contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors of his poisoning.
Some historians have cast doubt on whether Claudius was murdered or merely died from illness or old age. Evidence against his murder includes his serious illnesses in his last years, his unhealthy lifestyle, and the fact that his taster Halotus continued to serve in the same position under Nero. Claudius had been so ill the year before that Nero vowed games for his recovery, and 54 seems to have been so unhealthy a year that one sitting member of each magistracy died within the span of a few months. He may even have died by eating a naturally poisonous mushroom, possibly Amanita muscaria. On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime. Claudius's ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier.
Even while alive, Claudius received the widespread private worship accorded a living princeps, and was worshipped in Britannia in his own temple at Camulodunum.
Claudius was deified by Nero and the Senate almost immediately.
Agrippina had sent Narcissus away shortly before Claudius's death, and now had the freedman murdered.
The last act of this secretary of letters was to burn all of Claudius's correspondence – most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius's private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts, Nero often criticized the deceased Emperor, and many Claudian laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them.
Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all. Claudius's temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House.
The Flavians, who had risen to prominence under Claudius, took a different tack. They needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius in contrast with Nero, to show that they were associated with a good regime. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill.
However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. His state-cult in Rome probably continued until the abolition of all cults of dead Emperors by Maximinus Thrax in 237–238. The Feriale Duranum, probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August. And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century.
The ancient historians Tacitus, Suetonius (in The Twelve Caesars), and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or equites. They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus's letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and crediting his good works to his retinue.
Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing. He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and public life. In his account of Claudius's censorship of 47–48, Tacitus allows the reader a glimpse of a more statesmanlike Claudius (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius's writings and to have omitted Claudius's character from his works. Even his version of Claudius's Lyons tablet speech is edited to be devoid of the emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus, the conception of Claudius as a weak fool, controlled by those he supposedly ruled, was preserved for the ages.
As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius.
In literature, Claudius and his contemporaries appear in the historical novel The Roman by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves's Claudius story, in his two novels, Empire of the Atom and The Wizard of Linn.
The historical novel Chariot of the Soul by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius. | [
{
"paragraph_id": 0,
"text": "Tiberius Claudius Caesar Augustus Germanicus (/ˈklɔːdiəs/; Latin: [tɪˈbɛriʊs ˈklau̯diʊs ˈkae̯sar au̯ˈɡʊstʊs gɛrˈmaːnɪkʊs]; 1 August 10 BC – 13 October AD 54) was Roman emperor, ruling from AD 41 to 54. A member of the Julio-Claudian dynasty, Claudius was born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate. He was the first Roman emperor to be born outside Italy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As he had a limp and slight deafness due to sickness at a young age, he was ostracized by his family and was excluded from public office until his consulship (which was shared with his nephew, Caligula, in 37). Claudius's infirmity probably saved him from the fate of many other nobles during the purges throughout the reigns of Tiberius and Caligula, as potential enemies did not see him as a serious threat. His survival led to his being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last adult male of his family.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Despite his lack of experience, Claudius was an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped restore the empire's finances after the excesses of Caligula's reign. He was also an ambitious builder, constructing new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain. Having a personal interest in law, he presided at public trials, and issued edicts daily. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position, which resulted in the deaths of many senators. Those events damaged his reputation among the ancient writers, though more recent historians have revised that opinion. Many authors contend that he was murdered by his own wife, Agrippina the Younger. After his death at the age of 63, his grand-nephew and legally adopted step-son, Nero, succeeded him as emperor.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia Minor, may have had two other children who died young. Claudius's maternal grandparents were Mark Antony and Octavia Minor, Augustus's sister, and he was therefore the great-great-grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus's third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumor that his father Nero Claudius Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius's paternal grandfather.",
"title": "Family and youth"
},
{
"paragraph_id": 4,
"text": "In 9 BC, Claudius's father Drusus died on campaign in Germania from a fall from a horse. Claudius was then raised by his mother, who never remarried. When his disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years.",
"title": "Family and youth"
},
{
"paragraph_id": 5,
"text": "Livia was a little kinder, but nevertheless sent Claudius short, angry letters of reproof. He was put under the care of a former mule-driver to keep him disciplined, under the logic that his condition was due to laziness and a lack of willpower. However, by the time he reached his teenage years, his symptoms apparently waned and his family began to take some notice of his scholarly interests. In AD 7, Livy was hired to tutor Claudius in history, with the assistance of Sulpicius Flavus. He spent a lot of his time with the latter, as well as the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius's oratory.",
"title": "Family and youth"
},
{
"paragraph_id": 6,
"text": "Claudius' work as an historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, he began work on a history of the Civil Wars that was either too truthful or too critical of Octavian, then reigning as Caesar Augustus. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office, since he could not be trusted to toe the existing party line.",
"title": "Family and youth"
},
{
"paragraph_id": 7,
"text": "When Claudius returned to the narrative later in life, he skipped over the wars of the Second Triumvirate altogether; but the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honor the Imperial clan in AD 8, Claudius's name (now Tiberius Claudius Nero Germanicus after his elevation to pater familias of the Claudii Nerones on the adoption of his brother) was inscribed on the edge, past the deceased princes, Gaius and Lucius, and Germanicus's children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all.",
"title": "Family and youth"
},
{
"paragraph_id": 8,
"text": "When Augustus died in AD 14, Claudius – then aged 23 – appealed to his uncle Tiberius to allow him to begin the cursus honorum. Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life.",
"title": "Family and youth"
},
{
"paragraph_id": 9,
"text": "Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus's death, the equites, or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained.",
"title": "Family and youth"
},
{
"paragraph_id": 10,
"text": "During the period immediately after the death of Tiberius's son, Drusus, Claudius was pushed by some quarters as a potential heir to the throne. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility. After the death of Tiberius, the new emperor Caligula (the son of Claudius's brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 to emphasize the memory of Caligula's deceased father Germanicus.",
"title": "Family and youth"
},
{
"paragraph_id": 11,
"text": "Despite this, Caligula tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio, Claudius became sickly and thin by the end of Caligula's reign, most likely due to stress. A possible surviving portrait of Claudius from this period may support this.",
"title": "Family and youth"
},
{
"paragraph_id": 12,
"text": "On 24 January 41, Caligula was assassinated in a conspiracy involving Cassius Chaerea – a military tribune in the Praetorian Guard – and several senators. There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot – particularly since he left the scene of the crime shortly before his nephew was murdered. However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family.",
"title": "Family and youth"
},
{
"paragraph_id": 13,
"text": "In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly declared him princeps. Claudius was spirited away to the Praetorian camp and put under their protection.",
"title": "Family and youth"
},
{
"paragraph_id": 14,
"text": "The Senate met and debated a change of government, but this devolved into an argument over which of them would be the new princeps. When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus, claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role so it remains uncertain. Eventually the Senate was forced to give in. In return, Claudius granted a general amnesty, although he executed a few junior officers involved in the conspiracy. The actual assassins, including Cassius Chaerea and Julius Lupus, the murderer of Caligula's wife and daughter, were put to death to ensure Claudius's own safety and as a future deterrent.",
"title": "Family and youth"
},
{
"paragraph_id": 15,
"text": "Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name \"Caesar\" as a cognomen, as the name still carried great weight with the populace. To do so, he dropped the cognomen \"Nero\", which he had adopted as pater familias of the Claudii Nerones when his brother Germanicus was adopted. As Pharaoh of Egypt, Claudius adopted the royal titulary Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet (\"Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon\").",
"title": "As Emperor"
},
{
"paragraph_id": 16,
"text": "While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus's sister Octavia, and so he felt that he had the right of family. He also adopted the name \"Augustus\" as the two previous emperors had done at their accessions. He kept the honorific \"Germanicus\" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term \"filius Drusi\" (son of Drusus) in his titles, to remind the people of his legendary father and lay claim to his reputation.",
"title": "As Emperor"
},
{
"paragraph_id": 17,
"text": "Since Claudius was the first emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, he was the first emperor who resorted to bribery as a means to secure army loyalty and rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces. Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, issuing coins with tributes to the Praetorians in the early part of his reign.",
"title": "As Emperor"
},
{
"paragraph_id": 18,
"text": "Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, \"... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor.\"",
"title": "As Emperor"
},
{
"paragraph_id": 19,
"text": "Claudius restored the status of the peaceful Imperial Roman provinces of Macedonia and Achaea as senatorial provinces.",
"title": "As Emperor"
},
{
"paragraph_id": 20,
"text": "Under Claudius, the Empire underwent its first major expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, as well as the official division of the former client kingdom into two Imperial provinces. The most far-reaching conquest was that of Britannia.",
"title": "As Emperor"
},
{
"paragraph_id": 21,
"text": "In 43, Claudius sent Aulus Plautius with four legions to Britain (Britannia) after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its material wealth: mines and the potential of slave labor, as well as being a haven for Gallic rebels. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The Roman colonia of Colonia Claudia Victricensis was established as the provincial capital of the newly established province of Britannia at Camulodunum, where a large temple was dedicated in his honour.",
"title": "As Emperor"
},
{
"paragraph_id": 22,
"text": "He left Britain after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific \"Britannicus\" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander.",
"title": "As Emperor"
},
{
"paragraph_id": 23,
"text": "Claudius conducted a census in 48 that found 5,984,072 (adult male) Roman citizens (women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus's death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible.",
"title": "As Emperor"
},
{
"paragraph_id": 24,
"text": "Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law. He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system. He extended the summer court session, as well as the winter term, by shortening the traditional breaks. Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 to ensure a more experienced jury pool.",
"title": "As Emperor"
},
{
"paragraph_id": 25,
"text": "Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria each sent him embassies after riots broke out between the two communities. This resulted in the famous \"Letter to the Alexandrians\", which reaffirmed Jewish rights in the city but forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire.",
"title": "As Emperor"
},
{
"paragraph_id": 26,
"text": "One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens. The Emperor issued a declaration, contained in the Tabula clesiana, that they would be allowed to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished the false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were sold back into slavery.",
"title": "As Emperor"
},
{
"paragraph_id": 27,
"text": "Numerous edicts were issued throughout Claudius's reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite. Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health. One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder.",
"title": "As Emperor"
},
{
"paragraph_id": 28,
"text": "Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built or finished two aqueducts, the Aqua Claudia, begun by Caligula, and the Aqua Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo.",
"title": "As Emperor"
},
{
"paragraph_id": 29,
"text": "He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth, reducing flooding in Rome.",
"title": "As Emperor"
},
{
"paragraph_id": 30,
"text": "The port at Ostia was part of Claudius's solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine.",
"title": "As Emperor"
},
{
"paragraph_id": 31,
"text": "The last part of Claudius's plan to avoid famine was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine lake, also making the nearby river navigable year-round. A serious famine is mentioned in the book of Acts as taking place during Claudius' reign, and had been prophecied by a Christian called Agabus while visiting Antioch.",
"title": "As Emperor"
},
{
"paragraph_id": 32,
"text": "A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by the Prince Torlonia in the 19th century, producing over 160,000 acres (650 km) of new arable land. He expanded the Claudian tunnel to three times its original size.",
"title": "As Emperor"
},
{
"paragraph_id": 33,
"text": "Because of the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune, (the Emperor could not officially serve as a Tribune of the Plebes since he was a patrician, but this was a power taken by previous rulers, which he continued). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also put the Imperial provinces of Macedonia and Achaea back under Senate control.",
"title": "As Emperor"
},
{
"paragraph_id": 34,
"text": "Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech:",
"title": "As Emperor"
},
{
"paragraph_id": 35,
"text": "If you accept these proposals, Conscript Fathers, say so at once and simply, in accordance with your convictions. If you do not accept them, find alternatives, but do so here and now; or if you wish to take time for consideration, take it, provided you do not forget that you must be ready to pronounce your opinion whenever you may be summoned to meet. It ill befits the dignity of the Senate that the consul designate should repeat the phrases of the consuls word for word as his opinion, and that every one else should merely say 'I approve', and that then, after leaving, the assembly should announce 'We debated'.",
"title": "As Emperor"
},
{
"paragraph_id": 36,
"text": "In 47, he assumed the office of censor with Lucius Vitellius, which had been allowed to lapse for some time. He struck out the names of many senators and equites who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit to the senate eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even joked about how the Senate had admitted members from beyond Gallia Narbonensis (Lyons), i.e. himself. He also increased the number of patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar.",
"title": "As Emperor"
},
{
"paragraph_id": 37,
"text": "Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor.",
"title": "As Emperor"
},
{
"paragraph_id": 38,
"text": "Several coup attempts were made during Claudius's reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius's reign under questionable circumstances. Shortly after this, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus - governor of Dalmatia - and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus' troops, which led to the suicide of the main conspirators.",
"title": "As Emperor"
},
{
"paragraph_id": 39,
"text": "Many other senators tried different conspiracies and were condemned. Claudius's son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo.",
"title": "As Emperor"
},
{
"paragraph_id": 40,
"text": "In 46, Asinius Gallus, grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius's own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. Ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious.",
"title": "As Emperor"
},
{
"paragraph_id": 41,
"text": "Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with Titus Statilius Taurus Corvinus. Most of these conspiracies took place before Claudius's term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in book 11 of Tacitus' Annal. This section of Tacitus' history narrates the alleged conspiracy of Claudius's third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius's reign. Needless to say, the responses to these conspiracies could not have helped Senate–emperor relations.",
"title": "As Emperor"
},
{
"paragraph_id": 42,
"text": "Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He was, however, forced to increase their role as the powers of the princeps became more centralized and the burden of running the government became larger. Claudius did not want free-born magistrates to serve under him as if they were not peers.",
"title": "As Emperor"
},
{
"paragraph_id": 43,
"text": "The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius's stead before the conquest of Britain.",
"title": "As Emperor"
},
{
"paragraph_id": 44,
"text": "Since these were important positions, the senators were aghast at their being placed in the hands of former slaves and \"well-known eunuchs\". If freedmen had total control of money, letters and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by ancient sources. However, these same sources admit that the freedmen were loyal to Claudius.",
"title": "As Emperor"
},
{
"paragraph_id": 45,
"text": "He was similarly appreciative of them and gave them due credit for policies where he had used their advice. However, if they showed treasonous inclinations, the Emperor punished them with just force, as in the case of Polybius and Pallas's brother, Felix. There is no evidence that the character of Claudius's policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout.",
"title": "As Emperor"
},
{
"paragraph_id": 46,
"text": "Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder notes that several of them were richer than Crassus, the richest man of the Republican era.",
"title": "As Emperor"
},
{
"paragraph_id": 47,
"text": "Claudius, as the author of a treatise on Augustus's religious reforms, felt himself in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods. He restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He re-instituted old observances and archaic language.",
"title": "As Emperor"
},
{
"paragraph_id": 48,
"text": "Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian Mysteries, which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities.",
"title": "As Emperor"
},
{
"paragraph_id": 49,
"text": "According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters. Claudius also presided over many new and original events. Soon after coming into power, Claudius instituted games to be held in honor of his father on the latter's birthday. Annual games were also held in honour of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor.",
"title": "As Emperor"
},
{
"paragraph_id": 50,
"text": "Claudius organised a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus's excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning. Claudius also presented staged naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows.",
"title": "As Emperor"
},
{
"paragraph_id": 51,
"text": "At Ostia, in front of a crowd of spectators, Claudius fought an orca which was trapped in the harbour. The event was witnessed by Pliny the Elder:",
"title": "As Emperor"
},
{
"paragraph_id": 52,
"text": "A killer whale was actually seen in the harbour of Ostia in battle with the Emperor Claudius; it had come at the time when he was engaged in completing the structure of the harbour, being tempted by the wreck of a cargo of hides imported from Gaul, and in glutting itself for a number of days had furrowed a hollow in the shallow bottom and had been banked up with sand by the waves so high that it was quite unable to turn round, and while it was pursuing its food which was driven forward to the shore by the waves its back projected far above the water like a capsized boat. Caesar gave orders for a barrier of nets to be stretched between the mouths of the harbour and setting out in person with the praetorian cohorts afforded a show to the Roman public, the soldiery hurling lances from the vessels against the creatures when they leapt up alongside, and we saw one of the boats sunk from being filled with water owing to a beast's snorting.",
"title": "As Emperor"
},
{
"paragraph_id": 53,
"text": "Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track. Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators. He rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication, which he observed from a special platform in the orchestra box.",
"title": "As Emperor"
},
{
"paragraph_id": 54,
"text": "Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 55,
"text": "Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 56,
"text": "Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, daughter of Sejanus.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 57,
"text": "Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. Later, this action made him the target of criticism by his enemies.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 58,
"text": "Soon after, (possibly in 28) Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability. One version suggests that it may have been due to emotional and mental abuse by Paetina.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 59,
"text": "Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed (Claudius's grandmother, Octavia the Younger, was Valeria's great-grandmother on both her mother and father's side) and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius's accession.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 60,
"text": "This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius—Tacitus states she went so far as to compete with a prostitute to see who could have more sexual partners in a night – and manipulated his policies to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 61,
"text": "Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage. Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining her rank and protecting her children. The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point, after which she was executed.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 62,
"text": "Claudius married once more. Ancient sources tell that his freedmen put forward three candidates, Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles. She gradually seized power from Claudius and successfully conspired to eliminate his son's rivals, opening the way for her son to become emperor.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 63,
"text": "The truth is probably more political. The attempted coup d'état by Silius and Messalina probably made Claudius realize the weakness of his position as a member of the Claudian (but not the Julian) family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy. Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Nero) was one of the last males of the Imperial family. Coup attempts might rally around the pair and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage, an attempt to end the feud between the Julian and Claudian branches.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 64,
"text": "This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions that Tiberius had punished. In any case, Claudius accepted Agrippina and later adopted the mature Ahenobarbus as his son, renaming him as 'Nero Claudius Caesar'.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 65,
"text": "Nero was married to Claudius's daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs, and Tiberius had named Caligula as his joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome when a suitable natural adult heir was unavailable, as was the case during Britannicus's minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 66,
"text": "Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family insure his right to be Emperor (although that did not stop others from making him the object of a coup attempt against Nero a few years later), besides being the half-brother of Valeria Messalina, which told against him. Nero was more popular with the general public as both the grandson of Germanicus and the direct descendant of Augustus.",
"title": "Marriages and personal life"
},
{
"paragraph_id": 67,
"text": "The historian Suetonius describes the physical manifestations of Claudius's condition in relatively good detail. His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his Apocolocyntosis that Claudius's voice belonged to no land animal, and that his hands were weak as well.",
"title": "Affliction and personality"
},
{
"paragraph_id": 68,
"text": "However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of dignitas. When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne. Claudius himself claimed that he had exaggerated his ailments to save his life.",
"title": "Affliction and personality"
},
{
"paragraph_id": 69,
"text": "Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves's Claudius novels, first published in the 1930s. The New York Times wrote in 1934 that Claudius suffered from infantile paralysis (which led to his limp state) and measles (which made him deaf) at seven months of age, among several other ailments. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause. Tourette syndrome has also been considered a possibility.",
"title": "Affliction and personality"
},
{
"paragraph_id": 70,
"text": "As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians. They also paint him as bloodthirsty and cruel, over-fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper. According to the ancient historians he was also excessively trusting, and easily manipulated by his wives and freedmen, but at the same time they portray him as paranoid and apathetic, dull and easily confused.",
"title": "Affliction and personality"
},
{
"paragraph_id": 71,
"text": "Claudius's extant works present a different view, painting a picture of an intelligent, scholarly, well-read, and conscientious administrator with an eye to detail and justice. Thus, Claudius becomes an enigma. Since the discovery of his \"Letter to the Alexandrians\", much work has been done to rehabilitate Claudius and determine the truth.",
"title": "Affliction and personality"
},
{
"paragraph_id": 72,
"text": "Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius, which covers the peak of Claudius's literary career, it became impolitic to speak of republican Rome. The trend among the young historians was either to write about the new empire or about obscure antiquarian topics. Claudius was the rare scholar who covered both.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 73,
"text": "Besides his history of Augustus' reign that caused him so much grief, his major works included Tyrrhenika, a twenty-book Etruscan history, and Carchedonica, an eight-volume history of Carthage, as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the topic of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 74,
"text": "He proposed a reform of the Latin alphabet by the addition of three new letters; he officially instituted the change during his censorship but they did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste. Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 75,
"text": "None of the works survived, but other sources' reference to him provide material for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius's autobiography once and must have used it as a source numerous times. Tacitus uses Claudius's arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's Natural History.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 76,
"text": "The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. The speech is meticulous in details, a common mark of all his extant works, and he goes into long digressions on related matters. This indicates a deep knowledge of a variety of historical subjects that he shared. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 77,
"text": "His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect; also, his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the power of the censorship office to introduce the letter \"R\" and so used his own term to introduce his new letters.",
"title": "Scholarly works and their impact"
},
{
"paragraph_id": 78,
"text": "Ancient historians agree that Claudius was murdered by poison – possibly contained in mushrooms or on a feather – and died in the early hours of 13 October 54.",
"title": "Death"
},
{
"paragraph_id": 79,
"text": "Nearly all implicate his final and powerful wife, Agrippina, as the instigator. Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives, and began to comment on Britannicus' approaching manhood with an eye towards restoring his status within the imperial family. Agrippina had motive in ensuring the succession of Nero before Britannicus could gain power.",
"title": "Death"
},
{
"paragraph_id": 80,
"text": "Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance. Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again. Among his contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors of his poisoning.",
"title": "Death"
},
{
"paragraph_id": 81,
"text": "Some historians have cast doubt on whether Claudius was murdered or merely died from illness or old age. Evidence against his murder include his serious illnesses in his last years, his unhealthy lifestyle and the fact that his taster Halotus continued to serve in the same position under Nero. Claudius had been so ill the year before that Nero vowed games for his recovery and the year of 54 seems to have been such an unhealthy year that one sitting member of each magistracy died within the span of a few months. He may even have died by eating a naturally poisonous mushroom, possibly Amanita muscaria. On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime. Claudius's ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier.",
"title": "Death"
},
{
"paragraph_id": 82,
"text": "Already, while alive, he received the widespread private worship of a living princeps and was worshipped in Britannia in his own temple in Camulodunum.",
"title": "Legacy"
},
{
"paragraph_id": 83,
"text": "Claudius was deified by Nero and the Senate almost immediately.",
"title": "Legacy"
},
{
"paragraph_id": 84,
"text": "Agrippina had sent Narcissus away shortly before Claudius's death, and now had the freedman murdered.",
"title": "Legacy"
},
{
"paragraph_id": 85,
"text": "The last act of this secretary of letters was to burn all of Claudius's correspondence – most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius's private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts, Nero often criticized the deceased Emperor, and many Claudian laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them.",
"title": "Legacy"
},
{
"paragraph_id": 86,
"text": "Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all. Claudius's temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House.",
"title": "Legacy"
},
{
"paragraph_id": 87,
"text": "The Flavians, who had risen to prominence under Claudius, took a different tack. They needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius in contrast with Nero, to show that they were associated with a good regime. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill.",
"title": "Legacy"
},
{
"paragraph_id": 88,
"text": "However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. His state-cult in Rome probably continued until the abolition of all cults of dead Emperors by Maximinus Thrax in 237–238. The Feriale Duranum, probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August. And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century.",
"title": "Legacy"
},
{
"paragraph_id": 89,
"text": "The ancient historians Tacitus, Suetonius (in The Twelve Caesars), and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or equites. They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus's letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and crediting his good works to his retinue.",
"title": "Legacy"
},
{
"paragraph_id": 90,
"text": "Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing. He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and public life. During his Censorship of 47–48 Tacitus allows the reader a glimpse of a Claudius who is more statesmanlike (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius's writings and to have omitted Claudius's character from his works. Even his version of Claudius's Lyons tablet speech is edited to be devoid of the emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus, the conception of Claudius as a weak fool, controlled by those he supposedly ruled, was preserved for the ages.",
"title": "Legacy"
},
{
"paragraph_id": 91,
"text": "As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius.",
"title": "Legacy"
},
{
"paragraph_id": 92,
"text": "In literature, Claudius and his contemporaries appear in the historical novel The Roman by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves's Claudius story, in his two novels, Empire of the Atom and The Wizard of Linn.",
"title": "In modern media"
},
{
"paragraph_id": 93,
"text": "The historical novel Chariot of the Soul by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius.",
"title": "In modern media"
}
]
| Tiberius Claudius Caesar Augustus Germanicus was Roman emperor, ruling from AD 41 to 54. A member of the Julio-Claudian dynasty, Claudius was born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate. He was the first Roman emperor to be born outside Italy. As he had a limp and slight deafness due to sickness at a young age, he was ostracized by his family and was excluded from public office until his consulship. Claudius's infirmity probably saved him from the fate of many other nobles during the purges throughout the reigns of Tiberius and Caligula, as potential enemies did not see him as a serious threat. His survival led to his being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last adult male of his family. Despite his lack of experience, Claudius was an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped restore the empire's finances after the excesses of Caligula's reign. He was also an ambitious builder, constructing new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain. Having a personal interest in law, he presided at public trials, and issued edicts daily. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position, which resulted in the deaths of many senators. Those events damaged his reputation among the ancient writers, though more recent historians have revised that opinion. Many authors contend that he was murdered by his own wife, Agrippina the Younger. After his death at the age of 63, his grand-nephew and legally adopted step-son, Nero, succeeded him as emperor. | 2001-10-02T10:37:55Z | 2023-12-13T19:02:25Z | [
"Template:Blockquote",
"Template:Clarify",
"Template:Webarchive",
"Template:S-start",
"Template:Efn",
"Template:ISBN",
"Template:S-hou",
"Template:S-end",
"Template:Authority control",
"Template:IPA-la",
"Template:Citation needed",
"Template:Notelist",
"Template:Harvnb",
"Template:Stack",
"Template:IPAc-en",
"Template:S-ttl",
"Template:S-aft",
"Template:Roman emperors",
"Template:Distinguish",
"Template:Page needed",
"Template:S-roy",
"Template:S-bef",
"Template:Other people",
"Template:Multiple image",
"Template:Cite book",
"Template:Citation",
"Template:Commons",
"Template:EB1911 poster",
"Template:Use dmy dates",
"Template:Convert",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite news",
"Template:Refbegin",
"Template:Refend",
"Template:Short description",
"Template:See also",
"Template:Sfn",
"Template:Snd",
"Template:S-off"
]
| https://en.wikipedia.org/wiki/Claudius |
6,141 | Cardinal | Cardinal or The Cardinal may refer to: | [
{
"paragraph_id": 0,
"text": "Cardinal or The Cardinal may refer to:",
"title": ""
}
]
| Cardinal or The Cardinal may refer to: | 2001-10-13T19:07:10Z | 2023-10-11T05:50:04Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Cardinal |
6,172 | Cantor set | In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and introduced by German mathematician Georg Cantor in 1883.
Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned the ternary construction only in passing, as an example of a more general idea, that of a perfect set that is nowhere dense.
More generally, in topology, a Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). By a theorem of L. E. J. Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable, and zero-dimensional.
The Cantor ternary set $\mathcal{C}$ is created by iteratively deleting the open middle third from a set of line segments. One starts by deleting the open middle third $\left(\frac{1}{3},\frac{2}{3}\right)$ from the interval $[0,1]$, leaving two line segments: $\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3},1\right]$. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: $\left[0,\frac{1}{9}\right]\cup\left[\frac{2}{9},\frac{1}{3}\right]\cup\left[\frac{2}{3},\frac{7}{9}\right]\cup\left[\frac{8}{9},1\right]$. The Cantor ternary set contains all points in the interval $[0,1]$ that are not deleted at any step in this infinite process. The same facts can be described recursively by setting

$$C_0 := [0,1]$$

and

$$C_n := \frac{C_{n-1}}{3} \cup \left(\frac{2}{3}+\frac{C_{n-1}}{3}\right)$$

for $n \geq 1$, so that

$$\mathcal{C} := \bigcap_{n=0}^{\infty} C_n.$$
The first six steps of this process are illustrated below.
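As a concrete illustration of this recursion (our own sketch, not part of the original article; the helper name `cantor_step` is an invention), the following Python snippet computes $C_n$ as an explicit list of closed intervals, using exact rational arithmetic:

```python
from fractions import Fraction

def cantor_step(intervals):
    """One recursion step: replace each closed interval [a, b] by its
    left and right thirds, deleting the open middle third."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))   # left third
        out.append((b - third, b))   # right third
    return out

# C_0 = [0, 1]; each step doubles the number of intervals.
C = [(Fraction(0), Fraction(1))]
for _ in range(3):
    C = cantor_step(C)

print(C)  # the 8 closed intervals of C_3, each of length 1/27
```

Running one step reproduces $\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3},1\right]$, and two steps reproduce the four intervals listed above.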
Using the idea of self-similar transformations, $T_L(x)=x/3$, $T_R(x)=(2+x)/3$ and $C_n = T_L(C_{n-1})\cup T_R(C_{n-1})$, the explicit closed formulas for the Cantor set are

$$\mathcal{C} = [0,1] \setminus \bigcup_{n=0}^{\infty} \bigcup_{k=0}^{3^n-1} \left(\frac{3k+1}{3^{n+1}},\frac{3k+2}{3^{n+1}}\right),$$

where every middle third is removed as the open interval $\left(\frac{3k+1}{3^{n+1}},\frac{3k+2}{3^{n+1}}\right)$ from the closed interval $\left[\frac{3k+0}{3^{n+1}},\frac{3k+3}{3^{n+1}}\right]=\left[\frac{k+0}{3^{n}},\frac{k+1}{3^{n}}\right]$ surrounding it, or

$$\mathcal{C} = \bigcap_{n=1}^{\infty} \bigcap_{k=0}^{3^{n-1}-1} \left(\left[0,\frac{3k+1}{3^{n}}\right]\cup\left[\frac{3k+2}{3^{n}},1\right]\right),$$

where the middle third $\left(\frac{3k+1}{3^{n}},\frac{3k+2}{3^{n}}\right)$ of the foregoing closed interval $\left[\frac{k+0}{3^{n-1}},\frac{k+1}{3^{n-1}}\right]=\left[\frac{3k+0}{3^{n}},\frac{3k+3}{3^{n}}\right]$ is removed by intersecting with $\left[\frac{3k+0}{3^{n}},\frac{3k+1}{3^{n}}\right]\cup\left[\frac{3k+2}{3^{n}},\frac{3k+3}{3^{n}}\right]$.
This process of removing middle thirds is a simple example of a finite subdivision rule. The complement of the Cantor ternary set is an example of a fractal string.
In arithmetical terms, the Cantor set consists of all real numbers of the unit interval [ 0 , 1 ] {\displaystyle [0,1]} that do not require the digit 1 in order to be expressed as a ternary (base 3) fraction. As the above diagram illustrates, each point in the Cantor set is uniquely located by a path through an infinitely deep binary tree, where the path turns left or right at each level according to which side of a deleted segment the point lies on. Representing each left turn with 0 and each right turn with 2 yields the ternary fraction for a point.
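To make the tree-path picture concrete, here is a small Python sketch (ours; the function name `path_to_point` is hypothetical) that maps a finite string of left/right turns to the corresponding point, reading 'L' as ternary digit 0 and 'R' as ternary digit 2:

```python
from fractions import Fraction

def path_to_point(turns):
    """Map a finite path through the binary tree to a point of the
    Cantor set: 'L' contributes ternary digit 0, 'R' digit 2."""
    x = Fraction(0)
    for k, turn in enumerate(turns, start=1):
        digit = 0 if turn == "L" else 2
        x += Fraction(digit, 3**k)
    return x

# Alternating turns converge to 1/4 = 0.020202..._3:
print(path_to_point("LRLRLRLR"))  # 1640/6561, close to 1/4
```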
In The Fractal Geometry of Nature, mathematician Benoit Mandelbrot provides a whimsical thought experiment to assist non-mathematical readers in imagining the construction of C {\displaystyle {\mathcal {C}}} . His narrative begins with imagining a bar, perhaps of lightweight metal, in which the bar's matter "curdles" by iteratively shifting towards its extremities. As the bar's segments become smaller, they become thin, dense slugs that eventually grow too small and faint to see.
CURDLING: The construction of the Cantor bar results from the process I call curdling. It begins with a round bar. It is best to think of it as having a very low density. Then matter "curdles" out of this bar's middle third into the end thirds, so that the positions of the latter remain unchanged. Next matter curdles out of the middle third of each end third into its end thirds, and so on ad infinitum until one is left with an infinitely large number of infinitely thin slugs of infinitely high density. These slugs are spaced along the line in the very specific fashion induced by the generating process. In this illustration, curdling (which eventually requires hammering!) stops when both the printer's press and our eye cease to follow; the last line is indistinguishable from the last but one: each of its ultimate parts is seen as a gray slug rather than two parallel black slugs.
Since the Cantor set is defined as the set of points not excluded, the proportion (i.e., measure) of the unit interval remaining can be found by subtracting the total length removed. This total is the geometric progression

$$\sum_{n=0}^{\infty}\frac{2^{n}}{3^{n+1}} = \frac{1}{3}+\frac{2}{9}+\frac{4}{27}+\frac{8}{81}+\cdots = \frac{1}{3}\cdot\frac{1}{1-\frac{2}{3}} = 1.$$

So the proportion left is 1 − 1 = 0.
This calculation suggests that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left—after all, the sum of the lengths of the removed intervals is equal to the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the "middle third" of each interval involved removing open sets (sets that do not include their endpoints). So removing the line segment (1/3, 2/3) from the original interval [0, 1] leaves behind the points 1/3 and 2/3. Subsequent steps do not remove these (or other) endpoints, since the intervals removed are always internal to the intervals remaining. So the Cantor set is not empty, and in fact contains an uncountably infinite number of points (as follows from the above description in terms of paths in an infinite binary tree).
It may appear that only the endpoints of the construction segments are left, but that is not the case either. The number 1/4, for example, has the unique ternary form $0.020202\ldots_3 = 0.{\overline{02}}_3$. It is in the bottom third, and the top third of that third, and the bottom third of that top third, and so on. Since it is never in one of the middle segments, it is never removed. Yet it is also not an endpoint of any middle segment, because it is not a multiple of any power of 1/3. All endpoints of segments are terminating ternary fractions and are contained in the set

$$\left\{x\in[0,1]\;\middle|\;\exists\,n\in\mathbb{N}_0:\ x\,3^{n}\in\mathbb{Z}\right\},$$
which is a countably infinite set. As to cardinality, almost all elements of the Cantor set are not endpoints of intervals, nor rational points like 1/4. The whole Cantor set is in fact not countable.
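The claim about 1/4 can be checked mechanically. A minimal Python sketch (ours), extracting base-3 digits with exact arithmetic:

```python
from fractions import Fraction

def ternary_digits(x, n):
    """Return the first n base-3 digits of x in [0, 1), exactly."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)     # the integer part is the next digit
        digits.append(d)
        x -= d
    return digits

print(ternary_digits(Fraction(1, 4), 10))
# [0, 2, 0, 2, 0, 2, 0, 2, 0, 2] -- no digit 1 ever appears
```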
It can be shown that there are as many points left behind in this process as there were to begin with, and that therefore, the Cantor set is uncountable. To see this, we show that there is a function $f$ from the Cantor set $\mathcal{C}$ to the closed interval [0,1] that is surjective (i.e. $f$ maps from $\mathcal{C}$ onto [0,1]) so that the cardinality of $\mathcal{C}$ is no less than that of [0,1]. Since $\mathcal{C}$ is a subset of [0,1], its cardinality is also no greater, so the two cardinalities must in fact be equal, by the Cantor–Bernstein–Schröder theorem.
To construct this function, consider the points of the [0, 1] interval in terms of base 3 (or ternary) notation. Recall that the proper ternary fractions, more precisely the elements of $\bigl(\mathbb{Z}\setminus\{0\}\bigr)\cdot 3^{-\mathbb{N}_0}$, admit more than one representation in this notation. For example 1/3 can be written as $0.1_3 = 0.10_3$ but also as $0.0222\ldots_3 = 0.0{\overline{2}}_3$, and 2/3 can be written as $0.2_3 = 0.20_3$ but also as $0.1222\ldots_3 = 0.1{\overline{2}}_3$. When we remove the middle third, this contains the numbers with ternary numerals of the form $0.1xxxxx\ldots_3$ where $xxxxx\ldots_3$ is strictly between $00000\ldots_3$ and $22222\ldots_3$. So the numbers remaining after the first step consist of the numbers of the form $0.0xxxxx\ldots_3$ (including $0.0222\ldots_3 = 1/3$) together with the numbers of the form $0.2xxxxx\ldots_3$ (including $0.2222\ldots_3 = 1$).
This can be summarized by saying that those numbers with a ternary representation such that the first digit after the radix point is not 1 are the ones remaining after the first step.
The second step removes numbers of the form $0.01xxxx\ldots_3$ and $0.21xxxx\ldots_3$, and (with appropriate care for the endpoints) it can be concluded that the remaining numbers are those with a ternary numeral where neither of the first two digits is 1.
Continuing in this way, for a number not to be excluded at step $n$, it must have a ternary representation whose $n$th digit is not 1. For a number to be in the Cantor set, it must not be excluded at any step; that is, it must admit a numeral representation consisting entirely of 0s and 2s.
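This digit criterion yields an exact membership test for rational numbers. The sketch below (our own; it uses the equivalent self-map formulation rather than digits, so that endpoints with two ternary representations are handled correctly) decides membership by repeatedly rescaling the left and right thirds back onto $[0,1]$; for a rational input the orbit must eventually repeat, so the loop terminates:

```python
from fractions import Fraction

def in_cantor(x):
    """Exact membership test for a rational x in [0, 1]."""
    seen = set()
    while x not in seen:
        seen.add(x)
        if x <= Fraction(1, 3):
            x = 3 * x          # rescale the left third onto [0, 1]
        elif x >= Fraction(2, 3):
            x = 3 * x - 2      # rescale the right third onto [0, 1]
        else:
            return False       # fell into a deleted open middle third
    return True

print(in_cantor(Fraction(1, 4)))  # True  (0.020202..._3)
print(in_cantor(Fraction(1, 3)))  # True  (an endpoint)
print(in_cantor(Fraction(1, 2)))  # False (inside the first gap)
```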
It is worth emphasizing that numbers like 1, $1/3 = 0.1_3$ and $7/9 = 0.21_3$ are in the Cantor set, as they have ternary numerals consisting entirely of 0s and 2s: $1 = 0.222\ldots_3 = 0.{\overline{2}}_3$, $1/3 = 0.0222\ldots_3 = 0.0{\overline{2}}_3$ and $7/9 = 0.20222\ldots_3 = 0.20{\overline{2}}_3$. All the latter numbers are "endpoints", and these examples are right limit points of $\mathcal{C}$. The same is true for the left limit points of $\mathcal{C}$, e.g. $2/3 = 0.1222\ldots_3 = 0.1{\overline{2}}_3 = 0.2_3$ and $8/9 = 0.21222\ldots_3 = 0.21{\overline{2}}_3 = 0.22_3$. All these endpoints are proper ternary fractions (elements of $\mathbb{Z}\cdot 3^{-\mathbb{N}_0}$) of the form $p/q$, where the denominator $q$ is a power of 3 when the fraction is in its irreducible form. The ternary representation of these fractions terminates (i.e., is finite), or (recall from above that each proper ternary fraction has two representations) is infinite and "ends" in either infinitely many recurring 0s or infinitely many recurring 2s. Such a fraction is a left limit point of $\mathcal{C}$ if its ternary representation contains no 1s and "ends" in infinitely many recurring 0s. Similarly, a proper ternary fraction is a right limit point of $\mathcal{C}$ if, again, its ternary expansion contains no 1s and "ends" in infinitely many recurring 2s.
This set of endpoints is dense in $\mathcal{C}$ (but not dense in [0, 1]) and makes up a countably infinite set. The numbers in $\mathcal{C}$ which are not endpoints also have only 0s and 2s in their ternary representation, but they cannot end in an infinite repetition of the digit 0, nor of the digit 2, because then the number would be an endpoint.
The function from $\mathcal{C}$ to [0,1] is defined by taking the ternary numerals that do consist entirely of 0s and 2s, replacing all the 2s by 1s, and interpreting the sequence as a binary representation of a real number. In a formula,

$$f\left(\sum_{k=1}^{\infty}a_{k}\,3^{-k}\right)=\sum_{k=1}^{\infty}\frac{a_{k}}{2}\,2^{-k},\qquad\text{where } a_{k}\in\{0,2\} \text{ for all } k.$$
For any number $y$ in [0,1], its binary representation can be translated into a ternary representation of a number $x$ in $\mathcal{C}$ by replacing all the 1s by 2s. With this, $f(x) = y$, so that $y$ is in the range of $f$. For instance if $y = 3/5 = 0.100110011001\ldots_2 = 0.{\overline{1001}}_2$, we write $x = 0.{\overline{2002}}_3 = 0.200220022002\ldots_3 = 7/10$. Consequently, $f$ is surjective. However, $f$ is not injective; the values for which $f(x)$ coincides are those at opposing ends of one of the middle thirds removed. For instance, take

$$x = \tfrac{1}{3} = 0.0{\overline{2}}_3 \quad\text{and}\quad y = \tfrac{2}{3} = 0.2{\overline{0}}_3,$$

so

$$f\bigl(0.0{\overline{2}}_3\bigr) = 0.0{\overline{1}}_2 = 0.1_2 = f\bigl(0.2{\overline{0}}_3\bigr) = \tfrac{1}{2}.$$
Thus there are as many points in the Cantor set as there are in the interval [0, 1] (which has the uncountable cardinality $\mathfrak{c} = 2^{\aleph_0}$). However, the set of endpoints of the removed intervals is countable, so there must be uncountably many numbers in the Cantor set which are not interval endpoints. As noted above, one example of such a number is 1/4, which can be written as $0.020202\ldots_3 = 0.{\overline{02}}_3$ in ternary notation. In fact, given any $a \in [-1,1]$, there exist $x, y \in \mathcal{C}$ such that $a = y - x$. This was first demonstrated by Steinhaus in 1917, who proved, via a geometric argument, the equivalent assertion that $\{(x,y) \in \mathbb{R}^2 \mid y = x + a\} \cap (\mathcal{C} \times \mathcal{C}) \neq \emptyset$ for every $a \in [-1,1]$. Since this construction provides an injection from $[-1,1]$ to $\mathcal{C} \times \mathcal{C}$, we have $|\mathcal{C} \times \mathcal{C}| \geq |[-1,1]| = \mathfrak{c}$ as an immediate corollary. Assuming that $|A \times A| = |A|$ for any infinite set $A$ (a statement shown to be equivalent to the axiom of choice by Tarski), this provides another demonstration that $|\mathcal{C}| = \mathfrak{c}$.
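The map $f$ itself is easy to realize for points of the Cantor set whose greedy ternary expansion already avoids the digit 1 (a hedged sketch; endpoints such as $1/3 = 0.1_3$ would first need to be rewritten in their all-0s-and-2s form):

```python
from fractions import Fraction

def cantor_f(x, n=64):
    """Approximate f(x): read n ternary digits of x (all 0s and 2s),
    halve each digit, and reinterpret the result in base 2."""
    y = Fraction(0)
    for k in range(1, n + 1):
        x *= 3
        d = int(x)
        x -= d
        assert d in (0, 2), "greedy expansion hit a digit 1"
        y += Fraction(d // 2, 2**k)
    return y

print(float(cantor_f(Fraction(7, 10))))  # ~0.6, i.e. f(7/10) = 3/5
print(float(cantor_f(Fraction(1, 4))))   # ~0.3333, i.e. f(1/4) = 1/3
```

This reproduces the worked example above: the ternary digits of 7/10 are 2, 0, 0, 2 repeating, which become the binary digits 1, 0, 0, 1 of 3/5.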
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
It has been conjectured that all algebraic irrational numbers are normal. Since members of the Cantor set are not normal, this would imply that all members of the Cantor set are either rational or transcendental.
The Cantor set is the prototype of a fractal. It is self-similar, because it is equal to two copies of itself, if each copy is shrunk by a factor of 3 and translated. More precisely, the Cantor set is equal to the union of its images under the two self-similarity transformations $T_L(x) = x/3$ and $T_R(x) = (2+x)/3$, which leave the Cantor set invariant up to homeomorphism: $T_L(\mathcal{C}) \cong T_R(\mathcal{C}) \cong \mathcal{C} = T_L(\mathcal{C}) \cup T_R(\mathcal{C})$.
Repeated iteration of $T_L$ and $T_R$ can be visualized as an infinite binary tree. That is, at each node of the tree, one may consider the subtree to the left or to the right. Taking the set $\{T_L, T_R\}$ together with function composition forms a monoid, the dyadic monoid.
The automorphisms of the binary tree are its hyperbolic rotations, and are given by the modular group. Thus, the Cantor set is a homogeneous space in the sense that for any two points $x$ and $y$ in the Cantor set $\mathcal{C}$, there exists a homeomorphism $h\colon \mathcal{C} \to \mathcal{C}$ with $h(x) = y$. An explicit construction of $h$ can be described more easily if we view the Cantor set as a product space of countably many copies of the discrete space $\{0,1\}$. Then the map $h\colon \{0,1\}^{\mathbb{N}} \to \{0,1\}^{\mathbb{N}}$ defined coordinatewise by $h_n(u) := u_n + x_n + y_n \bmod 2$ is an involutive homeomorphism exchanging $x$ and $y$.
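On finite bit tuples (standing in for infinite sequences), the coordinatewise map above is just an XOR with the fixed pattern x XOR y, and its defining properties can be checked directly. A small hypothetical sketch:

# Sketch: the involution exchanging two points of {0,1}^N, acting on
# finite bit tuples as a stand-in for infinite sequences.
def h(u, x, y):
    """Coordinatewise map u_n + x_n + y_n mod 2 (i.e. XOR with x XOR y)."""
    return tuple((un + xn + yn) % 2 for un, xn, yn in zip(u, x, y))

x = (0, 1, 1, 0, 1)
y = (1, 1, 0, 0, 0)
print(h(x, x, y) == y)           # True: h(x) = y
print(h(y, x, y) == x)           # True: h(y) = x
print(h(h(x, x, y), x, y) == x)  # True: h is an involution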
It has been found that some form of conservation law always lies behind scaling and self-similarity. In the case of the Cantor set, the $d_f$-th moment (where $d_f = \ln 2/\ln 3$ is the fractal dimension) of all the surviving intervals at any stage of the construction is a constant; for the Cantor set this constant equals one. There are $N = 2^n$ intervals of size $1/3^n$ present in the system at the $n$-th step of its construction. Labelling the surviving intervals as $x_1, x_2, \ldots, x_{2^n}$, the $d_f$-th moment is $x_1^{d_f} + x_2^{d_f} + \cdots + x_{2^n}^{d_f} = 1$, since $x_1 = x_2 = \cdots = x_{2^n} = 1/3^n$.
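This moment conservation can be checked numerically; a brief sketch (floating-point, so the printed values are 1.0 only up to rounding):

# Sketch: check that the d_f-th moment of the surviving interval lengths
# equals 1 at every construction step, where d_f = ln 2 / ln 3.
import math

d_f = math.log(2) / math.log(3)
for n in range(1, 8):
    lengths = [3.0 ** -n] * 2 ** n           # 2**n intervals of size 3**-n
    moment = sum(x ** d_f for x in lengths)  # sum of x_i ** d_f
    print(n, round(moment, 12))              # prints 1.0 at every step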
The Hausdorff dimension of the Cantor set is equal to ln(2)/ln(3) ≈ 0.631.
Although "the" Cantor set typically refers to the original, middle-thirds Cantor set described above, topologists often talk about "a" Cantor set, which means any topological space that is homeomorphic (topologically equivalent) to it.
As the above summation argument shows, the Cantor set is uncountable but has Lebesgue measure 0. Since the Cantor set is the complement of a union of open sets, it itself is a closed subset of the reals, and therefore a complete metric space. Since it is also bounded, the Heine–Borel theorem says that it must be compact.
For any point in the Cantor set and any arbitrarily small neighborhood of the point, there is some other number with a ternary numeral of only 0s and 2s, as well as numbers whose ternary numerals contain 1s. Hence, every point in the Cantor set is an accumulation point (also called a cluster point or limit point) of the Cantor set, but none is an interior point. A closed set in which every point is an accumulation point is also called a perfect set in topology, while a closed subset of the interval with no interior points is nowhere dense in the interval.
Every point of the Cantor set is also an accumulation point of the complement of the Cantor set.
For any two points in the Cantor set, there will be some ternary digit where they differ — one will have 0 and the other 2. By splitting the Cantor set into "halves" depending on the value of this digit, one obtains a partition of the Cantor set into two closed sets that separate the original two points. In the relative topology on the Cantor set, the points have been separated by a clopen set. Consequently, the Cantor set is totally disconnected. As a compact totally disconnected Hausdorff space, the Cantor set is an example of a Stone space.
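As a concrete illustration, the separating digit can be computed from (truncations of) the two ternary expansions. A hypothetical sketch:

# Sketch: separate two Cantor-set points by a clopen set determined by the
# first ternary digit (0 or 2) at which their expansions differ.
def separate(x_digits, y_digits):
    """Return (position k, digit of x) such that the set of Cantor points
    whose k-th digit equals x's digit contains x but not y."""
    for k, (dx, dy) in enumerate(zip(x_digits, y_digits)):
        if dx != dy:
            return k, dx
    raise ValueError("the two digit sequences agree on the given prefix")

x = (0, 2, 2, 0)   # 0.0220... (base 3)
y = (0, 2, 0, 2)   # 0.0202... (base 3)
print(separate(x, y))   # (2, 2): the expansions first differ at digit index 2

The returned data describe the set of all points whose k-th ternary digit matches that of x; in the relative topology this set is both closed and open, and it contains x but not y.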
As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space $\{0,1\}$, where each copy carries the discrete topology. This is the space of all sequences in two digits, $2^{\mathbb{N}} = \{(x_n) \mid x_n \in \{0,1\} \text{ for } n \in \mathbb{N}\}$,
which can also be identified with the set of 2-adic integers. The basic open sets of the product topology are the cylinder sets; the homeomorphism maps these to the subspace topology that the Cantor set inherits from the natural topology on the real line. This characterization of the Cantor space as a product of compact spaces gives a second proof that Cantor space is compact, via Tychonoff's theorem.
From the above characterization, the Cantor set is homeomorphic to the p-adic integers, and, if one point is removed from it, to the p-adic numbers.
The Cantor set is a subset of the reals, which are a metric space with respect to the ordinary distance metric; therefore the Cantor set itself is a metric space, by using that same metric. Alternatively, one can use the 2-adic metric on $2^{\mathbb{N}}$: given two sequences $(x_n), (y_n) \in 2^{\mathbb{N}}$, the distance between them is $d((x_n),(y_n)) = 2^{-k}$, where $k$ is the smallest index such that $x_k \neq y_k$; if there is no such index, then the two sequences are the same, and one defines the distance to be zero. These two metrics generate the same topology on the Cantor set.
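A direct transcription of this metric, with finite tuples standing in for sequences that eventually agree (a sketch, not a full implementation for infinite sequences):

# Sketch: the 2**-k metric on binary sequences.
def d(x, y):
    """Distance 2**-k, with k the first index where x and y differ; 0 if equal."""
    for k, (xk, yk) in enumerate(zip(x, y)):
        if xk != yk:
            return 2.0 ** -k
    return 0.0

print(d((0, 1, 1, 0), (0, 1, 0, 0)))  # 0.25: first difference at index 2
print(d((1, 0, 1), (1, 0, 1)))        # 0.0: the sequences are equal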
We have seen above that the Cantor set is a totally disconnected perfect compact metric space. Indeed, in a sense it is the only one: every nonempty totally disconnected perfect compact metric space is homeomorphic to the Cantor set. See Cantor space for more on spaces homeomorphic to the Cantor set.
The Cantor set is sometimes regarded as "universal" in the category of compact metric spaces, since any compact metric space is a continuous image of the Cantor set; however, this construction is not unique and so the Cantor set is not universal in the precise categorical sense. The "universal" property has important applications in functional analysis, where it is sometimes known as the representation theorem for compact metric spaces.
For any integer q ≥ 2, the topology on the group G = $\mathbb{Z}_q^\omega$ (the countable direct sum) is discrete. Although the Pontrjagin dual Γ is also $\mathbb{Z}_q^\omega$, the topology of Γ is compact. One can see that Γ is totally disconnected and perfect; thus it is homeomorphic to the Cantor set. It is easiest to write out the homeomorphism explicitly in the case q = 2. (See Rudin 1962, p. 40.)
The geometric mean of the Cantor set is approximately 0.274974.
The Cantor set can be seen as the compact group of binary sequences, and as such, it is endowed with a natural Haar measure. When normalized so that the measure of the set is 1, it is a model of an infinite sequence of coin tosses. Furthermore, one can show that the usual Lebesgue measure on the interval is an image of the Haar measure on the Cantor set, while the natural injection into the ternary set is a canonical example of a singular measure. It can also be shown that any probability measure arises as an image of the Haar measure, making the Cantor set a universal probability space in some ways.
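The coin-toss model also gives a quick Monte Carlo estimate of the geometric mean quoted above. A sketch (the sample size, truncation depth, and seed are arbitrary choices; the estimate should land near 0.274974):

# Sketch: sample the Haar (fair coin-toss) measure on the Cantor set and
# estimate its geometric mean.
import math, random

random.seed(0)  # arbitrary seed, for reproducibility

def random_cantor_point(depth=60):
    """Draw each ternary digit as 0 or 2 with a fair coin: a truncated
    sample from the coin-toss measure on the Cantor set."""
    return sum(random.choice((0, 2)) * 3.0 ** -(k + 1) for k in range(depth))

samples = [random_cantor_point() for _ in range(200_000)]
geo_mean = math.exp(sum(math.log(s) for s in samples) / len(samples))
print(round(geo_mean, 3))  # approximately 0.275, matching the quoted value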
In Lebesgue measure theory, the Cantor set is an example of a set which is uncountable and has zero measure. In contrast, the set has a Hausdorff measure of 1 in its dimension of log 2 / log 3.
If we define a Cantor number as a member of the Cantor set, then every real number in [0, 2] is the sum of two Cantor numbers, and between any two Cantor numbers there is a number that is not a Cantor number.
The Cantor set is a meagre set (or a set of first category) as a subset of [0,1] (although not as a subset of itself, since it is a Baire space). The Cantor set thus demonstrates that notions of "size" in terms of cardinality, measure, and (Baire) category need not coincide. Like the set $\mathbb{Q} \cap [0,1]$, the Cantor set $\mathcal{C}$ is "small" in the sense that it is a null set (a set of measure zero) and a meagre subset of [0,1]. However, unlike $\mathbb{Q} \cap [0,1]$, which is countable and has a "small" cardinality, $\aleph_0$, the cardinality of $\mathcal{C}$ is the same as that of [0,1], the continuum $\mathfrak{c}$, and is "large" in the sense of cardinality. In fact, it is also possible to construct a subset of [0,1] that is meagre but of positive measure, and a subset that is non-meagre but of measure zero: by taking the countable union of "fat" Cantor sets $\mathcal{C}^{(n)}$ of measure $\lambda = (n-1)/n$ (see Smith–Volterra–Cantor set below for the construction), we obtain a set $\mathcal{A} := \bigcup_{n=1}^{\infty} \mathcal{C}^{(n)}$ which has positive measure (equal to 1) but is meagre in [0,1], since each $\mathcal{C}^{(n)}$ is nowhere dense. Then consider the set $\mathcal{A}^{\mathrm{c}} = [0,1] \setminus \bigcup_{n=1}^{\infty} \mathcal{C}^{(n)}$. Since $\mathcal{A} \cup \mathcal{A}^{\mathrm{c}} = [0,1]$, $\mathcal{A}^{\mathrm{c}}$ cannot be meagre, but since $\mu(\mathcal{A}) = 1$, $\mathcal{A}^{\mathrm{c}}$ must have measure zero.
Instead of repeatedly removing the middle third of every piece as in the Cantor set, we could also keep removing any other fixed percentage (other than 0% and 100%) from the middle. In the case where the middle 8/10 of the interval is removed, we get a remarkably accessible case: the set consists of all numbers in [0,1] that can be written as a decimal consisting entirely of 0s and 9s. If a fixed percentage is removed at each stage, then the limiting set will have measure zero, since the length of the remainder satisfies $(1-f)^n \to 0$ as $n \to \infty$ for any $f$ such that $0 < f \leq 1$.
On the other hand, "fat Cantor sets" of positive measure can be generated by removal of smaller fractions of the middle of the segment in each iteration. Thus, one can construct sets homeomorphic to the Cantor set that have positive Lebesgue measure while still being nowhere dense. If an interval of length $r^n$ (with $r \leq 1/3$) is removed from the middle of each segment at the $n$-th iteration, then the total length removed is $\sum_{n=1}^{\infty} 2^{n-1} r^n = r/(1-2r)$, and the limiting set will have a Lebesgue measure of $\lambda = (1-3r)/(1-2r)$. Thus, in a sense, the middle-thirds Cantor set is a limiting case with $r = 1/3$. If $0 < r < 1/3$, then the remainder will have positive measure with $0 < \lambda < 1$. The case $r = 1/4$ is known as the Smith–Volterra–Cantor set, which has a Lebesgue measure of 1/2.
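The measure formula $\lambda = (1-3r)/(1-2r)$ can be sanity-checked by summing the removed lengths over finitely many steps; a short sketch:

# Sketch: compare the fat-Cantor measure formula with a finite truncation
# of the removal process.
def remaining_measure(r, steps=60):
    """Measure left after removing a middle interval of length r**n from
    each of the 2**(n-1) segments at step n = 1, 2, ..., steps."""
    removed = sum(2 ** (n - 1) * r ** n for n in range(1, steps + 1))
    return 1.0 - removed

for r in (1/3, 1/4, 1/10):
    formula = (1 - 3 * r) / (1 - 2 * r)
    print(r, round(remaining_measure(r), 9), round(formula, 9))
# r = 1/3 gives 0 up to floating-point rounding (the middle-thirds set);
# r = 1/4 gives 1/2 (the Smith-Volterra-Cantor set).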
One can modify the construction of the Cantor set by dividing randomly instead of equally. Furthermore, to incorporate time, we can divide only one of the available intervals at each step instead of dividing all of them. In the case of the stochastic triadic Cantor set, the resulting process can be described by the following rate equation
and for the stochastic dyadic Cantor set
where $c(x,t)\,dx$ is the number of intervals of size between $x$ and $x + dx$. In the case of the stochastic triadic Cantor set the fractal dimension is $0.5616$, which is less than its deterministic counterpart $0.6309$. In the case of the stochastic dyadic Cantor set the fractal dimension is $p$, which is again less than its deterministic counterpart $\ln(1+p)/\ln 2$. In the stochastic dyadic case the solution for $c(x,t)$ exhibits dynamic scaling, as its solution in the long-time limit is $t^{-(1+d_f)} e^{-xt}$, where the fractal dimension of the stochastic dyadic Cantor set is $d_f = p$. In either case, as for the triadic Cantor set, the $d_f$-th moment ($\int x^{d_f} c(x,t)\,dx = \text{constant}$) of the stochastic triadic and dyadic Cantor sets is a conserved quantity.
Cantor dust is a multi-dimensional version of the Cantor set. It can be formed by taking a finite Cartesian product of the Cantor set with itself, making it a Cantor space. Like the Cantor set, Cantor dust has zero measure.
A different 2D analogue of the Cantor set is the Sierpinski carpet, where a square is divided up into nine smaller squares, and the middle one removed. The remaining squares are then further divided into nine each and the middle removed, and so on ad infinitum. One 3D analogue of this is the Menger sponge.
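A depth-limited membership test makes the carpet construction concrete. The following sketch (the helper name in_carpet is illustrative) tracks which of the nine subsquares a point falls into at each level:

# Sketch: approximate membership test for the Sierpinski carpet. A point
# survives if, at every subdivision level, its cell is not the middle one
# of the current 3 x 3 grid of subsquares.
def in_carpet(x, y, depth=12):
    """Approximate test for (x, y) in [0,1]^2 lying in the Sierpinski carpet."""
    for _ in range(depth):
        x, y = x * 3, y * 3
        cx, cy = int(x), int(y)   # which of the 9 subsquares we are in
        if cx == 1 and cy == 1:   # the removed middle square
            return False
        x, y = x - cx, y - cy     # recurse into that subsquare
    return True

print(in_carpet(0.5, 0.5))   # False: the very center is removed
print(in_carpet(0.0, 0.0))   # True: a corner survives every stage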
Cantor introduced what we call today the Cantor ternary set $\mathcal{C}$ as an example "of a perfect point-set, which is not everywhere-dense in any interval, however small." Cantor described $\mathcal{C}$ in terms of ternary expansions, as "the set of all real numbers given by the formula: $z = c_1/3 + c_2/3^2 + \cdots + c_\nu/3^\nu + \cdots$, where the coefficients $c_\nu$ arbitrarily take the two values 0 and 2, and the series can consist of a finite number or an infinite number of elements."
A topological space $P$ is perfect if all its points are limit points or, equivalently, if it coincides with its derived set $P'$. Subsets of the real line, like $\mathcal{C}$, can be seen as topological spaces under the induced subspace topology.
Cantor was led to the study of derived sets by his results on uniqueness of trigonometric series. The latter did much to set him on the course for developing an abstract, general theory of infinite sets.
Benoit Mandelbrot wrote much on Cantor dusts and their relation to natural fractals and statistical physics. He further reflected on the puzzling or even upsetting nature of such structures to those in the mathematics and physics community. In The Fractal Geometry of Nature, he described how "When I started on this topic in 1962, everyone was agreeing that Cantor dusts are at least as monstrous as the Koch and Peano curves," and added that "every self-respecting physicist was automatically turned off by a mention of Cantor, ready to run a mile from anyone claiming $\mathcal{C}$ to be interesting in science."
Cardinal number

In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality, is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced; they are often denoted with the Hebrew letter $\aleph$ (aleph) marked with a subscript indicating their rank among the infinite cardinals.
Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets.
There is a transfinite sequence of cardinal numbers: $0, 1, 2, 3, \ldots, n, \ldots;\ \aleph_0, \aleph_1, \aleph_2, \ldots, \aleph_\alpha, \ldots$
This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers. The aleph numbers are indexed by ordinal numbers. If the axiom of choice is true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true (see Axiom of choice § Independence), there are infinite cardinals that are not aleph numbers.
Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets.
The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not equal, but have the same cardinality, namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}.
Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N denumerable (countably infinite) sets, which all share the same cardinal number. This cardinal number is called $\aleph_0$, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number z may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple $(a_0, a_1, \ldots, a_n)$, $a_i \in \mathbb{Z}$, together with a pair of rationals $(b_0, b_1)$ such that z is the unique root of the polynomial with coefficients $(a_0, a_1, \ldots, a_n)$ that lies in the interval $(b_0, b_1)$.
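The denumerability of the set of ordered pairs can be exhibited with the classical Cantor pairing function; a sketch in Python (the names pair and unpair are illustrative):

# Sketch: the Cantor pairing function, a bijection N x N -> N witnessing
# that the set of ordered pairs of natural numbers is denumerable.
def pair(i, j):
    """Map (i, j) to its index along the diagonals of the N x N grid."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of pair, recovered by locating n's diagonal."""
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)  # diagonal number
    j = n - w * (w + 1) // 2
    return w - j, j

values = sorted(pair(i, j) for i in range(10) for j in range(10) if pair(i, j) < 20)
print(values == list(range(20)))  # True: no gaps and no repeats below 20
print(all(unpair(pair(i, j)) == (i, j) for i in range(50) for j in range(50)))  # True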
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol $\mathfrak{c}$ for it.
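The diagonal argument has a direct finite analogue: from any finite list of binary sequences one can compute a sequence missing from the list. A sketch:

# Sketch: Cantor's diagonal argument on finite data. Flipping the diagonal
# of a list of binary sequences yields a sequence differing from every row,
# so no list can exhaust all sequences.
def diagonal_escape(rows):
    """Return a sequence differing from rows[k] at position k for every k."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal_escape(rows)
print(d)                              # [1, 0, 1, 1]
print(all(d != row for row in rows))  # True: d is not any listed row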
Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number ($\aleph_0$, aleph-null), and that for every cardinal number there is a next-larger cardinal ($\aleph_{\alpha+1} > \aleph_\alpha$).
His continuum hypothesis is the proposition that the cardinality $\mathfrak{c}$ of the set of real numbers is the same as $\aleph_1$. This hypothesis is independent of the standard axioms of mathematical set theory; that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
In informal use, a cardinal number is what is normally referred to as a counting number, provided that 0 is included: 0, 1, 2, .... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic.
More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d',...>, and we can construct the set {a,b,c}, which has 3 elements.
However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions.
A set Y is at least as big as a set X if there is an injective mapping from the elements of X to the elements of Y. An injective mapping identifies each element of the set X with a unique element of the set Y. This is most easily understood by an example: suppose we have the sets X = {1,2,3} and Y = {a,b,c,d}; then, using this notion of size, we would observe that there is a mapping 1 → a, 2 → b, 3 → c,
which is injective, and hence conclude that the cardinality of Y is greater than or equal to that of X. The element d has no element mapping to it, but this is permitted, as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets.
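For finite sets, injectivity is a mechanical check: no two distinct elements of X may share an image in Y. A minimal Python sketch, mirroring the X = {1,2,3}, Y = {a,b,c,d} example above (the helper name is ours):

# A dict encodes the mapping; it is injective exactly when its values
# contain no repeats.
def is_injective(mapping: dict) -> bool:
    images = list(mapping.values())
    return len(images) == len(set(images))

f = {1: 'a', 2: 'b', 3: 'c'}     # injective: witnesses |X| <= |Y|
g = {1: 'a', 2: 'a', 3: 'c'}     # not injective: 1 and 2 collide on 'a'
assert is_injective(f) and not is_injective(g)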
We can then extend this to an equality-style relation. Two sets X and Y are said to have the same cardinality if there exists a bijection between X and Y. By the Schroeder–Bernstein theorem, this is equivalent to there being both an injective mapping from X to Y, and an injective mapping from Y to X. We then write |X| = |Y|. The cardinal number of X itself is often defined as the least ordinal α with |α| = |X|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as some ordinal; this statement is the well-ordering principle. It is, however, possible to discuss the relative cardinality of sets without explicitly assigning names to objects.
The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Suppose there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping: 1 → 2, 2 → 3, 3 → 4, ..., n → n + 1, ....
With this assignment, we can see that the set {1,2,3,...} has the same cardinality as the set {2,3,4,...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4,...} is a proper subset of {1,2,3,...}.
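The shift rule n → n + 1 can only be inspected on an initial segment in code, but the rule itself is total. A small Python sketch, ours rather than the article's:

# Hilbert's hotel, sampled: the shift maps {1,2,3,...} one-to-one onto
# its proper subset {2,3,4,...}, freeing room 1.
def shift(n: int) -> int:
    return n + 1

rooms = range(1, 11)                  # an initial segment of {1,2,3,...}
moved = [shift(n) for n in rooms]     # guests after the move
assert moved == list(range(2, 12))    # lands inside {2,3,4,...}
assert len(set(moved)) == len(moved)  # no two guests share a room
assert 1 not in moved                 # room 1 is now vacant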
When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers.
It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals.
Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as equipotence, equipollence, or equinumerosity. It is thus said that two sets with the same cardinality are, respectively, equipotent, equipollent, or equinumerous.
Formally, assuming the axiom of choice, the cardinality of a set X is the least ordinal number α such that there is a bijection between X and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set X (implicit in Cantor and explicit in Frege and Principia Mathematica) is as the class [X] of all sets that are equinumerous with X. This does not work in ZFC or other related systems of axiomatic set theory because if X is non-empty, this collection is too large to be a set. In fact, for X ≠ ∅ there is an injection from the universe into [X] by mapping a set m to {m} × X, and so by the axiom of limitation of size, [X] is a proper class. The definition does work, however, in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with X that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, 2^ω = ω < ω² in ordinal arithmetic, while 2^ℵ₀ > ℵ₀ = ℵ₀² in cardinal arithmetic, although the von Neumann assignment puts ℵ₀ = ω. On the other hand, Scott's trick implies that the cardinal number 0 is {∅}, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.
Formally, the order among cardinal numbers is defined as follows: |X| ≤ |Y| means that there exists an injective function from X to Y. The Cantor–Bernstein–Schroeder theorem states that if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The axiom of choice is equivalent to the statement that given two sets X and Y, either |X| ≤ |Y| or |Y| ≤ |X|.
A set X is Dedekind-infinite if there exists a proper subset Y of X with |X| = |Y|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set X is finite if and only if |X| = |n| = n for some natural number n. Any other set is infinite.
Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal ℵ₀ (aleph-null or aleph-0, where aleph, ℵ, is the first letter in the Hebrew alphabet) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality ℵ₀). The next larger cardinal is denoted by ℵ₁, and so on. For every ordinal α, there is a cardinal number ℵ_α, and this list exhausts all infinite cardinal numbers.
We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ⁺, where κ⁺ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ⁺ such that κ⁺ ≰ κ.) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.
If X and Y are disjoint, addition is given by the cardinality of their union: |X| + |Y| = |X ∪ Y|. If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace X by X×{0} and Y by Y×{1}).
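For finite sets the tagging trick can be checked directly. A minimal Python sketch (the sets chosen are arbitrary and deliberately overlap):

# Cardinal addition for finite sets: tag the summands to force
# disjointness, then take the union.
X, Y = {1, 2, 3}, {2, 3, 4}                        # not disjoint: they share 2 and 3
tagged = {(x, 0) for x in X} | {(y, 1) for y in Y}  # X x {0} union Y x {1}
assert len(tagged) == len(X) + len(Y)               # 3 + 3 = 6, as cardinal addition demands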
Zero is an additive identity: κ + 0 = 0 + κ = κ.
Addition is associative: (κ + μ) + ν = κ + (μ + ν).
Addition is commutative: κ + μ = μ + κ.
Addition is non-decreasing in both arguments: κ ≤ μ → (κ + ν ≤ μ + ν and ν + κ ≤ ν + μ).
Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either κ or μ is infinite, then κ + μ = max{κ, μ}.
Assuming the axiom of choice, given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It is unique (and equal to σ) if and only if μ < σ.
The product of cardinals comes from the Cartesian product: |X| · |Y| = |X × Y|. (A finite sketch in code follows the list of properties below.)
κ·0 = 0·κ = 0.
κ·μ = 0 → (κ = 0 or μ = 0).
One is a multiplicative identity: κ·1 = 1·κ = κ.
Multiplication is associative: (κ·μ)·ν = κ·(μ·ν).
Multiplication is commutative: κ·μ = μ·κ.
Multiplication is non-decreasing in both arguments: κ ≤ μ → (κ·ν ≤ μ·ν and ν·κ ≤ ν·μ).
Multiplication distributes over addition: κ·(μ + ν) = κ·μ + κ·ν and (μ + ν)·κ = μ·κ + ν·κ.
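As promised above, here is a finite sketch of the Cartesian-product definition in Python, including a spot check of distributivity (the sets are arbitrary examples):

# Cardinal multiplication for finite sets: |X| * |Y| = |X x Y|.
from itertools import product

X, Y = {1, 2, 3}, {'a', 'b'}
pairs = set(product(X, Y))                  # the Cartesian product X x Y
assert len(pairs) == len(X) * len(Y)        # 3 * 2 = 6
# Distributivity, checked on sets: X x (Y union Z) = (X x Y) union (X x Z).
Z = {'c'}
assert set(product(X, Y | Z)) == set(product(X, Y)) | set(product(X, Z))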
Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either κ or μ is infinite and both are non-zero, then κ · μ = max{κ, μ}.
Assuming the axiom of choice, given an infinite cardinal π and a non-zero cardinal μ, there exists a cardinal κ such that μ · κ = π if and only if μ ≤ π. It is unique (and equal to π) if and only if μ < π.
Exponentiation is given by |X|^|Y| = |X^Y|, where X^Y is the set of all functions from Y to X.
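Counting functions from Y to X is again checkable for finite sets: a function is a choice of an element of X for every element of Y. A minimal Python sketch (sets are arbitrary examples):

# Cardinal exponentiation for finite sets: |X|^|Y| counts the functions
# from Y to X. product over |Y| copies of X enumerates one value per
# element of Y, i.e. one function.
from itertools import product

X, Y = {'a', 'b', 'c'}, {1, 2}
Y_list = sorted(Y)
functions = [dict(zip(Y_list, values)) for values in product(X, repeat=len(Y))]
assert len(functions) == len(X) ** len(Y)   # 3^2 = 9 functions from Y to X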
Exponentiation is non-decreasing in both arguments: (1 ≤ ν and κ ≤ μ) → ν^κ ≤ ν^μ, and (κ ≤ μ) → κ^ν ≤ μ^ν.
2^|X| is the cardinality of the power set of the set X, and Cantor's diagonal argument shows that 2^|X| > |X| for any set X. This proves that no largest cardinal exists (because for any cardinal κ, we can always find a larger cardinal 2^κ). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)
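The finite case of 2^|X| can be verified by enumeration: each subset corresponds to an independent in/out choice per element, i.e. to a function X → {0, 1}. A minimal Python sketch (the helper name powerset is ours):

# The power set of X has cardinality 2^|X|.
from itertools import combinations

def powerset(X: set) -> list[tuple]:
    xs = sorted(X)
    # All subsets, grouped by size r = 0, 1, ..., |X|.
    return [c for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = {1, 2, 3}
assert len(powerset(X)) == 2 ** len(X)   # 8 > 3 = |X|, echoing 2^|X| > |X|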
All the remaining propositions in this section assume the axiom of choice:
If 2 ≤ κ and 1 ≤ μ and at least one of them is infinite, then max(κ, 2^μ) ≤ κ^μ ≤ max(2^κ, 2^μ).
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ, where cf(κ) is the cofinality of κ.
Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying ν^μ = κ is κ itself.
Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 1, there may or may not be a cardinal λ satisfying μ^λ = κ. However, if such a cardinal exists, it is infinite and less than κ, and any finite cardinality ν greater than 1 will also satisfy ν^λ = κ.
The logarithm of an infinite cardinal number κ is defined as the least cardinal number μ such that κ ≤ 2^μ. Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
The continuum hypothesis (CH) states that there are no cardinals strictly between ℵ₀ and 2^ℵ₀. The latter cardinal number is also often denoted by 𝔠; it is the cardinality of the continuum (the set of real numbers). In this case 2^ℵ₀ = ℵ₁.
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal κ, there are no cardinals strictly between κ and 2^κ. Both the continuum hypothesis and the generalized continuum hypothesis have been proved independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals κ, the only restrictions ZFC places on the cardinality of 2^κ are that κ < cf(2^κ), and that the exponential function is non-decreasing.
6,174 | Cardinality | In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set is also called its size, when no confusion with other notions of size is possible.
The cardinality of a set A is usually denoted |A|, with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinality of a set A may alternatively be denoted by n(A), by A with a double overbar, by card(A), or by #A.
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago. Human expression of cardinality is seen as early as 40,000 years ago, in the equating of the size of a group with a group of recorded notches, or with a representative collection of other things, such as sticks and shells. The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerian mathematics and the manipulation of numbers without reference to a specific group of things or events.
From the 6th century BCE, the writings of Greek philosophers show the first hints of the cardinality of infinite sets. While they considered the notion of infinity as an endless series of actions, such as adding 1 to a number repeatedly, they did not consider the size of an infinite set of numbers to be a well-defined quantity. The ancient Greek notion of infinity also considered the division of things into parts repeated without limit. In Euclid's Elements, commensurability was described as the ability to compare the length of two line segments, a and b, as a ratio, as long as there were a third segment, no matter how small, that could be laid end-to-end a whole number of times into both a and b. But with the discovery of irrational numbers, it was seen that even the infinite set of all rational numbers was not enough to describe the length of every possible line segment. Still, there was no concept of infinite sets as something that had cardinality.
To better understand infinite sets, a notion of cardinality was formulated circa 1880 by Georg Cantor, the originator of set theory. He examined the process of equating two sets with bijection, a one-to-one correspondence between the elements of two sets based on a unique relationship. In 1891, with the publication of Cantor's diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e. uncountable sets that contain more elements than there are in the infinite set of natural numbers.
While the cardinality of a finite set is just the number of its elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets (some of which are possibly infinite).
If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (a fact known as the Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A, B.
In the above section, "cardinality" of a set was defined functionally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation, then, consists of all those sets which have the same cardinality as A. There are two ways to define the "cardinality of a set": the cardinality of A can be defined as its equivalence class under equinumerosity, or a representative set can be designated for each equivalence class; the most common choice is the initial ordinal in that class, which is the von Neumann cardinal assignment.
Assuming the axiom of choice, the cardinalities of the infinite sets are denoted ℵ₀ < ℵ₁ < ℵ₂ < ⋯.
For each ordinal α, ℵ_{α+1} is the least cardinal number greater than ℵ_α.
The cardinality of the natural numbers is denoted aleph-null (ℵ₀), while the cardinality of the real numbers is denoted by "𝔠" (a lowercase fraktur script "c"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that 𝔠 > ℵ₀. We can show that 𝔠 = 2^ℵ₀, this also being the cardinality of the set of all subsets of the natural numbers.
The continuum hypothesis says that ℵ₁ = 2^ℵ₀, i.e. 2^ℵ₀ is the smallest cardinal number bigger than ℵ₀, i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC, provided that ZFC is consistent. For more detail, see § Cardinality of the continuum below.
If the axiom of choice holds, the law of trichotomy holds for cardinality. Thus we can make the following definitions: any set X with cardinality less than that of the natural numbers, |X| < |N|, is said to be a finite set; any set X with the same cardinality as the natural numbers, |X| = |N| = ℵ₀, is said to be a countably infinite set; and any set X with cardinality greater than that of the natural numbers, |X| > |N|, is said to be uncountable.
Our intuition gained from finite sets breaks down when dealing with infinite sets. In the late nineteenth century Georg Cantor, Gottlob Frege, Richard Dedekind and others rejected the view that the whole cannot be the same size as the part. One example of this is Hilbert's paradox of the Grand Hotel. Indeed, Dedekind defined an infinite set as one that can be placed into a one-to-one correspondence with a strict subset (that is, having the same size in Cantor's sense); this notion of infinity is called Dedekind infinite. Cantor introduced the cardinal numbers, and showed—according to his bijection-based definition of size—that some infinite sets are greater than others. The smallest infinite cardinality is that of the natural numbers (ℵ₀).
One of Cantor's most important results was that the cardinality of the continuum (𝔠) is greater than that of the natural numbers (ℵ₀); that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that 𝔠 = 2^ℵ₀ = ℶ₁ (see Beth one) satisfies ℵ₀ < 𝔠.
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is, 2^ℵ₀ = ℵ₁.
However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.
Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space. These results are highly counterintuitive, because they imply that there exist proper subsets and proper supersets of an infinite set S that have the same size as S, although S contains elements that do not belong to its subsets, and the supersets of S contain elements that are not included in it.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−½π, ½π) and R (see also Hilbert's paradox of the Grand Hotel).
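The tangent correspondence can be sampled numerically. A minimal Python sketch (the sample points are arbitrary values inside the interval):

# tan restricted to (-pi/2, pi/2) is a bijection onto R: atan inverts it,
# and the values grow without bound near the endpoints.
import math

points = [-1.5, -0.5, 0.0, 0.5, 1.5]                 # inside (-pi/2, pi/2)
for x in points:
    assert abs(math.atan(math.tan(x)) - x) < 1e-12   # atan undoes tan here
assert math.tan(1.57) > 1000                          # values blow up near pi/2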
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.
Cantor also showed that sets with cardinality strictly greater than 𝔠 exist (see his generalized diagonal argument and theorem). They include, for instance, the set of all subsets of R (i.e., the power set of R) and the set of all functions from R to R. Both have cardinality 2^𝔠 = ℶ₂ (see Beth two).
The cardinal equalities 𝔠² = 𝔠, 𝔠^ℵ₀ = 𝔠, and 𝔠^𝔠 = 2^𝔠 can be demonstrated using cardinal arithmetic:
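These follow from the exponent law (κ^μ)^ν = κ^(μ·ν) together with the absorption rules ℵ₀ · 2 = ℵ₀ · ℵ₀ = ℵ₀ and ℵ₀ · 𝔠 = 𝔠; a sketch in LaTeX:

\begin{aligned}
\mathfrak{c}^{2} &= \left(2^{\aleph_0}\right)^{2} = 2^{\aleph_0 \cdot 2} = 2^{\aleph_0} = \mathfrak{c}, \\
\mathfrak{c}^{\aleph_0} &= \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c}, \\
\mathfrak{c}^{\mathfrak{c}} &= \left(2^{\aleph_0}\right)^{\mathfrak{c}} = 2^{\aleph_0 \cdot \mathfrak{c}} = 2^{\mathfrak{c}}.
\end{aligned}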
If A and B are disjoint sets, then |A ∪ B| = |A| + |B|.
From this, one can show that in general, the cardinalities of unions and intersections are related by the equation |A ∪ B| + |A ∩ B| = |A| + |B|.
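For finite sets the relation can be checked directly; it is stated additively so that it also survives in cardinal arithmetic, where subtraction is unavailable. A minimal Python sketch (the sets are arbitrary examples):

# |A union B| + |A intersect B| = |A| + |B|, checked on overlapping sets.
A, B = {1, 2, 3, 4}, {3, 4, 5}
assert len(A | B) + len(A & B) == len(A) + len(B)    # 5 + 2 == 4 + 3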
{
"paragraph_id": 0,
"text": "In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, the set A = { 2 , 4 , 6 } {\\displaystyle A=\\{2,4,6\\}} contains 3 elements, and therefore A {\\displaystyle A} has a cardinality of 3. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set is also called its size, when no confusion with other notions of size is possible.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The cardinality of a set A {\\displaystyle A} is usually denoted | A | {\\displaystyle |A|} , with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinality of a set A {\\displaystyle A} may alternatively be denoted by n ( A ) {\\displaystyle n(A)} , A {\\displaystyle A} , card ( A ) {\\displaystyle \\operatorname {card} (A)} , or # A {\\displaystyle \\#A} .",
"title": ""
},
{
"paragraph_id": 2,
"text": "A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago. Human expression of cardinality is seen as early as 40000 years ago, with equating the size of a group with a group of recorded notches, or a representative collection of other things, such as sticks and shells. The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerian mathematics and the manipulation of numbers without reference to a specific group of things or events.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "From the 6th century BCE, the writings of Greek philosophers show the first hints of the cardinality of infinite sets. While they considered the notion of infinity as an endless series of actions, such as adding 1 to a number repeatedly, they did not consider the size of an infinite set of numbers to be a thing. The ancient Greek notion of infinity also considered the division of things into parts repeated without limit. In Euclid's Elements, commensurability was described as the ability to compare the length of two line segments, a and b, as a ratio, as long as there were a third segment, no matter how small, that could be laid end-to-end a whole number of times into both a and b. But with the discovery of irrational numbers, it was seen that even the infinite set of all rational numbers was not enough to describe the length of every possible line segment. Still, there was no concept of infinite sets as something that had cardinality.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "To better understand infinite sets, a notion of cardinality was formulated circa 1880 by Georg Cantor, the originator of set theory. He examined the process of equating two sets with bijection, a one-to-one correspondence between the elements of two sets based on a unique relationship. In 1891, with the publication of Cantor's diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e. uncountable sets that contain more elements than there are in the infinite set of natural numbers.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "While the cardinality of a finite set is just the number of its elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets (some of which are possibly infinite).",
"title": "Comparing sets"
},
{
"paragraph_id": 6,
"text": "If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (a fact known as Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A, B.",
"title": "Comparing sets"
},
{
"paragraph_id": 7,
"text": "In the above section, \"cardinality\" of a set was defined functionally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.",
"title": "Cardinal numbers"
},
{
"paragraph_id": 8,
"text": "The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation, then, consists of all those sets which have the same cardinality as A. There are two ways to define the \"cardinality of a set\":",
"title": "Cardinal numbers"
},
{
"paragraph_id": 9,
"text": "Assuming the axiom of choice, the cardinalities of the infinite sets are denoted",
"title": "Cardinal numbers"
},
{
"paragraph_id": 10,
"text": "For each ordinal α {\\displaystyle \\alpha } , ℵ α + 1 {\\displaystyle \\aleph _{\\alpha +1}} is the least cardinal number greater than ℵ α {\\displaystyle \\aleph _{\\alpha }} .",
"title": "Cardinal numbers"
},
{
"paragraph_id": 11,
"text": "The cardinality of the natural numbers is denoted aleph-null ( ℵ 0 {\\displaystyle \\aleph _{0}} ), while the cardinality of the real numbers is denoted by \" c {\\displaystyle {\\mathfrak {c}}} \" (a lowercase fraktur script \"c\"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that c > ℵ 0 {\\displaystyle {\\mathfrak {c}}>\\aleph _{0}} . We can show that c = 2 ℵ 0 {\\displaystyle {\\mathfrak {c}}=2^{\\aleph _{0}}} , this also being the cardinality of the set of all subsets of the natural numbers.",
"title": "Cardinal numbers"
},
{
"paragraph_id": 12,
"text": "The continuum hypothesis says that ℵ 1 = 2 ℵ 0 {\\displaystyle \\aleph _{1}=2^{\\aleph _{0}}} , i.e. 2 ℵ 0 {\\displaystyle 2^{\\aleph _{0}}} is the smallest cardinal number bigger than ℵ 0 {\\displaystyle \\aleph _{0}} , i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC—provided that ZFC is consistent. For more detail, see § Cardinality of the continuum below.",
"title": "Cardinal numbers"
},
{
"paragraph_id": 13,
"text": "If the axiom of choice holds, the law of trichotomy holds for cardinality. Thus we can make the following definitions:",
"title": "Finite, countable and uncountable sets"
},
{
"paragraph_id": 14,
"text": "Our intuition gained from finite sets breaks down when dealing with infinite sets. In the late nineteenth century Georg Cantor, Gottlob Frege, Richard Dedekind and others rejected the view that the whole cannot be the same size as the part. One example of this is Hilbert's paradox of the Grand Hotel. Indeed, Dedekind defined an infinite set as one that can be placed into a one-to-one correspondence with a strict subset (that is, having the same size in Cantor's sense); this notion of infinity is called Dedekind infinite. Cantor introduced the cardinal numbers, and showed—according to his bijection-based definition of size—that some infinite sets are greater than others. The smallest infinite cardinality is that of the natural numbers ( ℵ 0 {\\displaystyle \\aleph _{0}} ).",
"title": "Infinite sets"
},
{
"paragraph_id": 15,
"text": "One of Cantor's most important results was that the cardinality of the continuum ( c {\\displaystyle {\\mathfrak {c}}} ) is greater than that of the natural numbers ( ℵ 0 {\\displaystyle \\aleph _{0}} ); that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that c = 2 ℵ 0 = ℶ 1 {\\displaystyle {\\mathfrak {c}}=2^{\\aleph _{0}}=\\beth _{1}} (see Beth one) satisfies:",
"title": "Infinite sets"
},
{
"paragraph_id": 16,
"text": "The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is,",
"title": "Infinite sets"
},
{
"paragraph_id": 17,
"text": "However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.",
"title": "Infinite sets"
},
{
"paragraph_id": 18,
"text": "Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space. These results are highly counterintuitive, because they imply that there exist proper subsets and proper supersets of an infinite set S that have the same size as S, although S contains elements that do not belong to its subsets, and the supersets of S contain elements that are not included in it.",
"title": "Infinite sets"
},
{
"paragraph_id": 19,
"text": "The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−½π, ½π) and R (see also Hilbert's paradox of the Grand Hotel).",
"title": "Infinite sets"
},
{
"paragraph_id": 20,
"text": "The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.",
"title": "Infinite sets"
},
{
"paragraph_id": 21,
"text": "Cantor also showed that sets with cardinality strictly greater than c {\\displaystyle {\\mathfrak {c}}} exist (see his generalized diagonal argument and theorem). They include, for instance:",
"title": "Infinite sets"
},
{
"paragraph_id": 22,
"text": "Both have cardinality",
"title": "Infinite sets"
},
{
"paragraph_id": 23,
"text": "The cardinal equalities c 2 = c , {\\displaystyle {\\mathfrak {c}}^{2}={\\mathfrak {c}},} c ℵ 0 = c , {\\displaystyle {\\mathfrak {c}}^{\\aleph _{0}}={\\mathfrak {c}},} and c c = 2 c {\\displaystyle {\\mathfrak {c}}^{\\mathfrak {c}}=2^{\\mathfrak {c}}} can be demonstrated using cardinal arithmetic:",
"title": "Infinite sets"
},
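The displayed derivation is not preserved in this extract; the following LaTeX sketch reconstructs the standard computation, using the exponent laws (κ^λ)^μ = κ^{λ·μ} together with 2·ℵ₀ = ℵ₀·ℵ₀ = ℵ₀ and ℵ₀·𝔠 = 𝔠:

\begin{align*}
\mathfrak{c}^{2} &= \left(2^{\aleph_0}\right)^{2} = 2^{2\cdot\aleph_0} = 2^{\aleph_0} = \mathfrak{c},\\
\mathfrak{c}^{\aleph_0} &= \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0\cdot\aleph_0} = 2^{\aleph_0} = \mathfrak{c},\\
\mathfrak{c}^{\mathfrak{c}} &= \left(2^{\aleph_0}\right)^{\mathfrak{c}} = 2^{\aleph_0\cdot\mathfrak{c}} = 2^{\mathfrak{c}}.
\end{align*}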
{
"paragraph_id": 24,
"text": "If A and B are disjoint sets, then",
"title": "Union and intersection"
},
{
"paragraph_id": 25,
"text": "From this, one can show that in general, the cardinalities of unions and intersections are related by the following equation:",
"title": "Union and intersection"
}
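A quick finite check of both identities (an added illustration in plain Python):

A, B = {1, 2, 3}, {4, 5}                 # disjoint sets
assert len(A | B) == len(A) + len(B)

C, D = {1, 2, 3}, {3, 4}                 # overlapping sets
assert len(C | D) + len(C & D) == len(C) + len(D)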
]
| In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers.
The cardinality of a set is also called its size, when no confusion with other notions of size is possible. The cardinality of a set A is usually denoted |A|, with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinality of a set A may alternatively be denoted by n(A), A̿, card(A), or #A. | 2001-08-21T20:04:33Z | 2023-11-25T17:11:01Z | [
"Template:Authority control",
"Template:Other uses",
"Template:Color",
"Template:Citation needed",
"Template:Cite web",
"Template:Abs",
"Template:ISBN",
"Template:Citation",
"Template:Mathematical logic",
"Template:Set theory",
"Template:Short description",
"Template:Commons category",
"Template:MathWorld",
"Template:Cite journal",
"Template:Webarchive",
"Template:Val",
"Template:Main article",
"Template:Wikidata property",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Cardinality |
6,176 | Cecil B. DeMille | Cecil Blount DeMille (/ˈsɛsəl dəˈmɪl/; August 12, 1881 – January 21, 1959) was an American filmmaker and actor. Between 1914 and 1958, he made 70 features, both silent and sound films. He is acknowledged as a founding father of American cinema and the most commercially successful producer-director in film history. His films were distinguished by their epic scale and by his cinematic showmanship. His silent films included social dramas, comedies, Westerns, farces, morality plays, and historical pageants. He was an active Freemason and member of Prince of Orange Lodge #16 in New York City.
DeMille was born in Ashfield, Massachusetts, and grew up in New York City. He began his career as a stage actor in 1900. He later moved to writing and directing stage productions, some with Jesse Lasky, who was then a vaudeville producer. DeMille's first film, The Squaw Man (1914), was also the first full-length feature film shot in Hollywood. Its interracial love story made it commercially successful, and it first publicized Hollywood as the home of the U.S. film industry. The continued success of his productions led to the founding of Paramount Pictures with Lasky and Adolph Zukor. His first biblical epic, The Ten Commandments (1923), was both a critical and commercial success; it held the Paramount revenue record for twenty-five years.
DeMille directed The King of Kings (1927), a biography of Jesus, which gained approval for its sensitivity and reached more than 800 million viewers. The Sign of the Cross (1932) is said to be the first sound film to integrate all aspects of cinematic technique. Cleopatra (1934) was his first film to be nominated for the Academy Award for Best Picture. After more than thirty years in film production, DeMille reached a pinnacle in his career with Samson and Delilah (1949), a biblical epic that became the highest-grossing film of 1950. Along with biblical and historical narratives, he also directed films oriented toward "neo-naturalism", which tried to portray the laws of man fighting the forces of nature.
He received his first nomination for the Academy Award for Best Director for his circus drama The Greatest Show on Earth (1952), which won both the Academy Award for Best Picture and the Golden Globe Award for Best Motion Picture – Drama. His last and best known film, The Ten Commandments (1956), also a Best Picture Academy Award nominee, is currently the eighth-highest-grossing film of all time, adjusted for inflation. In addition to his Best Picture Awards, he received an Academy Honorary Award for his film contributions, the Palme d'Or (posthumously) for Union Pacific (1939), a DGA Award for Lifetime Achievement, and the Irving G. Thalberg Memorial Award. He was the first recipient of the Golden Globe Cecil B. DeMille Award, which was named in his honor. DeMille's reputation had a renaissance in the 2010s, and his work has influenced numerous other films and directors.
Cecil Blount DeMille was born on August 12, 1881, in a boarding house on Main Street in Ashfield, Massachusetts, where his parents had been vacationing for the summer. On September 1, 1881, the family returned with the newborn DeMille to their flat in New York. DeMille was named after his grandmothers Cecelia Wolff and Margarete Blount. He was the second of three children of Henry Churchill de Mille (September 4, 1853 – February 10, 1893) and his wife, Matilda Beatrice deMille (née Samuel; January 30, 1853 – October 8, 1923), known as Beatrice. His brother, William C. DeMille, was born on July 25, 1878. Henry de Mille, whose ancestors were of English and Dutch-Belgian descent, was a North Carolina-born dramatist, actor, and lay reader in the Episcopal Church. DeMille's father was also an English teacher at Columbia College (now Columbia University). He worked as a playwright, administrator, and faculty member during the early years of the American Academy of Dramatic Arts, established in New York City in 1884. Henry deMille frequently collaborated with David Belasco in playwriting; their best-known collaborations included "The Wife", "Lord Chumley", "The Charity Ball", and "Men and Women".
Cecil B. DeMille's mother, Beatrice, a literary agent and scriptwriter, was the daughter of German Jews. She had emigrated from England with her parents in 1871 when she was 18; the newly arrived family settled in Brooklyn, New York, where they maintained a middle-class, English-speaking household.
DeMille's parents met as members of a music and literary society in New York. Henry was a tall, red-headed student. Beatrice was intelligent, educated, forthright, and strong-willed. The two were married on July 1, 1876, despite Beatrice's parents' objections because of the young couple's differing religions; Beatrice converted to Episcopalianism.
DeMille was a brave and confident child. He gained his love of theater while watching his father and Belasco rehearse their plays. A lasting memory for DeMille was a lunch with his father and actor Edwin Booth. As a child, DeMille created an alter-ego, Champion Driver, a Robin Hood-like character, evidence of his creativity and imagination. The family lived in Washington, North Carolina, until Henry built a three-story Victorian-style house for his family in Pompton Lakes, New Jersey; they named this estate "Pamlico". John Philip Sousa was a friend of the family, and DeMille recalled throwing mud balls in the air so neighbor Annie Oakley could practice her shooting. DeMille's sister, Agnes, was born on April 23, 1891; his mother nearly did not survive the birth. Agnes would die on February 11, 1894, at the age of three from spinal meningitis. DeMille's parents operated a private school in town and attended Christ Episcopal Church. DeMille recalled that this church was the place where he visualized the story of his 1923 version of The Ten Commandments.
On January 8, 1893, at age 40, Henry de Mille died suddenly from typhoid fever, leaving Beatrice with three children. To provide for her family, she opened the Henry C. de Mille School for Girls in her home in February 1893. The aim of the school was to teach young women to properly understand and fulfill a woman's duty to herself, her home, and her country. Before Henry de Mille's death, Beatrice had "enthusiastically supported" her husband's theatrical aspirations. She later became the second female play broker on Broadway. On his deathbed, Henry de Mille told his wife that he did not want his sons to become playwrights. DeMille's mother sent him to Pennsylvania Military College (now Widener University) in Chester, Pennsylvania, at age 15. He fled the school to join the Spanish–American War, but failed to meet the age requirement. At the military college, even though his grades were average, he reportedly excelled in personal conduct. DeMille attended the American Academy of Dramatic Arts (tuition-free due to his father's service to the Academy). He graduated in 1900, and his graduation performance was in the play The Arcady Trail. In the audience was Charles Frohman, who would cast DeMille in his play Hearts Are Trumps, DeMille's Broadway debut.
Cecil B. DeMille began his career as an actor on the stage in the theatrical company of Charles Frohman in 1900. He debuted as an actor on February 21, 1900, in the play Hearts Are Trumps at New York's Garden Theater. In 1901, DeMille starred in productions of A Repentance, To Have and to Hold, and Are You a Mason? At the age of 21, Cecil B. DeMille married Constance Adams on August 16, 1902, at Adams's father's home in East Orange, New Jersey. The wedding party was small. Beatrice DeMille's family was not in attendance, and Simon Louvish suggests that this was to conceal DeMille's partial Jewish heritage. Adams was 29 years old at the time of their marriage, eight years older than DeMille. They had met in a theater in Washington D.C. while they were both acting in Hearts Are Trumps.
They were sexually incompatible; according to DeMille, Adams was too "pure" to "feel such violent and evil passions". DeMille had more violent sexual preferences and fetishes than his wife. Adams allowed DeMille to have several long-term mistresses during their marriage as an outlet while maintaining an outward appearance of a faithful marriage. One of DeMille's affairs was with his screenwriter Jeanie MacPherson. Despite his reputation for extramarital affairs, DeMille did not like to have affairs with his stars, as he believed it would cause him to lose control as a director. He related a story that he maintained his self-control when Gloria Swanson sat on his lap, refusing to touch her.
In 1902, he played a small part in Hamlet. Publicists wrote that he became an actor in order to learn how to direct and produce, but DeMille admitted that he became an actor in order to pay the bills. From 1904 to 1905, DeMille attempted to make a living as a stock theatre actor with his wife, Constance. DeMille appeared in Hamlet again in 1905, as Osric. In the summer of 1905, DeMille joined the stock cast at the Elitch Theatre in Denver, Colorado. He appeared in eleven of the fifteen plays presented that season, although all were minor roles. Maude Fealy appeared as the featured actress in several productions that summer and developed a lasting friendship with DeMille. (He would later cast her in The Ten Commandments.)
His brother, William, was establishing himself as a playwright and sometimes invited DeMille to collaborate. DeMille and William collaborated on The Genius, The Royal Mounted, and After Five. However, none of these were very successful; William deMille was most successful when he worked alone. DeMille and his brother at times worked with the legendary impresario David Belasco, who had been a friend and collaborator of their father. DeMille would later adapt Belasco's The Girl of the Golden West, Rose of the Rancho, and The Warrens of Virginia into films. DeMille was credited with creating the premise of Belasco's The Return of Peter Grimm, which sparked controversy: Belasco had taken DeMille's unnamed script, changed the characters, renamed it The Return of Peter Grimm, and produced and presented it as his own work. DeMille was credited in small print as "based on an idea by Cecil DeMille". The play was successful, and DeMille was distraught that his childhood idol had plagiarized his work.
DeMille performed on stage with actors whom he would later direct in films: Charlotte Walker, Mary Pickford, and Pedro de Cordoba. DeMille also produced and directed plays. His 1905 performance in The Prince Chap as the Earl of Huntington was well received by audiences. DeMille wrote a few of his own plays in between stage performances, but his playwriting was not as successful. His first play was The Pretender, a play in a prologue and four acts set in seventeenth-century Russia. Another unperformed play he wrote was Son of the Winds, a mythological Native American story. Life was difficult for DeMille and his wife as traveling actors; however, traveling allowed him to experience parts of the United States he had not yet seen. DeMille sometimes worked with the director E. H. Sothern, who influenced DeMille's later perfectionism in his work. In 1907, due to a scandal with one of Beatrice's students, Evelyn Nesbit, the Henry de Mille School lost students. The school closed, and Beatrice filed for bankruptcy. DeMille wrote another play originally called Sergeant Devil May Care, which was renamed The Royal Mounted. He also toured with the Standard Opera Company, but there are few records to indicate DeMille's singing ability. DeMille had a daughter, Cecilia, on November 5, 1908, who would be his only biological child. In the 1910s, DeMille began directing and producing other writers' plays.
DeMille was poor and struggled to find work. Consequently, his mother hired him for her agency, the DeMille Play Company, and taught him how to be an agent and a playwright. Eventually, he became manager of the agency and, later, a junior partner with his mother. In 1911, DeMille became acquainted with vaudeville producer Jesse Lasky when Lasky was searching for a writer for his new musical. Lasky initially sought out William deMille, who had been a successful playwright, whereas DeMille was suffering from the failure of his plays The Royal Mounted and The Genius; however, Beatrice introduced Lasky to DeMille instead. The collaboration of DeMille and Lasky produced a successful musical called California, which opened in New York in January 1912. Another DeMille-Lasky production that opened in January 1912 was The Antique Girl. DeMille found success in the spring of 1913, producing Reckless Age by Lee Wilson, a play about a high-society girl wrongly accused of manslaughter, starring Frederick Burton and Sydney Shields. However, changes in the theater rendered DeMille's melodramas obsolete before they were produced, and true theatrical success eluded him. He produced many flops. Having lost interest in working in theatre, DeMille found his passion for film ignited when he watched the 1912 French film Les Amours de la reine Élisabeth.
Desiring a change of scene, Cecil B. DeMille, Jesse Lasky, Sam Goldfish (later Samuel Goldwyn), and a group of East Coast businessmen created the Jesse L. Lasky Feature Play Company in 1913, over which DeMille became director-general. Lasky and DeMille were said to have sketched out the organization of the company on the back of a restaurant menu. As director-general, DeMille's job was to make the films. In addition to directing, DeMille was the supervisor and consultant for the first year of films made by the Lasky Feature Play Company. Sometimes, he directed scenes for other directors at the Feature Play Company in order to release films on time. Moreover, when he was busy directing other films, he would co-author other Lasky Company scripts as well as create screen adaptations that others directed.
The Lasky Play Company sought out William deMille to join the company, but he rejected the offer because he did not believe there was any promise in a film career. When William found out that DeMille had begun working in the motion picture industry, he wrote DeMille a letter, disappointed that he was willing "to throw away [his] future" when he was "born and raised in the finest traditions of the theater". The Lasky Company wanted to attract high-class audiences to their films, so they began producing films from literary works. The Lasky Company bought the rights to the play The Squaw Man by Edwin Milton Royle and cast Dustin Farnum in the lead role. They offered Farnum a choice between a quarter of the stock in the company (similar to the offer made to William deMille) or a salary of $250 per week; Farnum chose the $250 per week. With the company already $15,000 in debt to Royle for the screenplay of The Squaw Man, Lasky's relatives bought the $5,000 of stock to save the Lasky Company from bankruptcy. Having no knowledge of filmmaking, DeMille visited film studios to observe the process. He was eventually introduced to Oscar Apfel, a stage director who had been a director with the Edison Company.
On December 12, 1913, DeMille, his cast, and crew boarded a Southern Pacific train bound for Flagstaff via New Orleans. His tentative plan was to shoot a film in Arizona, but he felt that Arizona did not typify the Western look they were searching for. They also learned that other filmmakers were successfully shooting in Los Angeles, even in winter, so he continued to Los Angeles. Once there, he chose not to shoot in Edendale, where many studios were, but in Hollywood. DeMille rented a barn to function as their film studio. Filming began on December 29, 1913, and lasted three weeks. Apfel filmed most of The Squaw Man due to DeMille's inexperience; however, DeMille learned quickly and was particularly adept at impromptu screenwriting as necessary. He made his first film run sixty minutes, as long as a short play. The Squaw Man (1914), co-directed by Oscar Apfel, was a sensation, and it established the Lasky Company. This was the first feature-length film made in Hollywood. There were problems with the perforation of the film stock, and it was discovered that DeMille had brought a cheap British film perforator, which punched sixty-five holes per foot instead of the industry standard of sixty-four. Lasky and DeMille convinced film pioneer Siegmund Lubin of the Lubin Manufacturing Company of Philadelphia to have his experienced technicians reperforate the film. This was also the first American feature film, though only by release date: D. W. Griffith's Judith of Bethulia was filmed earlier than The Squaw Man but released later. Additionally, this was the only film in which DeMille shared director's credit with Oscar C. Apfel.
The Squaw Man was a success, which led to the eventual founding of Paramount Pictures and Hollywood becoming the "film capital of the world". The film grossed over ten times its budget after its New York premiere in February 1914. DeMille's next project was to aid Oscar Apfel in directing Brewster's Millions, which was wildly successful. In December 1914, Constance Adams brought home John DeMille, a fifteen-month-old, whom the couple legally adopted three years later. Biographer Scott Eyman suggested that this may have been a result of Adams's recent miscarriage.
Cecil B. DeMille's second film credited exclusively to him was The Virginian. This is the earliest of DeMille's films available in a quality, color-tinted video format, although that version is actually a 1918 re-release. The first few years of the Lasky Company were spent in making films nonstop, literally writing the language of film. DeMille himself directed twenty films by 1915. The most successful films during the beginning of the Lasky Company were Brewster's Millions (co-directed by DeMille), Rose of the Rancho, and The Ghost Breaker. DeMille adapted Belasco's dramatic lighting techniques to film technology, mimicking moonlight with U.S. cinema's first attempts at "motivated lighting" in The Warrens of Virginia. This was the first of a few film collaborations with his brother William. They struggled to adapt the play from the stage to the set. After the film was shown, exhibitors complained that the shadows and lighting prevented audiences from seeing the actors' full faces and said they would pay only half price. However, Sam Goldwyn realized that if they called it "Rembrandt" lighting, the exhibitors would pay double the price. Additionally, because of DeMille's cordiality after the Peter Grimm incident, DeMille was able to rekindle his partnership with Belasco, and he adapted several of Belasco's plays into films.
DeMille's most successful film was The Cheat; DeMille's direction in the film was acclaimed. In 1916, exhausted from three years of nonstop filmmaking, DeMille purchased land in the Angeles National Forest for a ranch that would become his getaway. He called this place "Paradise" and declared it a wildlife sanctuary; no shooting of animals besides snakes was allowed. His wife did not like Paradise, so DeMille often brought his mistresses there with him, including actress Julia Faye. In addition to Paradise, DeMille purchased a yacht in 1921, which he called The Seaward.
While filming The Captive in 1915, an extra, Bob Fleming, died on set when another extra failed to heed DeMille's orders to unload all guns for rehearsal. DeMille instructed the guilty man to leave town and never revealed his name. Lasky and DeMille kept Fleming's widow on the payroll; however, according to leading actor House Peters Sr., DeMille refused to stop production for Fleming's funeral. Peters claimed that he encouraged the cast to attend the funeral with him anyway, since DeMille would not be able to shoot the film without him. On July 19, 1916, the Jesse Lasky Feature Play Company merged with Adolph Zukor's Famous Players Film Company, becoming Famous Players–Lasky. Zukor became president with Lasky as the vice president. DeMille was maintained as director-general, and Goldwyn became chairman of the board. Goldwyn was later fired from Famous Players–Lasky due to frequent clashes with Lasky, DeMille, and Zukor. While on a European vacation in 1921, DeMille contracted rheumatic fever in Paris. He was confined to bed and unable to eat. His poor physical condition upon his return home affected the production of his 1922 film Manslaughter. According to Richard Birchard, DeMille's weakened state during production may have led to the film being received as uncharacteristically substandard.
During World War I, the Famous Players–Lasky organized a military company under the National Guard, called the Home Guard, made up of film studio employees, with DeMille as captain. Eventually, the Guard was enlarged to a battalion and recruited soldiers from other film studios. They took time off weekly from film production to practice military drills. Additionally, during the war, DeMille volunteered for the Justice Department's Intelligence Office, investigating friends, neighbors, and others he came in contact with in connection with the Famous Players–Lasky. He volunteered for the Intelligence Office during World War II as well. Although DeMille considered enlisting in World War I, he stayed in the United States and made films. However, he did take a few months to set up a movie theater for the French front, with Famous Players–Lasky donating the films. DeMille and Adams adopted Katherine Lester in 1920, whom Adams had found in the orphanage of which she was director. In 1922, the couple adopted Richard deMille.
Film was becoming more sophisticated, and the subsequent films of the Lasky Company were criticized for primitive and unrealistic set design. Consequently, Beatrice deMille introduced the Famous Players–Lasky to Wilfred Buckland, whom DeMille had known from his time at the American Academy of Dramatic Arts, and he became DeMille's art director. William deMille reluctantly became a story editor; he would later convert from theater to Hollywood and spend the rest of his career as a film director. Throughout his career, DeMille would frequently remake his own films. The first instance was The Squaw Man (1918), remade only four years after the 1914 original. Despite its quick turnaround, the film was fairly successful. However, DeMille's second remake, at MGM in 1931, would be a failure.
After five years and thirty hit films, DeMille became the American film industry's most successful director. In the silent era, he was renowned for Male and Female (1919), Manslaughter (1922), The Volga Boatman (1926), and The Godless Girl (1928). DeMille's trademark scenes included bathtubs, lion attacks, and Roman orgies. Many of his films featured scenes in two-color Technicolor. In 1923, DeMille released The Ten Commandments, a Biblical epic framed by a modern melodrama, which was a significant departure from his previous run of irreligious films. The film was produced on a large budget of $600,000, at the time the most expensive production at Paramount. This concerned the executives at Paramount; however, the film turned out to be the studio's highest-grossing film. It held the Paramount record for twenty-five years until DeMille broke it again.
In the early 1920s, scandal surrounded Paramount; religious groups and the media opposed portrayals of immorality in films. A censorship office headed by Will H. Hays was established, whose production guidelines came to be known as the Hays Code, and DeMille's film The Affairs of Anatol came under fire. Furthermore, DeMille argued with Zukor over his extravagant and over-budget production costs. Consequently, DeMille left Paramount in 1924 despite having helped establish it. He joined the Producers Distributing Corporation, and his first film with his new production company, DeMille Pictures Corporation, was The Road to Yesterday in 1925. He directed and produced four films on his own, working with Producers Distributing Corporation because he found front-office supervision too restricting. Aside from The King of Kings, none of DeMille's films away from Paramount were successful. The King of Kings established DeMille as "master of the grandiose and of biblical sagas" and was considered at the time to be the most successful Christian film of the silent era; DeMille calculated that it had been viewed over 800 million times around the world. After the release of DeMille's The Godless Girl, silent films in America became obsolete, and DeMille was forced to shoot a shoddy final reel with the new sound production technique. Although this final reel looked so different from the previous eleven reels that it appeared to be from another movie, according to Simon Louvish, the film is one of DeMille's strangest and most "DeMillean" films.
The immense popularity of DeMille's silent films enabled him to branch out into other areas. The Roaring Twenties were the boom years and DeMille took full advantage, opening the Mercury Aviation Company, one of America's first commercial airlines. He was also a real estate speculator, an underwriter of political campaigns, and vice president of Bank of America. He was additionally vice president of the Commercial National Trust and Savings Bank in Los Angeles where he approved loans for other filmmakers. In 1916, DeMille purchased a mansion in Hollywood. Charlie Chaplin lived next door for a time, and after he moved, DeMille purchased the other house and combined the estates.
When "talking pictures" were invented in 1928, Cecil B. DeMille made a successful transition, offering his own innovations to the painful process; he devised a microphone boom and a soundproof camera blimp. He also popularized the camera crane. His first three sound films were produced at Metro-Goldwyn-Mayer. These three films, Dynamite, Madame Satan, and his 1931 remake of The Squaw Man were critically and financially unsuccessful. He had completely adapted to the production of sound film despite the film's poor dialogue. After his contract ended at MGM, he left, but no production studios would hire him. He attempted to create a guild of a half a dozen directors with the same creative desires called the Director's Guild. However, the idea failed due to lack of funding and commitment. Moreover, DeMille was audited by the Internal Revenue Service due to issues with his production company. This was, according to DeMille, the lowest point of his career. DeMille traveled abroad to find employment until he was offered a deal at Paramount.
In 1932, DeMille returned to Paramount at the request of Lasky, bringing with him his own production unit. His first film back at Paramount, The Sign of the Cross, was also his first success since The King of Kings. DeMille's return was approved by Zukor on the condition that DeMille not exceed his production budget of $650,000 for The Sign of the Cross. Produced in eight weeks without exceeding budget, the film was financially successful. The Sign of the Cross was said to be the first sound film to integrate all aspects of cinematic technique; it was considered a "masterpiece" and surpassed the quality of other sound films of the time. DeMille uncharacteristically followed this epic with two dramas released in 1933 and 1934. This Day and Age and Four Frightened People were box-office disappointments, though Four Frightened People received good reviews. DeMille would stick to his large-budget spectaculars for the rest of his career.
Cecil B. DeMille was outspoken about his strong Episcopalian integrity, but his private life included mistresses and adultery. DeMille was a conservative Republican activist, becoming more conservative as he aged. He was known as anti-union and worked to prevent the unionizing of film production studios. However, according to DeMille himself, he was not anti-union and belonged to a few unions himself; he said he was rather against union leaders such as Walter Reuther and Harry Bridges, whom he compared to dictators. He supported Herbert Hoover, and in 1928 made his largest campaign donation to Hoover. However, DeMille also liked Franklin D. Roosevelt, finding him charismatic, tenacious, and intelligent, and he agreed with Roosevelt's abhorrence of Prohibition. DeMille lent Roosevelt a car for his campaign in the 1932 United States presidential election and voted for him. However, he would never again vote for a Democratic candidate in a presidential election.
From June 1, 1936, until January 22, 1945, Cecil DeMille hosted and directed Lux Radio Theatre, a weekly digest of current feature films. Broadcast on the Columbia Broadcasting System (CBS) from 1935 to 1954, the Lux Radio show was one of the most popular weekly shows in the history of radio. While DeMille was host, the show had forty million weekly listeners, earning DeMille an annual salary of $100,000. From 1936 to 1945, he produced, hosted, and directed all shows, with the occasional exception of a guest director. He resigned from the Lux Radio show after refusing to pay a one-dollar assessment to the American Federation of Radio Artists (AFRA), as he did not believe that any organization had the right to "levy a compulsory assessment upon any member."
DeMille sued the union for reinstatement but lost. He then appealed to the California Supreme Court and lost again. When the AFRA expanded to television, DeMille was banned from television appearances. Consequently, he formed the DeMille Foundation for Political Freedom in order to campaign for the right to work, and he spent the next few years giving speeches across the United States. DeMille's criticism was primarily of closed shops but later extended to communism and unions in general. The United States Supreme Court declined to review his case. Despite his loss, DeMille continued to lobby for the Taft–Hartley Act, which passed. The act prohibited denying anyone the right to work for refusing to pay a political assessment; however, it did not apply retroactively. Consequently, DeMille's ban from television and radio appearances lasted for the remainder of his life, though he was permitted to appear on radio or television to publicize a movie. William Keighley was his replacement, and DeMille never again worked in radio.
In 1939, DeMille's Union Pacific was successful thanks to his collaboration with the Union Pacific Railroad, which gave DeMille access to historical data, early-period trains, and expert crews, adding to the authenticity of the film. During pre-production of Union Pacific, DeMille was dealing with his first serious health issue: in March 1938, he underwent a major emergency prostatectomy. He suffered from a post-surgery infection from which he nearly did not recover, citing streptomycin as his saving grace. The surgery caused him to suffer from sexual dysfunction for the rest of his life, according to some family members. Following his surgery and the success of Union Pacific, in 1940 DeMille first used three-strip Technicolor in North West Mounted Police. DeMille wanted to film in Canada; however, due to budget constraints, the film was instead shot in Oregon and Hollywood. Critics were impressed with the visuals but found the script dull, calling it DeMille's "poorest Western". Despite the criticism, it was Paramount's highest-grossing film of the year. Audiences liked its highly saturated color, so DeMille made no further black-and-white features. DeMille was anti-communist and in 1940 abandoned a project to film Ernest Hemingway's For Whom the Bell Tolls due to its communist themes, despite having already paid $100,000 for the rights to the novel; he had been so eager to produce the film that he had not yet read it. He claimed he abandoned the project in order to complete a different one, but in reality it was to preserve his reputation and avoid appearing reactionary. While continuing to make films, he served during World War II, at the age of sixty, as his neighborhood's air-raid warden.
In 1942, DeMille worked with Jeanie MacPherson and his brother William deMille to produce a film called Queen of Queens, which was intended to be about Mary, mother of Jesus. After reading the screenplay, Daniel A. Lord warned DeMille that Catholics would find the film too irreverent, while non-Catholics would consider the film Catholic propaganda; consequently, the film was never made. MacPherson would work as a scriptwriter on many of DeMille's films. In 1938, DeMille supervised the compilation film Land of Liberty, representing the contribution of the American film industry to the 1939 New York World's Fair. DeMille used clips from his own films in Land of Liberty. Though the film was not high-grossing, it was well received, and DeMille was asked to shorten its running time to allow for more showings per day. MGM distributed the film in 1941 and donated profits to World War II relief charities.
In 1942, DeMille released Paramount's most successful film, Reap the Wild Wind. It was produced with a large budget and contained many special effects including an electronically operated giant squid. After working on Reap the Wild Wind, in 1944, he was the master of ceremonies at the rally organized by David O. Selznick in the Los Angeles Coliseum in support of the Dewey–Bricker ticket as well as Governor Earl Warren of California. DeMille's subsequent film Unconquered (1947) had the longest running time (146 minutes), longest filming schedule (102 days), and largest budget ($5 million). The sets and effects were so realistic that 30 extras needed to be hospitalized due to a scene with fireballs and flaming arrows. It was commercially very successful.
DeMille's next film, Samson and Delilah in 1949, became Paramount's highest-grossing film up to that time. A Biblical epic with sex, it was a characteristically DeMille film. 1952's The Greatest Show on Earth then became Paramount's highest-grossing film to that point, and it won the Academy Award for Best Picture and the Academy Award for Best Story. The film began production in 1949; Ringling Brothers-Barnum and Bailey were paid $250,000 for use of the title and facilities, and DeMille toured with the circus while helping write the script. Noisy and bright, the film was not well liked by critics but was a favorite among audiences. DeMille signed a contract with Prentice Hall publishers in August 1953 to publish an autobiography. DeMille would reminisce into a voice recorder, the recording would be transcribed, and the information would be organized in the biography by topic; Art Arthur also interviewed people for the autobiography. DeMille did not like the first draft, saying that he thought the person it portrayed was an "SOB" and that it made him sound too egotistical. Besides filmmaking and finishing his autobiography, DeMille was involved in other projects. In the early 1950s, he was recruited by Allen Dulles and Frank Wisner to serve on the board of the anti-communist National Committee for a Free Europe, the public face of the organization that oversaw the Radio Free Europe service. In 1954, Secretary of the Air Force Harold E. Talbott asked DeMille for help in designing the cadet uniforms at the newly established United States Air Force Academy. DeMille's designs, most notably his design of the distinctive cadet parade uniform, won praise from Air Force and Academy leadership, were ultimately adopted, and are still worn by cadets.
We have just lived through a war where our people were systematically executed. Here we have a man who made a film praising the Jewish people, that tells of Samson, one of the legends of our Scripture. Now he wants to make the life of Moses. We should get down on our knees to Cecil and say "Thank you!"
– Adolph Zukor, responding to DeMille's proposal of The Ten Commandments remake
In 1952, DeMille sought approval for a lavish remake of his 1923 silent film The Ten Commandments. He went before the Paramount board of directors, which was mostly Jewish-American. The members rejected his proposal, even though his last two films, Samson and Delilah and The Greatest Show on Earth, had been record-breaking hits. Adolph Zukor convinced the board to change their minds on the grounds of morality. DeMille did not have an exact budget proposal for the project, and it promised to be the most costly in U.S. film history. Still, the members unanimously approved it. The Ten Commandments, released in 1956, was DeMille's final film. It was the longest (3 hours, 39 minutes) and most expensive ($13 million) film in Paramount history. Production began in October 1954. The Exodus scene was filmed on-site in Egypt with the use of four Technicolor-VistaVision cameras filming 12,000 people. The crew continued filming in 1955 in Paris and Hollywood on 30 different sound stages, and they were even required to expand to the RKO sound studios. Post-production lasted a year, and the film premiered in Salt Lake City. Nominated for an Academy Award for Best Picture, it grossed over $80 million, surpassing the gross of The Greatest Show on Earth and every other film in history except for Gone with the Wind. In a practice unusual for the time, DeMille offered ten percent of his profit to the crew.
On November 7, 1954, while in Egypt filming the Exodus sequence for The Ten Commandments, DeMille (who was seventy-three) climbed a 107-foot (33 m) ladder to the top of the massive Per Rameses set and suffered a serious heart attack. Against the urging of his associate producer, DeMille insisted on returning to the set right away, and he developed a plan with his doctor to allow him to continue directing while reducing his physical stress. Although DeMille completed the film, his health was diminished by several more heart attacks; his daughter Cecilia took over as director while DeMille sat behind the camera with Loyal Griggs as the cinematographer.
Due to his frequent heart attacks, DeMille asked his son-in-law, actor Anthony Quinn, to direct a remake of his 1938 film The Buccaneer, with DeMille serving as executive producer and overseeing producer Henry Wilcoxon. Despite a cast led by Charlton Heston and Yul Brynner, the 1958 film The Buccaneer was a disappointment. DeMille attended its Santa Barbara premiere in December 1958 but was unable to attend the Los Angeles premiere. In the months before his death, DeMille was researching a film biography of Robert Baden-Powell, the founder of the Scout Movement; he asked David Niven to star in the film, but it was never made. DeMille was also planning a film about the space race, as well as another Biblical epic, about the Book of Revelation. DeMille's autobiography was mostly complete by the time he died and was published in November 1959.
Cecil B. DeMille suffered a series of heart attacks from June 1958 to January 1959, and died on January 21, 1959, following an attack. DeMille's funeral was held on January 23 at St. Stephen's Episcopal Church, and he was entombed at the Hollywood Memorial Cemetery (now known as Hollywood Forever). After his death, notable news outlets such as The New York Times, the Los Angeles Times, and The Guardian honored DeMille as "pioneer of movies", "the greatest creator and showman of our industry", and "the founder of Hollywood". DeMille left his multi-million-dollar estate in Laughlin Park, Los Feliz, Los Angeles, to his daughter Cecilia because his wife had dementia and was unable to care for an estate; his wife died one year later. His personal will drew a line between Cecilia and his three adopted children, with Cecilia receiving a majority of DeMille's inheritance and estate. The other three children were surprised by this, as DeMille had not treated the children differently in life. Cecilia lived in the house until her death in 1984; the house was later auctioned by his granddaughter Cecilia DeMille Presley, who also lived there in the late 1980s.
DeMille believed his first influences to be his parents, Henry and Beatrice DeMille. His playwright father introduced him to the theater at a young age. Henry was heavily influenced by the work of Charles Kingsley, whose ideas trickled down to DeMille. DeMille noted that his mother had a "high sense of the dramatic" and was determined to continue the artistic legacy of her husband after he died. Beatrice became a play broker and author's agent, influencing DeMille's early life and career. DeMille's father worked with David Belasco, a theatrical producer, impresario, and playwright known for adding realistic elements to his plays, such as real flowers, food, and aromas, that could transport his audiences into the scenes. While working in theatre, DeMille used real fruit trees in his play California, as influenced by Belasco. Like Belasco's, DeMille's theatre revolved around entertainment rather than artistry. Generally, Belasco's influence on DeMille's career can be seen in DeMille's showmanship and narration, while E. H. Sothern's early influence can be seen in DeMille's perfectionism. DeMille recalled that one of the most influential plays he saw was Hamlet, directed by Sothern.
DeMille's filmmaking process always began with extensive research. Next, he would work with writers to develop the story he was envisioning, then help them construct a script. Finally, he would leave the script with artists and allow them to create artistic depictions and renderings of each scene. Plot and dialogue were not a strong point of DeMille's films; consequently, he focused his efforts on his films' visuals. He worked with visual technicians, editors, art directors, costume designers, cinematographers, and set carpenters in order to perfect the visual aspects of his films. With his editor, Anne Bauchens, DeMille used editing techniques that let the visual images, rather than the dialogue, bring the plot to its climax. DeMille held large and frequent office conferences to discuss and examine all aspects of the working film, including storyboards, props, and special effects.
DeMille rarely gave direction to actors; he preferred to "office-direct", where he would work with actors in his office, going over characters and reading through scripts. Any problems on the set were often fixed by writers in the office rather than on the set. DeMille did not believe a large movie set was the place to discuss minor character or line issues. DeMille was particularly adept at directing and managing large crowds in his films. Martin Scorsese recalled that DeMille had the skill to maintain control of not only the lead actors in a frame but the many extras in the frame as well. DeMille was adept at directing "thousands of extras", and many of his pictures include spectacular set pieces: the toppling of the pagan temple in Samson and Delilah; train wrecks in The Road to Yesterday, Union Pacific and The Greatest Show on Earth; the destruction of an airship in Madam Satan; and the parting of the Red Sea in both versions of The Ten Commandments.
In his early films, DeMille experimented with photographic light and shade, which created dramatic shadows instead of glare. His specific use of lighting, influenced by his mentor David Belasco, was for the purpose of creating "striking images" and heightening "dramatic situations". DeMille was unique in using this technique. In addition to his use of volatile and abrupt film editing, his lighting and composition were innovative for the time period as filmmakers were primarily concerned with a clear, realistic image. Another important aspect of DeMille's editing technique was to put the film away for a week or two after an initial edit in order to re-edit the picture with a fresh mind. This allowed for the rapid production of his films in the early years of the Lasky Company. The cuts were sometimes rough, but the movies were always interesting.
DeMille often edited in a manner that favored psychological space rather than physical space through his cuts. In this way, the characters' thoughts and desires are the visual focus rather than the circumstances regarding the physical scene. As DeMille's career progressed, he increasingly relied on artist Dan Sayre Groesbeck's concept, costume, and storyboard art. Groesbeck's art was circulated on set to give actors and crew members a better understanding of DeMille's vision. His art was even shown at Paramount meetings when pitching new films. DeMille adored the art of Groesbeck, even hanging it above his fireplace, but film staff found it difficult to convert his art into three-dimensional sets. As DeMille continued to rely on Groesbeck, the nervous energy of his early films transformed into more steady compositions of his later films. While visually appealing, this made the films appear more old-fashioned.
Composer Elmer Bernstein described DeMille as "sparing no effort" when filmmaking. Bernstein recalled that DeMille would scream, yell, or flatter—whatever it took to achieve the perfection he required in his films. DeMille was painstakingly attentive to details on set and was as critical of himself as he was of his crew. Costume designer Dorothy Jeakins, who worked with DeMille on The Ten Commandments (1956), said that he was skilled in humiliating people. Jeakins admitted that she received quality training from him, but that it was necessary to become a perfectionist on a DeMille set to avoid being fired. DeMille had an authoritarian persona on set; he required absolute attention from the cast and crew and had a band of assistants who catered to his needs. To maintain control, he would speak to the entire set, sometimes enormous and filled with countless crew members and extras, via a microphone. He was disliked by many inside and outside of the film industry for his cold and controlling reputation.
DeMille was known for autocratic behavior on the set, singling out and berating extras who were not paying attention. Many of these displays, however, were thought to be staged as an exercise in discipline. He despised actors who were unwilling to take physical risks, especially when he had first demonstrated that the required stunt would not harm them. This occurred with Victor Mature in Samson and Delilah: Mature refused to wrestle Jackie the Lion, even though DeMille had just tussled with the lion, proving that it was tame, and DeMille told the actor that he was "one hundred percent yellow". Paulette Goddard's refusal to risk personal injury in a scene involving fire in Unconquered cost her DeMille's favor and a role in The Greatest Show on Earth. DeMille did receive help in his films, notably from Alvin Wyckoff, who shot forty-three of DeMille's films; his brother William deMille, who would occasionally serve as his screenwriter; Jeanie MacPherson, who served as DeMille's exclusive screenwriter for fifteen years; and Eddie Salven, DeMille's favorite assistant director.
DeMille made stars of unknown actors: Gloria Swanson, Bebe Daniels, Rod La Rocque, William Boyd, Claudette Colbert, and Charlton Heston. He also cast established stars such as Gary Cooper, Robert Preston, Paulette Goddard and Fredric March in multiple pictures. DeMille cast some of his performers repeatedly, including Henry Wilcoxon, Julia Faye, Joseph Schildkraut, Ian Keith, Charles Bickford, Theodore Roberts, Akim Tamiroff, and William Boyd. DeMille was credited by actor Edward G. Robinson with saving his career following his eclipse in the Hollywood blacklist.
Cecil B. DeMille's film production career evolved from critically significant silent films to financially significant sound films. He began his career with reserved yet brilliant melodramas; from there, his style developed into marital comedies with outrageously melodramatic plots. In order to attract a high-class audience, DeMille based many of his early films on stage melodramas, novels, and short stories. He began producing epics early in his career, and they solidified his reputation in the 1920s. By 1930, DeMille had perfected his style of mass-interest spectacle films with Western, Roman, or Biblical themes. DeMille was often criticized for making his spectacles too colorful and for being too occupied with entertaining the audience rather than exploring the artistic and auteur possibilities that film could provide. However, others interpreted DeMille's work as visually impressive, thrilling, and nostalgic. Along the same lines, critics often judge DeMille by his later spectacles and fail to consider the several decades of ingenuity and energy that defined him during his generation. Throughout his career, he did not alter his films to better adhere to contemporary or popular styles. Actor Charlton Heston admitted DeMille was "terribly unfashionable", and Sidney Lumet called DeMille "the cheap version of D.W. Griffith", adding that DeMille "[didn't have]...an original thought in his head", though Heston added that DeMille was much more than that.
According to Scott Eyman, DeMille's films were at the same time masculine and feminine due to his thematic adventurousness and his eye for the extravagant. DeMille's distinctive style can be seen through camera and lighting effects as early as The Squaw Man with the use of daydream images; moonlight and sunset on a mountain; and side-lighting through a tent flap. In the early age of cinema, DeMille differentiated the Lasky Company from other production companies due to the use of dramatic, low-key lighting they called "Lasky lighting" and marketed as "Rembrandt lighting" to appeal to the public. DeMille achieved international recognition for his unique use of lighting and color tint in his film The Cheat. DeMille's 1956 version of The Ten Commandments, according to director Martin Scorsese, is renowned for its level of production and the care and detail that went into creating the film. He stated that The Ten Commandments was the final culmination of DeMille's style.
DeMille was interested in art, and his favorite artist was Gustave Doré; DeMille based some of his most well-known scenes on the work of Doré. DeMille was the first director to connect art to filmmaking; he created the title of "art director" on the film set. DeMille was also known for his use of special effects without the use of digital technology. Notably, DeMille had special effects artist John P. Fulton create the parting of the Red Sea scene in his 1956 film The Ten Commandments, which was one of the most expensive special effects in film history and has been called by Steven Spielberg "the greatest special effect in film history". The actual parting of the sea was created by releasing 360,000 gallons of water into a huge water tank split by a U-shaped trough, overlaying it with footage of a giant waterfall built on the Paramount backlot, and playing the clip backward.
Aside from his Biblical and historical epics, which are concerned with how man relates to God, some of DeMille's films contained themes of "neo-naturalism", which portray the conflict between the laws of man and the laws of nature. Although he is known for his later "spectacular" films, his early films are held in high regard by critics and film historians. DeMille discovered the possibilities of the "bathroom" or "boudoir" in film without being "vulgar" or "cheap". DeMille's films Male and Female, Why Change Your Wife?, and The Affairs of Anatol can be retrospectively described as high camp and are categorized as "early DeMille films" due to their particular style of production and costume and set design. However, his earlier films The Captive, Kindling, Carmen, and The Whispering Chorus are more serious films. It is difficult to fit DeMille's films into one specific genre: his first three films were Westerns, and he filmed many Westerns throughout his career, but he also made comedies, period and contemporary romances, dramas, fantasies, propaganda, Biblical spectacles, musical comedies, suspense, and war films. At least one DeMille film can represent each film genre. DeMille produced the majority of his films before the 1930s, and by the time sound films were invented, film critics saw DeMille as antiquated, with his best filmmaking years behind him.
DeMille's films contained many similar themes throughout his career, though the films of his silent era were often thematically different from the films of his sound era. His silent-era films often included the "battle of the sexes" theme, reflecting the era of women's suffrage and the enlarging role of women in society. Moreover, before his religious-themed films, many of his silent-era films revolved around "husband-and-wife-divorce-and-remarry satires" and were considerably more adult-themed. According to Simon Louvish, these films reflected DeMille's inner thoughts and opinions about marriage and human sexuality. Religion was a theme that DeMille returned to throughout his career: of his seventy films, five revolved around stories of the Bible and the New Testament, and many others, while not direct retellings of Biblical stories, had themes of faith and religious fanaticism, as in The Crusades and The Road to Yesterday. The West and frontier America were also recurring subjects; his first several films were Westerns, and he produced a string of Westerns during the sound era. Instead of portraying the danger and anarchy of the West, he portrayed the opportunity and redemption found in Western America. Another common theme in DeMille's films is the reversal of fortune and the portrayal of the rich and the poor, including class warfare and man-versus-society conflicts, as in The Golden Chance and The Cheat. In relation to his own interests and sexual preferences, sadomasochism was a minor theme present in some of his films. Train crashes, another minor motif, can be found in several of his films.
Known as the father of the Hollywood motion picture industry, Cecil B. DeMille made 70 films including several box-office hits. DeMille is one of the more commercially successful film directors in history, with his films before the release of The Ten Commandments estimated to have grossed $650 million worldwide. Adjusted for inflation, DeMille's remake of The Ten Commandments is the eighth highest-grossing film in the world.
According to Sam Goldwyn, critics did not like DeMille's films, but the audiences did, and "they have the final word". Similarly, scholar David Blanke argued that DeMille had lost the respect of his colleagues and film critics by his late film career; however, the performance of his final films showed that he was still respected by audiences. Five of DeMille's films were the highest-grossing film of the year of their release; only Steven Spielberg has topped him, with six of his films being the highest-grossing films of the year. DeMille's highest-grossing films include: The Sign of the Cross (1932), Unconquered (1947), Samson and Delilah (1949), The Greatest Show on Earth (1952), and The Ten Commandments (1956). Director Ridley Scott has been called "the Cecil B. DeMille of the digital era" due to his classical and medieval epics.
Despite his box-office success, awards, and artistic achievements, DeMille has been dismissed and ignored by critics both during his life and posthumously. He was consistently criticized for producing shallow films without talent or artistic care. Compared to other directors, few film scholars have taken the time to academically analyze his films and style. During the French New Wave, critics began to categorize certain filmmakers as auteurs, such as Howard Hawks, John Ford, and Raoul Walsh. DeMille was omitted from the list, thought to be too unsophisticated and antiquated to be considered an auteur. However, Simon Louvish wrote "he was the complete master and auteur of his films", and Anton Kozlovic called him the "unsung American auteur". Andrew Sarris, a leading proponent of the auteur theory, ranked DeMille highly as an auteur in the "Far Side of Paradise", just below the "Pantheon". Sarris added that despite the influence of the styles of contemporary directors throughout his career, DeMille's style remained unchanged. Robert Birchard wrote that one could argue the auteurship of DeMille on the basis that DeMille's thematic and visual style remained consistent throughout his career. However, Birchard acknowledged that Sarris's point was more likely that DeMille's style lagged behind the development of film as an art form. Meanwhile, Sumiko Higashi sees DeMille as "not only a figure who was shaped and influenced by the forces of his era but as a filmmaker who left his own signature on the culture industry." The critic Camille Paglia has called The Ten Commandments one of the ten greatest films of all time.
DeMille was one of the first directors to become a celebrity in his own right. He cultivated the image of the omnipotent director, complete with megaphone, riding crop, and jodhpurs. He was known for his unique working wardrobe, which included riding boots, riding pants, and soft, open-necked shirts. Joseph Henabery recalled that DeMille looked like "a king on a throne surrounded by his court" while directing films on a camera platform.
DeMille was liked by some of his fellow directors and disliked by others, and his films were often dismissed by his peers as vapid spectacle. Director John Huston intensely disliked both DeMille and his films. "He was a thoroughly bad director", Huston said. "A dreadful showoff. Terrible. To diseased proportions." Said fellow director William Wellman: "Directorially, I think his pictures were the most horrible things I've ever seen in my life. But he put on pictures that made a fortune. In that respect, he was better than any of us." Producer David O. Selznick wrote: "There has appeared only one Cecil B. DeMille. He is one of the most extraordinarily able showmen of modern times. However much I may dislike some of his pictures, it would be very silly of me, as a producer of commercial motion pictures, to demean for an instant his unparalleled skill as a maker of mass entertainment." Salvador Dalí wrote that DeMille, Walt Disney, and the Marx Brothers were "the three great American Surrealists". DeMille appeared as himself in numerous films, including the MGM comedy Free and Easy. He often appeared in his coming-attraction trailers and narrated many of his later films, even stepping on screen to introduce The Ten Commandments. DeMille was immortalized in Billy Wilder's Sunset Boulevard, in which he played himself and Gloria Swanson spoke the line: "All right, Mr. DeMille. I'm ready for my close-up." DeMille's reputation enjoyed a renaissance in the 2010s.
As a filmmaker, DeMille was the aesthetic inspiration of many directors and films due to his early influence during the crucial development of the film industry. DeMille's early silent comedies influenced the comedies of Ernst Lubitsch and Charlie Chaplin's A Woman of Paris. Additionally, DeMille's epics such as The Crusades influenced Sergei Eisenstein's Alexander Nevsky. Moreover, DeMille's epics inspired directors such as Howard Hawks, Nicholas Ray, Joseph L. Mankiewicz, and George Stevens to try producing epics of their own. Alfred Hitchcock cited DeMille's 1921 film Forbidden Fruit as an influence on his work and one of his top ten favorite films. DeMille also influenced the careers of many modern directors. Martin Scorsese cited Unconquered, Samson and Delilah, and The Greatest Show on Earth as DeMille films that left lasting impressions on him. Scorsese said he had viewed The Ten Commandments forty or fifty times. Steven Spielberg stated that DeMille's The Greatest Show on Earth was one of the films that influenced him to become a filmmaker, and DeMille influenced about half of Spielberg's films, including War of the Worlds. The Ten Commandments inspired DreamWorks Animation's later film about Moses, The Prince of Egypt. As one of the founding members of Paramount Pictures and a co-founder of Hollywood, DeMille had a role in the development of the film industry. Consequently, the name "DeMille" has become synonymous with filmmaking.
Publicly Episcopalian, DeMille drew on his Christian and Jewish ancestry to convey a message of tolerance. DeMille received more than a dozen awards from Christian and Jewish religious and cultural groups, including B'nai B'rith. However, not everyone received DeMille's religious films favorably. DeMille was accused of antisemitism after the release of The King of Kings, and director John Ford despised DeMille for what he saw as "hollow" biblical epics meant to promote DeMille's reputation during the politically turbulent 1950s. In response to the claims, DeMille donated some of the profits from The King of Kings to charity. In the 2012 Sight & Sound poll, both DeMille's Samson and Delilah and his 1923 version of The Ten Commandments received votes, but neither made the top 100 films. Although many of DeMille's films are available on DVD and Blu-ray, only 20 of his silent films are commercially available on DVD.
The original Lasky-DeMille Barn in which The Squaw Man was filmed was converted into a museum named the "Hollywood Heritage Museum". It opened on December 13, 1985, and features some of DeMille's personal artifacts. The Lasky-DeMille Barn was dedicated as a California historical landmark in a ceremony on December 27, 1956; DeMille was the keynote speaker. It was listed on the National Register of Historic Places in 2014. The Dunes Center in Guadalupe, California, contains an exhibition of artifacts uncovered in the desert near Guadalupe from DeMille's set of his 1923 version of The Ten Commandments, known as the "Lost City of Cecil B. DeMille". Donated by the Cecil B. DeMille Foundation in 2004, the moving image collection of Cecil B. DeMille is held at the Academy Film Archive and includes home movies, outtakes, and never-before-seen test footage.
In summer 2019, The Friends of the Pompton Lakes Library hosted a Cecil B. DeMille film festival to celebrate DeMille's achievements and connection to Pompton Lakes. They screened four of his films at Christ Church, where DeMille and his family attended church when they lived there. Two schools have been named after him: Cecil B. DeMille Middle School, in Long Beach, California, which was closed and demolished in 2010 to make way for a new high school; and Cecil B. DeMille Elementary School in Midway City, California. The former film building at Chapman University in Orange, California, is named in his honor. During the Apollo 11 mission, Buzz Aldrin referred to himself in one instance as "Cecil B. DeAldrin", as a humorous nod to DeMille. The title of the 2000 John Waters film Cecil B. Demented alludes to DeMille.
DeMille's legacy is maintained by his granddaughter Cecilia DeMille Presley who serves as the president of the Cecil B. DeMille Foundation, which strives to support higher education, child welfare, and film in Southern California. In 1963, the Cecil B. DeMille Foundation donated the "Paradise" ranch to the Hathaway Foundation, which cares for emotionally disturbed and abused children. A large collection of DeMille's materials including scripts, storyboards, and films resides at Brigham Young University in L. Tom Perry Special Collections.
Cecil B. DeMille received many awards and honors, especially later in his career.
In August 1941, DeMille was honored with a block in the forecourt of Grauman's Chinese Theatre.
The American Academy of Dramatic Arts honored DeMille with an Alumni Achievement Award in 1958.
In 1957, DeMille gave the commencement address for the graduation ceremony of Brigham Young University, where he received an honorary Doctorate of Letters degree. Additionally, in 1958, he received an honorary Doctorate of Law degree from Temple University.
From the film industry, DeMille received the Irving G. Thalberg Memorial Award at the Academy Awards in 1953, and a Lifetime Achievement Award from the Directors Guild of America the same year. In the same ceremony, DeMille received a Directors Guild of America Award nomination for Outstanding Directorial Achievement in Motion Pictures for The Greatest Show on Earth. In 1952, DeMille was awarded the first Cecil B. DeMille Award at the Golden Globes. An annual award, the Golden Globes' Cecil B. DeMille Award recognizes lifetime achievement in the film industry. For his contributions to the motion picture and radio industries, DeMille has two stars on the Hollywood Walk of Fame. The first, for radio, is located at 6240 Hollywood Blvd.; the second, for motion pictures, is located at 1725 Vine Street.
DeMille received two Academy Awards: an Honorary Award for "37 years of brilliant showmanship" in 1950 and a Best Picture award in 1953 for The Greatest Show on Earth. For the same film, DeMille received a Golden Globe Award for Best Director and was also nominated for Best Director at the 1953 Academy Awards. He was further nominated in the Best Picture category for The Ten Commandments at the 1957 Academy Awards. DeMille's Union Pacific retrospectively received the Palme d'Or at the 2002 Cannes Film Festival.
Two of DeMille's films have been selected for preservation in the National Film Registry by the United States Library of Congress: The Cheat (1915) and The Ten Commandments (1956).
Cecil B. DeMille made 70 features, 52 of which are silent films. The first 24 of his silent films were made in the first three years of his career (1913–1916). Eight of his films were "epics", with five of those classified as "Biblical". Six of DeMille's films—The Arab, The Wild Goose Chase, The Dream Girl, The Devil-Stone, We Can't Have Everything, and The Squaw Man (1918)—were destroyed by nitrate decomposition and are considered lost. The Ten Commandments is broadcast annually at Passover in the United States on the ABC Television Network.
Filmography obtained from Fifty Hollywood Directors.
Silent films
Sound films
These are films that DeMille produced or assisted in directing, credited or uncredited.
DeMille frequently made cameos as himself in other Paramount films. Additionally, he often appeared in prologues and special trailers that he created for his films, giving him an opportunity to personally address the audience. | [
{
"paragraph_id": 0,
"text": "Cecil Blount DeMille (/ˈsɛsəl dəˈmɪl/; August 12, 1881 – January 21, 1959) was an American filmmaker and actor. Between 1914 and 1958, he made 70 features, both silent and sound films. He is acknowledged as a founding father of American cinema and the most commercially successful producer-director in film history. His films were distinguished by their epic scale and by his cinematic showmanship. His silent films included social dramas, comedies, Westerns, farces, morality plays, and historical pageants. He was an active Freemason and member of Prince of Orange Lodge #16 in New York City.",
"title": ""
},
{
"paragraph_id": 1,
"text": "DeMille was born in Ashfield, Massachusetts, and grew up in New York City. He began his career as a stage actor in 1900. He later moved to writing and directing stage productions, some with Jesse Lasky, who was then a vaudeville producer. DeMille's first film, The Squaw Man (1914), was also the first full-length feature film shot in Hollywood. Its interracial love story made it commercially successful, and it first publicized Hollywood as the home of the U.S. film industry. The continued success of his productions led to the founding of Paramount Pictures with Lasky and Adolph Zukor. His first biblical epic, The Ten Commandments (1923), was both a critical and commercial success; it held the Paramount revenue record for twenty-five years.",
"title": ""
},
{
"paragraph_id": 2,
"text": "DeMille directed The King of Kings (1927), a biography of Jesus, which gained approval for its sensitivity and reached more than 800 million viewers. The Sign of the Cross (1932) is said to be the first sound film to integrate all aspects of cinematic technique. Cleopatra (1934) was his first film to be nominated for the Academy Award for Best Picture. After more than thirty years in film production, DeMille reached a pinnacle in his career with Samson and Delilah (1949), a biblical epic that became the highest-grossing film of 1950. Along with biblical and historical narratives, he also directed films oriented toward \"neo-naturalism\", which tried to portray the laws of man fighting the forces of nature.",
"title": ""
},
{
"paragraph_id": 3,
"text": "He received his first nomination for the Academy Award for Best Director for his circus drama The Greatest Show on Earth (1952), which won both the Academy Award for Best Picture and the Golden Globe Award for Best Motion Picture – Drama. His last and best known film, The Ten Commandments (1956), also a Best Picture Academy Award nominee, is currently the eighth-highest-grossing film of all time, adjusted for inflation. In addition to his Best Picture Awards, he received an Academy Honorary Award for his film contributions, the Palme d'Or (posthumously) for Union Pacific (1939), a DGA Award for Lifetime Achievement, and the Irving G. Thalberg Memorial Award. He was the first recipient of the Golden Globe Cecil B. DeMille Award, which was named in his honor. DeMille's reputation had a renaissance in the 2010s, and his work has influenced numerous other films and directors.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Cecil Blount DeMille was born on August 12, 1881, in a boarding house on Main Street in Ashfield, Massachusetts, where his parents had been vacationing for the summer. On September 1, 1881, the family returned with the newborn DeMille to their flat in New York. DeMille was named after his grandmothers Cecelia Wolff and Margarete Blount. He was the second of three children of Henry Churchill de Mille (September 4, 1853 – February 10, 1893) and his wife, Matilda Beatrice deMille (née Samuel; January 30, 1853 – October 8, 1923), known as Beatrice. His brother, William C. DeMille, was born on July 25, 1878. Henry de Mille, whose ancestors were of English and Dutch-Belgian descent, was a North Carolina-born dramatist, actor, and lay reader in the Episcopal Church. DeMille's father was also an English teacher at Columbia College (now Columbia University). He worked as a playwright, administrator, and faculty member during the early years of the American Academy of Dramatic Arts, established in New York City in 1884. Henry deMille frequently collaborated with David Belasco in playwriting; their best-known collaborations included \"The Wife\", \"Lord Chumley\", \"The Charity Ball\", and \"Men and Women\".",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "Cecil B. DeMille's mother, Beatrice, a literary agent and scriptwriter, was the daughter of German Jews. She had emigrated from England with her parents in 1871 when she was 18; the newly arrived family settled in Brooklyn, New York, where they maintained a middle-class, English-speaking household.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "DeMille's parents met as members of a music and literary society in New York. Henry was a tall, red-headed student. Beatrice was intelligent, educated, forthright, and strong-willed. The two were married on July 1, 1876, despite Beatrice's parents' objections because of the young couple's differing religions; Beatrice converted to Episcopalianism.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "DeMille was a brave and confident child. He gained his love of theater while watching his father and Belasco rehearse their plays. A lasting memory for DeMille was a lunch with his father and actor Edwin Booth. As a child, DeMille created an alter-ego, Champion Driver, a Robin Hood-like character, evidence of his creativity and imagination. The family lived in Washington, North Carolina, until Henry built a three-story Victorian-style house for his family in Pompton Lakes, New Jersey; they named this estate \"Pamlico\". John Philip Sousa was a friend of the family, and DeMille recalled throwing mud balls in the air so neighbor Annie Oakley could practice her shooting. DeMille's sister, Agnes, was born on April 23, 1891; his mother nearly did not survive the birth. Agnes would die on February 11, 1894, at the age of three from spinal meningitis. DeMille's parents operated a private school in town and attended Christ Episcopal Church. DeMille recalled that this church was the place where he visualized the story of his 1923 version of The Ten Commandments.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "On January 8, 1893, at age 40, Henry de Mille died suddenly from typhoid fever, leaving Beatrice with three children. To provide for her family, she opened the Henry C. de Mille School for Girls in her home in February 1893. The aim of the school was to teach young women to properly understand and fulfill the women's duty to herself, her home, and her country. Before Henry de Mille's death, Beatrice had \"enthusiastically supported\" her husband's theatrical aspirations. She later became the second female play broker on Broadway. On Henry de Mille's deathbed, he told his wife that he did not want his sons to become playwrights. DeMille's mother sent him to Pennsylvania Military College (now Widener University) in Chester, Pennsylvania, at age 15. He fled the school to join the Spanish–American War, but failed to meet the age requirement. At the military college, even though his grades were average, he reportedly excelled in personal conduct. DeMille attended the American Academy of Dramatic Arts (tuition-free due to his father's service to the Academy). He graduated in 1900, and for graduation, his performance was the play The Arcady Trail. In the audience was Charles Frohman, who would cast DeMille in his play Hearts are Trumps, DeMille's Broadway debut.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "Cecil B. DeMille began his career as an actor on the stage in the theatrical company of Charles Frohman in 1900. He debuted as an actor on February 21, 1900, in the play Hearts Are Trumps at New York's Garden Theater. In 1901, DeMille starred in productions of A Repentance, To Have and to Hold, and Are You a Mason? At the age of 21, Cecil B. DeMille married Constance Adams on August 16, 1902, at Adams's father's home in East Orange, New Jersey. The wedding party was small. Beatrice DeMille's family was not in attendance, and Simon Louvish suggests that this was to conceal DeMille's partial Jewish heritage. Adams was 29 years old at the time of their marriage, eight years older than DeMille. They had met in a theater in Washington D.C. while they were both acting in Hearts Are Trumps.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "They were sexually incompatible; according to DeMille, Adams was too \"pure\" to \"feel such violent and evil passions\". DeMille had more violent sexual preferences and fetishes than his wife. Adams allowed DeMille to have several long-term mistresses during their marriage as an outlet while maintaining an outward appearance of a faithful marriage. One of DeMille's affairs was with his screenwriter Jeanie MacPherson. Despite his reputation for extramarital affairs, DeMille did not like to have affairs with his stars, as he believed it would cause him to lose control as a director. He related a story that he maintained his self-control when Gloria Swanson sat on his lap, refusing to touch her.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "In 1902, he played a small part in Hamlet. Publicists wrote that he became an actor in order to learn how direct and produce, but DeMille admitted that he became an actor in order to pay the bills. From 1904 to 1905, DeMille attempted to make a living as a stock theatre actor with his wife, Constance. DeMille made a 1905 reprise in Hamlet as Osric. In the summer of 1905, DeMille joined the stock cast at the Elitch Theatre in Denver, Colorado. He appeared in eleven of the fifteen plays presented that season, although all were minor roles. Maude Fealy would appear as the featured actress in several productions that summer and would develop a lasting friendship with DeMille. (He would later cast her in The Ten Commandments.)",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "His brother, William, was establishing himself as a playwright and sometimes invited DeMille to collaborate. DeMille and William collaborated on The Genius, The Royal Mounted, and After Five. However, none of these were very successful; William deMille was most successful when he worked alone. DeMille and his brother at times worked with the legendary impresario David Belasco, who had been a friend and collaborator of their father. DeMille would later adapt Belasco's The Girl of the Golden West, Rose of the Rancho, and The Warrens of Virginia into films. DeMille was credited with creating the premise of Belasco's The Return of Peter Grimm. The Return of Peter Grimm sparked controversy, because Belasco had taken DeMille's unnamed screenplay, changed the characters, and named it The Return of Peter Grimm, producing and presenting it as his own work. DeMille was credited in small print as \"based on an idea by Cecil DeMille\". The play was successful, and DeMille was distraught that his childhood idol had plagiarized his work.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "DeMille performed on stage with actors whom he would later direct in films: Charlotte Walker, Mary Pickford, and Pedro de Cordoba. DeMille also produced and directed plays. His 1905 performance in The Prince Chap as the Earl of Huntington was well received by audiences. DeMille wrote a few of his own plays in-between stage performances, but his playwriting was not as successful. His first play was The Pretender-A Play in a Prologue and 4 Acts set in seventeenth century Russia. Another unperformed play he wrote was Son of the Winds, a mythological Native American story. Life was difficult for DeMille and his wife as traveling actors; however, traveling allowed him to experience part of the United States he had not yet seen. DeMille sometimes worked with the director E. H. Sothern, who influenced DeMille's later perfectionism in his work. In 1907, due to a scandal with one of Beatrice's students, Evelyn Nesbit, the Henry de Mille School lost students. The school closed, and Beatrice filed for bankruptcy. DeMille wrote another play originally called Sergeant Devil May Care, which was renamed The Royal Mounted. He also toured with the Standard Opera Company, but there are few records to indicate DeMille's singing ability. DeMille had a daughter, Cecilia, on November 5, 1908, who would be his only biological child. In the 1910s, DeMille began directing and producing other writer's plays.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "DeMille was poor and struggled to find work. Consequently, his mother hired him for her agency The DeMille Play Company, and taught him how to be an agent and a playwright. Eventually, he became manager of the agency and later, a junior partner with his mother. In 1911, DeMille became acquainted with vaudeville producer Jesse Lasky when Lasky was searching for a writer for his new musical. He initially sought out William deMille. William had been a successful playwright, but DeMille was suffering from the failure of his plays The Royal Mounted and The Genius. However, Beatrice introduced Lasky to DeMille instead. The collaboration of DeMille and Lasky produced a successful musical called California, which opened in New York in January 1912. Another DeMille-Lasky production that opened in January 1912 was The Antique Girl. DeMille found success in the spring of 1913, producing Reckless Age by Lee Wilson, a play about a high society girl wrongly accused of manslaughter starring Frederick Burton and Sydney Shields. However, changes in the theater rendered DeMille's melodramas obsolete before they were produced, and true theatrical success eluded him. He produced many flops. Having become disinterested in working in theatre, DeMille's passion for film was ignited when he watched the 1912 French film Les Amours de la reine Élisabeth.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "Desiring a change of scene, Cecil B. DeMille, Jesse Lasky, Sam Goldfish (later Samuel Goldwyn), and a group of East Coast businessmen created the Jesse L. Lasky Feature Play Company in 1913, over which DeMille became director-general. Lasky and DeMille were said to have sketched out the organization of the company on the back of a restaurant menu. As director-general, DeMille's job was to make the films. In addition to directing, DeMille was the supervisor and consultant for the first year of films made by the Lasky Feature Play Company. Sometimes, he directed scenes for other directors at the Feature Play Company in order to release films on time. Moreover, when he was busy directing other films, he would co-author other Lasky Company scripts as well as create screen adaptations that others directed.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "The Lasky Play Company sought out William deMille to join the company, but he rejected the offer because he did not believe there was any promise in a film career. When William found out that DeMille had begun working in the motion picture industry, he wrote DeMille a letter, disappointed that he was willing \"to throw away [his] future\" when he was \"born and raised in the finest traditions of the theater\". The Lasky Company wanted to attract high-class audiences to their films, so they began producing films from literary works. The Lasky Company bought the rights to the play The Squaw Man by Edwin Milton Royle and cast Dustin Farnum in the lead role. They offered Farnum a choice to have a quarter stock in the company (similar to William deMille) or $250 per week as salary. Farnum chose $250 per week. Already $15,000 in debt to Royle for the screenplay of The Squaw Man, Lasky's relatives bought the $5,000 stock to save the Lasky Company from bankruptcy. With no knowledge of filmmaking, DeMille was introduced to observe the process at film studios. He was eventually introduced to Oscar Apfel, a stage director who had been a director with the Edison Company.",
"title": "Biography"
},
{
"paragraph_id": 17,
"text": "On December 12, 1913, DeMille, his cast, and crew boarded a Southern Pacific train bound for Flagstaff via New Orleans. His tentative plan was to shoot a film in Arizona, but he felt that Arizona did not typify the Western look they were searching for. They also learned that other filmmakers were successfully shooting in Los Angeles, even in winter. He continued to Los Angeles. Once there, he chose not to shoot in Edendale, where many studios were, but in Hollywood. DeMille rented a barn to function as their film studio. Filming began on December 29, 1913, and lasted three weeks. Apfel filmed most of The Squaw Man due to DeMille's inexperience; however, DeMille learned quickly and was particularly adept at impromptu screenwriting as necessary. He made his first film run sixty minutes, as long as a short play. The Squaw Man (1914), co-directed by Oscar Apfel, was a sensation, and it established the Lasky Company. This was the first feature-length film made in Hollywood. There were problems with the perforation of the film stock, and it was discovered the DeMille had brought a cheap British film perforator, which had punched in sixty-five holes per foot instead of the industry-standard of sixty-four. Lasky and DeMille convinced film pioneer Siegmund Lubin of the Lubin Manufacturing Company of Philadelphia to have his experienced technicians reperforate the film This was also the first American feature film; however, only by release date, as D. W. Griffith's Judith of Bethulia was filmed earlier than The Squaw Man, but released later. Additionally, this was the only film in which DeMille shared director's credit with Oscar C. Apfel.",
"title": "Biography"
},
{
"paragraph_id": 18,
"text": "The Squaw Man was a success, which led to the eventual founding of Paramount Pictures and Hollywood becoming the \"film capital of the world\". The film grossed over ten times its budget after its New York premiere in February 1914. DeMille's next project was to aid Oscar Apfel in directing Brewster's Millions, which was wildly successful. In December 1914, Constance Adams brought home John DeMille, a fifteen-month-old, whom the couple legally adopted three years later. Biographer Scott Eyman suggested that this may have been a result of Adams's recent miscarriage.",
"title": "Biography"
},
{
"paragraph_id": 19,
"text": "Cecil B. DeMille's second film credited exclusively to him was The Virginian. This is the earliest of DeMille's films available in a quality, color-tinted video format. However, this version is actually a 1918 re-release. The first few years of the Lasky Company were spent in making films nonstop, literally writing the language of film. DeMille himself directed twenty films by 1915. The most successful films during the beginning of the Lasky Company were Brewster's Millions (co-directed by DeMille), Rose of the Rancho, and The Ghost Breaker. DeMille adapted Belasco's dramatic lighting techniques to film technology, mimicking moonlight with U.S. cinema's first attempts at \"motivated lighting\" in The Warrens of Virginia. This was the first of few film collaborations with his brother William. They struggled to adapt the play from the stage to the set. After the film was shown, viewers complained that the shadows and lighting prevented the audience from seeing the actors' full faces, complaining that they would only pay half price. However, Sam Goldwyn realized that if they called it \"Rembrandt\" lighting, the audience would pay double the price. Additionally, because of DeMille's cordiality after the Peter Grimm incident, DeMille was able to rekindle his partnership with Belasco. He adapted several of Belasco's screenplays into film.",
"title": "Biography"
},
{
"paragraph_id": 20,
"text": "DeMille's most successful film was The Cheat; DeMille's direction in the film was acclaimed. In 1916, exhausted from three years of nonstop filmmaking, DeMille purchased land in the Angeles National Forest for a ranch that would become his getaway. He called this place, \"Paradise\", declaring it a wildlife sanctuary; no shooting of animals besides snakes was allowed. His wife did not like Paradise, so DeMille often brought his mistresses there with him, including actress Julia Faye. In addition to his Paradise, DeMille purchased a yacht in 1921, which he called The Seaward.",
"title": "Biography"
},
{
"paragraph_id": 21,
"text": "While filming The Captive in 1915, an extra, Bob Fleming, died on set when another extra failed to heed DeMille's orders to unload all guns for rehearsal. DeMille instructed the guilty man to leave town and would never reveal his name. Lasky and DeMille maintained the widow Fleming on the payroll; however, according to leading actor House Peters Sr., DeMille refused to stop production for the funeral of Fleming. Peters claimed that he encouraged the cast to attend the funeral with him anyway since DeMille would not be able to shoot the film without him. On July 19, 1916, the Jesse Lasky Feature Play Company merged with Adolph Zukor's Famous Players Film Company, becoming Famous Players–Lasky. Zukor became president with Lasky as the vice president. DeMille was maintained as director-general, and Goldwyn became chairman of the board. Goldwyn was later fired from Famous Players–Lasky due to frequent clashes with Lasky, DeMille, and Zukor. While on a European vacation in 1921, DeMille contracted rheumatic fever in Paris. He was confined to bed and unable to eat. His poor physical condition upon his return home affected the production of his 1922 film Manslaughter. According to Richard Birchard, DeMille's weakened state during production may have led to the film being received as uncharacteristically substandard.",
"title": "Biography"
},
{
"paragraph_id": 22,
"text": "During World War I, the Famous Players–Lasky organized a military company underneath the National Guard called the Home Guard made up of film studio employees with DeMille as captain. Eventually, the Guard was enlarged to a battalion and recruited soldiers from other film studios. They took time off weekly from film production to practice military drills. Additionally, during the war, DeMille volunteered for the Justice Department's Intelligence Office, investigating friends, neighbors, and others he came in contact with in connection with the Famous Players–Lasky. He volunteered for the Intelligence Office during World War II as well. Although DeMille considered enlisting in World War I, he stayed in the United States and made films. However, he did take a few months to set up a movie theater for the French front. Famous Players–Lasky donated the films. DeMille and Adams adopted Katherine Lester in 1920, whom Adams had found in the orphanage over which she was the director. In 1922, the couple adopted Richard deMille.",
"title": "Biography"
},
{
"paragraph_id": 23,
"text": "Film started becoming more sophisticated and the subsequent films of the Lasky company were criticized for primitive and unrealistic set design. Consequently, Beatrice deMille introduced the Famous Players–Lasky to Wilfred Buckland, who DeMille had known from his time at the American Academy of Dramatic Arts, and he became DeMille's art director. William deMille reluctantly became a story editor. William deMille would later convert from theater to Hollywood and would spend the rest of his career as a film director. Throughout his career, DeMille would frequently remake his own films. In his first instance, in 1917, he remade The Squaw Man (1918), only waiting four years from the 1914 original. Despite its quick turnaround, the film was fairly successful. However, DeMille's second remake at MGM in 1931 would be a failure.",
"title": "Biography"
},
{
"paragraph_id": 24,
"text": "After five years and thirty hit films, DeMille became the American film industry's most successful director. In the silent era, he was renowned for Male and Female (1919), Manslaughter (1922), The Volga Boatman (1926), and The Godless Girl (1928). DeMille's trademark scenes included bathtubs, lion attacks, and Roman orgies. Many of his films featured scenes in two-color Technicolor. In 1923, DeMille released a modern melodrama The Ten Commandments, which was a significant change from his previous stint of irreligious films. The film was produced on a large budget of $600,000, the most expensive production at Paramount. This concerned the executives at Paramount; however, the film turned out to be the studio's highest-grossing film. It held the Paramount record for twenty-five years until DeMille broke the record again.",
"title": "Biography"
},
{
"paragraph_id": 25,
"text": "In the early 1920s, scandal surrounded Paramount; religious groups and the media opposed portrayals of immorality in films. A censorship board called the Hays Code was established. DeMille's film The Affairs of Anatol came under fire. Furthermore, DeMille argued with Zukor over his extravagant and over-budget production costs. Consequently, DeMille left Paramount in 1924 despite having helped establish it. He joined the Producers Distributing Corporation. His first film in the new production company, DeMille Pictures Corporation, was The Road to Yesterday in 1925. He directed and produced four films on his own, working with Producers Distributing Corporation because he found front office supervision too restricting. Aside from The King of Kings, none of DeMille's films away from Paramount were successful. The King of Kings established DeMille as \"master of the grandiose and of biblical sagas\". Considered at the time to be the most successful Christian film of the silent era, DeMille calculated that it had been viewed over 800 million times around the world. After the release of DeMille's The Godless Girl, silent films in America became obsolete, and DeMille was forced to shoot a shoddy final reel with the new sound production technique. Although this final reel looked so different from the previous eleven reels that it appeared to be from another movie, according to Simon Louvish, the film is one of DeMille's strangest and most \"DeMillean\" film.",
"title": "Biography"
},
{
"paragraph_id": 26,
"text": "The immense popularity of DeMille's silent films enabled him to branch out into other areas. The Roaring Twenties were the boom years and DeMille took full advantage, opening the Mercury Aviation Company, one of America's first commercial airlines. He was also a real estate speculator, an underwriter of political campaigns, and vice president of Bank of America. He was additionally vice president of the Commercial National Trust and Savings Bank in Los Angeles where he approved loans for other filmmakers. In 1916, DeMille purchased a mansion in Hollywood. Charlie Chaplin lived next door for a time, and after he moved, DeMille purchased the other house and combined the estates.",
"title": "Biography"
},
{
"paragraph_id": 27,
"text": "When \"talking pictures\" were invented in 1928, Cecil B. DeMille made a successful transition, offering his own innovations to the painful process; he devised a microphone boom and a soundproof camera blimp. He also popularized the camera crane. His first three sound films were produced at Metro-Goldwyn-Mayer. These three films, Dynamite, Madame Satan, and his 1931 remake of The Squaw Man were critically and financially unsuccessful. He had completely adapted to the production of sound film despite the film's poor dialogue. After his contract ended at MGM, he left, but no production studios would hire him. He attempted to create a guild of a half a dozen directors with the same creative desires called the Director's Guild. However, the idea failed due to lack of funding and commitment. Moreover, DeMille was audited by the Internal Revenue Service due to issues with his production company. This was, according to DeMille, the lowest point of his career. DeMille traveled abroad to find employment until he was offered a deal at Paramount.",
"title": "Biography"
},
{
"paragraph_id": 28,
"text": "In 1932, DeMille returned to Paramount at the request of Lasky, bringing with him his own production unit. His first film back at Paramount, The Sign of the Cross, was also his first success since leaving Paramount besides The King of Kings. DeMille's return was approved by Zukor under the condition that DeMille not exceed his production budget of $650,000 for The Sign of the Cross. Produced in eight weeks without exceeding budget, the film was financially successful. The Sign of the Cross was the first film to integrate all cinematic techniques. The film was considered a \"masterpiece\" and surpassed the quality of other sound films of the time. DeMille followed this epic uncharacteristically with two dramas released in 1933 and 1934. This Day and Age and Four Frightened People were box office disappointments, though Four Frightened People received good reviews. DeMille would stick to his large-budget spectaculars for the rest of his career.",
"title": "Biography"
},
{
"paragraph_id": 29,
"text": "Cecil B. DeMille was outspoken about his strong Episcopalian integrity, but his private life included mistresses and adultery. DeMille was a conservative Republican activist, becoming more conservative as he aged. He was known as anti-union and worked to prevent the unionizing of film production studios. However, according to DeMille himself, he was not anti-union and belonged to a few unions himself. He said he was rather against union leaders such as Walter Reuther and Harry Bridges, whom he compared to dictators. He supported Herbert Hoover and in 1928 made his largest campaign donation to Hoover. DeMille also liked Franklin D. Roosevelt, however, finding him charismatic, tenacious, and intelligent and agreeing with Roosevelt's abhorrence of Prohibition. DeMille lent Roosevelt a car for his campaign for the 1932 United States presidential election and voted for him. However, he would never again vote for a Democratic candidate in a presidential election.",
"title": "Biography"
},
{
"paragraph_id": 30,
"text": "From June 1, 1936, until January 22, 1945, Cecil DeMille hosted and directed Lux Radio Theatre, a weekly digest of current feature films. Broadcast on the Columbia Broadcasting System (CBS) from 1935 to 1954, the Lux Radio show was one of the most popular weekly shows in the history of radio. While DeMille was host, the show had forty million weekly listeners, gaining DeMille an annual salary of $100,000. From 1936 to 1945, he produced, hosted, and directed all shows with the occasional exception of a guest director. He resigned from the Lux Radio show because he refused to pay a dollar to the American Federation of Radio Artists (AFRA) because he did not believe that any organization had the right to \"levy a compulsory assessment upon any member.\"",
"title": "Biography"
},
{
"paragraph_id": 31,
"text": "DeMille sued the union for reinstatement but lost. He then appealed to the California Supreme Court and lost again. When the AFRA expanded to television, DeMille was banned from television appearances. Consequently, he formed the DeMille Foundation for Political Freedom in order to campaign for the right to work. He began presenting speeches across the United States for the next few years. DeMille's primary criticism was of closed shops, but later included criticism of communism and unions in general. The United States Supreme Court declined to review his case. Despite his loss, DeMille continued to lobby for the Taft–Hartley Act, which passed. This prohibited denying anyone the right to work if they refuse to pay a political assessment, however, the law did not apply retroactively. Consequently, DeMille's television and radio appearance ban lasted for the remainder of his life, though he was permitted to appear on radio or television to publicize a movie. William Keighley was his replacement. DeMille would never again work on radio.",
"title": "Biography"
},
{
"paragraph_id": 32,
"text": "In 1939, DeMille's Union Pacific was successful through DeMille's collaboration with the Union Pacific Railroad. The Union Pacific gave DeMille access to historical data, early period trains, and expert crews, adding to the authenticity of the film. During pre-production of Union Pacific, DeMille was dealing with his first serious health issue. In March 1938, he underwent a major emergency prostatectomy. He suffered from a post-surgery infection from which he nearly did not recover, citing streptomycin as his saving grace. The surgery caused him to suffer from sexual dysfunction for the rest of his life, according to some family members. Following his surgery and the success of Union Pacific, in 1940, DeMille first used three-strip Technicolor in North West Mounted Police. DeMille wanted to film in Canada; however, due to budget constraints, the film was instead shot in Oregon and Hollywood. Critics were impressed with the visuals but found the scripts dull, calling it DeMille's \"poorest Western\". Despite the criticism, it was Paramount's highest-grossing film of the year. Audiences liked its highly saturated color, so DeMille made no further black-and-white features. DeMille was anti-communist and abandoned a project in 1940 to film Ernest Hemingway's For Whom the Bell Tolls due to its communist themes, despite the fact he had already paid $100,000 for the rights to the novel. He was so eager to produce the film that he hadn't yet read the novel. He claimed he abandoned the project in order to complete a different project, but in reality, it was to preserve his reputation and avoid appearing reactionary. While concurrently filmmaking, he served in World War II at the age of sixty as his neighborhood air-raid warden.",
"title": "Biography"
},
{
"paragraph_id": 33,
"text": "In 1942, DeMille worked with Jeanie MacPherson and brother William deMille in order to produce a film called Queen of Queens, which was intended to be about Mary, mother of Jesus. After reading the screenplay, Daniel A. Lord warned DeMille that Catholics would find the film too irreverent, while non-Catholics would have considered the film Catholic propaganda. Consequently, the film was never made. Jeanie MacPherson would work as a scriptwriter for many of DeMille's films. In 1938, DeMille supervised the compilation of film Land of Liberty to represent the contribution of the American film industry to the 1939 New York World's Fair. DeMille used clips from his own films in Land of Liberty. Though the film was not high-grossing, it was well-received, and DeMille was asked to shorten its running time to allow for more showings per day. MGM distributed the film in 1941 and donated profits to World War II relief charities.",
"title": "Biography"
},
{
"paragraph_id": 34,
"text": "In 1942, DeMille released Paramount's most successful film, Reap the Wild Wind. It was produced with a large budget and contained many special effects including an electronically operated giant squid. After working on Reap the Wild Wind, in 1944, he was the master of ceremonies at the rally organized by David O. Selznick in the Los Angeles Coliseum in support of the Dewey–Bricker ticket as well as Governor Earl Warren of California. DeMille's subsequent film Unconquered (1947) had the longest running time (146 minutes), longest filming schedule (102 days), and largest budget ($5 million). The sets and effects were so realistic that 30 extras needed to be hospitalized due to a scene with fireballs and flaming arrows. It was commercially very successful.",
"title": "Biography"
},
{
"paragraph_id": 35,
"text": "DeMille's next film, Samson and Delilah in 1949, became Paramount's highest-grossing film up to that time. A Biblical epic with sex, it was a characteristically DeMille film. Again, 1952's The Greatest Show on Earth became Paramount's highest-grossing film to that point. Furthermore, DeMille's film won the Academy Award for Best Picture and the Academy Award for Best Story. The film began production in 1949, Ringling Brothers-Barnum and Bailey were paid $250,000 for use of the title and facilities. DeMille toured with the circus while helping write the script. Noisy and bright, it was not well-liked by critics, but was a favorite among audiences. DeMille signed a contract with Prentice Hall publishers in August 1953 to publish an autobiography. DeMille would reminisce into a voice recorder, the recording would be transcribed, and the information would be organized in the biography based on the topic. Art Arthur also interviewed people for the autobiography. DeMille did not like the first draft of the biography, saying that he thought the person portrayed in the biography was an \"SOB\"; he said it made him sound too egotistical. Besides filmmaking and finishing his autobiography, DeMille was involved in other projects. In the early 1950s, DeMille was recruited by Allen Dulles and Frank Wisner to serve on the board of the anti-communist National Committee for a Free Europe, the public face of the organization that oversaw the Radio Free Europe service. In 1954, Secretary of the Air Force Harold E. Talbott asked DeMille for help in designing the cadet uniforms at the newly established United States Air Force Academy. DeMille's designs, most notably his design of the distinctive cadet parade uniform, won praise from Air Force and Academy leadership, were ultimately adopted, and are still worn by cadets.",
"title": "Biography"
},
{
"paragraph_id": 36,
"text": "We have just lived through a war where our people were systematically executed. Here we have a man who made a film praising the Jewish people, that tells of Samson, one of the legends of our Scripture. Now he wants to make the life of Moses. We should get down on our knees to Cecil and say \"Thank you!\"",
"title": "Biography"
},
{
"paragraph_id": 37,
"text": "– Alfred Zukor responding to DeMille's proposal of The Ten Commandments remake",
"title": "Biography"
},
{
"paragraph_id": 38,
"text": "In 1952, DeMille sought approval for a lavish remake of his 1923 silent film The Ten Commandments. He went before the Paramount board of directors, which was mostly Jewish-American. The members rejected his proposal, even though his last two films, Samson and Delilah and The Greatest Show on Earth, had been record-breaking hits. Adolph Zukor convinced the board to change their minds on the grounds of morality. DeMille did not have an exact budget proposal for the project, and it promised to be the most costly in U.S. film history. Still, the members unanimously approved it. The Ten Commandments, released in 1956, was DeMille's final film. It was the longest (3 hours, 39 minutes) and most expensive ($13 million) film in Paramount history. Production of The Ten Commandments began in October 1954. The Exodus scene was filmed on-site in Egypt with the use of four Technicolor-VistaVision camera filming 12,000 people. They continued filming in 1955 in Paris and Hollywood on 30 different sound stages. They were even required to expand to RKO sound studios for filming. Post-production lasted a year, and the film premiered in Salt Lake City. Nominated for an Academy Award for Best Picture, it grossed over $80 million, which surpassed the gross of The Greatest Show on Earth and every other film in history, except for Gone with the Wind. A unique practice at the time, DeMille offered ten percent of his profit to the crew.",
"title": "Biography"
},
{
"paragraph_id": 39,
"text": "On November 7, 1954, while in Egypt filming the Exodus sequence for The Ten Commandments, DeMille (who was seventy-three) climbed a 107-foot (33 m) ladder to the top of the massive Per Rameses set and suffered a serious heart attack. Despite the urging of his associate producer, DeMille wanted to return to the set right away. DeMille developed a plan with his doctor to allow him to continue directing while reducing his physical stress. Although DeMille completed the film, his health was diminished by several more heart attacks. His daughter Cecilia took over as director as DeMille sat behind the camera with Loyal Griggs as the cinematographer. This film would be his last.",
"title": "Biography"
},
{
"paragraph_id": 40,
"text": "Due to his frequent heart attacks, DeMille asked his son-in-law, actor Anthony Quinn, to direct a remake of his 1938 film The Buccaneer. DeMille served as executive producer, overseeing producer Henry Wilcoxon. Despite a cast led by Charlton Heston and Yul Brynner, the 1958 film The Buccaneer was a disappointment. DeMille attended the Santa Barbara premiere of The Buccaneer in December 1958. DeMille was unable to attend the Los Angeles premiere of The Buccaneer. In the months before his death, DeMille was researching a film biography of Robert Baden-Powell, the founder of the Scout Movement. DeMille asked David Niven to star in the film, but it was never made. DeMille also was planning a film about the space race as well as another biblical epic about the Book of Revelation. DeMille's autobiography was mostly completed by the time DeMille died and was published in November 1959.",
"title": "Biography"
},
{
"paragraph_id": 41,
"text": "Cecil B. DeMille suffered a series of heart attacks from June 1958 to January 1959, and died on January 21, 1959, following an attack. DeMille's funeral was held on January 23 at St. Stephen's Episcopal Church. He was entombed at the Hollywood Memorial Cemetery (now known as Hollywood Forever). After his death, notable news outlets such as The New York Times, the Los Angeles Times, and The Guardian honored DeMille as \"pioneer of movies\", \"the greatest creator and showman of our industry\", and \"the founder of Hollywood\". DeMille left his multi-million dollar estate in Los Feliz, Los Angeles, in Laughlin Park to his daughter Cecilia because his wife had dementia and was unable to care for an estate. She would die one year later. His personal will drew a line between Cecilia and his three adopted children, with Cecilia receiving a majority of DeMille's inheritance and estate. The other three children were surprised by this, as DeMille did not treat the children differently in life. Cecilia lived in the house for many years until her death in 1984, but the house was auctioned by his granddaughter Cecilia DeMille Presley who also lived there in the late 1980s.",
"title": "Biography"
},
{
"paragraph_id": 42,
"text": "DeMille believed his first influences to be his parents, Henry and Beatrice DeMille. His playwright father introduced him to the theater at a young age. Henry was heavily influenced by the work of Charles Kingsley, whose ideas trickled down to DeMille. DeMille noted that his mother had a \"high sense of the dramatic\" and was determined to continue the artistic legacy of her husband after he died. Beatrice became a play broker and author's agent, influencing DeMille's early life and career. DeMille's father worked with David Belasco who was a theatrical producer, impresario, and playwright. Belasco was known for adding realistic elements in his plays such as real flowers, food, and aromas that could transport his audiences into the scenes. While working in theatre, DeMille used real fruit trees in his play California, as influenced by Belasco. Similar to Belasco, DeMille's theatre revolved around entertainment rather than artistry. Generally, Belasco's influence of DeMille's career can be seen in DeMille's showmanship and narration. E. H. Sothern's early influence on DeMille's work can be seen in DeMille's perfectionism. DeMille recalled that one of the most influential plays he saw was Hamlet, directed by Sothern.",
"title": "Filmmaking"
},
{
"paragraph_id": 43,
"text": "DeMille's filmmaking process always began with extensive research. Next, he would work with writers to develop the story that he was envisioning. Then, he would help writers construct a script. Finally, he would leave the script with artists and allow them to create artistic depictions and renderings of each scene. Plot and dialogue were not a strong point of DeMille's films. Consequently, he focused his efforts on his films' visuals. He worked with visual technicians, editors, art directors, costume designers, cinematographers, and set carpenters in order to perfect the visual aspects of his films. With his editor, Anne Bauchens, DeMille used editing techniques to allow the visual images to bring the plot to climax rather than dialogue. DeMille had large and frequent office conferences to discuss and examine all aspects of the working film including story-boards, props, and special effects.",
"title": "Filmmaking"
},
{
"paragraph_id": 44,
"text": "DeMille rarely gave direction to actors; he preferred to \"office-direct\", where he would work with actors in his office, going over characters and reading through scripts. Any problems on the set were often fixed by writers in the office rather than on the set. DeMille did not believe a large movie set was the place to discuss minor character or line issues. DeMille was particularly adept at directing and managing large crowds in his films. Martin Scorsese recalled that DeMille had the skill to maintain control of not only the lead actors in a frame but the many extras in the frame as well. DeMille was adept at directing \"thousands of extras\", and many of his pictures include spectacular set pieces: the toppling of the pagan temple in Samson and Delilah; train wrecks in The Road to Yesterday, Union Pacific and The Greatest Show on Earth; the destruction of an airship in Madam Satan; and the parting of the Red Sea in both versions of The Ten Commandments.",
"title": "Filmmaking"
},
{
"paragraph_id": 45,
"text": "In his early films, DeMille experimented with photographic light and shade, which created dramatic shadows instead of glare. His specific use of lighting, influenced by his mentor David Belasco, was for the purpose of creating \"striking images\" and heightening \"dramatic situations\". DeMille was unique in using this technique. In addition to his use of volatile and abrupt film editing, his lighting and composition were innovative for the time period as filmmakers were primarily concerned with a clear, realistic image. Another important aspect of DeMille's editing technique was to put the film away for a week or two after an initial edit in order to re-edit the picture with a fresh mind. This allowed for the rapid production of his films in the early years of the Lasky Company. The cuts were sometimes rough, but the movies were always interesting.",
"title": "Filmmaking"
},
{
"paragraph_id": 46,
"text": "DeMille often edited in a manner that favored psychological space rather than physical space through his cuts. In this way, the characters' thoughts and desires are the visual focus rather than the circumstances regarding the physical scene. As DeMille's career progressed, he increasingly relied on artist Dan Sayre Groesbeck's concept, costume, and storyboard art. Groesbeck's art was circulated on set to give actors and crew members a better understanding of DeMille's vision. His art was even shown at Paramount meetings when pitching new films. DeMille adored the art of Groesbeck, even hanging it above his fireplace, but film staff found it difficult to convert his art into three-dimensional sets. As DeMille continued to rely on Groesbeck, the nervous energy of his early films transformed into more steady compositions of his later films. While visually appealing, this made the films appear more old-fashioned.",
"title": "Filmmaking"
},
{
"paragraph_id": 47,
"text": "Composer Elmer Bernstein described DeMille as \"sparing no effort\" when filmmaking. Bernstein recalled that DeMille would scream, yell, or flatter—whatever it took to achieve the perfection he required in his films. DeMille was painstakingly attentive to details on set and was as critical of himself as he was of his crew. Costume designer Dorothy Jeakins, who worked with DeMille on The Ten Commandments (1956), said that he was skilled in humiliating people. Jeakins admitted that she received quality training from him, but that it was necessary to become a perfectionist on a DeMille set to avoid being fired. DeMille had an authoritarian persona on set; he required absolute attention from the cast and crew. He had a band of assistants who catered to his needs. He would speak to the entire set, sometimes enormous with countless numbers of crew members and extras, via a microphone to maintain control of the set. He was disliked by many inside and outside of the film industry for his cold and controlling reputation.",
"title": "Filmmaking"
},
{
"paragraph_id": 48,
"text": "DeMille was known for autocratic behavior on the set, singling out and berating extras who were not paying attention. Many of these displays were thought to be staged, however, as an exercise in discipline. He despised actors who were unwilling to take physical risks, especially when he had first demonstrated that the required stunt would not harm them. This occurred with Victor Mature in Samson and Delilah. Mature refused to wrestle Jackie the Lion, even though DeMille had just tussled with the lion, proving that he was tame. DeMille told the actor that he was \"one hundred percent yellow\". Paulette Goddard's refusal to risk personal injury in a scene involving fire in Unconquered cost her DeMille's favor and a role in The Greatest Show on Earth. DeMille did receive help in his films, notably from Alvin Wyckoff, who shot forty-three of DeMille's films; brother William deMille who would occasionally serve as his screenwriter; and Jeanie Macpherson, who served as DeMille's exclusive screenwriter for fifteen years; and Eddie Salven, DeMille's favorite assistant director.",
"title": "Filmmaking"
},
{
"paragraph_id": 49,
"text": "DeMille made stars of unknown actors: Gloria Swanson, Bebe Daniels, Rod La Rocque, William Boyd, Claudette Colbert, and Charlton Heston. He also cast established stars such as Gary Cooper, Robert Preston, Paulette Goddard and Fredric March in multiple pictures. DeMille cast some of his performers repeatedly, including Henry Wilcoxon, Julia Faye, Joseph Schildkraut, Ian Keith, Charles Bickford, Theodore Roberts, Akim Tamiroff, and William Boyd. DeMille was credited by actor Edward G. Robinson with saving his career following his eclipse in the Hollywood blacklist.",
"title": "Filmmaking"
},
{
"paragraph_id": 50,
"text": "Cecil B. DeMille's film production career evolved from critically significant silent films to financially significant sound films. He began his career with reserved yet brilliant melodramas; from there, his style developed into marital comedies with outrageously melodramatic plots. In order to attract a high-class audience, DeMille based many of his early films on stage melodramas, novels, and short stories. He began the production of epics earlier in his career until they began to solidify his career in the 1920s. By 1930, DeMille had perfected his film style of mass-interest spectacle films with Western, Roman, or Biblical themes. DeMille was often criticized for making his spectacles too colorful and for being too occupied with entertaining the audience rather than accessing the artistic and auteur possibilities that film could provide. However, others interpreted DeMille's work as visually impressive, thrilling, and nostalgic. Along the same lines, critics of DeMille often qualify him by his later spectacles and fail to consider several decades of ingenuity and energy that defined him during his generation. Throughout his career, he did not alter his films to better adhere to contemporary or popular styles. Actor Charlton Heston admitted DeMille was, \"terribly unfashionable\" and Sidney Lumet called DeMille, \"the cheap version of D.W. Griffith\", adding that DeMille, \"[didn't have]...an original thought in his head\", though Heston added that DeMille was much more than that.",
"title": "Filmmaking"
},
{
"paragraph_id": 51,
"text": "According to Scott Eyman, DeMille's films were at the same time masculine and feminine due to his thematic adventurousness and his eye for the extravagant. DeMille's distinctive style can be seen through camera and lighting effects as early as The Squaw Man with the use of daydream images; moonlight and sunset on a mountain; and side-lighting through a tent flap. In the early age of cinema, DeMille differentiated the Lasky Company from other production companies due to the use of dramatic, low-key lighting they called \"Lasky lighting\" and marketed as \"Rembrandt lighting\" to appeal to the public. DeMille achieved international recognition for his unique use of lighting and color tint in his film The Cheat. DeMille's 1956 version of The Ten Commandments, according to director Martin Scorsese, is renowned for its level of production and the care and detail that went into creating the film. He stated that The Ten Commandments was the final culmination of DeMille's style.",
"title": "Filmmaking"
},
{
"paragraph_id": 52,
"text": "DeMille was interested in art and his favorite artist was Gustave Doré; DeMille based some of his most well-known scenes on the work of Doré. DeMille was the first director to connect art to filmmaking; he created the title of \"art director\" on the film set. DeMille was also known for his use of special effects without the use of digital technology. Notably, DeMille had cinematographer John P. Fulton create the parting of the Red Sea scene in his 1956 film The Ten Commandments, which was one of the most expensive special effects in film history, and has been called by Steven Spielberg \"the greatest special effect in film history\". The actual parting of the sea was created by releasing 360,000 gallons of water into a huge water tank split by a U-shaped trough, overlaying it with a film of a giant waterfall that was built on the Paramount backlot, and playing the clip backward.",
"title": "Filmmaking"
},
{
"paragraph_id": 53,
"text": "Aside from his Biblical and historical epics, which are concerned with how man relates to God, some of DeMille's films contained themes of \"neo-naturalism\", which portray the conflict between the laws of man and the laws of nature. Although he is known for his later \"spectacular\" films, his early films are held in high regard by critics and film historians. DeMille discovered the possibilities of the \"bathroom\" or \"boudoir\" in the film without being \"vulgar\" or \"cheap\". DeMille's films Male and Female, Why Change Your Wife?, and The Affairs of Anatol can be retrospectively described as high camp and are categorized as \"early DeMille films\" due to their particular style of production and costume and set design. However, his earlier films The Captive, Kindling, Carmen, and The Whispering Chorus are more serious films. It is difficult to typify DeMille's films into one specific genre. His first three films were Westerns, and he filmed many Westerns throughout his career. However, throughout his career, he filmed comedies, periodic and contemporary romances, dramas, fantasies, propaganda, Biblical spectacles, musical comedies, suspense, and war films. At least one DeMille film can represent each film genre. DeMille produced the majority of his films before the 1930s, and by the time sound films were invented, film critics saw DeMille as antiquated, with his best filmmaking years behind him.",
"title": "Filmmaking"
},
{
"paragraph_id": 54,
"text": "DeMille's films contained many similar themes throughout his career. However, the films of his silent era were often thematically different from the films of his sound era. His silent-era films often included the \"battle of the sexes\" theme due to the era of women's suffrage and the enlarging role of women in society. Moreover, before his religious-themed films, many of his silent era films revolved around \"husband-and-wife-divorce-and-remarry satires\", considerably more adult-themed. According to Simon Louvish, these films reflected DeMille's inner thoughts and opinions about marriage and human sexuality. Religion was a theme that DeMille returned to throughout his career. Of his seventy films, five revolved around stories of the Bible and the New Testament; however many others, while not direct retellings of Biblical stories, had themes of faith and religious fanaticism in films such as The Crusades and The Road to Yesterday. Western and frontier American were also themes that DeMille returned to throughout his career. His first several films were Westerns, and he produced a chain of westerns during the sound era. Instead of portraying the danger and anarchy of the West, he portrayed the opportunity and redemption found in Western America. Another common theme in DeMille's films is the reversal of fortune and the portrayal of the rich and the poor, including the war of the classes and man versus society conflicts such as in The Golden Chance and The Cheat. In relation to his own interests and sexual preferences, sadomasochism was a minor theme present in some of his films. Another minor characteristic of DeMille's films include train crashes, which can be found in several of his films.",
"title": "Filmmaking"
},
{
"paragraph_id": 55,
"text": "Known as the father of the Hollywood motion picture industry, Cecil B. DeMille made 70 films including several box-office hits. DeMille is one of the more commercially successful film directors in history, with his films before the release of The Ten Commandments estimated to have grossed $650 million worldwide. Adjusted for inflation, DeMille's remake of The Ten Commandments is the eighth highest-grossing film in the world.",
"title": "Legacy"
},
{
"paragraph_id": 56,
"text": "According to Sam Goldwyn, critics did not like DeMille's films, but the audiences did, and \"they have the final word\". Similarly, scholar David Blanke, argued that DeMille had lost the respect of his colleagues and film critics by his late film career. However, his final films maintained that DeMille was still respected by his audiences. Five of DeMille's films were the highest-grossing films at the year of their release, with only Spielberg topping him with six of his films as the highest-grossing films of the year. DeMille's highest-grossing films include: The Sign of the Cross (1932), Unconquered (1947), Samson and Delilah (1949), The Greatest Show on Earth (1952), and The Ten Commandments (1956). Director Ridley Scott has been called \"the Cecil B. DeMille of the digital era\" due to his classical and medieval epics.",
"title": "Legacy"
},
{
"paragraph_id": 57,
"text": "Despite his box-office success, awards, and artistic achievements, DeMille has been dismissed and ignored by critics both during his life and posthumously. He was consistently criticized for producing shallow films without talent or artistic care. Compared to other directors, few film scholars have taken the time to academically analyze his films and style. During the French New Wave, critics began to categorize certain filmmakers as auteurs such as Howard Hawks, John Ford, and Raoul Walsh. DeMille was omitted from the list, thought to be too unsophisticated and antiquated to be considered an auteur. However, Simon Louvish wrote \"he was the complete master and auteur of his films\", and Anton Kozlovic called him the \"unsung American auteur\". Andrew Sarris, a leading proponent of the auteur theory, ranked DeMille highly as an auteur in the \"Far Side of Paradise\", just below the \"Pantheon\". Sarris added that despite the influence of the styles of contemporary directors throughout his career, DeMille's style remained unchanged. Robert Birchard wrote that one could argue the auteurship of DeMille on the basis that DeMille's thematic and visual style remained consistent throughout his career. However, Birchard acknowledged that Sarris's point was more likely that DeMille's style was behind the development of film as an art form. Meanwhile, Sumiko Higashi sees DeMille as \"not only a figure who was shaped and influenced by the forces of his era but as a filmmaker who left his own signature on the culture industry.\" The critic Camille Paglia has called The Ten Commandments one of the ten greatest films of all time.",
"title": "Legacy"
},
{
"paragraph_id": 58,
"text": "DeMille was one of the first directors to become a celebrity in his own right. He cultivated the image of the omnipotent director, complete with megaphone, riding crop, and jodhpurs. He was known for his unique working wardrobe, which included riding boots, riding pants, and soft, open necked shirts. Joseph Henabery recalled that DeMille looked like \"a king on a throne surrounded by his court\" while directing films on a camera platform.",
"title": "Legacy"
},
{
"paragraph_id": 59,
"text": "DeMille was liked by some of his fellow directors and disliked by others, though his actual films were usually dismissed by his peers as a vapid spectacle. Director John Huston intensely disliked both DeMille and his films. \"He was a thoroughly bad director\", Huston said. \"A dreadful showoff. Terrible. To diseased proportions.\" Said fellow director William Wellman: \"Directorially, I think his pictures were the most horrible things I've ever seen in my life. But he put on pictures that made a fortune. In that respect, he was better than any of us.\" Producer David O. Selznick wrote: \"There has appeared only one Cecil B. DeMille. He is one of the most extraordinarily able showmen of modern times. However much I may dislike some of his pictures, it would be very silly of me, as a producer of commercial motion pictures, to demean for an instant his unparalleled skill as a maker of mass entertainment.\" Salvador Dalí wrote that DeMille, Walt Disney, and the Marx Brothers were \"the three great American Surrealists\". DeMille appeared as himself in numerous films, including the MGM comedy Free and Easy. He often appeared in his coming-attraction trailers and narrated many of his later films, even stepping on screen to introduce The Ten Commandments. DeMille was immortalized in Billy Wilder's Sunset Boulevard when Gloria Swanson spoke the line: \"All right, Mr. DeMille. I'm ready for my close-up.\" DeMille plays himself in the film. DeMille's reputation had a renaissance in the 2010s.",
"title": "Legacy"
},
{
"paragraph_id": 60,
"text": "As a filmmaker, DeMille was the aesthetic inspiration of many directors and films due to his early influence during the crucial development of the film industry. DeMille's early silent comedies influenced the comedies of Ernst Lubitsch and Charlie Chaplin's A Woman of Paris. Additionally, DeMille's epics such as The Crusades influenced Sergei Eisenstein's Alexander Nevsky. Moreover, DeMille's epics inspired directors such as Howard Hawks, Nicholas Ray, Joseph L. Mankiewicz, and George Stevens to try producing epics. Cecil B. DeMille has influenced the work of several well-known directors. Alfred Hitchcock cited DeMille's 1921 film Forbidden Fruit as an influence of his work and one of his top ten favorite films. DeMille has influenced the careers of many modern directors. Martin Scorsese cited Unconquered, Samson and Delilah, and The Greatest Show on Earth as DeMille films that have imparted lasting memories on him. Scorsese said he had viewed The Ten Commandments forty or fifty times. Famed director Steven Spielberg stated that DeMille's The Greatest Show on Earth was one of the films that influenced him to become a filmmaker. Furthermore, DeMille influenced about half of Spielberg's films, including War of the Worlds. The Ten Commandments inspired DreamWorks Animation's later film about Moses, The Prince of Egypt. As one of the establishing members of Paramount Pictures and co-founder of Hollywood, DeMille had a role in the development of the film industry. Consequently, the name \"DeMille\" has become synonymous with filmmaking.",
"title": "Legacy"
},
{
"paragraph_id": 61,
"text": "Publicly Episcopalian, DeMille drew on his Christian and Jewish ancestors to convey a message of tolerance. DeMille received more than a dozen awards from Christian and Jewish religious and cultural groups, including B'nai B'rith. However, not everyone received DeMille's religious films favorably. DeMille was accused of antisemitism after the release of The King of Kings, and director John Ford despised DeMille for what he saw as \"hollow\" biblical epics meant to promote DeMille's reputation during the politically turbulent 1950s. In response to the claims, DeMille donated some of the profits from The King of Kings to charity. In the 2012 Sight & Sound poll, both DeMille's Samson and Delilah and 1923 version of The Ten Commandments received votes, but did not make the top 100 films. Although many of DeMille's films are available on DVD and Blu-ray release, only 20 of his silent films are commercially available on DVD",
"title": "Legacy"
},
{
"paragraph_id": 62,
"text": "The original Lasky-DeMille Barn in which The Squaw Man was filmed was converted into a museum named the \"Hollywood Heritage Museum\". It opened on December 13, 1985, and features some of DeMille's personal artifacts. The Lasky-DeMille Barn was dedicated as a California historical landmark in a ceremony on December 27, 1956; DeMille was the keynote speaker. It was listed on the National Register of Historic Places in 2014. The Dunes Center in Guadalupe, California, contains an exhibition of artifacts uncovered in the desert near Guadalupe from DeMille's set of his 1923 version of The Ten Commandments, known as the \"Lost City of Cecil B. DeMille\". Donated by the Cecil B. DeMille Foundation in 2004, the moving image collection of Cecil B. DeMille is held at the Academy Film Archive and includes home movies, outtakes, and never-before-seen test footage.",
"title": "Legacy"
},
{
"paragraph_id": 63,
"text": "In summer 2019, The Friends of the Pompton Lakes Library hosted a Cecil B DeMille film festival to celebrate DeMille's achievements and connection to Pompton Lakes. They screened four of his films at Christ Church, where DeMille and his family attended church when they lived there. Two schools have been named after him: Cecil B. DeMille Middle School, in Long Beach, California, which was closed and demolished in 2010 to make way for a new high school; and Cecil B. DeMille Elementary School in Midway City, California. The former film building at Chapman University in Orange, California, is named in honor of DeMille. During the Apollo 11 mission, Buzz Aldrin referred to himself in one instance as \"Cecil B. DeAldrin\", as a humorous nod to DeMille. The title of the 2000 John Waters film Cecil B. Demented alludes to DeMille.",
"title": "Legacy"
},
{
"paragraph_id": 64,
"text": "DeMille's legacy is maintained by his granddaughter Cecilia DeMille Presley who serves as the president of the Cecil B. DeMille Foundation, which strives to support higher education, child welfare, and film in Southern California. In 1963, the Cecil B. DeMille Foundation donated the \"Paradise\" ranch to the Hathaway Foundation, which cares for emotionally disturbed and abused children. A large collection of DeMille's materials including scripts, storyboards, and films resides at Brigham Young University in L. Tom Perry Special Collections.",
"title": "Legacy"
},
{
"paragraph_id": 65,
"text": "Cecil B. DeMille received many awards and honors, especially later in his career.",
"title": "Awards and recognition"
},
{
"paragraph_id": 66,
"text": "In August 1941, DeMille was honored with a block in the forecourt of Grauman's Chinese Theatre.",
"title": "Awards and recognition"
},
{
"paragraph_id": 67,
"text": "The American Academy of Dramatic Arts honored DeMille with an Alumni Achievement Award in 1958.",
"title": "Awards and recognition"
},
{
"paragraph_id": 68,
"text": "In 1957, DeMille gave the commencement address for the graduation ceremony of Brigham Young University, wherein he received an honorary Doctorate of Letter degree. Additionally, in 1958, he received an honorary Doctorate of Law degree from Temple University.",
"title": "Awards and recognition"
},
{
"paragraph_id": 69,
"text": "From the film industry, DeMille received the Irving G. Thalberg Memorial Award at the Academy Awards in 1953, and a Lifetime Achievement Award from the Directors Guild of America Award the same year. In the same ceremony, DeMille received a nomination from Directors Guild of America Award for Outstanding Directorial Achievement in Motion Pictures for The Greatest Show on Earth. In 1952, DeMille was awarded the first Cecil B. DeMille Award at the Golden Globes. An annual award, the Golden Globe's Cecil B. DeMille Award recognizes lifetime achievement in the film industry. For his contribution to the motion picture and radio industry, DeMille has two stars on the Hollywood Walk of Fame. The first, for radio contributions, is located at 6240 Hollywood Blvd. The second star is located at 1725 Vine Street.",
"title": "Awards and recognition"
},
{
"paragraph_id": 70,
"text": "DeMille received two Academy Awards: an Honorary Award for \"37 years of brilliant showmanship\" in 1950 and a Best Picture award in 1953 for The Greatest Show on Earth. DeMille received a Golden Globe Award for Best Director and was additionally nominated for the Best Director category at the 1953 Academy Awards for the same film. He was further nominated in the Best Picture category for The Ten Commandments at the 1957 Academy Awards. DeMille's Union Pacific received a Palme d'Or in retrospect at the 2002 Cannes Film Festival.",
"title": "Awards and recognition"
},
{
"paragraph_id": 71,
"text": "Two of DeMille's films have been selected for preservation in the National Film Registry by the United States Library of Congress: The Cheat (1915) and The Ten Commandments (1956).",
"title": "Awards and recognition"
},
{
"paragraph_id": 72,
"text": "Cecil B. DeMille made 70 features. 52 of his features are silent films. The first 24 of his silent films were made in the first three years of his career (1913–1916). Eight of his films were \"epics\" with five of those classified as \"Biblical\". Six of DeMille's films—The Arab, The Wild Goose Chase, The Dream Girl, The Devil-Stone, We Can't Have Everything, and The Squaw Man (1918)—were destroyed by nitrate decomposition, and are considered lost. The Ten Commandments is broadcast every Saturday at Passover in the United States on the ABC Television Network.",
"title": "Filmography"
},
{
"paragraph_id": 73,
"text": "Filmography obtained from Fifty Hollywood Directors.",
"title": "Filmography"
},
{
"paragraph_id": 74,
"text": "Silent films",
"title": "Filmography"
},
{
"paragraph_id": 75,
"text": "Sound films",
"title": "Filmography"
},
{
"paragraph_id": 76,
"text": "These films represent those which DeMille produced or assisted in directing, credited or uncredited.",
"title": "Filmography"
},
{
"paragraph_id": 77,
"text": "DeMille frequently made cameos as himself in other Paramount films. Additionally, he often starred in prologues and special trailers that he created for his films, having an opportunity to personally address the audience.",
"title": "Filmography"
}
]
| Cecil Blount DeMille was an American filmmaker and actor. Between 1914 and 1958, he made 70 features, both silent and sound films. He is acknowledged as a founding father of American cinema and the most commercially successful producer-director in film history. His films were distinguished by their epic scale and by his cinematic showmanship. His silent films included social dramas, comedies, Westerns, farces, morality plays, and historical pageants. He was an active Freemason and member of Prince of Orange Lodge #16 in New York City. DeMille was born in Ashfield, Massachusetts, and grew up in New York City. He began his career as a stage actor in 1900. He later moved to writing and directing stage productions, some with Jesse Lasky, who was then a vaudeville producer. DeMille's first film, The Squaw Man (1914), was also the first full-length feature film shot in Hollywood. Its interracial love story made it commercially successful, and it first publicized Hollywood as the home of the U.S. film industry. The continued success of his productions led to the founding of Paramount Pictures with Lasky and Adolph Zukor. His first biblical epic, The Ten Commandments (1923), was both a critical and commercial success; it held the Paramount revenue record for twenty-five years. DeMille directed The King of Kings (1927), a biography of Jesus, which gained approval for its sensitivity and reached more than 800 million viewers. The Sign of the Cross (1932) is said to be the first sound film to integrate all aspects of cinematic technique. Cleopatra (1934) was his first film to be nominated for the Academy Award for Best Picture. After more than thirty years in film production, DeMille reached a pinnacle in his career with Samson and Delilah (1949), a biblical epic that became the highest-grossing film of 1950. Along with biblical and historical narratives, he also directed films oriented toward "neo-naturalism", which tried to portray the laws of man fighting the forces of nature. He received his first nomination for the Academy Award for Best Director for his circus drama The Greatest Show on Earth (1952), which won both the Academy Award for Best Picture and the Golden Globe Award for Best Motion Picture – Drama. His last and best known film, The Ten Commandments (1956), also a Best Picture Academy Award nominee, is currently the eighth-highest-grossing film of all time, adjusted for inflation. In addition to his Best Picture Awards, he received an Academy Honorary Award for his film contributions, the Palme d'Or (posthumously) for Union Pacific (1939), a DGA Award for Lifetime Achievement, and the Irving G. Thalberg Memorial Award. He was the first recipient of the Golden Globe Cecil B. DeMille Award, which was named in his honor. DeMille's reputation had a renaissance in the 2010s, and his work has influenced numerous other films and directors. | 2001-08-21T20:51:59Z | 2023-12-29T16:15:34Z | [
"Template:ISBN",
"Template:Cecil B. DeMille",
"Template:Refend",
"Template:Wikiquote",
"Template:IBDB name",
"Template:Internet Archive author",
"Template:Portal bar",
"Template:Cite journal",
"Template:Refbegin",
"Template:Refn",
"Template:Cite book",
"Template:Short description",
"Template:Use American English",
"Template:Cite web",
"Template:Free access",
"Template:Official website",
"Template:IMDb name",
"Template:IPAc-en",
"Template:Rp",
"Template:Reflist",
"Template:Navboxes",
"Template:Authority control",
"Template:Sfn",
"Template:Div col",
"Template:Cite news",
"Template:Cite magazine",
"Template:Wikisource author",
"Template:Use mdy dates",
"Template:Harvnb",
"Template:Convert",
"Template:Tcmdb name",
"Template:Good article",
"Template:Quote box",
"Template:Citation needed",
"Template:Commons category",
"Template:Infobox person",
"Template:Spnd"
]
| https://en.wikipedia.org/wiki/Cecil_B._DeMille |
6,181 | Chinese Islamic cuisine | Chinese Islamic cuisine consists of variations of regionally popular foods that are typical of Han Chinese cuisine, in particular to make them halal. Dishes borrow ingredients from Middle Eastern, Turkic, and South Asian cuisines, notably mutton and spices. Much like other northern Chinese cuisines, Chinese Islamic cuisine uses wheat noodles as the staple, rather than rice. Chinese Islamic dishes include clear-broth beef noodle soup and chuanr.
The Hui (ethnic Chinese Muslims), Bonan, Dongxiang, Salar, and Uyghurs of China, as well as the Dungans of Central Asia and the Panthays of Burma, collectively contribute to Chinese Islamic cuisine.
Due to the large Muslim population in Western China, many Chinese restaurants cater to or are run by Muslims. Northern Chinese Islamic cuisine originated in China proper. It is heavily influenced by Beijing cuisine: nearly all cooking methods are identical, and it differs only in ingredients because of religious restrictions. As a result, northern Islamic cuisine is often included in home-style Beijing cuisine, though seldom in east coast restaurants.
During the Yuan dynasty, halal and kosher methods of slaughtering animals and preparing food were forbidden by the Mongol emperors, starting with Genghis Khan, who banned Muslims and Jews from slaughtering their animals in their own way and made them follow the Mongol method.
Among all the [subject] alien peoples only the Hui-hui say "we do not eat Mongol food." [Cinggis Qa'an replied:] "By the aid of heaven we have pacified you; you are our slaves. Yet you do not eat our food or drink. How can this be right?" He thereupon made them eat. "If you slaughter sheep, you will be considered guilty of a crime." He issued a regulation to that effect ... [In 1279/1280 under Qubilai] all the Muslims say: "if someone else slaughters [the animal] we do not eat." Because the poor people are upset by this, from now on, Musuluman [Muslim] Huihui and Zhuhu [Jewish] Huihui, no matter who kills [the animal] will eat [it] and must cease slaughtering sheep themselves, and cease the rite of circumcision.
Traditionally, there is a distinction between Northern and Southern Chinese Islamic cuisine despite both using lamb and mutton. Northern Chinese Islamic cuisine relies heavily on beef, but rarely ducks, geese, shrimp or seafood, while southern Islamic cuisine is the reverse. This difference is due to the availability of ingredients. Oxen have long been used for farming, and Chinese governments have frequently and strictly prohibited the slaughter of oxen for food. However, due to the geographic proximity of the northern part of China to minority-dominated regions that were not subjected to such restrictions, beef could be easily purchased and transported to Northern China. At the same time, ducks, geese and shrimp are rare in comparison to Southern China due to the arid climate of Northern China.
A Chinese Islamic restaurant (Chinese: 淸眞菜館; pinyin: qīngzhēn càiguǎn) can be similar to a Mandarin restaurant with the exception that there is no pork on the menu and the dishes are primarily noodle/soup based.
In most major eastern cities in China, there are very few Islamic/halal restaurants, which are typically run by migrants from Western China (e.g., Uyghurs). They primarily offer inexpensive noodle soups. These restaurants are typically decorated with Islamic motifs such as pictures of Islamic rugs and Arabic writing.
Another difference is that lamb and mutton dishes are more commonly available than in other Chinese restaurants, due to the greater prevalence of these meats in the cuisine of Western Chinese regions.
Other Muslim ethnic minorities like the Bonan, Dongxiang, Salar and Tibetan Muslims have their own cuisines as well. Dongxiang people operate their own restaurants serving their cuisine.
Many cafeterias (canteens) at Chinese universities have separate sections or dining areas for Muslim students (Hui or Western Chinese minorities), typically labeled "qingzhen". Student ID cards sometimes indicate whether a student is Muslim and will allow access to these dining areas or will allow access on special occasions such as the Eid feast following Ramadan.
Several Hui restaurants serving Chinese Islamic cuisine exist in Los Angeles. San Francisco, despite its huge number of Chinese restaurants, appears to have only one whose cuisine would qualify as halal.
Many Chinese Hui Muslims who moved from Yunnan to Burma (Myanmar), known as Panthays, operate restaurants and stalls serving Chinese Islamic cuisine such as noodles. Chinese Hui Muslims from Yunnan who moved to Thailand, known as Chin Haw, also own restaurants and stalls serving Chinese Islamic food.
In Central Asia, Dungan people, descendants of the Hui, operate restaurants serving Chinese Islamic cuisine, which is referred to there as Dungan cuisine. They cater to Chinese businessmen. Chopsticks are used by Dungans. The cuisine of the Dungan resembles northwestern Chinese cuisine.
Most Chinese regard Hui halal food as cleaner than food made by non-Muslims so their restaurants are popular in China. Hui who migrated to Northeast China (Manchuria) after the Chuang Guandong opened many new inns and restaurants to cater to travelers, which were regarded as clean.
The Hui who migrated to Taiwan operate Qingzhen restaurants and stalls serving Chinese Islamic cuisine in Taipei and other big cities.
The Thai Department of Export Promotion claims that "China's halal food producers are small-scale entrepreneurs whose products have little value added and lack branding and technology to push their goods to international standards" to encourage Thai private sector halal producers to market their products in China.
Dong Lai Shun in Hankou is a franchise serving Muslim food that was started in 1903.
Under a pact among Hui restaurateurs in Ningxia, Gansu and Shaanxi, a Hui-owned restaurant serving beef noodles must keep a distance of at least 400 meters from the nearest restaurant of the same type.
Halal restaurants are inspected by clerics from mosques.
Halal food manufacture has been sanctioned by the government of the Ningxia Autonomous Region.
Lamian (simplified Chinese: 拉面; traditional Chinese: 拉麪; pinyin: lāmiàn, Dungan: Ламян) is a Chinese dish of hand-made noodles, usually served in a beef or mutton-flavored soup (湯麪, даңмян, tāngmiàn), but sometimes stir-fried (炒麪, Чаомян, chǎomiàn) and served with a tomato-based sauce. Literally, 拉, ла (lā) means to pull or stretch, while 麪, мян (miàn) means noodle. The hand-making process involves taking a lump of dough and repeatedly stretching it to produce a single very long noodle.
Because native Turkic words do not begin with the letter L, läghmän is a loanword; as stated by the Uyghur linguist Abdlikim, it is of Chinese derivation and not originally Uyghur.
Beef noodle soup is a noodle soup dish composed of stewed beef, beef broth, vegetables and wheat noodles. It exists in various forms throughout East and Southeast Asia. It was created by the Hui people during the Qing dynasty of China.
In the west, this food may be served in a small portion as a soup. In China, a large bowl of it is often taken as a whole meal with or without any side dish.
Chuanr (Chinese: 串儿, Dungan: Чўанр, Pinyin: chuànr (shortened from "chuan er"), "kebab") originated in the Xinjiang (新疆) region of China and in recent years has spread throughout the rest of the country, most notably to Beijing. It is a product of the Chinese Islamic cuisine of the Uyghur (维吾尔) people and other Chinese Muslims. Yang rou chuan, or lamb kebabs, are particularly popular.
Suan cai is a traditional fermented vegetable dish, similar to Korean kimchi and German sauerkraut, used in a variety of ways. It consists of pickled Chinese cabbage. Suan cai is a unique form of pao cai due to the material used and the method of production. Although suan cai is not exclusive to Chinese Islamic cuisine, it is used in Chinese Islamic cuisine to top off noodle soups, especially beef noodle soup.
Nang (Chinese: 馕, Dungan: Нәң) is a type of round unleavened bread, topped with sesame. It is similar to the naan of South and Central Asia. | [
{
"paragraph_id": 0,
"text": "Chinese Islamic cuisine consists of variations of regionally popular foods that are typical of Han Chinese cuisine, in particular to make them halal. Dishes borrow ingredients from Middle Eastern, Turkic, and South Asian cuisines, notably mutton and spices. Much like other northern Chinese cuisines, Chinese Islamic cuisine uses wheat noodles as the staple, rather than rice. Chinese Islamic dishes include clear-broth beef noodle soup and chuanr.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Hui (ethnic Chinese Muslims), Bonan, Dongxiang, Salar, and Uyghurs of China, as well as the Dungans of Central Asia and the Panthays of Burma, collectively contribute to Chinese Islamic cuisine.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Due to the large Muslim population in Western China, many Chinese restaurants cater to or are run by Muslims. Northern Chinese Islamic cuisine originated in China proper. It is heavily influenced by Beijing cuisine, with nearly all cooking methods identical and differs only in material due to religious restrictions. As a result, northern Islamic cuisine is often included in home Beijing cuisine though seldom in east coast restaurants.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "During the Yuan dynasty, halal and kosher methods of slaughtering animals and preparing food was banned and forbidden by the Mongol emperors, starting with Genghis Khan who banned Muslims and Jews from slaughtering their animals their own way and made them follow the Mongol method.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Among all the [subject] alien peoples only the Hui-hui say \"we do not eat Mongol food.\" [Cinggis Qa'an replied:] \"By the aid of heaven we have pacified you; you are our slaves. Yet you do not eat our food or drink. How can this be right?\" He thereupon made them eat. \"If you slaughter sheep, you will be considered guilty of a crime.\" He issued a regulation to that effect ... [In 1279/1280 under Qubilai] all the Muslims say: “if someone else slaughters [the animal] we do not eat.\" Because the poor people are upset by this, from now on, Musuluman [Muslim] Huihui and Zhuhu [Jewish] Huihui, no matter who kills [the animal] will eat [it] and must cease slaughtering sheep themselves, and cease the rite of circumcision.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Traditionally, there is a distinction between Northern and Southern Chinese Islamic cuisine despite both using lamb and mutton. Northern Chinese Islamic cuisine relies heavily on beef, but rarely ducks, geese, shrimp or seafood, while southern Islamic cuisine is the reverse. The reason for this difference is due to availability of the ingredients. Oxen have been long used for farming and Chinese governments have frequently strictly prohibited the slaughter of oxen for food. However, due to the geographic proximity of the northern part of China to minority-dominated regions that were not subjected to such restrictions, beef could be easily purchased and transported to Northern China. At the same time, ducks, geese and shrimp are rare in comparison to Southern China due to the arid climate of Northern China.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "A Chinese Islamic restaurant (Chinese: 淸眞菜館; pinyin: qīngzhēn càiguǎn) can be similar to a Mandarin restaurant with the exception that there is no pork on the menu and the dishes are primarily noodle/soup based.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In most major eastern cities in China, there are very limited Islamic/Halal restaurants, which are typically run by migrants from Western China (e.g., Uyghurs). They primarily offer inexpensive noodle soups only. These restaurants are typically decorated with Islamic motifs such as pictures of Islamic rugs and Arabic writing.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Another difference is that lamb and mutton dishes are more commonly available than in other Chinese restaurants, due to the greater prevalence of these meats in the cuisine of Western Chinese regions. (Refer to image 1.)",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Other Muslim ethnic minorities like the Bonan, Dongxiang, Salar and Tibetan Muslims have their own cuisines as well. Dongxiang people operate their own restaurants serving their cuisine.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Many cafeterias (canteens) at Chinese universities have separate sections or dining areas for Muslim students (Hui or Western Chinese minorities), typically labeled \"qingzhen\". Student ID cards sometimes indicate whether a student is Muslim and will allow access to these dining areas or will allow access on special occasions such as the Eid feast following Ramadan.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Several Hui restaurants serving Chinese Islamic cuisine exist in Los Angeles. San Francisco, despite its huge number of Chinese restaurants, appears to have only one whose cuisine would qualify as halal.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Many Chinese Hui Muslims who moved from Yunnan to Burma (Myanmar) are known as Panthays operate restaurants and stalls serving Chinese Islamic cuisine such as noodles. Chinese Hui Muslims from Yunnan who moved to Thailand are known as Chin Haw and they also own restaurants and stalls serving Chinese Islamic food.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In Central Asia, Dungan people, descendants of Hui, operate restaurants serving Chinese Islamic cuisine, which is respectively referred to as Dungan cuisine there. They cater to Chinese businessmen. Chopsticks are used by Dungans. The cuisine of the Dungan resembles northwestern Chinese cuisine.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Most Chinese regard Hui halal food as cleaner than food made by non-Muslims so their restaurants are popular in China. Hui who migrated to Northeast China (Manchuria) after the Chuang Guandong opened many new inns and restaurants to cater to travelers, which were regarded as clean.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Hui who migrated to Taiwan operate Qingzhen restaurants and stalls serving Chinese Islamic cuisine in Taipei and other big cities.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The Thai Department of Export Promotion claims that \"China's halal food producers are small-scale entrepreneurs whose products have little value added and lack branding and technology to push their goods to international standards\" to encourage Thai private sector halal producers to market their products in China.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "A 1903-started franchise serving Muslim food is Dong Lai Shun in Hankou.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "400 meters have to be kept as a distance from each restaurant serving beef noodles to another of its type if they belong to Hui Muslims, since Hui have a pact between each other in Ningxia, Gansu and Shaanxi.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Halal restaurants are checked up upon by clerics from mosques.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Halal food manufacture has been sanctioned by the government of the Ningxia Autonomous Region.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Lamian (simplified Chinese: 拉面; traditional Chinese: 拉麪; pinyin: lāmiàn, Dungan: Ламян) is a Chinese dish of hand-made noodles, usually served in a beef or mutton-flavored soup (湯麪, даңмян, tāngmiàn), but sometimes stir-fried (炒麪, Чаомян, chǎomiàn) and served with a tomato-based sauce. Literally, 拉, ла (lā) means to pull or stretch, while 麪, мян (miàn) means noodle. The hand-making process involves taking a lump of dough and repeatedly stretching it to produce a single very long noodle.",
"title": "Famous dishes"
},
{
"paragraph_id": 22,
"text": "Words that begin with L are not native to Turkic — läghmän is a loanword as stated by Uyghur linguist Abdlikim: It is of Chinese derivation and not originally Uyghur.",
"title": "Famous dishes"
},
{
"paragraph_id": 23,
"text": "Beef noodle soup is a noodle soup dish composed of stewed beef, beef broth, vegetables and wheat noodles. It exists in various forms throughout East and Southeast Asia. It was created by the Hui people during the Qing dynasty of China.",
"title": "Famous dishes"
},
{
"paragraph_id": 24,
"text": "In the west, this food may be served in a small portion as a soup. In China, a large bowl of it is often taken as a whole meal with or without any side dish.",
"title": "Famous dishes"
},
{
"paragraph_id": 25,
"text": "Chuanr (Chinese: 串儿, Dungan: Чўанр, Pinyin: chuànr (shortened from \"chuan er\"), \"kebab\"), originating in the Xinjiang (新疆) province of China and in recent years has been disseminated throughout the rest of that country, most notably in Beijing. It is a product of the Chinese Islamic cuisine of the Uyghur (维吾尔) people and other Chinese Muslims. Yang rou chuan or lamb kebabs, is particularly popular.",
"title": "Famous dishes"
},
{
"paragraph_id": 26,
"text": "Suan cai is a traditional fermented vegetable dish, similar to Korean kimchi and German sauerkraut, used in a variety of ways. It consists of pickled Chinese cabbage. Suan cai is a unique form of pao cai due to the material used and the method of production. Although suan cai is not exclusive to Chinese Islamic cuisine, it is used in Chinese Islamic cuisine to top off noodle soups, especially beef noodle soup.",
"title": "Famous dishes"
},
{
"paragraph_id": 27,
"text": "Nang (Chinese: 馕, Dungan: Нәң) is a type of round unleavened bread, topped with sesame. It is similar to South and Central Asia naan.",
"title": "Famous dishes"
}
]
| Chinese Islamic cuisine consists of variations of regionally popular foods that are typical of Han Chinese cuisine, in particular to make them halal. Dishes borrow ingredients from Middle Eastern, Turkic, and South Asian cuisines, notably mutton and spices. Much like other northern Chinese cuisines, Chinese Islamic cuisine uses wheat noodles as the staple, rather than rice. Chinese Islamic dishes include clear-broth beef noodle soup and chuanr. The Hui, Bonan, Dongxiang, Salar, and Uyghurs of China, as well as the Dungans of Central Asia and the Panthays of Burma, collectively contribute to Chinese Islamic cuisine. | 2001-08-22T01:35:17Z | 2023-12-12T15:18:45Z | [
"Template:Full citation needed",
"Template:Authority control",
"Template:Infobox Chinese",
"Template:Zh",
"Template:Main",
"Template:Reflist",
"Template:Dead link",
"Template:Religion in China",
"Template:Short description",
"Template:Div col end",
"Template:Cite news",
"Template:Commons category",
"Template:Cite magazine",
"Template:Islam and China",
"Template:Circa",
"Template:Portal",
"Template:Cite book",
"Template:Xinjiang topics",
"Template:Cuisine of China",
"Template:Div col",
"Template:Cite web",
"Template:Cuisine"
]
| https://en.wikipedia.org/wiki/Chinese_Islamic_cuisine |
6,182 | Cantonese cuisine | Cantonese or Guangdong cuisine, also known as Yue cuisine (Chinese: 廣東菜 or 粵菜), is the cuisine of Guangdong province of China, particularly the provincial capital Guangzhou, and the surrounding regions in the Pearl River Delta including Hong Kong and Macau. Strictly speaking, Cantonese cuisine is the cuisine of Guangzhou or of Cantonese speakers, but it often includes the cooking styles of all the speakers of Yue Chinese languages in Guangdong.
The Teochew cuisine and Hakka cuisine of Guangdong are considered their own styles. However, scholars may categorize Guangdong cuisine into three major groups based on the region's dialects: Cantonese, Hakka and Chaozhou cuisines. Neighboring Guangxi's cuisine is also considered separate, due to ethnic Zhuang influences in the rest of the province, even though eastern Guangxi is considered culturally Cantonese.
Cantonese cuisine is one of the Eight Great Traditions of Chinese cuisine. Its prominence outside China is due to the large number of Cantonese emigrants. Chefs trained in Cantonese cuisine are highly sought after throughout China. Until the late 20th century, most Chinese restaurants in the West served largely Cantonese dishes.
Guangzhou (Canton) City, the provincial capital of Guangdong and the centre of Cantonese culture, has long been a trading hub and many imported foods and ingredients are used in Cantonese cuisine. Besides pork, beef and chicken, Cantonese cuisine incorporates almost all edible meats, including offal, chicken feet, duck's tongue, frog legs, snakes and snails. However, lamb and goat are less commonly used than in the cuisines of northern or western China. Many cooking methods are used, with steaming and stir-frying being the most favoured due to their convenience and rapidity. Other techniques include shallow frying, double steaming, braising and deep frying.
Compared to other Chinese regional cuisines, the flavours of most traditional Cantonese dishes should be well-balanced and not greasy. Apart from that, spices should be used in modest amounts to avoid overwhelming the flavours of the primary ingredients, and these ingredients in turn should be at the peak of their freshness and quality. There is no widespread use of fresh herbs in Cantonese cooking, in contrast with their liberal use in other cuisines such as Sichuanese, Vietnamese, Lao, Thai and European. Garlic chives and coriander leaves are notable exceptions, although the former are often used as a vegetable and the latter are usually used as mere garnish in most dishes.
In Cantonese cuisine, ingredients such as sugar, salt, soy sauce, rice wine, corn starch, vinegar, scallion and sesame oil suffice to enhance flavour, although garlic is heavily used in some dishes, especially those in which internal organs, such as entrails, may emit unpleasant odours. Ginger, chili peppers, five-spice powder, powdered black pepper, star anise and a few other spices are also used, but often sparingly.
Although Cantonese cooks pay much attention to the freshness of their primary ingredients, Cantonese cuisine also uses a long list of preserved food items to add flavour to a dish. This may be influenced by Hakka cuisine, since the Hakkas were once a dominant group occupying imperial Hong Kong and other southern territories.
Some items gain very intense flavours during the drying/preservation/oxidation process and some foods are preserved to increase their shelf life. Some chefs combine both dried and fresh varieties of the same items in a dish. Dried items are usually soaked in water to rehydrate before cooking. These ingredients are generally not served a la carte, but rather with vegetables or other Cantonese dishes.
A number of dishes have been part of Cantonese cuisine since the earliest territorial establishments of Guangdong. While many of these are on the menus of typical Cantonese restaurants, some simpler ones are more commonly found in Cantonese homes. Home-made Cantonese dishes are usually served with plain white rice.
There are a small number of deep-fried dishes in Cantonese cuisine, which can often be found as street food. They have been extensively documented in colonial Hong Kong records of the 19th and 20th centuries. A few are synonymous with Cantonese breakfast and lunch, even though these are also part of other cuisines.
Old fire soup, or lou fo tong (老火汤; 老火湯; lǎohuǒ tāng; lou5 fo2 tong; 'old fire-cooked soup'), is a clear broth prepared by simmering meat and other ingredients over a low heat for several hours. Chinese herbs are often used as ingredients. There are basically two ways to make old fire soup: putting the ingredients and water in a pot and heating it directly over the fire, which is called bou tong (煲汤; 煲湯; bāo tāng; bou1 tong1); or putting the ingredients in a small stew pot and placing it in a bigger pot filled with water, then heating the bigger pot directly over the fire, which is called dun tong (燉汤; 燉湯; dùn tāng; dan6 tong1). The latter method best preserves the original taste of the soup.
Soup chain stores or delivery outlets in cities with significant Cantonese populations, such as Hong Kong, serve this dish due to the long preparation time required for slow-simmered soup.
Due to Guangdong's location along the South China Sea coast, fresh seafood is prominent in Cantonese cuisine, and many Cantonese restaurants keep aquariums or seafood tanks on the premises. In Cantonese cuisine, as in cuisines from other parts of Asia, if seafood has a repugnant odour, strong spices and marinating juices are added; the freshest seafood is odourless and, in Cantonese culinary arts, is best cooked by steaming. For instance, in some recipes, only a small amount of soy sauce, ginger and spring onion is added to steamed fish. In Cantonese cuisine, the light seasoning is used only to bring out the natural sweetness of the seafood. As a rule of thumb, the spiciness of a dish is usually negatively correlated with the freshness of the ingredients.
Noodles are served either in soup broth or fried. These are available as home-cooked meals, on dim sum side menus, or as street food at dai pai dongs, where they can be served with a variety of toppings such as fish balls, beef balls, or fish slices.
Siu mei (烧味; 燒味; shāo wèi; siu1 mei6) is essentially the Chinese rotisserie style of cooking. Unlike most other Cantonese dishes, siu mei solely consists of meat, with no vegetables.
Lou mei (卤味; 滷味; lǔ wèi; lou5 mei6) is the name given to dishes made from internal organs, entrails and other left-over parts of animals. It is widely available in southern Chinese regions.
All Cantonese-style cooked meats, including siu mei, lou mei and preserved meat, can be classified as siu laap (烧腊; 燒臘; shāo là; siu1 laap6). Siu laap also includes dishes such as:
A typical dish may consist of offal and half an order of multiple varieties of roasted meat. The majority of siu laap is white meat.
Little pot rice (煲仔饭; 煲仔飯; bāozǎifàn; bou1 zai2 faan6) refers to dishes cooked and served in a flat-bottomed pot (as opposed to a round-bottomed wok). Usually this is a saucepan or braising pan (see clay pot cooking). Such dishes are cooked by covering and steaming, making the rice and ingredients very hot and soft. Usually the ingredients are layered on top of the rice with little or no mixing in between. Many standard combinations exist.
A number of dishes are traditionally served in Cantonese restaurants only at dinner time. Dim sum restaurants stop serving bamboo-basket dishes after the yum cha period (equivalent to afternoon tea) and begin offering an entirely different menu in the evening. Some dishes are standard while others are regional. Some are customised for special purposes such as Chinese marriages or banquets. Salt and pepper dishes are among the few spicy dishes.
After the evening meal, most Cantonese restaurants offer tong sui (糖水; táng shuǐ; tong4 seoi2; 'sugar water'), a sweet soup. Many varieties of tong sui are also found in other Chinese cuisines. Some desserts are traditional, while others are recent innovations. The more expensive restaurants usually offer their specialty desserts. "Sugar water" is the general name for dessert in Guangdong province; it is made by adding water and sugar to various other ingredients.
Certain Cantonese delicacies consist of parts taken from rare or endangered animals, which raises controversy over animal rights and environmental issues. This is often due to the alleged health benefits of certain animal products. For example, the continued spread of the belief that shark cartilage can cure cancer has led to decreased shark populations, even though scientific research has found no evidence to support the credibility of shark cartilage as a cancer cure.
Teochew cuisine

Chaoshan cuisine, also known as Chiuchow cuisine, Chaozhou cuisine or Teo-swa cuisine, originated from the Chaoshan region in the eastern part of China's Guangdong Province, which includes the cities of Chaozhou, Shantou and Jieyang. Chaoshan cuisine bears more similarities to Fujian cuisine, particularly Southern Min cuisine, owing to the cultural and linguistic affinity between Chaoshan and Fujian and their geographic proximity. However, Chaoshan cuisine is also influenced by Cantonese cuisine in its style and technique.
Chaoshan cuisine is well known for its seafood and vegetarian dishes. Its use of flavouring is much less heavy-handed than most other Chinese cuisines and depends much on the freshness and quality of the ingredients for taste and flavour. As a delicate cuisine, oil is not often used in large quantities and there is a relatively heavy emphasis on poaching, steaming and braising, as well as the common Chinese method of stir-frying. Chaoshan cuisine is also known for serving congee (糜; mí; or mue), in addition to steamed rice or noodles with meals. The Chaoshan mue is rather different from the Cantonese counterpart, being very watery with the rice sitting loosely at the bottom of the bowl, while the Cantonese dish is more a thin gruel.
Authentic Chaoshan restaurants serve very strong oolong tea called Tieguanyin in very tiny cups before and after the meal. Presented as gongfu tea, the tea has a thickly bittersweet taste, colloquially known as gam gam (甘甘; gān gān).
A condiment that is popular in Fujian and Taiwanese cuisine and commonly associated with cuisine of certain Chaoshan groups is shacha sauce (沙茶酱; 沙茶醬; shāchá jiàng). It is made from soybean oil, garlic, shallots, chilies, brill fish and dried shrimp. The paste has a savoury and slightly spicy taste. As an ingredient, it has multiple uses: as a base for soups, as a rub for barbecued meats, as a seasoning for stir-fried dishes, or as a component for dipping sauces.
In addition to soy sauce (widely used in all Chinese cuisines), the Chaoshan diaspora in Southeast Asia use fish sauce in their cooking. It is used as a flavouring agent in soups and sometimes as a dipping sauce, as in Vietnamese spring rolls.
Chaoshan chefs often use a special stock called superior broth (上汤; 上湯; shàngtāng), which remains on the stove and is continuously replenished. As portrayed in popular media, some Hong Kong chefs allegedly use the same superior broth, preserved for decades. The stock also appears on Chaozhou TV's cooking programmes.
There is a notable feast in Chaoshan cuisine called jiat dot (食桌; shízhuō; 'food table'). A myriad of dishes are often served, which include shark fin soup, bird's nest soup, lobster, steamed fish, roasted suckling pig and braised goose.
Chaoshan chefs take pride in their skills of vegetable carving, and carved vegetables are used as garnishes on cold dishes and on the banquet table.
Chaoshan cuisine is also known for a late night meal known as meh siao (夜宵; yèxiāo) or daa laang (打冷; dǎléng) among the Cantonese. Chaoshan people enjoy eating out close to midnight in restaurants or at roadside food stalls. Some dai pai dong-like eateries stay open till dawn.
Unlike the typical menu selections of many other Chinese cuisines, Chaoshan restaurant menus often have a dessert section.
Many people of Chaoshan origin, also known as Teochiu or Chaoshan people, have settled in Hong Kong and in Southeast Asian countries such as Malaysia, Singapore, Cambodia and Thailand. Their influence can be seen in Singaporean cuisine and in the cuisines of other places where they settled. A large number of Chaoshan people have also settled in Taiwan, which is evident in Taiwanese cuisine. Other notable Chaoshan diaspora communities are in Vietnam, Cambodia and France. A popular noodle soup in both Vietnam and Cambodia, known as hu tieu, originated with the Chaoshan. There is also a large diaspora of Chaoshan people, most of them from Southeast Asia, in the United States, particularly in California. A Teochew Chinese association in Paris is called L'Amicale des Teochews en France.
Co-NP

Unsolved problem in computer science: NP = co-NP?
In computational complexity theory, co-NP is a complexity class. A decision problem X is a member of co-NP if and only if its complement X̄ is in the complexity class NP. The class can be defined as follows: a decision problem is in co-NP if and only if for every no-instance we have a polynomial-length "certificate" and there is a polynomial-time algorithm that can be used to verify any purported certificate.
That is, co-NP is the set of decision problems where there exists a polynomial p(n) and a polynomial-time bounded Turing machine M such that for every instance x, x is a no-instance if and only if, for some possible certificate c of length bounded by p(n), the Turing machine M accepts the pair (x, c).
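In symbols, with p ranging over polynomials and M over deterministic polynomial-time Turing machines, the definition above can be restated compactly (a formalization added here for illustration, not part of the original text):

L \in \textsf{co-NP} \iff \exists p\, \exists M\ \forall x :\ \bigl( x \notin L \iff \exists c,\ |c| \le p(|x|),\ M \text{ accepts } (x, c) \bigr)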
While an NP problem asks whether a given instance is a yes-instance, its complement asks whether an instance is a no-instance, which means the complement is in co-NP. Any yes-instance for the original NP problem becomes a no-instance for its complement, and vice versa.
An example of an NP-complete problem is the Boolean satisfiability problem: given a Boolean formula, is it satisfiable (is there a possible input for which the formula outputs true)? The complementary problem asks: "given a Boolean formula, is it unsatisfiable (do all possible inputs to the formula output false)?". Since this is the complement of the satisfiability problem, a certificate for a no-instance is the same as for a yes-instance of the original NP problem: a set of Boolean variable assignments that make the formula true. On the other hand, a certificate for a yes-instance of the complementary problem (a proof that a formula is unsatisfiable) would be just as hard to provide as a certificate for a no-instance of the original NP satisfiability problem.
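To make the certificate check concrete, the following minimal Python sketch (an illustration added here, not part of the original text) verifies that a claimed satisfying assignment certifies a CNF formula as satisfiable, i.e., as a no-instance of unsatisfiability. The DIMACS-style clause encoding is an assumption of the sketch.

def verify_unsat_no_instance(cnf, assignment):
    # cnf: a list of clauses; each clause is a list of signed integers,
    # where k stands for variable k and -k for its negation (DIMACS-style).
    # assignment: a dict mapping each variable index to a bool.
    # Runs in time linear in the formula size: a polynomial-time verifier.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable, hence a no-instance of
# unsatisfiability; the assignment {x1: False, x2: True} certifies this.
formula = [[1, 2], [-1, 2]]
certificate = {1: False, 2: True}
print(verify_unsat_no_instance(formula, certificate))  # prints: True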
A problem L is co-NP-complete if and only if L is in co-NP and for any problem in co-NP, there exists a polynomial-time reduction from that problem to L.
Determining whether a formula in propositional logic is a tautology, that is, whether the formula evaluates to true under every possible assignment of its variables, is co-NP-complete.
P, the class of polynomial time solvable problems, is a subset of both NP and co-NP. P is thought to be a strict subset in both cases (and demonstrably cannot be strict in one case and not strict in the other).
NP and co-NP are also thought to be unequal. If so, then no NP-complete problem can be in co-NP and no co-NP-complete problem can be in NP. This can be shown as follows. Suppose for the sake of contradiction there exists an NP-complete problem X that is in co-NP. Since all problems in NP can be reduced to X, it follows that for every problem in NP, we can construct a non-deterministic Turing machine that decides its complement in polynomial time; i.e., NP ⊆ co-NP. From this, it follows that the set of complements of the problems in NP is a subset of the set of complements of the problems in co-NP; i.e., co-NP ⊆ NP. Thus co-NP = NP. The proof that no co-NP-complete problem can be in NP if NP ≠ co-NP is symmetrical.
co-NP is a subset of PH, which itself is a subset of PSPACE.
An example of a problem that is known to belong to both NP and co-NP (but not known to be in P) is integer factorization: given positive integers m and n, determine if m has a factor less than n and greater than one. Membership in NP is clear; if m does have such a factor, then the factor itself is a certificate. Membership in co-NP is also straightforward: one can just list the prime factors of m, all greater than or equal to n, which the verifier can confirm to be valid by multiplication and the AKS primality test. It is presently not known whether there is a polynomial-time algorithm for factorization, equivalently whether integer factorization is in P, and hence this example is interesting as one of the most natural problems known to be in NP and co-NP but not known to be in P.
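Both certificates can be checked by small polynomial-time verifiers. The sketch below (an illustration added here, not from the original text) uses sympy's isprime as a stand-in for a deterministic primality test such as AKS; the function names are hypothetical.

from sympy import isprime  # stand-in for the AKS primality test

def verify_yes(m, n, d):
    # NP certificate: a single factor d of m with 1 < d < n.
    return 1 < d < n and m % d == 0

def verify_no(m, n, factors):
    # co-NP certificate: the complete prime factorisation of m, with
    # every prime factor >= n (so no factor lies strictly between 1 and n).
    product = 1
    for p in factors:
        if p < n or not isprime(p):
            return False
        product *= p
    return product == m

print(verify_yes(91, 10, 7))         # True: 7 divides 91 and 1 < 7 < 10
print(verify_no(221, 11, [13, 17]))  # True: 221 = 13 * 17, both primes >= 11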
Chuck Yeager

Brigadier General Charles Elwood Yeager (/ˈjeɪɡər/ YAY-gər, February 13, 1923 – December 7, 2020) was a United States Air Force officer, flying ace, and record-setting test pilot who in October 1947 became the first pilot in history confirmed to have exceeded the speed of sound in level flight.
Yeager was raised in Hamlin, West Virginia. His career began in World War II as a private in the United States Army, assigned to the Army Air Forces in 1941. After serving as an aircraft mechanic, in September 1942, he entered enlisted pilot training and upon graduation was promoted to the rank of flight officer (the World War II Army Air Force version of the Army's warrant officer), later achieving most of his aerial victories as a P-51 Mustang fighter pilot on the Western Front, where he was credited with shooting down 11.5 enemy aircraft (the half credit is from a second pilot assisting him in a single shootdown). On October 12, 1944, he attained "ace in a day" status, shooting down five enemy aircraft in one mission.
After the war, Yeager became a test pilot and flew many types of aircraft, including experimental rocket-powered aircraft for the National Advisory Committee for Aeronautics (NACA). Through the NACA program, he became the first human to officially break the sound barrier on October 14, 1947, when he flew the experimental Bell X-1 at Mach 1 at an altitude of 45,000 ft (13,700 m), for which he won both the Collier and Mackay trophies in 1948. He then went on to break several other speed and altitude records in the following years. In 1962, he became the first commandant of the USAF Aerospace Research Pilot School, which trained and produced astronauts for NASA and the Air Force.
Yeager later commanded fighter squadrons and wings in Germany, as well as in Southeast Asia during the Vietnam War. In recognition of his achievements and the outstanding performance ratings of those units, he was promoted to brigadier general in 1969 and inducted into the National Aviation Hall of Fame in 1973, retiring on March 1, 1975. His three-war active-duty flying career spanned more than 30 years and took him to many parts of the world, including the Korean War zone and the Soviet Union during the height of the Cold War.
Yeager is referred to by many as one of the greatest pilots of all time, and was ranked fifth on Flying's list of the 51 Heroes of Aviation in 2013. Throughout his life, he flew more than 360 different types of aircraft over a 70-year period, and continued to fly for two decades after retirement as a consultant pilot for the United States Air Force.
Yeager was born February 13, 1923, in Myra, West Virginia, to farming parents Albert Hal Yeager (1896–1963) and Susie Mae Yeager (née Sizemore; 1898–1987). When he was five years old, his family moved to Hamlin, West Virginia. Yeager had two brothers, Roy and Hal Jr., and two sisters, Doris Ann (accidentally killed at age two by six-year-old Roy playing with a firearm) and Pansy Lee.
He attended Hamlin High School, where he played basketball and football, receiving his best grades in geometry and typing. He graduated from high school in June 1941.
His first experience with the military was as a teen at the Citizens Military Training Camp at Fort Benjamin Harrison, Indianapolis, Indiana, during the summers of 1939 and 1940. On February 26, 1945, Yeager married Glennis Dickhouse, and the couple had four children. Glennis Yeager died in 1990, predeceasing her husband by 30 years.
His cousin, Steve Yeager, was a professional baseball catcher.
Yeager enlisted as a private in the U.S. Army Air Forces (USAAF) on September 12, 1941, and became an aircraft mechanic at George Air Force Base, Victorville, California. At enlistment, Yeager was not eligible for flight training because of his age and educational background, but the entry of the U.S. into World War II less than three months later prompted the USAAF to alter its recruiting standards. Yeager had unusually sharp vision (a visual acuity rated 20/10), which once enabled him to shoot a deer at 600 yd (550 m).
At the time of his flight training acceptance, he was a crew chief on an AT-11. He received his pilot wings and a promotion to flight officer at Luke Field, Arizona, where he graduated from Class 43C on March 10, 1943. Assigned to the 357th Fighter Group at Tonopah, Nevada, he initially trained as a fighter pilot, flying Bell P-39 Airacobras (being grounded for seven days for clipping a farmer's tree during a training flight), and shipped overseas with the group on November 23, 1943.
Stationed in the United Kingdom at RAF Leiston, Yeager flew P-51 Mustangs in combat with the 363d Fighter Squadron. He named his aircraft Glamorous Glen after his girlfriend, Glennis Faye Dickhouse, who became his wife in February 1945. Yeager had gained one victory before he was shot down over France in his first aircraft (P-51B-5-NA s/n 43-6763) on March 5, 1944, on his eighth mission. He escaped to Spain on March 30, 1944, with the help of the Maquis (French Resistance) and returned to England on May 15, 1944. During his stay with the Maquis, Yeager assisted the guerrillas in duties that did not involve direct combat; he helped construct bombs for the group, a skill that he had learned from his father. He was awarded the Bronze Star for helping a navigator, Omar M. "Pat" Patterson, Jr., to cross the Pyrenees.
Despite a regulation prohibiting "evaders" (escaped pilots) from flying over enemy territory again, the purpose of which was to prevent resistance groups from being compromised by giving the enemy a second chance to possibly capture him, Yeager was reinstated to combat flying. He had joined another evader, fellow P-51 pilot 1st Lt Fred Glover, in speaking directly to the Supreme Allied Commander, General Dwight D. Eisenhower, on June 12, 1944. "I raised so much hell that General Eisenhower finally let me go back to my squadron," Yeager said. "He cleared me for combat after D Day, because all the free Frenchmen – Maquis and people like that – had surfaced." Eisenhower, after gaining permission from the War Department to decide the requests, concurred with Yeager and Glover. In the meantime, Yeager shot down his second enemy aircraft, a German Junkers Ju 88 bomber, over the English Channel.
Yeager demonstrated outstanding flying skills and combat leadership. On October 12, 1944, he became the first pilot in his group to make "ace in a day," downing five enemy aircraft in a single mission. Two of these victories were scored without firing a single shot: when he flew into firing position against a Messerschmitt Bf 109, the pilot of the aircraft panicked, breaking to port and colliding with his wingman. Yeager said both pilots bailed out. He finished the war with 11.5 official victories, including one of the first air-to-air victories over a jet fighter, a German Messerschmitt Me 262 that he shot down as it was on final approach for landing.
In his 1986 memoirs, Yeager recalled with disgust that "atrocities were committed by both sides", and said he went on a mission with orders from the Eighth Air Force to "strafe anything that moved". During the mission briefing, he whispered to Major Donald H. Bochkay, "If we are going to do things like this, we sure as hell better make sure we are on the winning side". Yeager said, "I'm certainly not proud of that particular strafing mission against civilians. But it is there, on the record and in my memory". He also expressed bitterness at his treatment in England during World War II, later describing the British on Twitter as "arrogant" and "nasty".
Yeager was commissioned a second lieutenant while at Leiston, and was promoted to captain before the end of his tour. He flew his 61st and final mission on January 15, 1945, and returned to the United States in early February 1945. As an evader, he received his choice of assignments and, because his new wife was pregnant, chose Wright Field to be near his home in West Virginia. His high number of flight hours and maintenance experience qualified him to become a functional test pilot of repaired aircraft, which brought him under the command of Colonel Albert Boyd, head of the Aeronautical Systems Flight Test Division.
Yeager remained in the U.S. Army Air Forces after the war, becoming a test pilot at Muroc Army Air Field (now Edwards Air Force Base), following graduation from Air Materiel Command Flight Performance School (Class 46C). After Bell Aircraft test pilot Chalmers "Slick" Goodlin demanded US$150,000 (equivalent to $1,970,000 in 2022) to break the sound "barrier", the USAAF selected the 24-year-old Yeager to fly the rocket-powered Bell XS-1 in a NACA program to research high-speed flight. Under the National Security Act of 1947, the USAAF became the United States Air Force (USAF) on September 18.
Such was the difficulty of this task that the answer to many of the inherent challenges was along the lines of "Yeager better have paid-up insurance". Two nights before the scheduled date for the flight, Yeager broke two ribs when he fell from a horse. He was worried that the injury would remove him from the mission and reported that he went to a civilian doctor in nearby Rosamond, who taped his ribs. Besides his wife, who was riding with him, Yeager told only his friend and fellow project pilot Jack Ridley about the accident. On the day of the flight, Yeager was in such pain that he could not seal the X-1's hatch by himself. Ridley rigged up a device, using the end of a broom handle as an extra lever, to allow Yeager to seal the hatch.
Yeager broke the sound barrier on October 14, 1947, in level flight while piloting the X-1 Glamorous Glennis at Mach 1.05 at an altitude of 45,000 ft (13,700 m) over the Rogers Dry Lake of the Mojave Desert in California. The success of the mission was not announced to the public for nearly eight months, until June 10, 1948. Yeager was awarded the Mackay Trophy and the Collier Trophy in 1948 for his Mach-transcending flight, and the Harmon International Trophy in 1954. The X-1 he flew that day was later put on permanent display at the Smithsonian Institution's National Air and Space Museum. During 1952, he attended the Air Command and Staff College.
Yeager went on to break many other speed and altitude records. He was also one of the first American pilots to fly a Mikoyan-Gurevich MiG-15, after its pilot, No Kum-sok, defected to South Korea. Returning to Muroc, during the latter half of 1953, Yeager was involved with the USAF team that was working on the X-1A, an aircraft designed to surpass Mach 2 in level flight. That year, he flew a chase aircraft for the civilian pilot Jackie Cochran as she became the first woman to fly faster than sound.
On November 20, 1953, the U.S. Navy program involving the D-558-II Skyrocket and its pilot, Scott Crossfield, became the first team to reach twice the speed of sound. After they were bested, Ridley and Yeager decided to beat rival Crossfield's speed record in a series of test flights that they dubbed "Operation NACA Weep". Not only did they beat Crossfield by setting a new record at Mach 2.44 on December 12, 1953, but they did it in time to spoil a celebration planned for the 50th anniversary of flight in which Crossfield was to be called "the fastest man alive".
The new record flight, however, did not entirely go to plan, since shortly after reaching Mach 2.44, Yeager lost control of the X-1A at about 80,000 ft (24,000 m) due to inertia coupling, a phenomenon largely unknown at the time. With the aircraft simultaneously rolling, pitching, and yawing out of control, Yeager dropped 51,000 ft (16,000 m) in less than a minute before regaining control at around 29,000 ft (8,800 m). He then managed to land without further incident. For this feat, Yeager was awarded the Distinguished Service Medal (DSM) in 1954.
Yeager was foremost a fighter pilot and held several squadron and wing commands. From 1954 to 1957, he commanded the F-86H Sabre-equipped 417th Fighter-Bomber Squadron (50th Fighter-Bomber Wing) at Hahn AB, West Germany, and Toul-Rosières Air Base, France; and from 1957 to 1960 the F-100D Super Sabre-equipped 1st Fighter Day Squadron at George Air Force Base, California, and Morón Air Base, Spain.
Now a full colonel in 1962, after completion of a year's studies and final thesis on STOL aircraft at the Air War College, Yeager became the first commandant of the USAF Aerospace Research Pilot School, which produced astronauts for NASA and the USAF, after its redesignation from the USAF Flight Test Pilot School. (Yeager himself had only a high school education, so he was not eligible to become an astronaut like those he trained.) In April 1962, Yeager made his only flight with Neil Armstrong. Their job, flying a T-33, was to evaluate Smith Ranch Dry Lake in Nevada for use as an emergency landing site for the North American X-15. In his autobiography, Yeager wrote that he knew the lake bed was unsuitable for landings after recent rains, but Armstrong insisted on flying out anyway. As Armstrong suggested that they do a touch-and-go, Yeager advised against it, telling him "You may touch, but you ain't gonna go!" When Armstrong did touch down, the wheels became stuck in the mud, bringing the plane to a sudden stop and provoking Yeager to fits of laughter. They had to wait for rescue.
Yeager's participation in the test pilot training program for NASA included controversial behavior. Yeager reportedly did not believe that Ed Dwight, the first African American pilot admitted into the program, should be a part of it. In the 2019 documentary series Chasing the Moon, the filmmakers made the claim that Yeager instructed staff and participants at the school that "Washington is trying to cram the nigger down our throats. [President] Kennedy is using this to make 'racial equality,' so do not speak to him, do not socialize with him, do not drink with him, do not invite him over to your house, and in six months he'll be gone." In his autobiography, Dwight details how Yeager's leadership led to discriminatory treatment throughout his training at Edwards Air Force Base.
Between December 1963 and January 1964, Yeager completed five flights in the NASA M2-F1 lifting body. An accident during a December 1963 test flight in one of the school's NF-104s resulted in serious injuries. After climbing to a near-record altitude, the plane's controls became ineffective, and it entered a flat spin. After several turns, and an altitude loss of approximately 95,000 feet, Yeager ejected from the plane. During the ejection, the seat straps released normally, but the seat base slammed into Yeager, with the still-hot rocket motor breaking his helmet's plastic faceplate and causing his emergency oxygen supply to catch fire. The resulting burns to his face required extensive and agonizing medical care. This was Yeager's last attempt at setting test-flying records.
In 1966, Yeager took command of the 405th Tactical Fighter Wing at Clark Air Base, the Philippines, whose squadrons were deployed on rotational temporary duty (TDY) in South Vietnam and elsewhere in Southeast Asia. There he flew 127 missions. In February 1968, Yeager was assigned command of the 4th Tactical Fighter Wing at Seymour Johnson Air Force Base, North Carolina, and led the McDonnell Douglas F-4 Phantom II wing in South Korea during the Pueblo crisis.
Yeager was promoted to brigadier general and was assigned in July 1969 as the vice-commander of the Seventeenth Air Force.
From 1971 to 1973, at the behest of Ambassador Joseph Farland, Yeager was assigned as the air attaché in Pakistan to advise the Pakistan Air Force, which was led by Abdur Rahim Khan (the first Pakistani to break the sound barrier). He arrived in Pakistan at a time when tensions with India were at a high level. One of Yeager's jobs during this time was to assist Pakistani technicians in installing AIM-9 Sidewinders on the PAF's Shenyang F-6 fighters. He also had a keen interest in interacting with PAF personnel from various Pakistani squadrons and helping them develop combat tactics. In one instance in 1972, while Yeager was visiting the No. 15 Squadron "Cobras" at Peshawar Airbase, the squadron's officer commanding, Wing Commander Najeeb Khan, escorted him to K2 in a pair of F-86Fs after Yeager requested a visit to the second-highest mountain on Earth. After hostilities broke out in 1971, he decided to stay in West Pakistan and continued overseeing the PAF's operations. Yeager recalled "the Pakistanis whipped the Indians' asses in the sky... the Pakistanis scored a three-to-one kill ratio, knocking out 102 Russian-made Indian jets and losing 34 airplanes of their own". During the war, he flew around the western front in a helicopter documenting wreckages of Indian warplanes of Soviet origin, which included Sukhoi Su-7s and MiG-21s; they were transported to the United States after the war for analysis. Yeager also flew around in his Beechcraft Queen Air, a small passenger aircraft assigned to him by the Pentagon, picking up shot-down Indian fighter pilots. The Beechcraft was later destroyed during an air raid by the Indian Air Force at a PAF airbase; Yeager was not aboard at the time. Edward C. Ingraham, a U.S. diplomat who had served as political counselor to Ambassador Farland in Islamabad, recalled this incident in the Washington Monthly of October 1985: "After Yeager's Beechcraft was destroyed during an Indian air raid, he raged to his cowering colleagues that the Indian pilot had been specifically instructed by Indira Gandhi to blast his plane. 'It was', he later wrote, 'the Indian way of giving Uncle Sam the finger'". Yeager was incensed over the incident and demanded U.S. retaliation.
On March 1, 1975, following assignments in West Germany and Pakistan, Yeager retired from the Air Force at Norton Air Force Base, California.
Yeager made a cameo appearance in the movie The Right Stuff (1983). He played "Fred", a bartender at "Pancho's Place", which was most appropriate, as Yeager said, "if all the hours were ever totaled, I reckon I spent more time at her place than in a cockpit over those years". Sam Shepard portrayed Yeager in the film, which chronicles in part his famous 1947 record-breaking flight. Also in popular culture, Yeager has been referenced several times as being part of the shared Star Trek universe, including having a fictional type of starship named after him and appearing in archival footage within the opening title sequence for the series Star Trek: Enterprise (2001–2005). For that same series, executive producer Rick Berman said that he envisaged the lead character, Captain Jonathan Archer, as being "halfway between Chuck Yeager and Han Solo."
For several years in the 1980s, Yeager was connected to General Motors, publicizing ACDelco, the company's automotive parts division. In 1986, he was invited to drive the Chevrolet Corvette pace car for the 70th running of the Indianapolis 500. In 1988, Yeager was again invited to drive the pace car, this time at the wheel of an Oldsmobile Cutlass Supreme. In 1986, President Reagan appointed Yeager to the Rogers Commission that investigated the explosion of the Space Shuttle Challenger.
During this time, Yeager also served as a technical adviser for three Electronic Arts flight simulator video games: Chuck Yeager's Advanced Flight Trainer, Chuck Yeager's Advanced Flight Trainer 2.0, and Chuck Yeager's Air Combat. The game manuals featured quotes and anecdotes from Yeager and were well received by players. Missions featured several of Yeager's accomplishments and let players attempt to top his records. Chuck Yeager's Advanced Flight Trainer was Electronic Arts' top-selling game of 1987.
In 2009, Yeager participated in the documentary The Legend of Pancho Barnes and the Happy Bottom Riding Club, a profile of his friend Pancho Barnes. The documentary was screened at film festivals, aired on public television in the United States, and won an Emmy Award.
On October 14, 1997, on the 50th anniversary of his historic flight past Mach 1, he flew a new Glamorous Glennis III, an F-15D Eagle, past Mach 1. The chase plane for the flight was an F-16 Fighting Falcon piloted by Bob Hoover, a longtime test, fighter, and aerobatic pilot who had been Yeager's wingman for the first supersonic flight. At the end of his speech to the crowd in 1997, Yeager concluded, "All that I am ... I owe to the Air Force". Later that month, he was the recipient of the Tony Jannus Award for his achievements.
On October 14, 2012, on the 65th anniversary of breaking the sound barrier, Yeager did it again at the age of 89, flying as co-pilot in a McDonnell Douglas F-15 Eagle piloted by Captain David Vincent out of Nellis Air Force Base.
In 1973, Yeager was inducted into the National Aviation Hall of Fame, arguably aviation's highest honor. In 1974, Yeager received the Golden Plate Award of the American Academy of Achievement. In December 1975, the U.S. Congress awarded Yeager a silver medal "equivalent to a noncombat Medal of Honor ... for contributing immeasurably to aerospace science by risking his life in piloting the X-1 research airplane faster than the speed of sound on October 14, 1947". President Gerald Ford presented the medal to Yeager in a ceremony at the White House on December 8, 1976.
Yeager, who never attended college and was often modest about his background, is considered by many, including Flying Magazine, the California Hall of Fame, the State of West Virginia, the National Aviation Hall of Fame, a few U.S. presidents, and the United States Army Air Forces, to be one of the greatest pilots of all time. Air & Space/Smithsonian magazine ranked him the fifth greatest pilot of all time in 2003. Despite his lack of higher education, West Virginia's Marshall University named its highest academic scholarship the Society of Yeager Scholars in his honor. Yeager was also the chairman of the Experimental Aircraft Association (EAA)'s Young Eagles program from 1994 to 2004, and was named the program's chairman emeritus.
In 1966, Yeager was inducted into the International Air & Space Hall of Fame. He was inducted into the International Space Hall of Fame in 1981. He was inducted into the Aerospace Walk of Honor 1990 inaugural class.
Yeager Airport in Charleston, West Virginia, is named in his honor. The Interstate 64/Interstate 77 bridge over the Kanawha River in Charleston is also named in his honor; Yeager had once flown directly under the Kanawha River bridge, and West Virginia named it the Chuck E. Yeager Bridge. On October 19, 2006, the state of West Virginia also honored Yeager with a marker along Corridor G (part of U.S. Highway 119) in his home Lincoln County, and renamed part of the highway the Yeager Highway.
Yeager was an honorary board member of the humanitarian organization Wings of Hope. On August 25, 2009, Governor Arnold Schwarzenegger and Maria Shriver announced that Yeager would be one of 13 California Hall of Fame inductees in The California Museum's yearlong exhibit. The induction ceremony was on December 1, 2009, in Sacramento, California. Flying Magazine ranked Yeager number 5 on its 2013 list of The 51 Heroes of Aviation; for many years, he was the highest-ranked living person on the list.
The Civil Air Patrol, the volunteer auxiliary of the USAF, awards the Charles E. "Chuck" Yeager Award to its senior members as part of its Aerospace Education program.
Yeager named his plane after his wife, Glennis, as a good-luck charm: "You're my good-luck charm, hon. Any airplane I name after you always brings me home." Yeager and Glennis moved to Grass Valley, California, after his retirement from the Air Force in 1975. The couple prospered as a result of Yeager's best-selling autobiography, speaking engagements, and commercial ventures. Glennis Yeager died of ovarian cancer in 1990. They had four children (Susan, Don, Mickey, and Sharon). Yeager's son Mickey (Michael) died unexpectedly in Oregon, on March 26, 2011.
Yeager appeared in a Texas advertisement for George H. W. Bush's 1988 presidential campaign. In 2000, Yeager met actress Victoria Scott D'Angelo on a hiking trail in Nevada County. The pair started dating shortly thereafter, and married in August 2003. Subsequent to the commencement of their relationship, a bitter dispute arose between Yeager, his children and D'Angelo. The children contended that D'Angelo, at least 35 years Yeager's junior, had married him for his fortune. Yeager and D'Angelo both denied the charge. Litigation ensued, in which his children accused D'Angelo of "undue influence" on Yeager, and Yeager accused his children of diverting millions of dollars from his assets. In August 2008, the California Court of Appeal ruled for Yeager, finding that his daughter Susan had breached her duty as trustee.
Yeager lived in Grass Valley, Northern California, and died on the afternoon of December 7, 2020 (National Pearl Harbor Remembrance Day), at age 97, in a Los Angeles hospital. | [
{
"paragraph_id": 0,
"text": "Brigadier General Charles Elwood Yeager (/ˈjeɪɡər/ YAY-gər, February 13, 1923 – December 7, 2020) was a United States Air Force officer, flying ace, and record-setting test pilot who in October 1947 became the first pilot in history confirmed to have exceeded the speed of sound in level flight.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Yeager was raised in Hamlin, West Virginia. His career began in World War II as a private in the United States Army, assigned to the Army Air Forces in 1941. After serving as an aircraft mechanic, in September 1942, he entered enlisted pilot training and upon graduation was promoted to the rank of flight officer (the World War II Army Air Force version of the Army's warrant officer), later achieving most of his aerial victories as a P-51 Mustang fighter pilot on the Western Front, where he was credited with shooting down 11.5 enemy aircraft (the half credit is from a second pilot assisting him in a single shootdown). On October 12, 1944, he attained \"ace in a day\" status, shooting down five enemy aircraft in one mission.",
"title": ""
},
{
"paragraph_id": 2,
"text": "After the war, Yeager became a test pilot and flew many types of aircraft, including experimental rocket-powered aircraft for the National Advisory Committee for Aeronautics (NACA). Through the NACA program, he became the first human to officially break the sound barrier on October 14, 1947, when he flew the experimental Bell X-1 at Mach 1 at an altitude of 45,000 ft (13,700 m), for which he won both the Collier and Mackay trophies in 1948. He then went on to break several other speed and altitude records in the following years. In 1962, he became the first commandant of the USAF Aerospace Research Pilot School, which trained and produced astronauts for NASA and the Air Force.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Yeager later commanded fighter squadrons and wings in Germany, as well as in Southeast Asia during the Vietnam War. In recognition of his achievements and the outstanding performance ratings of those units, he was promoted to brigadier general in 1969 and inducted into the National Aviation Hall of Fame in 1973, retiring on March 1, 1975. His three-war active-duty flying career spanned more than 30 years and took him to many parts of the world, including the Korean War zone and the Soviet Union during the height of the Cold War.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Yeager is referred to by many as one of the greatest pilots of all time, and was ranked fifth on Flying's list of the 51 Heroes of Aviation in 2013. Throughout his life, he flew more than 360 different types of aircraft over a 70-year period, and continued to fly for two decades after retirement as a consultant pilot for the United States Air Force.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Yeager was born February 13, 1923, in Myra, West Virginia, to farming parents Albert Hal Yeager (1896–1963) and Susie Mae Yeager (née Sizemore; 1898–1987). When he was five years old, his family moved to Hamlin, West Virginia. Yeager had two brothers, Roy and Hal Jr., and two sisters, Doris Ann (accidentally killed at age two by six-year-old Roy playing with a firearm) and Pansy Lee.",
"title": "Early life and education"
},
{
"paragraph_id": 6,
"text": "He attended Hamlin High School, where he played basketball and football, receiving his best grades in geometry and typing. He graduated from high school in June 1941.",
"title": "Early life and education"
},
{
"paragraph_id": 7,
"text": "His first experience with the military was as a teen at the Citizens Military Training Camp at Fort Benjamin Harrison, Indianapolis, Indiana, during the summers of 1939 and 1940. On February 26, 1945, Yeager married Glennis Dickhouse, and the couple had four children. Glennis Yeager died in 1990, predeceasing her husband by 30 years.",
"title": "Early life and education"
},
{
"paragraph_id": 8,
"text": "His cousin, Steve Yeager, was a professional baseball catcher.",
"title": "Early life and education"
},
{
"paragraph_id": 9,
"text": "Yeager enlisted as a private in the U.S. Army Air Forces (USAAF) on September 12, 1941, and became an aircraft mechanic at George Air Force Base, Victorville, California. At enlistment, Yeager was not eligible for flight training because of his age and educational background, but the entry of the U.S. into World War II less than three months later prompted the USAAF to alter its recruiting standards. Yeager had unusually sharp vision (a visual acuity rated 20/10), which once enabled him to shoot a deer at 600 yd (550 m).",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "At the time of his flight training acceptance, he was a crew chief on an AT-11. He received his pilot wings and a promotion to flight officer at Luke Field, Arizona, where he graduated from Class 43C on March 10, 1943. Assigned to the 357th Fighter Group at Tonopah, Nevada, he initially trained as a fighter pilot, flying Bell P-39 Airacobras (being grounded for seven days for clipping a farmer's tree during a training flight), and shipped overseas with the group on November 23, 1943.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "Stationed in the United Kingdom at RAF Leiston, Yeager flew P-51 Mustangs in combat with the 363d Fighter Squadron. He named his aircraft Glamorous Glen after his girlfriend, Glennis Faye Dickhouse, who became his wife in February 1945. Yeager had gained one victory before he was shot down over France in his first aircraft (P-51B-5-NA s/n 43-6763) on March 5, 1944, on his eighth mission. He escaped to Spain on March 30, 1944, with the help of the Maquis (French Resistance) and returned to England on May 15, 1944. During his stay with the Maquis, Yeager assisted the guerrillas in duties that did not involve direct combat; he helped construct bombs for the group, a skill that he had learned from his father. He was awarded the Bronze Star for helping a navigator, Omar M. \"Pat\" Patterson, Jr., to cross the Pyrenees.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "Despite a regulation prohibiting \"evaders\" (escaped pilots) from flying over enemy territory again, the purpose of which was to prevent resistance groups from being compromised by giving the enemy a second chance to possibly capture him, Yeager was reinstated to flying combat. He had joined another evader, fellow P-51 pilot 1st Lt Fred Glover, in speaking directly to the Supreme Allied Commander, General Dwight D. Eisenhower, on June 12, 1944. \"I raised so much hell that General Eisenhower finally let me go back to my squadron\" Yeager said. \"He cleared me for combat after D Day, because all the free Frenchmen – Maquis and people like that – had surfaced\". Eisenhower, after gaining permission from the War Department to decide the requests, concurred with Yeager and Glover. In the meantime, Yeager shot down his second enemy aircraft, a German Junkers Ju 88 bomber, over the English Channel.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "Yeager demonstrated outstanding flying skills and combat leadership. On October 12, 1944, he became the first pilot in his group to make \"ace in a day,\" downing five enemy aircraft in a single mission. Two of these victories were scored without firing a single shot: when he flew into firing position against a Messerschmitt Bf 109, the pilot of the aircraft panicked, breaking to port and colliding with his wingman. Yeager said both pilots bailed out. He finished the war with 11.5 official victories, including one of the first air-to-air victories over a jet fighter, a German Messerschmitt Me 262 that he shot down as it was on final approach for landing.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "In his 1986 memoirs, Yeager recalled with disgust that \"atrocities were committed by both sides\", and said he went on a mission with orders from the Eighth Air Force to \"strafe anything that moved\". During the mission briefing, he whispered to Major Donald H. Bochkay, \"If we are going to do things like this, we sure as hell better make sure we are on the winning side\". Yeager said, \"I'm certainly not proud of that particular strafing mission against civilians. But it is there, on the record and in my memory\". He also expressed bitterness at his treatment in England during World War II, prompting descriptions of the British as \"arrogant\" and \"nasty\" on Twitter.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "Yeager was commissioned a second lieutenant while at Leiston, and was promoted to captain before the end of his tour. He flew his 61st and final mission on January 15, 1945, and returned to the United States in early February 1945. As an evader, he received his choice of assignments and, because his new wife was pregnant, chose Wright Field to be near his home in West Virginia. His high number of flight hours and maintenance experience qualified him to become a functional test pilot of repaired aircraft, which brought him under the command of Colonel Albert Boyd, head of the Aeronautical Systems Flight Test Division.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "Yeager remained in the U.S. Army Air Forces after the war, becoming a test pilot at Muroc Army Air Field (now Edwards Air Force Base), following graduation from Air Materiel Command Flight Performance School (Class 46C). After Bell Aircraft test pilot Chalmers \"Slick\" Goodlin demanded US$150,000 (equivalent to $1,970,000 in 2022) to break the sound \"barrier\", the USAAF selected the 24-year-old Yeager to fly the rocket-powered Bell XS-1 in a NACA program to research high-speed flight. Under the National Security Act of 1947, the USAAF became the United States Air Force (USAF) on September 18.",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "Such was the difficulty of this task that the answer to many of the inherent challenges was along the lines of \"Yeager better have paid-up insurance\". Two nights before the scheduled date for the flight, Yeager broke two ribs when he fell from a horse. He was worried that the injury would remove him from the mission and reported that he went to a civilian doctor in nearby Rosamond, who taped his ribs. Besides his wife who was riding with him, Yeager told only his friend and fellow project pilot Jack Ridley about the accident. On the day of the flight, Yeager was in such pain that he could not seal the X-1's hatch by himself. Ridley rigged up a device, using the end of a broom handle as an extra lever, to allow Yeager to seal the hatch.",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "Yeager broke the sound barrier on October 14, 1947, in level flight while piloting the X-1 Glamorous Glennis at Mach 1.05 at an altitude of 45,000 ft (13,700 m) over the Rogers Dry Lake of the Mojave Desert in California. The success of the mission was not announced to the public for nearly eight months, until June 10, 1948. Yeager was awarded the Mackay Trophy and the Collier Trophy in 1948 for his mach-transcending flight, and the Harmon International Trophy in 1954. The X-1 he flew that day was later put on permanent display at the Smithsonian Institution's National Air and Space Museum. During 1952, he attended the Air Command and Staff College.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "Yeager went on to break many other speed and altitude records. He was also one of the first American pilots to fly a Mikoyan-Gurevich MiG-15, after its pilot, No Kum-sok, defected to South Korea. Returning to Muroc, during the latter half of 1953, Yeager was involved with the USAF team that was working on the X-1A, an aircraft designed to surpass Mach 2 in level flight. That year, he flew a chase aircraft for the civilian pilot Jackie Cochran as she became the first woman to fly faster than sound.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "On November 20, 1953, the U.S. Navy program involving the D-558-II Skyrocket and its pilot, Scott Crossfield, became the first team to reach twice the speed of sound. After they were bested, Ridley and Yeager decided to beat rival Crossfield's speed record in a series of test flights that they dubbed \"Operation NACA Weep\". Not only did they beat Crossfield by setting a new record at Mach 2.44 on December 12, 1953, but they did it in time to spoil a celebration planned for the 50th anniversary of flight in which Crossfield was to be called \"the fastest man alive\".",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "The new record flight, however, did not entirely go to plan, since shortly after reaching Mach 2.44, Yeager lost control of the X-1A at about 80,000 ft (24,000 m) due to inertia coupling, a phenomenon largely unknown at the time. With the aircraft simultaneously rolling, pitching, and yawing out of control, Yeager dropped 51,000 ft (16,000 m) in less than a minute before regaining control at around 29,000 ft (8,800 m). He then managed to land without further incident. For this feat, Yeager was awarded the Distinguished Service Medal (DSM) in 1954.",
"title": "Career"
},
{
"paragraph_id": 22,
"text": "Yeager was foremost a fighter pilot and held several squadron and wing commands. From 1954 to 1957, he commanded the F-86H Sabre-equipped 417th Fighter-Bomber Squadron (50th Fighter-Bomber Wing) at Hahn AB, West Germany, and Toul-Rosieres Air Base, France; and from 1957 to 1960 the F-100D Super Sabre-equipped 1st Fighter Day Squadron at George Air Force Base, California, and Morón Air Base, Spain.",
"title": "Career"
},
{
"paragraph_id": 23,
"text": "Now a full colonel in 1962, after completion of a year's studies and final thesis on STOL aircraft at the Air War College, Yeager became the first commandant of the USAF Aerospace Research Pilot School, which produced astronauts for NASA and the USAF, after its redesignation from the USAF Flight Test Pilot School. (Yeager himself had only a high school education, so he was not eligible to become an astronaut like those he trained.) In April 1962, Yeager made his only flight with Neil Armstrong. Their job, flying a T-33, was to evaluate Smith Ranch Dry Lake in Nevada for use as an emergency landing site for the North American X-15. In his autobiography, Yeager wrote that he knew the lake bed was unsuitable for landings after recent rains, but Armstrong insisted on flying out anyway. As Armstrong suggested that they do a touch-and-go, Yeager advised against it, telling him \"You may touch, but you ain't gonna go!\" When Armstrong did touch down, the wheels became stuck in the mud, bringing the plane to a sudden stop and provoking Yeager to fits of laughter. They had to wait for rescue.",
"title": "Career"
},
{
"paragraph_id": 24,
"text": "Yeager's participation in the test pilot training program for NASA included controversial behavior. Yeager reportedly did not believe that Ed Dwight, the first African American pilot admitted into the program, should be a part of it. In the 2019 documentary series Chasing the Moon, the filmmakers made the claim that Yeager instructed staff and participants at the school that \"Washington is trying to cram the nigger down our throats. [President] Kennedy is using this to make 'racial equality,' so do not speak to him, do not socialize with him, do not drink with him, do not invite him over to your house, and in six months he'll be gone.\" In his autobiography, Dwight details how Yeager's leadership led to discriminatory treatment throughout his training at Edwards Air Force Base.",
"title": "Career"
},
{
"paragraph_id": 25,
"text": "Between December 1963 and January 1964, Yeager completed five flights in the NASA M2-F1 lifting body. An accident during a December 1963 test flight in one of the school's NF-104s resulted in serious injuries. After climbing to a near-record altitude, the plane's controls became ineffective, and it entered a flat spin. After several turns, and an altitude loss of approximately 95,000 feet, Yeager ejected from the plane. During the ejection, the seat straps released normally, but the seat base slammed into Yeager, with the still-hot rocket motor breaking his helmet's plastic faceplate and causing his emergency oxygen supply to catch fire. The resulting burns to his face required extensive and agonizing medical care. This was Yeager's last attempt at setting test-flying records.",
"title": "Career"
},
{
"paragraph_id": 26,
"text": "In 1966, Yeager took command of the 405th Tactical Fighter Wing at Clark Air Base, the Philippines, whose squadrons were deployed on rotational temporary duty (TDY) in South Vietnam and elsewhere in Southeast Asia. There he flew 127 missions. In February 1968, Yeager was assigned command of the 4th Tactical Fighter Wing at Seymour Johnson Air Force Base, North Carolina, and led the McDonnell Douglas F-4 Phantom II wing in South Korea during the Pueblo crisis.",
"title": "Career"
},
{
"paragraph_id": 27,
"text": "Yeager was promoted to brigadier general and was assigned in July 1969 as the vice-commander of the Seventeenth Air Force.",
"title": "Career"
},
{
"paragraph_id": 28,
"text": "From 1971 to 1973, at the behest of Ambassador Joseph Farland, Yeager was assigned as the Air Attache in Pakistan to advise the Pakistan Air Force which was led by Abdur Rahim Khan (the first Pakistani to break the sound barrier). He arrived in Pakistan at a time when tensions with India were at a high level. One of Yeager's jobs during this time was to assist Pakistani technicians in installing AIM-9 Sidewinders on PAF's Shenyang F-6 fighters. He also had a keen interest in interacting with PAF personnel from various Pakistani Squadrons and helping them develop combat tactics. In one instance in 1972, while visiting the No. 15 Squadron \"Cobras\" at Peshawar Airbase, the Squadron's OC Wing Commander Najeeb Khan escorted him to K2 in a pair of F-86Fs after Yeager requested a visit to the second highest mountain on Earth. After hostilities broke out in 1971, he decided to stay in West Pakistan and continued overseeing the PAF's operations. Yeager recalled \"the Pakistanis whipped the Indians' asses in the sky... the Pakistanis scored a three-to-one kill ratio, knocking out 102 Russian-made Indian jets and losing 34 airplanes of their own\". During the war, he flew around the western front in a helicopter documenting wreckages of Indian warplanes of Soviet origin which included Sukhoi Su-7s and MiG-21s; they were transported to the United States after the war for analysis. Yeager also flew around in his Beechcraft Queen Air, a small passenger aircraft that was assigned to him by the Pentagon, picking up shot-down Indian fighter pilots. The Beechcraft was later destroyed during an air raid by the Indian Air Force at a PAF airbase. Yeager was not present in the aircraft. Edward C. Ingraham, a U.S. diplomat who had served as political counselor to Ambassador Farland in Islamabad, recalled this incident in the Washington Monthly of October 1985: \"After Yeager's Beechcraft was destroyed during an Indian air raid, he raged to his cowering colleagues that the Indian pilot had been specifically instructed by Indira Gandhi to blast his plane. 'It was', he later wrote, 'the Indian way of giving Uncle Sam the finger'\". Yeager was incensed over the incident and demanded U.S. retaliation.",
"title": "Career"
},
{
"paragraph_id": 29,
"text": "On March 1, 1975, following assignments in West Germany and Pakistan, Yeager retired from the Air Force at Norton Air Force Base, California.",
"title": "Career"
},
{
"paragraph_id": 30,
"text": "Yeager made a cameo appearance in the movie The Right Stuff (1983). He played \"Fred\", a bartender at \"Pancho's Place\", which was most appropriate, as Yeager said, \"if all the hours were ever totaled, I reckon I spent more time at her place than in a cockpit over those years\". Sam Shepard portrayed Yeager in the film, which chronicles in part his famous 1947 record-breaking flight. Also in popular culture, Yeager has been referenced several times as being part of the shared Star Trek universe, including having a fictional type of starship named after him and appearing in archival footage within the opening title sequence for the series Star Trek: Enterprise (2001–2005). For that same series, executive producer Rick Berman said that he envisaged the lead character, Captain Jonathan Archer, as being \"halfway between Chuck Yeager and Han Solo.\"",
"title": "Career"
},
{
"paragraph_id": 31,
"text": "For several years in the 1980s, Yeager was connected to General Motors, publicizing ACDelco, the company's automotive parts division. In 1986, he was invited to drive the Chevrolet Corvette pace car for the 70th running of the Indianapolis 500. In 1988, Yeager was again invited to drive the pace car, this time at the wheel of an Oldsmobile Cutlass Supreme. In 1986, President Reagan appointed Yeager to the Rogers Commission that investigated the explosion of the Space Shuttle Challenger.",
"title": "Career"
},
{
"paragraph_id": 32,
"text": "During this time, Yeager also served as a technical adviser for three Electronic Arts flight simulator video games. The games include Chuck Yeager's Advanced Flight Trainer, Chuck Yeager's Advanced Flight Trainer 2.0, and Chuck Yeager's Air Combat. The game manuals featured quotes and anecdotes from Yeager and were well received by players. Missions featured several of Yeager's accomplishments and let players attempt to top his records. Chuck Yeager's Advanced Flight Trainer was Electronic Art's top-selling game for 1987.",
"title": "Career"
},
{
"paragraph_id": 33,
"text": "In 2009, Yeager participated in the documentary The Legend of Pancho Barnes and the Happy Bottom Riding Club, a profile of his friend Pancho Barnes. The documentary was screened at film festivals, aired on public television in the United States, and won an Emmy Award.",
"title": "Career"
},
{
"paragraph_id": 34,
"text": "On October 14, 1997, on the 50th anniversary of his historic flight past Mach 1, he flew a new Glamorous Glennis III, an F-15D Eagle, past Mach 1. The chase plane for the flight was an F-16 Fighting Falcon piloted by Bob Hoover, a longtime test, fighter, and aerobatic pilot who had been Yeager's wingman for the first supersonic flight. At the end of his speech to the crowd in 1997, Yeager concluded, \"All that I am ... I owe to the Air Force\". Later that month, he was the recipient of the Tony Jannus Award for his achievements.",
"title": "Career"
},
{
"paragraph_id": 35,
"text": "On October 14, 2012, on the 65th anniversary of breaking the sound barrier, Yeager did it again at the age of 89, flying as co-pilot in a McDonnell Douglas F-15 Eagle piloted by Captain David Vincent out of Nellis Air Force Base.",
"title": "Career"
},
{
"paragraph_id": 36,
"text": "In 1973, Yeager was inducted into the National Aviation Hall of Fame, arguably aviation's highest honor. In 1974, Yeager received the Golden Plate Award of the American Academy of Achievement. In December 1975, the U.S. Congress awarded Yeager a silver medal \"equivalent to a noncombat Medal of Honor ... for contributing immeasurably to aerospace science by risking his life in piloting the X-1 research airplane faster than the speed of sound on October 14, 1947\". President Gerald Ford presented the medal to Yeager in a ceremony at the White House on December 8, 1976.",
"title": "Awards and decorations"
},
{
"paragraph_id": 37,
"text": "Yeager, who never attended college and was often modest about his background, is considered by many, including Flying Magazine, the California Hall of Fame, the State of West Virginia, National Aviation Hall of Fame, a few U.S. presidents, and the United States Army Air Force, to be one of the greatest pilots of all time. Air & Space/Smithsonian magazine ranked him the fifth greatest pilot of all time in 2003. Despite his lack of higher education, West Virginia's Marshall University named its highest academic scholarship the Society of Yeager Scholars in his honor. Yeager was also the chairman of Experimental Aircraft Association (EAA)'s Young Eagle Program from 1994 to 2004, and was named the program's chairman emeritus.",
"title": "Awards and decorations"
},
{
"paragraph_id": 38,
"text": "In 1966, Yeager was inducted into the International Air & Space Hall of Fame. He was inducted into the International Space Hall of Fame in 1981. He was inducted into the Aerospace Walk of Honor 1990 inaugural class.",
"title": "Awards and decorations"
},
{
"paragraph_id": 39,
"text": "Yeager Airport in Charleston, West Virginia, is named in his honor. The Interstate 64/Interstate 77 bridge over the Kanawha River in Charleston is named in his honor. He also flew directly under the Kanawha Bridge and West Virginia named it the Chuck E. Yeager Bridge. On October 19, 2006, the state of West Virginia also honored Yeager with a marker along Corridor G (part of U.S. Highway 119) in his home Lincoln County, and also renamed part of the highway the Yeager Highway.",
"title": "Awards and decorations"
},
{
"paragraph_id": 40,
"text": "Yeager was an honorary board member of the humanitarian organization Wings of Hope. On August 25, 2009, Governor Arnold Schwarzenegger and Maria Shriver announced that Yeager would be one of 13 California Hall of Fame inductees in The California Museum's yearlong exhibit. The induction ceremony was on December 1, 2009, in Sacramento, California. Flying Magazine ranked Yeager number 5 on its 2013 list of The 51 Heroes of Aviation; for many years, he was the highest-ranked living person on the list.",
"title": "Awards and decorations"
},
{
"paragraph_id": 41,
"text": "The Civil Air Patrol, the volunteer auxiliary of the USAF, awards the Charles E. \"Chuck\" Yeager Award to its senior members as part of its Aerospace Education program.",
"title": "Awards and decorations"
},
{
"paragraph_id": 42,
"text": "",
"title": "Dates of rank"
},
{
"paragraph_id": 43,
"text": "Yeager named his plane after his wife, Glennis, as a good-luck charm: \"You're my good-luck charm, hon. Any airplane I name after you always brings me home.\" Yeager and Glennis moved to Grass Valley, California, after his retirement from the Air Force in 1975. The couple prospered as a result of Yeager's best-selling autobiography, speaking engagements, and commercial ventures. Glennis Yeager died of ovarian cancer in 1990. They had four children (Susan, Don, Mickey, and Sharon). Yeager's son Mickey (Michael) died unexpectedly in Oregon, on March 26, 2011.",
"title": "Personal life"
},
{
"paragraph_id": 44,
"text": "Yeager appeared in a Texas advertisement for George H. W. Bush's 1988 presidential campaign. In 2000, Yeager met actress Victoria Scott D'Angelo on a hiking trail in Nevada County. The pair started dating shortly thereafter, and married in August 2003. Subsequent to the commencement of their relationship, a bitter dispute arose between Yeager, his children and D'Angelo. The children contended that D'Angelo, at least 35 years Yeager's junior, had married him for his fortune. Yeager and D'Angelo both denied the charge. Litigation ensued, in which his children accused D'Angelo of \"undue influence\" on Yeager, and Yeager accused his children of diverting millions of dollars from his assets. In August 2008, the California Court of Appeal ruled for Yeager, finding that his daughter Susan had breached her duty as trustee.",
"title": "Personal life"
},
{
"paragraph_id": 45,
"text": "Yeager lived in Grass Valley, Northern California and died in the afternoon of December 7, 2020 (National Pearl Harbor Remembrance Day), at age 97, in a Los Angeles hospital.",
"title": "Personal life"
}
]
| Brigadier General Charles Elwood Yeager was a United States Air Force officer, flying ace, and record-setting test pilot who in October 1947 became the first pilot in history confirmed to have exceeded the speed of sound in level flight. Yeager was raised in Hamlin, West Virginia. His career began in World War II as a private in the United States Army, assigned to the Army Air Forces in 1941. After serving as an aircraft mechanic, in September 1942, he entered enlisted pilot training and upon graduation was promoted to the rank of flight officer, later achieving most of his aerial victories as a P-51 Mustang fighter pilot on the Western Front, where he was credited with shooting down 11.5 enemy aircraft. On October 12, 1944, he attained "ace in a day" status, shooting down five enemy aircraft in one mission. After the war, Yeager became a test pilot and flew many types of aircraft, including experimental rocket-powered aircraft for the National Advisory Committee for Aeronautics (NACA). Through the NACA program, he became the first human to officially break the sound barrier on October 14, 1947, when he flew the experimental Bell X-1 at Mach 1 at an altitude of 45,000 ft (13,700 m), for which he won both the Collier and Mackay trophies in 1948. He then went on to break several other speed and altitude records in the following years. In 1962, he became the first commandant of the USAF Aerospace Research Pilot School, which trained and produced astronauts for NASA and the Air Force. Yeager later commanded fighter squadrons and wings in Germany, as well as in Southeast Asia during the Vietnam War. In recognition of his achievements and the outstanding performance ratings of those units, he was promoted to brigadier general in 1969 and inducted into the National Aviation Hall of Fame in 1973, retiring on March 1, 1975. His three-war active-duty flying career spanned more than 30 years and took him to many parts of the world, including the Korean War zone and the Soviet Union during the height of the Cold War. Yeager is referred to by many as one of the greatest pilots of all time, and was ranked fifth on Flying's list of the 51 Heroes of Aviation in 2013. Throughout his life, he flew more than 360 different types of aircraft over a 70-year period, and continued to fly for two decades after retirement as a consultant pilot for the United States Air Force. | 2001-08-22T08:00:13Z | 2023-12-10T05:57:37Z | [
"Template:Cite book",
"Template:Cite court",
"Template:Discogs artist",
"Template:Respell",
"Template:Spnd",
"Template:Cvt",
"Template:Ribbon devices",
"Template:Short description",
"Template:US$",
"Template:Dodseal",
"Template:Refbegin",
"Template:Refend",
"Template:Cite news",
"Template:Cite magazine",
"Template:Cbignore",
"Template:ISBN",
"Template:Cite web",
"Template:Cite AV media",
"Template:Authority control",
"Template:Use American English",
"Template:Efn",
"Template:Reflist",
"Template:Cite journal",
"Template:Subject bar",
"Template:Nee",
"Template:Refn",
"Template:Official website",
"Template:Lists of flying aces",
"Template:Cite tweet",
"Template:Webarchive",
"Template:IMDb name",
"Template:Use mdy dates",
"Template:Infobox military person",
"Template:IPAc-en",
"Template:Harvp",
"Template:'s",
"Template:Commons"
]
| https://en.wikipedia.org/wiki/Chuck_Yeager |
6,186 | Cajun cuisine | Cajun cuisine (French: cuisine cadienne [kɥi.zin ka.dʒɛn], Spanish: cocina acadiense) is a style of cooking developed by the Cajun–Acadians who were deported from Acadia to Louisiana during the 18th century and who incorporated West African, French and Spanish cooking techniques into their original cuisine.
Cajun cuisine is sometimes referred to as a 'rustic cuisine', meaning that it is based on locally available ingredients and that preparation is relatively simple.
An authentic Cajun meal is usually a three-pot affair, with one pot dedicated to the main dish, one dedicated to steamed rice, specially made sausages, or some seafood dish, and the third containing whatever vegetable is plentiful or available. Crawfish, shrimp, and andouille sausage are staple meats used in a variety of dishes.
The aromatic vegetables green bell pepper (piment doux), onion, and celery are called "the trinity" by chefs in Cajun and Louisiana Creole cuisines. Roughly diced and combined in cooking, the method is similar to the use of the mirepoix in traditional French cuisine which blends roughly diced carrot, onion, and celery. Additional characteristic aromatics for both the Creole and Cajun versions may include parsley, bay leaf, thyme, green onions, ground cayenne pepper, and ground black pepper. Cayenne and Louisiana-style hot sauce are the primary sources of spice in Cajun cuisine, which usually tends towards a moderate, well-balanced heat, despite the national "Cajun hot" craze of the 1980s and 1990s.
The Acadians were a group of French colonists who lived in Acadia, what is today Eastern Canada. In the mid-18th century, they were deported from Acadia by the British during the French and Indian War in what they termed le Grand Dérangement, and many of them ended up settling in Southern Louisiana.
Due to the extreme change in climate, Acadians were unable to cook their original dishes. Soon, their former culinary traditions were adapted and, in time, incorporated not only Indigenous American traditions but also African-American traditions, as exemplified in the classic Cajun dish gumbo, whose name comes from the West African word for its principal ingredient: in West Africa, "gumbo" means "okra".
Many other meals developed along these lines, adapted in no small part from Haiti, to become what is now considered classic Cajun cuisine traditions (not to be confused with the more modern concept associated with Prudhomme's style).
Up through the 20th century, the meals were not elaborate but rather basic. The public's false perception of "Cajun" cuisine was based on Prudhomme's style of Cajun cooking, which was spicy, flavorful, and not true to the classic form of the cuisine.
Cajun and Creole cuisine are often mistaken for each other, but the origins of Creole cooking lie in New Orleans, while Cajun cooking arose 40 years after the establishment of New Orleans. Today, most restaurants serve dishes that blend the two styles, which Paul Prudhomme dubbed "Louisiana cooking". In home cooking, the individual styles are still kept separate. However, fewer and fewer people cook the classic Cajun dishes that would have been eaten by the original settlers.
Deep-frying of turkeys or oven-roasted turduckens entered southern Louisiana cuisine more recently. Likewise, blackening of fish or chicken and barbecuing of shrimp in the shell are excluded because they were not prepared in traditional Cajun cuisine. Blackening was in fact invented by chef Paul Prudhomme in the 1970s; it became associated with Cajun cooking and was presented as such by him, but it is not a historical or traditional Cajun cooking process.
The following is a partial list of ingredients used in Cajun cuisine and some of the staple ingredients of the Acadian food culture.
Cajun foodways include many ways of preserving meat, some of which are waning due to the availability of refrigeration and mass-produced meat at the grocer. Smoking of meats remains a fairly common practice, but once-common preparations such as turkey or duck confit (preserved in poultry fat, with spices) are now seen even by Acadians as quaint rarities.
Game (and hunting) are still uniformly popular in Acadiana.
The recent increase of catfish farming in the Mississippi Delta has brought about an increase in its usage in Cajun cuisine in place of the more traditional wild-caught trout.
Seafood
Also included in the seafood mix are some so-called trash fish that would not sell at the market because of their high bone-to-meat ratio or the complicated cooking methods they required. These were brought home by fishermen to feed the family. Examples are garfish, black drum (also called gaspergou or just "goo"), croaker, and bream.
Poultry
Pork
Beef and dairy: Though parts of Acadiana are well suited to cattle or dairy farming, beef is not often used in a pre-processed or uniquely Cajun form. It is usually prepared fairly simply as chops, stews, or steaks, taking a cue from Texas to the west. Ground beef is used as is traditional throughout the US, although seasoned differently.
Dairy farming is not as prevalent as in the past, but there are still some farms in the business. There are no unique dairy items prepared in Cajun cuisine. Traditional Cajun and New Orleans Creole-influenced desserts are common.
Other game meats
Thyme, sage, mint, marjoram, savory, and basil are considered sweet herbs. In colonial times, an herbes de Provence mixture would be several sweet herbs tied up in muslin.
Boudin—a type of sausage made from pork, pork liver, rice, garlic, green onions and other spices. It is widely available by the link or pound from butcher shops. Boudin is typically stuffed in a natural casing and has a softer consistency than other, better-known sausage varieties. It is usually served with side dishes such as rice dressing, maque choux or bread. Boudin balls are commonly served in southern Louisiana restaurants and are made by taking the boudin out of the case and frying it in spherical form.
Gumbo—High on the list of favorites of Cajun cooking are the soups called gumbos. Contrary to non-Cajun or Continental beliefs, gumbo does not mean simply "everything in the pot". Gumbo exemplifies the influence of French, Spanish, African and Native American food cultures on Cajun cuisine.
There are two theories as to the etymological origins of the name. "Some believe that gumbo gets its name from the Choctaw word for filé powder, kombo; others suggest it's taken from the West African Bantu name for okra, ki ngombo." Both filé and okra can be used as thickening agents in gumbo. Historically, large amounts of filé were added directly to the pot when okra was out of season. While a distinction between filé gumbo and okra gumbo is still held by some, many people enjoy putting filé in okra gumbo simply as a flavoring. Regardless of which is the dominant thickener, filé is also provided at the table and added to taste.
Many claim that gumbo is a Cajun dish, but gumbo was established long before the Acadian arrival.
Its early existence came via the early French Creole culture of New Orleans, Louisiana, where French, Spanish, and African influences mingled; it was also shaped by later waves of Italian, German, and Irish settlers.
The backbone of a gumbo is roux, as described above. Cajun gumbo typically favors darker roux, often approaching the color of chocolate or coffee beans. Since the starches in the flour break down more with longer cooking time, a dark roux has less thickening power than a lighter one. While the stovetop method is traditional, flour may also be dry-toasted in an oven for a fat-free roux, or a regular roux may be prepared in a microwave oven for a hands-off method. If the roux is for immediate use, the "trinity" may be sautéed in it, which stops the roux from cooking further.
A classic gumbo is made with chicken and andouille, especially in the colder months, but the ingredients vary according to what is available. Seafood gumbos are also very popular in Cajun country.
Jambalaya—The only certain thing that can be said about jambalaya is that it contains rice, some sort of meat (often chicken, ham, sausage, or a combination), seafood (such as shrimp or crawfish), plus other items that may be available. Usually, it will include green peppers, onions, celery, tomatoes and hot chili peppers. Jambalaya is another pre-Acadian dish, established by the Spanish in Louisiana. It may be a tomato-rich New Orleans-style "red" jambalaya of Spanish Creole roots, or a Cajun-style "brown" jambalaya which draws its color and flavor from browned meat and caramelized onions. Historically, tomatoes were not as widely available in Acadiana as in the area around New Orleans, but in modern times, both styles are popular across the state. Brown is the style served at the annual World Jambalaya Festival in Gonzales.
Rice and gravy—Rice and gravy dishes are a staple of Cajun cuisine, usually featuring a brown gravy based on pan drippings, which are deglazed and simmered with extra seasonings and served over steamed or boiled rice.
The dish is traditionally made from cheaper cuts of meat and cooked in a cast-iron pot, typically for an extended time period to let the tough cuts of meat become tender. Beef, pork, chicken or any of a large variety of game meats are used for its preparation. Popular local varieties include hamburger steak, smothered rabbit, turkey necks, and chicken fricassee.
The crawfish boil is a celebratory event where Cajuns boil crawfish, potatoes, onions and corn in large pots over propane cookers. Lemons and small muslin bags containing a mixture of bay leaves, mustard seeds, cayenne pepper, and other spices, commonly known as "crab boil" or "crawfish boil", are added to the water for seasoning.
The results are then dumped onto large, newspaper-draped tables and in some areas covered in Creole/Cajun spice blends, such as REX, Zatarain's, Louisiana Fish Fry, or Tony Chachere's. Also, cocktail sauce, mayonnaise, and hot sauce are sometimes used. The seafood is scooped onto large trays or plates and eaten by hand.
During times when crawfish are not abundant, shrimp and crabs are prepared and served in the same manner.
Attendees are encouraged to "suck the head" of a crawfish by separating the head from the abdomen of the crustacean and sucking out the fat and juices from the head.
Often, newcomers to the crawfish boil or those unfamiliar with the traditions are jokingly warned "not to eat the dead ones." This comes from the common belief that when live crawfish are boiled, their tails curl beneath themselves, but when dead crawfish are boiled, their tails are straight and limp. Seafood boils with crabs and shrimp are also popular.
The traditional Cajun outdoor food event, the boucherie, is hosted by a farmer in the rural areas of Acadiana. Family and friends of the farmer gather to socialize, play games, dance, drink, and share a copious meal of pork and other dishes. Men have the task of slaughtering a hog, cutting it into usable parts, and cooking the main pork dishes, while women have the task of making boudin.
Similar to a family boucherie, the cochon de lait is a food event that revolves around pork but does not need to be hosted by a farmer. Traditionally, a suckling pig was purchased for the event, but in modern cochon de laits, adult pigs are used.
Unlike the family boucherie, a hog is not butchered by the hosts and there are generally not as many guests or activities. The host and male guests have the task of roasting the pig (see pig roast) while female guests bring side dishes.
The traditional Cajun Mardi Gras (see: Courir de Mardi Gras) is a Mardi Gras celebration in rural Cajun Parishes. The tradition originated in the 18th century with the Cajuns of Louisiana, but it was abandoned in the early 20th century because of unwelcome violence associated with the event. In the early 1950s the tradition was revived in Mamou in Evangeline Parish.
The event revolves around male maskers on horseback who ride into the countryside to collect food ingredients for the party later on. They entertain householders with Cajun music, dancing, and festive antics in return for the ingredients. The preferred ingredient is a live chicken, which the householder throws so that the maskers can chase it down (symbolizing a hunt); other ingredients include rice, sausage, vegetables, or frozen chicken.
Unlike at other Cajun events, men take no part in cooking the main course for the party; women prepare the chicken and the other ingredients for the gumbo. Once the festivities begin, the Cajun community members eat and dance to Cajun music until midnight, after which Lent begins.
Three popular local dishes in Acadiana are noted in the Hank Williams song "Jambalaya", namely "Jambalaya and-a crawfish pie and filé gumbo". | [
{
"paragraph_id": 0,
"text": "Cajun cuisine (French: cuisine cadienne [kɥi.zin ka.dʒɛn], Spanish: cocina acadiense) is a style of cooking developed by the Cajun–Acadians who were deported from Acadia to Louisiana during the 18th century and who incorporated West African, French and Spanish cooking techniques into their original cuisine.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cajun cuisine is sometimes referred to as a 'rustic cuisine', meaning that it is based on locally available ingredients and that preparation is relatively simple.",
"title": ""
},
{
"paragraph_id": 2,
"text": "An authentic Cajun meal is usually a three-pot affair, with one pot dedicated to the main dish, one dedicated to steamed rice, specially made sausages, or some seafood dish, and the third containing whatever vegetable is plentiful or available. Crawfish, shrimp, and andouille sausage are staple meats used in a variety of dishes.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The aromatic vegetables green bell pepper (piment doux), onion, and celery are called \"the trinity\" by chefs in Cajun and Louisiana Creole cuisines. Roughly diced and combined in cooking, the method is similar to the use of the mirepoix in traditional French cuisine which blends roughly diced carrot, onion, and celery. Additional characteristic aromatics for both the Creole and Cajun versions may include parsley, bay leaf, thyme, green onions, ground cayenne pepper, and ground black pepper. Cayenne and Louisiana-style hot sauce are the primary sources of spice in Cajun cuisine, which usually tends towards a moderate, well-balanced heat, despite the national \"Cajun hot\" craze of the 1980s and 1990s.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Acadians were a group of French colonists who lived in Acadia, what is today Eastern Canada. In the mid-18th century, they were deported from Acadia by the British during the French and Indian War in what they termed le Grand Dérangement, and many of them ended up settling in Southern Louisiana.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Due to the extreme change in climate, Acadians were unable to cook their original dishes. Soon, their former culinary traditions were adapted and, in time, incorporated not only Indigenous American traditions, but also African-American traditions—as is exemplified in the classic Cajun dish \"Gumbo\", which is named for its principal ingredient (Okra) using the West African name for that very ingredient: \"Gumbo,\" in West Africa, means \"Okra\".",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Many other meals developed along these lines, adapted in no small part from Haiti, to become what is now considered classic Cajun cuisine traditions (not to be confused with the more modern concept associated with Prudhomme's style).",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Up through the 20th century, the meals were not elaborate but instead, rather basic. The public's false perception of \"Cajun\" cuisine was based on Prudhomme's style of Cajun cooking, which was spicy, flavorful, and not true to the classic form of the cuisine.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Cajun and Creole cuisine have been mistaken to be the same, but the origins of Creole cooking began in New Orleans, and Cajun cooking came 40 years after the establishment of New Orleans. Today, most restaurants serve dishes that consist of Cajun styles, which Paul Prudhomme dubbed \"Louisiana cooking\". In home-cooking, these individual styles are still kept separate. However, there are fewer and fewer people cooking the classic Cajun dishes that would have been eaten by the original settlers.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Deep-frying of turkeys or oven-roasted turduckens entered southern Louisiana cuisine more recently. Also, blackening of fish or chicken and barbecuing of shrimp in the shell are excluded because they were not prepared in traditional Cajun cuisine. Blackening was actually an invention by chef Paul Prudhomme in the 1970s, becoming associated with Cajun cooking, and presented as such by him, but is not a true historical or traditional Cajun cooking process.",
"title": "Cajun cooking methods"
},
{
"paragraph_id": 10,
"text": "The following is a partial list of ingredients used in Cajun cuisine and some of the staple ingredients of the Acadian food culture.",
"title": "Ingredients"
},
{
"paragraph_id": 11,
"text": "Cajun foodways include many ways of preserving meat, some of which are waning due to the availability of refrigeration and mass-produced meat at the grocer. Smoking of meats remains a fairly common practice, but once-common preparations such as turkey or duck confit (preserved in poultry fat, with spices) are now seen even by Acadians as quaint rarities.",
"title": "Ingredients"
},
{
"paragraph_id": 12,
"text": "Game (and hunting) are still uniformly popular in Acadiana.",
"title": "Ingredients"
},
{
"paragraph_id": 13,
"text": "The recent increase of catfish farming in the Mississippi Delta has brought about an increase in its usage in Cajun cuisine in place of the more traditional wild-caught trout.",
"title": "Ingredients"
},
{
"paragraph_id": 14,
"text": "Seafood",
"title": "Ingredients"
},
{
"paragraph_id": 15,
"text": "Also included in the seafood mix are some so-called trash fish that would not sell at the market because of their high bone to meat ratio or required complicated cooking methods. These were brought home by fishermen to feed the family. Examples are garfish, black drum also called gaspergou or just \"goo\", croaker, and bream.",
"title": "Ingredients"
},
{
"paragraph_id": 16,
"text": "Poultry",
"title": "Ingredients"
},
{
"paragraph_id": 17,
"text": "Pork",
"title": "Ingredients"
},
{
"paragraph_id": 18,
"text": "Beef and dairy Though parts of Acadiana are well suited to cattle or dairy farming, beef is not often used in a pre-processed or uniquely Cajun form. It is usually prepared fairly simply as chops, stews, or steaks, taking a cue from Texas to the west. Ground beef is used as is traditional throughout the US, although seasoned differently.",
"title": "Ingredients"
},
{
"paragraph_id": 19,
"text": "Dairy farming is not as prevalent as in the past, but there are still some farms in the business. There are no unique dairy items prepared in Cajun cuisine. Traditional Cajun and New Orleans Creole-influenced desserts are common.",
"title": "Ingredients"
},
{
"paragraph_id": 20,
"text": "Other game meats",
"title": "Ingredients"
},
{
"paragraph_id": 21,
"text": "Thyme, sage, mint, marjoram, savory, and basil are considered sweet herbs. In Colonial times a herbes de Provence would be several sweet herbs tied up in a muslin.",
"title": "Ingredients"
},
{
"paragraph_id": 22,
"text": "Boudin—a type of sausage made from pork, pork liver, rice, garlic, green onions and other spices. It is widely available by the link or pound from butcher shops. Boudin is typically stuffed in a natural casing and has a softer consistency than other, better-known sausage varieties. It is usually served with side dishes such as rice dressing, maque choux or bread. Boudin balls are commonly served in southern Louisiana restaurants and are made by taking the boudin out of the case and frying it in spherical form.",
"title": "Cajun dishes"
},
{
"paragraph_id": 23,
"text": "Gumbo—High on the list of favorites of Cajun cooking are the soups called gumbos. Contrary to non-Cajun or Continental beliefs, gumbo does not mean simply \"everything in the pot\". Gumbo exemplifies the influence of French, Spanish, African and Native American food cultures on Cajun cuisine.",
"title": "Cajun dishes"
},
{
"paragraph_id": 24,
"text": "There are two theories as to the etymological origins of the name. \"Some believe that gumbo gets its name from the Choctaw word for filé powder, kombo; others suggest it's taken from the West African Bantu name for okra, ki ngombo.\" Both filé and okra can be used as thickening agents in gumbo. Historically, large amounts of filé were added directly to the pot when okra was out of season. While a distinction between filé gumbo and okra gumbo is still held by some, many people enjoy putting filé in okra gumbo simply as a flavoring. Regardless of which is the dominant thickener, filé is also provided at the table and added to taste.",
"title": "Cajun dishes"
},
{
"paragraph_id": 25,
"text": "Many claim that gumbo is a Cajun dish, but gumbo was established long before the Acadian arrival.",
"title": "Cajun dishes"
},
{
"paragraph_id": 26,
"text": "Its early existence came via the early French Creole culture in New Orleans, Louisiana, where French, Spanish and Africans frequented and also influenced by later waves of Italian, German and Irish settlers.",
"title": "Cajun dishes"
},
{
"paragraph_id": 27,
"text": "The backbone of a gumbo is roux, as described above. Cajun gumbo typically favors darker roux, often approaching the color of chocolate or coffee beans. Since the starches in the flour break down more with longer cooking time, a dark roux has less thickening power than a lighter one. While the stovetop method is traditional, flour may also be dry-toasted in an oven for a fat-free roux, or a regular roux may be prepared in a microwave oven for a hands-off method. If the roux is for immediate use, the \"trinity\" may be sauteed in it, which stops the cooking process.",
"title": "Cajun dishes"
},
{
"paragraph_id": 28,
"text": "A classic gumbo is made with chicken and andouille, especially in the colder months, but the ingredients vary according to what is available. Seafood gumbos are also very popular in Cajun country.",
"title": "Cajun dishes"
},
{
"paragraph_id": 29,
"text": "Jambalaya—The only certain thing that can be said about jambalaya is that it contains rice, some sort of meat (often chicken, ham, sausage, or a combination), seafood (such as shrimp or crawfish), plus other items that may be available. Usually, it will include green peppers, onions, celery, tomatoes and hot chili peppers. This is also a great pre-Acadian dish, established by the Spanish in Louisiana. Jambalaya may be a tomato-rich New Orleans-style \"red\" jambalaya of Spanish Creole roots, or a Cajun-style \"brown\" jambalaya which draws its color and flavor from browned meat and caramelized onions. Historically, tomatoes were not as widely available in Acadiana as the area around New Orleans, but in modern times, both styles are popular across the state. Brown is the style served at the annual World Jambalaya Festival in Gonzales.",
"title": "Cajun dishes"
},
{
"paragraph_id": 30,
"text": "Rice and gravy—Rice and gravy dishes are a staple of Cajun cuisine and is usually a brown gravy based on pan drippings, which are deglazed and simmered with extra seasonings and served over steamed or boiled rice.",
"title": "Cajun dishes"
},
{
"paragraph_id": 31,
"text": "The dish is traditionally made from cheaper cuts of meat and cooked in a cast-iron pot, typically for an extended time period to let the tough cuts of meat become tender. Beef, pork, chicken or any of a large variety of game meats are used for its preparation. Popular local varieties include hamburger steak, smothered rabbit, turkey necks, and chicken fricassee.",
"title": "Cajun dishes"
},
{
"paragraph_id": 32,
"text": "The crawfish boil is a celebratory event where Cajuns boil crawfish, potatoes, onions and corn in large pots over propane cookers. Lemons and small muslin bags containing a mixture of bay leaves, mustard seeds, cayenne pepper, and other spices, commonly known as \"crab boil\" or \"crawfish boil\" are added to the water for seasoning.",
"title": "Cajun dishes"
},
{
"paragraph_id": 33,
"text": "The results are then dumped onto large, newspaper-draped tables and in some areas covered in Creole/Cajun spice blends, such as REX, Zatarain's, Louisiana Fish Fry, or Tony Chachere's. Also, cocktail sauce, mayonnaise, and hot sauce are sometimes used. The seafood is scooped onto large trays or plates and eaten by hand.",
"title": "Cajun dishes"
},
{
"paragraph_id": 34,
"text": "During times when crawfish are not abundant, shrimp and crabs are prepared and served in the same manner.",
"title": "Cajun dishes"
},
{
"paragraph_id": 35,
"text": "Attendees are encouraged to \"suck the head\" of a crawfish by separating the head from the abdomen of the crustacean and sucking out the fat and juices from the head.",
"title": "Cajun dishes"
},
{
"paragraph_id": 36,
"text": "Often, newcomers to the crawfish boil or those unfamiliar with the traditions are jokingly warned \"not to eat the dead ones.\" This comes from the common belief that when live crawfish are boiled, their tails curl beneath themselves, but when dead crawfish are boiled, their tails are straight and limp. Seafood boils with crabs and shrimp are also popular.",
"title": "Cajun dishes"
},
{
"paragraph_id": 37,
"text": "The traditional Cajun outdoor food event is hosted by a farmer in the rural areas of Acadiana. Family and friends of the farmer gather to socialize, play games, dance, drink, and have a copious meal consisting of hog and other dishes. Men have the task of slaughtering a hog, cutting it into usable parts, and cooking the main pork dishes while women have the task of making boudin.",
"title": "Cajun dishes"
},
{
"paragraph_id": 38,
"text": "Similar to a family boucherie, the cochon de lait is a food event that revolves around pork but does not need to be hosted by a farmer. Traditionally, a suckling pig was purchased for the event, but in modern cochon de laits, adult pigs are used.",
"title": "Cajun dishes"
},
{
"paragraph_id": 39,
"text": "Unlike the family boucherie, a hog is not butchered by the hosts and there are generally not as many guests or activities. The host and male guests have the task of roasting the pig (see pig roast) while female guests bring side dishes.",
"title": "Cajun dishes"
},
{
"paragraph_id": 40,
"text": "The traditional Cajun Mardi Gras (see: Courir de Mardi Gras) is a Mardi Gras celebration in rural Cajun Parishes. The tradition originated in the 18th century with the Cajuns of Louisiana, but it was abandoned in the early 20th century because of unwelcome violence associated with the event. In the early 1950s the tradition was revived in Mamou in Evangeline Parish.",
"title": "Cajun dishes"
},
{
"paragraph_id": 41,
"text": "The event revolves around male maskers on horseback who ride into the countryside to collect food ingredients for the party later on. They entertain householders with Cajun music, dancing, and festive antics in return for the ingredients. The preferred ingredient is a live chicken in which the householder throws the chicken to allow the maskers to chase it down (symbolizing a hunt), but other ingredients include rice, sausage, vegetables, or frozen chicken.",
"title": "Cajun dishes"
},
{
"paragraph_id": 42,
"text": "Unlike other Cajun events, men take no part in cooking the main course for the party, and women prepare the chicken and ingredients for the gumbo. Once the festivities begin, the Cajun community members eat and dance to Cajun music until midnight after which is the beginning of Lent.",
"title": "Cajun dishes"
},
{
"paragraph_id": 43,
"text": "Three popular local dishes in Acadiana are noted in the Hank Williams song \"Jambalaya\", namely \"Jambalaya and-a crawfish pie and filé gumbo\".",
"title": "In popular culture"
}
]
| Cajun cuisine is a style of cooking developed by the Cajun–Acadians who were deported from Acadia to Louisiana during the 18th century and who incorporated West African, French and Spanish cooking techniques into their original cuisine. Cajun cuisine is sometimes referred to as a 'rustic cuisine', meaning that it is based on locally available ingredients and that preparation is relatively simple. An authentic Cajun meal is usually a three-pot affair, with one pot dedicated to the main dish, one dedicated to steamed rice, specially made sausages, or some seafood dish, and the third containing whatever vegetable is plentiful or available. Crawfish, shrimp, and andouille sausage are staple meats used in a variety of dishes. The aromatic vegetables green bell pepper, onion, and celery are called "the trinity" by chefs in Cajun and Louisiana Creole cuisines. Roughly diced and combined in cooking, the method is similar to the use of the mirepoix in traditional French cuisine which blends roughly diced carrot, onion, and celery. Additional characteristic aromatics for both the Creole and Cajun versions may include parsley, bay leaf, thyme, green onions, ground cayenne pepper, and ground black pepper. Cayenne and Louisiana-style hot sauce are the primary sources of spice in Cajun cuisine, which usually tends towards a moderate, well-balanced heat, despite the national "Cajun hot" craze of the 1980s and 1990s. | 2001-10-07T00:05:56Z | 2023-09-27T17:04:42Z | [
"Template:Commons category",
"Template:Lang-es",
"Template:Nbsp",
"Template:Cite book",
"Template:Cite news",
"Template:Authority control",
"Template:American cuisine",
"Template:Reflist",
"Template:Rp",
"Template:R",
"Template:IPAc-en",
"Template:Cite web",
"Template:Cite journal",
"Template:Cajun cuisine",
"Template:Lang-fr",
"Template:Lang",
"Template:Cuisine",
"Template:Citation needed",
"Template:Short description",
"Template:IPA-fr"
]
| https://en.wikipedia.org/wiki/Cajun_cuisine |
6,187 | Cologne | Cologne (/kəˈloʊn/ kə-LOHN; German: Köln [kœln] ; Kölsch: Kölle [ˈkœlə] ) is the largest city of the German state of North Rhine-Westphalia and the fourth-most populous city of Germany with nearly 1.1 million inhabitants in the city proper and over 3.1 million people in the Cologne Bonn urban region. Centered on the left (west) bank of the Rhine, Cologne is about 35 km (22 mi) southeast of the North Rhine-Westphalia state capital Düsseldorf and 25 km (16 mi) northwest of Bonn, the former capital of West Germany.
The city's medieval Catholic Cologne Cathedral (Kölner Dom) is the third-tallest church and tallest cathedral in the world. It was constructed to house the Shrine of the Three Kings and is a globally recognized landmark and one of the most visited sights and pilgrimage destinations in Europe. The cityscape is further shaped by the Twelve Romanesque churches of Cologne, and Cologne is famous for Eau de Cologne, which has been produced in the city since 1709; "cologne" has since come to be a generic term.
Cologne was founded and established in Germanic Ubii territory in the 1st century CE as the Roman Colonia Agrippina, hence its name. Agrippina was later dropped (except in Latin), and Colonia became the name of the city in its own right, which developed into modern German as Köln. Cologne, the French version of the city's name, has become standard in English as well. Cologne functioned as the capital of the Roman province of Germania Inferior and as the headquarters of the Roman military in the region until occupied by the Franks in 462. During the Middle Ages the city flourished thanks to its location on one of the most important trade routes between eastern and western Europe (including the Brabant Road, Via Regia and Publica). Cologne was a free imperial city of the Holy Roman Empire and one of the major members of the Hanseatic League trading confederation. It was one of the largest European cities in medieval and Renaissance times.
Prior to World War II, the city had undergone occupations by the French (1794–1815) and the British (1918–1926), and was part of Prussia beginning in 1815. Cologne was one of the most heavily bombed cities in Germany during World War II. The bombing reduced the population by 93%, mainly due to evacuation, and destroyed around 80% of the millennia-old city center. The post-war rebuilding has resulted in a mixed cityscape, restoring most major historic landmarks like city gates and churches (31 of them being Romanesque). The city boasts around 9,000 historic buildings.
Cologne is a major cultural center for the Rhineland; it hosts more than 30 museums and hundreds of galleries. There are many institutions of higher education, most notably the University of Cologne, one of Europe's oldest and largest universities; the Technical University of Cologne, Germany's largest university of applied sciences; and the German Sport University Cologne. It hosts three Max Planck science institutes and is a major research hub for the aerospace industry, with the German Aerospace Center and the European Astronaut Centre headquarters. It also has a significant chemical and automobile industry. Cologne Bonn Airport is a regional hub, the main airport for the region being Düsseldorf Airport. The Cologne Trade Fair hosts a number of trade shows.
The first urban settlement on the grounds of modern-day Cologne was Oppidum Ubiorum, founded in 38 BCE by the Ubii, a Cisrhenian Germanic tribe. In 50 CE, the Romans founded Colonia Claudia Ara Agrippinensium (Cologne) on the river Rhine and the city became the provincial capital of Germania Inferior in 85 CE. It was also known as Augusta Ubiorum. Considerable Roman remains can be found in present-day Cologne, especially near the wharf area, where a 1,900-year-old Roman boat was discovered in late 2007. From 260 to 271, Cologne was the capital of the Gallic Empire under Postumus, Marius, and Victorinus. In 310, under emperor Constantine I, a bridge was built over the Rhine at Cologne. Roman imperial governors resided in the city and it became one of the most important trade and production centers in the Roman Empire north of the Alps. Cologne is shown on the 4th century Peutinger Map.
Maternus, who was elected as bishop in 313, was the first known bishop of Cologne. The city was the capital of a Roman province until it was occupied by the Ripuarian Franks in 462. Parts of the original Roman sewers are preserved underneath the city, with the new sewerage system having opened in 1890.
After the destruction of the Second Temple in the Siege of Jerusalem and the associated dispersion (diaspora) of the Jews, there is evidence of a Jewish community in Cologne. In 321 CE, Emperor Constantine approved the settlement of a Jewish community with all the freedoms of Roman citizens. It is assumed that it was located near the Marspforte within the city wall. The Edict of Constantine to the Jews is the oldest documented evidence of a Jewish community in Germany.
Early medieval Cologne was part of Austrasia within the Frankish Empire. Cunibert, made bishop of Cologne in 623, was an important advisor to the Merovingian King Dagobert I and served with domesticus Pepin of Landen as tutor to the king's son and heir Siegebert III, the future king of Austrasia. In 716, Charles Martel commanded an army for the first time and suffered the only defeat of his life when Chilperic II, King of Neustria, invaded Austrasia and the city fell to him in the Battle of Cologne. Charles fled to the Eifel mountains, rallied supporters and took the city back that same year after defeating Chilperic in the Battle of Amblève. Cologne had been the seat of a bishop since the Roman period; under Charlemagne, in 795, bishop Hildebold was promoted to archbishop. In the 843 Treaty of Verdun Cologne fell into the dominion of Lothair I's Middle Francia – later called Lotharingia (Lower Lorraine).
In 953, the archbishops of Cologne first gained noteworthy secular power when bishop Bruno was appointed as duke by his brother Otto I, King of Germany. In order to weaken the secular nobility, who threatened his power, Otto endowed Bruno and his archiepiscopal successors with the prerogatives of secular princes, thus establishing the Electorate of Cologne, formed by the temporal possessions of the archbishopric and included in the end a strip of territory along the left Bank of the Rhine east of Jülich, as well as the Duchy of Westphalia on the other side of the Rhine, beyond Berg and Mark. By the end of the 12th century, the Archbishop of Cologne was one of the seven electors of the Holy Roman Emperor. Besides being prince elector, he was Archchancellor of Italy as well, technically from 1238 and permanently from 1263 until 1803.
Following the Battle of Worringen in 1288, Cologne gained its independence from the archbishops and became a Free City. Archbishop Sigfried II von Westerburg was forced to reside in Bonn. The archbishop nevertheless preserved the right of capital punishment. Thus the municipal council (though in strict political opposition towards the archbishop) depended upon him in all matters concerning criminal justice. This included torture, the sentence for which was only allowed to be handed down by the episcopal judge known as the greve. This legal situation lasted until the French conquest of Cologne.
Besides its economic and political significance Cologne also became an important centre of medieval pilgrimage, when Cologne's archbishop, Rainald of Dassel, gave the relics of the Three Wise Men to Cologne's cathedral in 1164 (after they had been taken from Milan). Besides the three magi Cologne preserves the relics of Saint Ursula and Albertus Magnus.
Cologne's location on the river Rhine placed it at the intersection of the major trade routes between east and west as well as the main south–north Western European trade route from Venice to the Netherlands; even by the mid-10th century, merchants in the town were already known for their prosperity and luxurious standard of living due to the availability of trade opportunities. The intersection of these trade routes was the basis of Cologne's growth. By the end of the 12th century, Archbishop Phillip von Heinsberg enclosed the entire city with walls. By 1300 the city population was 50,000–55,000. Cologne was a member of the Hanseatic League in 1475, when Frederick III confirmed the city's imperial immediacy. Cologne was so influential in regional commerce that its systems of weights and measurements (e.g. the Cologne mark) were used throughout Europe.
The economic structures of medieval and early modern Cologne were characterised by the city's status as a major harbour and transport hub on the Rhine. Craftsmanship was organised by self-administering guilds, some of which were exclusive to women.
As a free imperial city, Cologne was a self-ruling state within the Holy Roman Empire, an imperial estate with seat and vote at the Imperial Diet, and as such had the right (and obligation) to contribute to the defense of the Empire and maintain its own military force. As they wore a red uniform, these troops were known as the Rote Funken (red sparks). These soldiers were part of the Army of the Holy Roman Empire ("Reichskontingent"). They fought in the wars of the 17th and 18th century, including the wars against revolutionary France in which the small force was almost completely wiped out in combat. The tradition of these troops is preserved as a military persiflage by Cologne's most outstanding carnival society, the Rote Funken.
The Free Imperial City of Cologne must not be confused with the Electorate of Cologne, which was a state of its own within the Holy Roman Empire. Since the second half of the 16th century the majority of archbishops were drawn from the Bavarian Wittelsbach dynasty. Due to the free status of Cologne, the archbishops were usually not allowed to enter the city. Thus they took up residence in Bonn and later in Brühl on the Rhine. As members of an influential and powerful family, and supported by their outstanding status as electors, the archbishops of Cologne repeatedly challenged and threatened the free status of Cologne during the 17th and 18th centuries, resulting in complicated affairs, which were handled by diplomatic means and propaganda as well as by the supreme courts of the Holy Roman Empire.
Cologne lost its status as a free city during the French period. According to the Treaty of Lunéville (1801) all the territories of the Holy Roman Empire on the left bank of the Rhine were officially incorporated into the French Republic (which had already occupied Cologne in 1794). Thus this region later became part of Napoleon's Empire. Cologne was part of the French Département Roer (named after the river Roer, German: Rur) with Aachen (French: Aix-la-Chapelle) as its capital. The French modernised public life, for example by introducing the Napoleonic code and removing the old elites from power. The Napoleonic code remained in use on the left bank of the Rhine until 1900, when a unified civil code (the Bürgerliches Gesetzbuch) was introduced in the German Empire. In 1815 at the Congress of Vienna, Cologne was made part of the Kingdom of Prussia, first in the Province of Jülich-Cleves-Berg and then the Rhine Province.
The permanent tensions between the Roman Catholic Rhineland and the overwhelmingly Protestant Prussian state repeatedly escalated with Cologne being in the focus of the conflict. In 1837 the archbishop of Cologne, Clemens August von Droste-Vischering, was arrested and imprisoned for two years after a dispute over the legal status of marriages between Protestants and Roman Catholics (Mischehenstreit). In 1874, during the Kulturkampf, Archbishop Paul Melchers was imprisoned before taking asylum in the Netherlands. These conflicts alienated the Catholic population from Berlin and contributed to a deeply felt anti-Prussian resentment, which was still significant after World War II, when the former mayor of Cologne, Konrad Adenauer, became the first West German chancellor.
During the 19th and 20th centuries, Cologne absorbed numerous surrounding towns, and by World War I had already grown to 700,000 inhabitants. Industrialisation changed the city and spurred its growth. Vehicle and engine manufacturing was especially successful, though the heavy industry was less ubiquitous than in the Ruhr area. The cathedral, started in 1248 but abandoned around 1560, was eventually finished in 1880 not just as a place of worship but also as a German national monument celebrating the newly founded German empire and the continuity of the German nation since the Middle Ages. Some of this urban growth occurred at the expense of the city's historic heritage with much being demolished (for example, the city walls or the area around the cathedral) and sometimes replaced by contemporary buildings.
Cologne was designated as one of the Fortresses of the German Confederation. It was turned into a heavily armed fortress (opposing the French and Belgian fortresses of Verdun and Liège) with two fortified belts surrounding the city, the remains of which can be seen to this day. The military demands on what became Germany's largest fortress presented a significant obstacle to urban development, with forts, bunkers, and wide defensive dugouts completely encircling the city and preventing expansion; this resulted in a very densely built-up area within the city itself.
During World War I Cologne was the target of several minor air raids but suffered no significant damage. Cologne was occupied by the British Army of the Rhine until 1926, under the terms of the Armistice and the subsequent Versailles Peace Treaty. In contrast with the harsh behaviour of the French occupation troops in Germany, the British forces were more lenient to the local population. Konrad Adenauer, the mayor of Cologne from 1917 until 1933 and later a West German chancellor, acknowledged the political impact of this approach, especially since Britain had opposed French demands for a permanent Allied occupation of the entire Rhineland.
As part of the demilitarisation of the Rhineland, the city's fortifications had to be dismantled. This was an opportunity to create two green belts (Grüngürtel) around the city by converting the fortifications and their fields of fire into large public parks. This was not completed until 1933. In 1919 the University of Cologne, closed by the French in 1798, was reopened. This was considered to be a replacement for the loss of the University of Strasbourg on the west bank of the Rhine, which reverted to France with the rest of Alsace. Cologne prospered during the Weimar Republic (1919–33), and progress was made especially in public governance, city planning, housing and social affairs. Social housing projects were considered exemplary and were copied by other German cities. Cologne competed to host the Olympics, and a modern sports stadium was erected at Müngersdorf. When the British occupation ended, the prohibition of civil aviation was lifted and Cologne Butzweilerhof Airport soon became a hub for national and international air traffic, second in Germany only to Berlin Tempelhof Airport.
The democratic parties lost the local elections in Cologne in March 1933 to the Nazi Party and other extreme-right parties. The Nazis then arrested the Communist and Social Democratic members of the city assembly, and Mayor Adenauer was dismissed. Compared to some other major cities, however, the Nazis never gained decisive support in Cologne. (Significantly, the number of votes cast for the Nazi Party in Reichstag elections had always been below the national average.) By 1939, the population had risen to 772,221 inhabitants.
During World War II, Cologne was a Military Area Command Headquarters (Militärbereichshauptkommandoquartier) for Wehrkreis VI (headquartered at Münster). Cologne was under the command of Lieutenant-General Freiherr Roeder von Diersburg, who was responsible for military operations in Bonn, Siegburg, Aachen, Jülich, Düren, and Monschau. Cologne was home to the 211th Infantry Regiment and the 26th Artillery Regiment.
The Allies dropped 44,923.2 tons of bombs on the city during World War II, destroying 61% of its built up area. During the Bombing of Cologne in World War II, Cologne endured 262 air raids by the Western Allies, which caused approximately 20,000 civilian casualties and almost completely wiped out the central part of the city. During the night of 31 May 1942, Cologne was the target of "Operation Millennium", the first 1,000 bomber raid by the Royal Air Force in World War II. 1,046 heavy bombers attacked their target with 1,455 tons of explosives, approximately two-thirds of which were incendiary. This raid lasted about 75 minutes, destroyed 600 acres (243 ha) of built-up area (61%), killed 486 civilians and made 59,000 people homeless. The devastation was recorded by Hermann Claasen from 1942 until the end of the war, and presented in his exhibition and book of 1947 Singing in the furnace. Cologne – Remains of an old city.
Cologne was taken by the American First Army in early March 1945 during the Invasion of Germany after a battle. By the end of the war, the population of Cologne had been reduced by 95%. This loss was mainly caused by a massive evacuation of the people to more rural areas. The same happened in many other German cities in the last two years of war. By the end of 1945, however, the population had already recovered to approximately 450,000. By the end of the war, essentially all of Cologne's pre-war Jewish population of 11,000 had been deported or killed by the Nazis. The six synagogues of the city were destroyed. The synagogue on Roonstraße was rebuilt in 1959.
Despite Cologne's status as the largest city in the region, nearby Düsseldorf was chosen as the political capital of the federated state of North Rhine-Westphalia. With Bonn being chosen as the provisional federal capital (provisorische Bundeshauptstadt) and seat of the government of the Federal Republic of Germany (then informally West Germany), Cologne benefited by being sandwiched between two important political centres. The city became–and still is–home to a number of federal agencies and organizations. After reunification in 1990, Berlin was made the capital of Germany.
In 1945 architect and urban planner Rudolf Schwarz called Cologne the "world's greatest heap of rubble". Schwarz designed the master plan for reconstruction in 1947, which included the construction of several new thoroughfares through the city centre, especially the Nord-Süd-Fahrt ("North-South-Drive"). The master plan took into consideration the fact that even shortly after the war a large increase in automobile traffic could be anticipated. Plans for new roads had already, to a certain degree, evolved under the Nazi administration, but the actual construction became easier when most of the city centre was in ruins.
The destruction of 95% of the city centre, including the famous Twelve Romanesque churches such as St. Gereon, Great St. Martin, St. Maria im Kapitol and several other monuments in World War II, meant a tremendous loss of cultural treasures. The rebuilding of those churches and other landmarks such as the Gürzenich event hall was not undisputed among leading architects and art historians at that time, but in most cases, civil intention prevailed. The reconstruction lasted until the 1990s, when the Romanesque church of St. Kunibert was finished.
In 1959, the city's population reached pre-war numbers again. It then grew steadily, exceeding 1 million for about one year from 1975. It remained just below that until mid-2010, when it exceeded 1 million again.
In the 1980s and 1990s Cologne's economy prospered for two main reasons. The first was the growth in the number of media companies, both in the private and public sectors; they are especially catered for in the newly developed Media Park, which creates a strong visual focal point in Cologne's city centre and includes the KölnTurm, one of Cologne's most prominent high-rise buildings. The second was the permanent improvement of the diverse traffic infrastructure, which made Cologne one of the most easily accessible metropolitan areas in Central Europe.
Due to the economic success of the Cologne Trade Fair, the city arranged a large extension to the fair site in 2005. At the same time the original buildings, which date back to the 1920s, were rented out to RTL, Germany's largest private broadcaster, as their new corporate headquarters.
Cologne was the focus of the 2015-16 New Year's Eve sexual assaults in Germany, with over 500 women reporting that they were sexually assaulted by persons of African and Arab appearance.
The metropolitan area encompasses over 405 square kilometres (156 square miles), extending around a central point that lies at 50° 56′ 33″ N latitude and 6° 57′ 32″ E longitude. The city's highest point is 118 m (387 ft) above sea level (the Monte Troodelöh) and its lowest point is 37.5 m (123 ft) above sea level (the Worringer Bruch). The city of Cologne lies within the larger area of the Cologne Lowland, a cone-shaped area of the central Rhineland that lies between Bonn, Aachen and Düsseldorf.
Cologne is divided into 9 boroughs (Stadtbezirke) and 85 districts (Stadtteile):
Located in the Rhine-Ruhr area, Cologne is one of the warmest cities in Germany. It has a temperate–oceanic climate (Köppen: Cfb) with cool winters and warm summers. It is also one of the cloudiest cities in Germany, with just 1,567.5 hours of sun a year. Its average annual temperature is 10.7 °C (51 °F): 15.4 °C (60 °F) during the day and 6.1 °C (43 °F) at night. In January, the mean temperature is 3.0 °C (37 °F), while the mean temperature in July is 19.0 °C (66 °F). The record high temperature of 40.3 °C (105 °F) occurred on 25 July 2019 during the July 2019 European heat wave, in which Cologne saw three consecutive days over 38.0 °C (100 °F). The inner urban neighbourhoods in particular experience a greater number of hot days, as well as significantly higher night-time temperatures, compared to the surrounding area (including the airport, where temperatures are recorded). Still, temperatures can vary noticeably over the course of a month, with warmer and colder spells. Precipitation is spread evenly throughout the year, with a slight peak in summer due to showers and thunderstorms.
Cologne is regularly affected by flooding from the Rhine and is considered the most flood-prone European city. A city agency (Stadtentwässerungsbetriebe Köln, "Cologne Urban Drainage Operations") manages an extensive flood control system which includes both permanent and mobile flood walls, protection from rising waters for buildings close to the river banks, monitoring and forecasting systems, pumping stations and programmes to create or protect floodplains, and river embankments. The system was redesigned after a 1993 flood, which resulted in heavy damage.
In the Roman Empire, the city was large and rich with a population of 40,000 in 100–200 AD. The city was home to around 20,000 people in 1000 AD, growing to 50,000 in 1200 AD. The Rhineland metropolis still had 50,000 residents in 1300 AD.
Cologne is the fourth-largest city in Germany after Berlin, Hamburg and Munich. As of 31 December 2021, there were 1,079,301 people registered as living in Cologne in an area of 404.99 km² (156.37 sq mi), which makes Cologne the third-largest city in Germany by area. The population density was 2,700/km² (7,000/sq mi). Cologne first reached a population of 1,000,000 in 1975 due to the incorporation of Wesseling; however, this was reversed after public opposition. In 2009 Cologne's population again reached 1,000,000, and it became one of the four cities in Germany with a population exceeding 1 million. The metropolitan area of the Cologne Bonn Region is home to 3,573,500 people living at a density of 4,415/km² (11,430/sq mi). It is part of the polycentric megacity region Rhine-Ruhr, with a population of over 11,000,000 people.
There were 551,528 women and 527,773 men in Cologne. In 2021, there were 11,127 births, 5,844 marriages, 1,808 divorces, and 10,536 deaths in Cologne. In the city, the population was spread out, with 16.3% under the age of 18 and 17.8% aged 65 or older. 203 people in Cologne were over the age of 100.
According to the Statistical Office of the City of Cologne, the number of people with a migrant background is at 40.5% (436,660). 2,254 people acquired German citizenship in 2021. In 2021, there were 559,854 households, of which 18.4% had children under the age of 18; 51% of all households were made up of singles. 8% of all households were single-parent households. The average household size was 1.88.
Cologne residents with foreign citizenship as of 31 December 2021 were as follows:
Cologne is home to 90,000 people of Turkish origin and has the second-largest Turkish population of any German city, after Berlin. Cologne has a Little Istanbul in Keupstraße, with many Turkish restaurants and markets. Famous Turkish-Germans such as rapper Eko Fresh and TV presenter Nazan Eckes were born in Cologne.
Colognian or Kölsch (Colognian pronunciation: [kœɫːʃ]; natively Kölsch Platt) is a small set of very closely related dialects, or variants, of the Ripuarian Central German group of languages. These dialects are spoken in the area covered by the Archdiocese and former Electorate of Cologne, reaching from Neuss in the north to just south of Bonn, west to Düren and east to Olpe in the north-west of Germany. Kölsch is one of the very few city dialects in Germany; another example is the dialect spoken in Berlin.
As of 2015, 35.5% of the population belonged to the Catholic Church, the largest religious body, and 15.5% to the Protestant Church. Irenaeus of Lyons claimed that Christianity was brought to Cologne by Roman soldiers and traders at an unknown early date. It is known that in the early second century it was a bishop's seat. The first historical Bishop of Cologne was Saint Maternus. Thomas Aquinas studied in Cologne in 1244 under Albertus Magnus. Cologne is the seat of the Roman Catholic Archdiocese of Cologne.
According to the 2011 census, 2.1% of the population was Eastern Orthodox, 0.5% belonged to an Evangelical Free Church, and 4.2% belonged to other religious communities officially recognized by the state of North Rhine-Westphalia (such as Jehovah's Witnesses).
There are several mosques, including the Cologne Central Mosque run by the Turkish-Islamic Union for Religious Affairs. In 2011, about 11.2% of the population was Muslim.
Cologne also has one of the oldest and largest Jewish communities in Germany. In 2011, 0.3% of Cologne's population was Jewish.
On 11 October 2021, the Mayor of Cologne, Henriette Reker, announced that all of Cologne's 35 mosques would be allowed to broadcast the Adhan (prayer call) for up to five minutes on Fridays between noon and 3 p.m. She commented that the move "shows that diversity is appreciated and loved in Cologne".
The city's administration is headed by the mayor and the three deputy mayors.
The long tradition of a free imperial city, which long dominated an exclusively Catholic population and the age-old conflict between the church and the bourgeoisie (and within it between the patricians and craftsmen) have created its own political climate in Cologne. Various interest groups often form networks beyond party boundaries. The resulting web of relationships, with political, economic, and cultural links with each other in a system of mutual favours, obligations and dependencies, is called the 'Cologne coterie'. This has often led to an unusual proportional distribution in the city government and degenerated at times into corruption: in 1999, a "waste scandal" over kickbacks and illegal campaign contributions came to light, which led not only to the imprisonment of the entrepreneur Hellmut Trienekens, but also to the downfall of almost the entire leadership of the ruling Social Democrats.
The current Lord Mayor of Cologne is Henriette Reker. She received 52.66% of the vote at the municipal election on 17 October 2015, running as an independent with the support of the CDU, FDP, and Greens. She took office on 15 December 2015. Reker was re-elected to a second term in a runoff election on 27 September 2020, in which she received 59.27% of the vote.
The most recent mayoral election was held on 13 September 2020, with a runoff held on 27 September, and the results were as follows:
The Cologne city council (Kölner Stadtrat) governs the city alongside the Mayor. It serves a term of five years. The most recent city council election was held on 13 September 2020, and the results were as follows:
In the Landtag of North Rhine-Westphalia, Cologne is divided between seven constituencies. After the 2022 North Rhine-Westphalia state election, the composition and representation of each was as follows:
In the Bundestag, Cologne is divided between four constituencies. In the 20th Bundestag, the composition and representation of each was as follows:
The inner city of Cologne was largely destroyed during World War II. The reconstruction of the city followed the style of the 1950s, while respecting the old layout and naming of the streets. Thus, the city centre today is characterized by modern architecture, with a few interspersed pre-war buildings which were reconstructed due to their historical importance. Some buildings of the "Wiederaufbauzeit" (era of reconstruction), for example, the opera house by Wilhelm Riphahn, are nowadays regarded as classics of modern architecture. Nevertheless, the uncompromising style of the Cologne Opera house and other modern buildings has remained controversial.
Green areas account for over a quarter of Cologne, which is approximately 75 m² (807.29 sq ft) of public green space for every inhabitant.
The dominant wildlife of Cologne is insects, small rodents, and several species of birds. Pigeons are the most often seen animals in Cologne, although the number of birds is augmented each year by a growing population of feral exotics, most visibly parrots such as the rose-ringed parakeet. The sheltered climate in southeast North Rhine-Westphalia allows these birds to survive through the winter, and in some cases, they are displacing native species. The plumage of Cologne's green parrots is highly visible even from a distance, and contrasts starkly with the otherwise muted colours of the cityscape.
Hedgehogs, rabbits and squirrels are common in parks and the greener parts of town. In the outer suburbs foxes and wild boar can be seen, even during the day.
Cologne had 5.8 million overnight stays booked and 3.35 million arrivals in 2016.
The Cologne City Hall (Kölner Rathaus), founded in the 12th century, is the oldest city hall in Germany still in use. The Renaissance-style loggia and tower were added in the 15th century. Other famous buildings include the Gürzenich, Haus Saaleck and the Overstolzenhaus.
Of the twelve medieval city gates that once existed, only the Eigelsteintorburg at Ebertplatz, the Hahnentor at Rudolfplatz and the Severinstorburg at Chlodwigplatz still stand today.
Several bridges cross the Rhine in Cologne. They are (from south to north): the Cologne Rodenkirchen Bridge, South Bridge (railway), Severin Bridge, Deutz Bridge, Hohenzollern Bridge (railway), Zoo Bridge (Zoobrücke) and Cologne Mülheim Bridge. In particular the iron tied arch Hohenzollern Bridge (Hohenzollernbrücke) is a dominant landmark along the river embankment. A Rhine crossing of a special kind is provided by the Cologne Cable Car (German: Kölner Seilbahn), a cableway that runs across the Rhine between the Cologne Zoological Garden in Riehl and the Rheinpark in Deutz.
Cologne's tallest structure is the Colonius telecommunication tower at 266 m or 873 ft. The observation deck has been closed since 1992. A selection of the tallest buildings in Cologne is listed below. Other tall structures include the Hansahochhaus (designed by architect Jacob Koerfer and completed in 1925 – it was at one time Europe's tallest office building), the Kranhaus buildings at Rheinauhafen, and the Messeturm Köln ("trade fair tower").
Cologne has several museums. The famous Roman-Germanic Museum features art and architecture from the city's distant past; the Museum Ludwig houses one of the most important collections of modern art in Europe, including a Picasso collection matched only by the museums in Barcelona and Paris. The Museum Schnütgen of religious art is partly housed in St. Cecilia, one of Cologne's Twelve Romanesque churches. Many art galleries in Cologne enjoy a worldwide reputation, such as Galerie Karsten Greve, one of the leading galleries for postwar and contemporary art.
Cologne has more than 60 music venues and the third-highest density of music venues among Germany's four largest cities, after Munich and Hamburg and ahead of Berlin.
Several orchestras are active in the city, among them the Gürzenich Orchestra, which is also the orchestra of the Cologne Opera and the WDR Symphony Orchestra Cologne (German State Radio Orchestra), both based at the Cologne Philharmonic Orchestra Building (Kölner Philharmonie). Other orchestras are the Musica Antiqua Köln, the WDR Rundfunkorchester Köln and WDR Big Band, and several choirs, including the WDR Rundfunkchor Köln. Cologne was also an important hotbed for electronic music in the 1950s (Studio für elektronische Musik, Karlheinz Stockhausen) and again from the 1990s onward. The public radio and TV station WDR was involved in promoting musical movements such as Krautrock in the 1970s; the influential Can was formed there in 1968. There are several centres of nightlife, among them the Kwartier Latäng (the student quarter around the Zülpicher Straße) and the nightclub-studded areas around Hohenzollernring, Friesenplatz and Rudolfplatz.
The large annual literary festival lit.COLOGNE, with its Silberschweinpreis award, features regional and international authors. The main literary figure connected with Cologne is the writer Heinrich Böll, winner of the Nobel Prize in Literature. Since 2012, there has also been an annual international festival of philosophy called phil.cologne.
The city also has the most pubs per capita in Germany. Cologne is well known for its beer, called Kölsch. Kölsch is also the name of the local dialect. This has led to the common joke of Kölsch being the only language one can drink.
Cologne is also famous for Eau de Cologne (German: Kölnisch Wasser; lit: "Water of Cologne"), a perfume created by Italian expatriate Johann Maria Farina at the beginning of the 18th century. During the 18th century, this perfume became increasingly popular, was exported all over Europe by the Farina family, and Farina became a household name for Eau de Cologne. In 1803 Wilhelm Mülhens entered into a contract with an unrelated person from Italy named Carlo Francesco Farina, who granted him the right to use his family name, and Mülhens opened a small factory at Cologne's Glockengasse. In later years, and after various court battles, his grandson Ferdinand Mülhens was forced to abandon the name Farina for the company and their product. He decided to use the house number given to the factory at Glockengasse during the French occupation in the early 19th century, 4711. Today, original Eau de Cologne is still produced in Cologne by both the Farina family, currently in the eighth generation, and by Mäurer & Wirtz, who bought the 4711 brand in 2006.
The Cologne carnival is one of the largest street festivals in Europe. In Cologne, the carnival season officially starts on 11 November at 11 minutes past 11 a.m. with the proclamation of the new Carnival Season, and continues until Ash Wednesday. However, the so-called "Tolle Tage" (crazy days) do not start until Weiberfastnacht (Women's Carnival) or, in dialect, Wieverfastelovend, the Thursday before Ash Wednesday, which is the beginning of the street carnival. Zülpicher Strasse and its surroundings, Neumarkt square, Heumarkt and all bars and pubs in the city are crowded with people in costumes dancing and drinking in the streets. Hundreds of thousands of visitors flock to Cologne during this time. Generally, around a million people celebrate in the streets on the Thursday before Ash Wednesday.
Cologne and Düsseldorf have a "fierce regional rivalry", which includes carnival parades, football, and beer. People in Cologne prefer Kölsch while people in Düsseldorf prefer Altbier ("Alt"). Waiters and patrons will "scorn" and make a "mockery" of people who order Alt beer in Cologne or Kölsch in Düsseldorf. The rivalry has been described as a "love–hate relationship". The Köln Guild of Brewers was established in 1396. The Kölsch beer style first appeared in the 1800s and in 1986 the breweries established an appellation under which only breweries in the city are allowed to use the term Kölsch.
The city was home to the internationally famous Ringfest, and now to the C/o pop festival.
In addition, Cologne enjoys a thriving Christmas Market (Weihnachtsmarkt) presence with several locations in the city.
As the largest city in the Rhine-Ruhr metropolitan region, Cologne benefits from a large market structure. In competition with Düsseldorf, the economy of Cologne is primarily based on insurance and media industries, while the city is also an important cultural and research centre and home to a number of corporate headquarters.
Among the largest media companies based in Cologne are Westdeutscher Rundfunk, RTL Television (with subsidiaries), n-tv, Deutschlandradio, Brainpool TV and publishing houses like J. P. Bachem, Taschen, Tandem Verlag, and M. DuMont Schauberg. Several clusters of media, arts and communications agencies, TV production studios, and state agencies work partly with private and government-funded cultural institutions. Among the insurance companies based in Cologne are Central, DEVK, DKV, Generali Deutschland, Gen Re, Gothaer, HDI Gerling and national headquarters of Axa Insurance, Mitsui Sumitomo Insurance Group and Zurich Financial Services.
The German flag carrier Lufthansa and its subsidiary Lufthansa CityLine have their main corporate headquarters in Cologne. The largest employer in Cologne is Ford Europe, which has its European headquarters and a factory in Niehl (Ford-Werke GmbH). Toyota Motorsport GmbH (TMG), Toyota's official motorsports team, responsible for Toyota rally cars, and then Formula One cars, has its headquarters and workshops in Cologne. Other large companies based in Cologne include the REWE Group, TÜV Rheinland, Deutz AG and a number of Kölsch breweries. The largest three Kölsch breweries of Cologne are Reissdorf, Gaffel, and Früh.
Historically, Cologne has always been an important trade city, with land, air, and sea connections. The city has five Rhine ports, the second largest inland port in Germany and one of the largest in Europe. Cologne Bonn Airport is the second largest freight terminal in Germany. Today, the Cologne trade fair (Koelnmesse) ranks as a major European trade fair location with over 50 trade fairs and other large cultural and sports events. In 2008 Cologne had 4.31 million overnight stays booked and 2.38 million arrivals. Cologne's largest daily newspaper is the Kölner Stadt-Anzeiger.
Cologne has seen a significant increase in startup companies, especially in digital business.
Cologne has also become the first German city with a population of more than a million people to declare a climate emergency.
Road building had been a major issue in the 1920s under the leadership of mayor Konrad Adenauer. The first German limited-access road was constructed after 1929 between Cologne and Bonn. Today, this is the Bundesautobahn 555. In 1965, Cologne became the first German city to be fully encircled by a motorway ring road. Roughly at the same time, a city centre bypass (Stadtautobahn) was planned, but only partially put into effect, due to opposition by environmental groups. The completed section became Bundesstraße ("Federal Road") B 55a, which begins at the Zoobrücke ("Zoo Bridge") and meets with A 4 and A 3 at the interchange Cologne East. Nevertheless, it is referred to as Stadtautobahn by most locals. In contrast to this, the Nord-Süd-Fahrt ("North-South-Drive") was actually completed, a new four/six-lane city centre through-route, which had already been anticipated by planners such as Fritz Schumacher in the 1920s. The last section south of Ebertplatz was completed in 1972.
In 2005, the first stretch of an eight-lane motorway in North Rhine-Westphalia was opened to traffic on Bundesautobahn 3, part of the eastern section of the Cologne Beltway between the interchanges Cologne East and Heumar.
Compared to other German cities, Cologne has a traffic layout that is not very bicycle-friendly. It has repeatedly ranked among the worst in an independent evaluation conducted by the Allgemeiner Deutscher Fahrrad-Club. In 2014, it ranked 36th out of 39 German cities with a population greater than 200,000.
Cologne has a railway service with Deutsche Bahn InterCity and ICE trains stopping at Köln Hauptbahnhof (Cologne Main Station), Köln Messe/Deutz and Cologne/Bonn Airport. ICE and TGV Thalys high-speed trains link Cologne with Amsterdam, Brussels (in 1h47, 9 departures/day) and Paris (in 3h14, 6 departures/day). There are frequent ICE trains to other German cities, including Frankfurt am Main and Berlin. ICE trains to London via the Channel Tunnel were planned for 2013.
The Cologne Stadtbahn operated by Kölner Verkehrsbetriebe (KVB) is an extensive light rail system that is partially underground and serves Cologne and a number of neighbouring cities. It evolved from the tram system. Nearby Bonn is linked by both the Stadtbahn and main line railway trains, and occasional recreational boats on the Rhine. Düsseldorf is also linked by S-Bahn trains, which are operated by Deutsche Bahn.
The Rhine-Ruhr S-Bahn has 5 lines which cross Cologne. The S13/S19 runs 24/7 between Cologne Hbf and Cologne/Bonn airport.
There are also frequent buses covering most of the city and surrounding suburbs, and Eurolines coaches to London via Brussels.
Häfen und Güterverkehr Köln (Ports and Goods traffic Cologne, HGK) is one of the largest operators of inland ports in Germany. Ports include Köln-Deutz, Köln-Godorf, and Köln-Niehl I and II.
Cologne's international airport is Cologne/Bonn Airport (CGN). It is also called Konrad Adenauer Airport after Germany's first post-war Chancellor Konrad Adenauer, who was born in the city and was mayor of Cologne from 1917 until 1933. The airport is shared with the neighbouring city of Bonn. Cologne is headquarters to the European Aviation Safety Agency (EASA).
Cologne is home to numerous universities and colleges, and host to some 72,000 students. Its oldest university, the University of Cologne (founded in 1388), is the largest university in Germany, while the Cologne University of Applied Sciences is the largest university of applied sciences in the country. The Cologne University of Music and Dance is the largest conservatory in Europe. Foreigners can take German lessons at the VHS (Adult Education Centre).
Lauder Morijah School (German: Lauder-Morijah-Schule), a Jewish school in Cologne, had previously closed. After Russian immigration increased the Jewish population, the school reopened in 2002.
Within Germany, Cologne is known as an important media centre. Several radio and television stations, including Westdeutscher Rundfunk (WDR), RTL and VOX, have their headquarters in the city. Film and TV production is also important. The city is "Germany's capital of TV crime stories". A third of all German TV productions are made in the Cologne region. Furthermore, the city hosts the Cologne Comedy Festival, which is considered to be the largest comedy festival in mainland Europe.
Cologne hosts the football club 1. FC Köln, who play in the 1. Bundesliga (first division). They play their home matches in RheinEnergieStadion, which also hosted five matches of the 2006 FIFA World Cup. The International Olympic Committee and the International Association of Sports and Leisure Facilities gave RheinEnergieStadion a bronze medal for "being one of the best sporting venues in the world". The city also hosts the two football clubs FC Viktoria Köln and SC Fortuna Köln, who currently play in the 3. Liga (third division) and the Regionalliga West (fourth division) respectively. Cologne's oldest football club, 1. FSV Köln 1899, plays with its amateur team in the Verbandsliga (sixth division).
Cologne is also home to the ice hockey team Kölner Haie, which plays in the highest ice hockey league in Germany, the Deutsche Eishockey Liga. They are based at Lanxess Arena.
Several horse races per year have been held at Cologne-Weidenpesch Racecourse since 1897, the annual Cologne Marathon was started in 1997, and the classic cycling race Rund um Köln has been organised in Cologne since 1908. The city also has a long tradition in rowing, being home to some of Germany's oldest regatta courses and boat clubs, such as the Kölner Rudergesellschaft 1891 or the Kölner Ruderverein von 1877 in the Rodenkirchen district.
Japanese automotive manufacturer Toyota has its major motorsport facility, known as Toyota Motorsport GmbH, located in the Marsdorf district; it is responsible for Toyota's major motorsport development and operations, which in the past included the FIA Formula One World Championship, the FIA World Rally Championship and the Le Mans Series. Currently it runs Toyota's team Toyota Gazoo Racing, which competes in the FIA World Endurance Championship.
Cologne is considered "the secret golf capital of Germany". The first golf club in North Rhine-Westphalia was founded in Cologne in 1906. The city offers the most options and top events in Germany.
The city has hosted several athletic events, including the 2005 FIFA Confederations Cup, the 2006 FIFA World Cup, the 2007 World Men's Handball Championship, the 2010 and 2017 Ice Hockey World Championships, and the 2010 Gay Games.
Since 2014, the city has hosted ESL One Cologne, one of the biggest Counter-Strike: Global Offensive (CS:GO) tournaments, held annually in July/August at Lanxess Arena.
Furthermore, Cologne is home to the Sport-Club Colonia 1906, Germany's oldest boxing club, and the Kölner Athleten-Club 1882, the world's oldest active weightlifting club.
Cologne is twinned with:
Cologne also cooperates with: | [
{
"paragraph_id": 0,
"text": "Cologne (/kəˈloʊn/ kə-LOHN; German: Köln [kœln] ; Kölsch: Kölle [ˈkœlə] ) is the largest city of the German state of North Rhine-Westphalia and the fourth-most populous city of Germany with nearly 1.1 million inhabitants in the city proper and over 3.1 million people in the Cologne Bonn urban region. Centered on the left (west) bank of the Rhine, Cologne is about 35 km (22 mi) southeast of the North Rhine-Westphalia state capital Düsseldorf and 25 km (16 mi) northwest of Bonn, the former capital of West Germany.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The city's medieval Catholic Cologne Cathedral (Kölner Dom) is the third-tallest church and tallest cathedral in the world. It was constructed to house the Shrine of the Three Kings and is a globally recognized landmark and one of the most visited sights and pilgrimage destinations in Europe. The cityscape is further shaped by the Twelve Romanesque churches of Cologne, and Cologne is famous for Eau de Cologne, that has been produced in the city since 1709, and \"cologne\" has since come to be a generic term.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cologne was founded and established in Germanic Ubii territory in the 1st century CE as the Roman Colonia Agrippina, hence its name. Agrippina was later dropped (except in Latin), and Colonia became the name of the city in its own right, which developed into modern German as Köln. Cologne, the French version of the city's name, has become standard in English as well. Cologne functioned as the capital of the Roman province of Germania Inferior and as the headquarters of the Roman military in the region until occupied by the Franks in 462. During the Middle Ages the city flourished as being located on one of the most important major trade routes between east and western Europe (including the Brabant Road, Via Regia and Publica). Cologne was a free imperial city of the Holy Roman Empire and one of the major members of the trade union Hanseatic League. It was one of the largest European cities in medieval and renaissance times.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Prior to World War II, the city had undergone occupations by the French (1794–1815) and the British (1918–1926), and was part of Prussia beginning in 1815. Cologne was one of the most heavily bombed cities in Germany during World War II. The bombing reduced the population by 93% mainly due to evacuation, and destroyed around 80% of the millennia-old city center. The post-war rebuilding has resulted in a mixed cityscape, restoring most major historic landmarks like city gates and churches (31 of them being Romanesque). The city boosts around 9,000 historic buildings.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Cologne is a major cultural center for the Rhineland; it hosts more than 30 museums and hundreds of galleries. There are many institutions of higher education, most notably the University of Cologne, one of Europe's oldest and largest universities; the Technical University of Cologne, Germany's largest university of applied sciences; and the German Sport University Cologne. It hosts three Max Planck science institutes and is a major research hub for the aerospace industry, with the German Aerospace Center and the European Astronaut Centre headquarters. It also has a significant chemical and automobile industry. Cologne Bonn Airport is a regional hub, the main airport for the region being Düsseldorf Airport. The Cologne Trade Fair hosts a number of trade shows.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The first urban settlement on the grounds of modern-day Cologne was Oppidum Ubiorum, founded in 38 BCE by the Ubii, a Cisrhenian Germanic tribe. In 50 CE, the Romans founded Colonia Claudia Ara Agrippinensium (Cologne) on the river Rhine and the city became the provincial capital of Germania Inferior in 85 CE. It was also known as Augusta Ubiorum. Considerable Roman remains can be found in present-day Cologne, especially near the wharf area, where a 1,900-year-old Roman boat was discovered in late 2007. From 260 to 271, Cologne was the capital of the Gallic Empire under Postumus, Marius, and Victorinus. In 310, under emperor Constantine I, a bridge was built over the Rhine at Cologne. Roman imperial governors resided in the city and it became one of the most important trade and production centers in the Roman Empire north of the Alps. Cologne is shown on the 4th century Peutinger Map.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Maternus, who was elected as bishop in 313, was the first known bishop of Cologne. The city was the capital of a Roman province until it was occupied by the Ripuarian Franks in 462. Parts of the original Roman sewers are preserved underneath the city, with the new sewerage system having opened in 1890.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "After the destruction of the Second Temple in the Siege of Jerusalem and the associated dispersion (diaspora) of the Jews, there is evidence of a Jewish community in Cologne. In 321 CE, Emperor Constantine approved the settlement of a Jewish community with all the freedoms of Roman citizens. It is assumed that it was located near the Marspforte within the city wall. The Edict of Constantine to the Jews is the oldest documented evidence in Germany.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Early medieval Cologne was part of Austrasia within the Frankish Empire. Cunibert, made bishop of Cologne in 623, was an important advisor to the Merovingian King Dagobert I and served with domesticus Pepin of Landen as tutor to the king's son and heir Siegebert III, the future king of Austrasia. In 716, Charles Martel commanded an army for the first time and suffered the only defeat of his life when Chilperic II, King of Neustria, invaded Austrasia and the city fell to him in the Battle of Cologne. Charles fled to the Eifel mountains, rallied supporters and took the city back that same year after defeating Chilperic in the Battle of Amblève. Cologne had been the seat of a bishop since the Roman period; under Charlemagne, in 795, bishop Hildebold was promoted to archbishop. In the 843 Treaty of Verdun Cologne fell into the dominion of Lothair I's Middle Francia – later called Lotharingia (Lower Lorraine).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 953, the archbishops of Cologne first gained noteworthy secular power when bishop Bruno was appointed as duke by his brother Otto I, King of Germany. In order to weaken the secular nobility, who threatened his power, Otto endowed Bruno and his archiepiscopal successors with the prerogatives of secular princes, thus establishing the Electorate of Cologne, formed by the temporal possessions of the archbishopric and included in the end a strip of territory along the left Bank of the Rhine east of Jülich, as well as the Duchy of Westphalia on the other side of the Rhine, beyond Berg and Mark. By the end of the 12th century, the Archbishop of Cologne was one of the seven electors of the Holy Roman Emperor. Besides being prince elector, he was Archchancellor of Italy as well, technically from 1238 and permanently from 1263 until 1803.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Following the Battle of Worringen in 1288, Cologne gained its independence from the archbishops and became a Free City. Archbishop Sigfried II von Westerburg was forced to reside in Bonn. The archbishop nevertheless preserved the right of capital punishment. Thus the municipal council (though in strict political opposition towards the archbishop) depended upon him in all matters concerning criminal justice. This included torture, the sentence for which was only allowed to be handed down by the episcopal judge known as the greve. This legal situation lasted until the French conquest of Cologne.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Besides its economic and political significance Cologne also became an important centre of medieval pilgrimage, when Cologne's archbishop, Rainald of Dassel, gave the relics of the Three Wise Men to Cologne's cathedral in 1164 (after they had been taken from Milan). Besides the three magi Cologne preserves the relics of Saint Ursula and Albertus Magnus.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Cologne's location on the river Rhine placed it at the intersection of the major trade routes between east and west as well as the main south–north Western Europe trade route, Venice to Netherlands; even by the mid-10th century, merchants in the town were already known for their prosperity and luxurious standard of living due to the availability of trade opportunities. The intersection of these trade routes was the basis of Cologne's growth. By the end of the 12th century, Archbishop Phillip von Heinsberg enclosed the entire city with walls. By 1300 the city population was 50,000–55,000. Cologne was a member of the Hanseatic League in 1475, when Frederick III confirmed the city's imperial immediacy. Cologne was so influential in regional commerce that its systems of weights and measurements (e.g. the Cologne mark) were used throughout Europe.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The economic structures of medieval and early modern Cologne were characterised by the city's status as a major harbour and transport hub on the Rhine. Craftsmanship was organised by self-administering guilds, some of which were exclusive to women.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "As a free imperial city, Cologne was a self-ruling state within the Holy Roman Empire, an imperial estate with seat and vote at the Imperial Diet, and as such had the right (and obligation) to contribute to the defense of the Empire and maintain its own military force. As they wore a red uniform, these troops were known as the Rote Funken (red sparks). These soldiers were part of the Army of the Holy Roman Empire (\"Reichskontingent\"). They fought in the wars of the 17th and 18th century, including the wars against revolutionary France in which the small force was almost completely wiped out in combat. The tradition of these troops is preserved as a military persiflage by Cologne's most outstanding carnival society, the Rote Funken.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Free Imperial City of Cologne must not be confused with the Electorate of Cologne, which was a state of its own within the Holy Roman Empire. Since the second half of the 16th century the majority of archbishops were drawn from the Bavarian Wittelsbach dynasty. Due to the free status of Cologne, the archbishops were usually not allowed to enter the city. Thus they took up residence in Bonn and later in Brühl on the Rhine. As members of an influential and powerful family, and supported by their outstanding status as electors, the archbishops of Cologne repeatedly challenged and threatened the free status of Cologne during the 17th and 18th centuries, resulting in complicated affairs, which were handled by diplomatic means and propaganda as well as by the supreme courts of the Holy Roman Empire.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Cologne lost its status as a free city during the French period. According to the Treaty of Lunéville (1801) all the territories of the Holy Roman Empire on the left bank of the Rhine were officially incorporated into the French Republic (which had already occupied Cologne in 1794). Thus this region later became part of Napoleon's Empire. Cologne was part of the French Département Roer (named after the river Roer, German: Rur) with Aachen (French: Aix-la-Chapelle) as its capital. The French modernised public life, for example by introducing the Napoleonic code and removing the old elites from power. The Napoleonic code remained in use on the left bank of the Rhine until 1900, when a unified civil code (the Bürgerliches Gesetzbuch) was introduced in the German Empire. In 1815 at the Congress of Vienna, Cologne was made part of the Kingdom of Prussia, first in the Province of Jülich-Cleves-Berg and then the Rhine Province.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The permanent tensions between the Roman Catholic Rhineland and the overwhelmingly Protestant Prussian state repeatedly escalated with Cologne being in the focus of the conflict. In 1837 the archbishop of Cologne, Clemens August von Droste-Vischering, was arrested and imprisoned for two years after a dispute over the legal status of marriages between Protestants and Roman Catholics (Mischehenstreit). In 1874, during the Kulturkampf, Archbishop Paul Melchers was imprisoned before taking asylum in the Netherlands. These conflicts alienated the Catholic population from Berlin and contributed to a deeply felt anti-Prussian resentment, which was still significant after World War II, when the former mayor of Cologne, Konrad Adenauer, became the first West German chancellor.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "During the 19th and 20th centuries, Cologne absorbed numerous surrounding towns, and by World War I had already grown to 700,000 inhabitants. Industrialisation changed the city and spurred its growth. Vehicle and engine manufacturing was especially successful, though the heavy industry was less ubiquitous than in the Ruhr area. The cathedral, started in 1248 but abandoned around 1560, was eventually finished in 1880 not just as a place of worship but also as a German national monument celebrating the newly founded German empire and the continuity of the German nation since the Middle Ages. Some of this urban growth occurred at the expense of the city's historic heritage with much being demolished (for example, the city walls or the area around the cathedral) and sometimes replaced by contemporary buildings.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Cologne was designated as one of the Fortresses of the German Confederation. It was turned into a heavily armed fortress (opposing the French and Belgian fortresses of Verdun and Liège) with two fortified belts surrounding the city, the remains of which can be seen to this day. The military demands on what became Germany's largest fortress presented a significant obstacle to urban development, with forts, bunkers, and wide defensive dugouts completely encircling the city and preventing expansion; this resulted in a very densely built-up area within the city itself.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "During World War I Cologne was the target of several minor air raids but suffered no significant damage. Cologne was occupied by the British Army of the Rhine until 1926, under the terms of the Armistice and the subsequent Versailles Peace Treaty. In contrast with the harsh behaviour of the French occupation troops in Germany, the British forces were more lenient to the local population. Konrad Adenauer, the mayor of Cologne from 1917 until 1933 and later a West German chancellor, acknowledged the political impact of this approach, especially since Britain had opposed French demands for a permanent Allied occupation of the entire Rhineland.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "As part of the demilitarisation of the Rhineland, the city's fortifications had to be dismantled. This was an opportunity to create two green belts (Grüngürtel) around the city by converting the fortifications and their fields of fire into large public parks. This was not completed until 1933. In 1919 the University of Cologne, closed by the French in 1798, was reopened. This was considered to be a replacement for the loss of the University of Strasbourg on the west bank of the Rhine, which reverted to France with the rest of Alsace. Cologne prospered during the Weimar Republic (1919–33), and progress was made especially in public governance, city planning, housing and social affairs. Social housing projects were considered exemplary and were copied by other German cities. Cologne competed to host the Olympics, and a modern sports stadium was erected at Müngersdorf. When the British occupation ended, the prohibition of civil aviation was lifted and Cologne Butzweilerhof Airport soon became a hub for national and international air traffic, second in Germany only to Berlin Tempelhof Airport.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The democratic parties lost the local elections in Cologne in March 1933 to the Nazi Party and other extreme-right parties. The Nazis then arrested the Communist and Social Democrats members of the city assembly, and Mayor Adenauer was dismissed. Compared to some other major cities, however, the Nazis never gained decisive support in Cologne. (Significantly, the number of votes cast for the Nazi Party in Reichstag elections had always been the national average.) By 1939, the population had risen to 772,221 inhabitants.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "During World War II, Cologne was a Military Area Command Headquarters (Militärbereichshauptkommandoquartier) for Wehrkreis VI (headquartered at Münster). Cologne was under the command of Lieutenant-General Freiherr Roeder von Diersburg, who was responsible for military operations in Bonn, Siegburg, Aachen, Jülich, Düren, and Monschau. Cologne was home to the 211th Infantry Regiment and the 26th Artillery Regiment.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The Allies dropped 44,923.2 tons of bombs on the city during World War II, destroying 61% of its built up area. During the Bombing of Cologne in World War II, Cologne endured 262 air raids by the Western Allies, which caused approximately 20,000 civilian casualties and almost completely wiped out the central part of the city. During the night of 31 May 1942, Cologne was the target of \"Operation Millennium\", the first 1,000 bomber raid by the Royal Air Force in World War II. 1,046 heavy bombers attacked their target with 1,455 tons of explosives, approximately two-thirds of which were incendiary. This raid lasted about 75 minutes, destroyed 600 acres (243 ha) of built-up area (61%), killed 486 civilians and made 59,000 people homeless. The devastation was recorded by Hermann Claasen from 1942 until the end of the war, and presented in his exhibition and book of 1947 Singing in the furnace. Cologne – Remains of an old city.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Cologne was taken by the American First Army in early March 1945 during the Invasion of Germany after a battle. By the end of the war, the population of Cologne had been reduced by 95%. This loss was mainly caused by a massive evacuation of the people to more rural areas. The same happened in many other German cities in the last two years of war. By the end of 1945, however, the population had already recovered to approximately 450,000. By the end of the war, essentially all of Cologne's pre-war Jewish population of 11,000 had been deported or killed by the Nazis. The six synagogues of the city were destroyed. The synagogue on Roonstraße was rebuilt in 1959.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Despite Cologne's status as the largest city in the region, nearby Düsseldorf was chosen as the political capital of the federated state of North Rhine-Westphalia. With Bonn being chosen as the provisional federal capital (provisorische Bundeshauptstadt) and seat of the government of the Federal Republic of Germany (then informally West Germany), Cologne benefited by being sandwiched between two important political centres. The city became–and still is–home to a number of federal agencies and organizations. After reunification in 1990, Berlin was made the capital of Germany.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In 1945 architect and urban planner Rudolf Schwarz called Cologne the \"world's greatest heap of rubble\". Schwarz designed the master plan for reconstruction in 1947, which included the construction of several new thoroughfares through the city centre, especially the Nord-Süd-Fahrt (\"North-South-Drive\"). The master plan took into consideration the fact that even shortly after the war a large increase in automobile traffic could be anticipated. Plans for new roads had already, to a certain degree, evolved under the Nazi administration, but the actual construction became easier when most of the city centre was in ruins.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The destruction of 95% of the city centre, including the famous Twelve Romanesque churches such as St. Gereon, Great St. Martin, St. Maria im Kapitol and several other monuments in World War II, meant a tremendous loss of cultural treasures. The rebuilding of those churches and other landmarks such as the Gürzenich event hall was not undisputed among leading architects and art historians at that time, but in most cases, civil intention prevailed. The reconstruction lasted until the 1990s, when the Romanesque church of St. Kunibert was finished.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In 1959, the city's population reached pre-war numbers again. It then grew steadily, exceeding 1 million for about one year from 1975. It remained just below that until mid-2010, when it exceeded 1 million again.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In the 1980s and 1990s Cologne's economy prospered for two main reasons. The first was the growth in the number of media companies, both in the private and public sectors; they are especially catered for in the newly developed Media Park, which creates a strong visual focal point in Cologne's city centre and includes the KölnTurm, one of Cologne's most prominent high-rise buildings. The second was the permanent improvement of the diverse traffic infrastructure, which made Cologne one of the most easily accessible metropolitan areas in Central Europe.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Due to the economic success of the Cologne Trade Fair, the city arranged a large extension to the fair site in 2005. At the same time the original buildings, which date back to the 1920s, were rented out to RTL, Germany's largest private broadcaster, as their new corporate headquarters.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "Cologne was the focus of the 2015-16 New Year's Eve sexual assaults in Germany, with over 500 women reporting that they were sexually assaulted by persons of African and Arab appearance.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The metropolitan area encompasses over 405 square kilometres (156 square miles), extending around a central point that lies at 50° 56' 33 latitude and 6° 57' 32 longitude. The city's highest point is 118 m (387 ft) above sea level (the Monte Troodelöh) and its lowest point is 37.5 m (123 ft) above sea level (the Worringer Bruch). The city of Cologne lies within the larger area of the Cologne Lowland, a cone-shaped area of the central Rhineland that lies between Bonn, Aachen and Düsseldorf.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "Cologne is divided into 9 boroughs (Stadtbezirke) and 85 districts (Stadtteile):",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "Located in the Rhine-Ruhr area, Cologne is one of the warmest cities in Germany. It has a temperate–oceanic climate (Köppen: Cfb) with cool winters and warm summers. It is also one of the cloudiest cities in Germany, with just 1,567.5 hours of sun a year. Its average annual temperature is 10.7 °C (51 °F): 15.4 °C (60 °F) during the day and 6.1 °C (43 °F) at night. In January, the mean temperature is 3.0 °C (37 °F), while the mean temperature in July is 19.0 °C (66 °F). The record high temperature of 40.3 °C (105 °F) happened on 25 July 2019 during the July 2019 European heat wave in which Cologne saw three consecutive days over 38.0 °C (100 °F). Especially the inner urban neighbourhoods experience a greater number of hot days, as well as significantly higher temperatures during nighttime compared to the surrounding area (including the airport, where temperatures are classified). Still temperatures can vary noticeably over the course of a month with warmer and colder weather. Precipitation is spread evenly throughout the year with a light peak in summer due to showers and thunderstorms.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "Cologne is regularly affected by flooding from the Rhine and is considered the most flood-prone European city. A city agency (Stadtentwässerungsbetriebe Köln, \"Cologne Urban Drainage Operations\") manages an extensive flood control system which includes both permanent and mobile flood walls, protection from rising waters for buildings close to the river banks, monitoring and forecasting systems, pumping stations and programmes to create or protect floodplains, and river embankments. The system was redesigned after a 1993 flood, which resulted in heavy damage.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "In the Roman Empire, the city was large and rich with a population of 40,000 in 100–200 AD. The city was home to around 20,000 people in 1000 AD, growing to 50,000 in 1200 AD. The Rhineland metropolis still had 50,000 residents in 1300 AD.",
"title": "Demographics"
},
{
"paragraph_id": 38,
"text": "Cologne is the fourth-largest city in Germany after Berlin, Hamburg and Munich. As of 31 December 2021, there were 1,079,301 people registered as living in Cologne in an area of 404.99 km (156.37 sq mi), which makes Cologne the third largest city by area. The population density was 2,700/km (7,000/sq mi). Cologne first reached the population of 1,000,000 in 1975 due to the incorporation of Wesseling, however this was reversed after public opposition. In 2009 Cologne's population again reached 1,000,000 and it became one of the four cities in Germany with a population exceeding 1 Million. The metropolitan area of the Cologne Bonn Region is home to 3,573,500 living on 4,415/km (11,430/sq mi). It is part of the polycentric megacity region Rhine-Ruhr with a population of over 11,000,000 people.",
"title": "Demographics"
},
{
"paragraph_id": 39,
"text": "There were 551,528 women and 527,773 men in Cologne. In 2021, there were 11,127 births in Cologne; 5,844 marriages and 1,808 divorces, and 10,536 deaths. In the city, the population was spread out, with 16.3% under the age of 18, and 17.8% were 65 years of age or older. 203 people in Cologne were over the age of 100.",
"title": "Demographics"
},
{
"paragraph_id": 40,
"text": "According to the Statistical Office of the City of Cologne, the number of people with a migrant background is at 40.5% (436,660). 2,254 people acquired German citizenship in 2021. In 2021, there were 559,854 households, of which 18.4% had children under the age of 18; 51% of all households were made up of singles. 8% of all households were single-parent households. The average household size was 1.88.",
"title": "Demographics"
},
{
"paragraph_id": 41,
"text": "Cologne residents with a foreign citizenship as of 31 December 2021 is as follows:",
"title": "Demographics"
},
{
"paragraph_id": 42,
"text": "Cologne is home to 90,000 people of Turkish origin and is the second largest German city with Turkish population after Berlin. Cologne has a Little Istanbul in Keupstraße that has many Turkish restaurants and markets. Famous Turkish-German people like rapper Eko Fresh and TV presenter Nazan Eckes were born in Cologne.",
"title": "Demographics"
},
{
"paragraph_id": 43,
"text": "Colognian or Kölsch (Colognian pronunciation: [kœɫːʃ]) (natively Kölsch Platt) is a small set of very closely related dialects, or variants, of the Ripuarian Central German group of languages. These dialects are spoken in the area covered by the Archdiocese and former Electorate of Cologne reaching from Neuss in the north to just south of Bonn, west to Düren and east to Olpe in the North-West of Germany. Kölsch is one of the very few city dialects in Germany, which also include the dialect spoken in Berlin, for example.",
"title": "Demographics"
},
{
"paragraph_id": 44,
"text": "As of 2015, 35.5% of the population belonged to the Catholic Church, the largest religious body, and 15.5% to the Protestant Church. Irenaeus of Lyons claimed that Christianity was brought to Cologne by Roman soldiers and traders at an unknown early date. It is known that in the early second century it was a bishop's seat. The first historical Bishop of Cologne was Saint Maternus. Thomas Aquinas studied in Cologne in 1244 under Albertus Magnus. Cologne is the seat of the Roman Catholic Archdiocese of Cologne.",
"title": "Demographics"
},
{
"paragraph_id": 45,
"text": "According to the 2011 census, 2.1% of the population was Eastern Orthodox, 0.5% belonged of an Evangelical Free Church and 4.2% belonged to further religious communities officially recognized by the state of North Rhine-Westphalia (such as Jehovah's Witnesses).",
"title": "Demographics"
},
{
"paragraph_id": 46,
"text": "There are several mosques, including the Cologne Central Mosque run by the Turkish-Islamic Union for Religious Affairs. In 2011, about 11.2% of the population was Muslim.",
"title": "Demographics"
},
{
"paragraph_id": 47,
"text": "Cologne also has one of the oldest and largest Jewish communities in Germany. In 2011, 0.3% of Cologne's population was Jewish.",
"title": "Demographics"
},
{
"paragraph_id": 48,
"text": "On 11 October 2021, the Mayor of Cologne, Henriette Reker, announced that all of Cologne's 35 mosques would be allowed to broadcast the Adhan (prayer call) for up to five minutes on Fridays between noon and 3 p.m. She commented that the move \"shows that diversity is appreciated and loved in Cologne\".",
"title": "Demographics"
},
{
"paragraph_id": 49,
"text": "The city's administration is headed by the mayor and the three deputy mayors.",
"title": "Government and politics"
},
{
"paragraph_id": 50,
"text": "The long tradition of a free imperial city, which long dominated an exclusively Catholic population and the age-old conflict between the church and the bourgeoisie (and within it between the patricians and craftsmen) have created its own political climate in Cologne. Various interest groups often form networks beyond party boundaries. The resulting web of relationships, with political, economic, and cultural links with each other in a system of mutual favours, obligations and dependencies, is called the 'Cologne coterie'. This has often led to an unusual proportional distribution in the city government and degenerated at times into corruption: in 1999, a \"waste scandal\" over kickbacks and illegal campaign contributions came to light, which led not only to the imprisonment of the entrepreneur Hellmut Trienekens, but also to the downfall of almost the entire leadership of the ruling Social Democrats.",
"title": "Government and politics"
},
{
"paragraph_id": 51,
"text": "The current Lord Mayor of Cologne is Henriette Reker. She received 52.66% of the vote at the municipal election on 17 October 2015, running as an independent with the support of the CDU, FDP, and Greens. She took office on 15 December 2015. Reker was re-elected to a second term in a runoff election on 27 September 2020, in which she received 59.27% of the vote.",
"title": "Government and politics"
},
{
"paragraph_id": 52,
"text": "The most recent mayoral election was held on 13 September 2020, with a runoff held on 27 September, and the results were as follows:",
"title": "Government and politics"
},
{
"paragraph_id": 53,
"text": "The Cologne city council (Kölner Stadtrat) governs the city alongside the Mayor. It serves a term of five years. The most recent city council election was held on 13 September 2020, and the results were as follows:",
"title": "Government and politics"
},
{
"paragraph_id": 54,
"text": "In the Landtag of North Rhine-Westphalia, Cologne is divided between seven constituencies. After the 2022 North Rhine-Westphalia state election, the composition and representation of each was as follows:",
"title": "Government and politics"
},
{
"paragraph_id": 55,
"text": "In the Bundestag, Cologne is divided between four constituencies. In the 20th Bundestag, the composition and representation of each was as follows:",
"title": "Government and politics"
},
{
"paragraph_id": 56,
"text": "The inner city of Cologne was largely destroyed during World War II. The reconstruction of the city followed the style of the 1950s, while respecting the old layout and naming of the streets. Thus, the city centre today is characterized by modern architecture, with a few interspersed pre-war buildings which were reconstructed due to their historical importance. Some buildings of the \"Wiederaufbauzeit\" (era of reconstruction), for example, the opera house by Wilhelm Riphahn, are nowadays regarded as classics of modern architecture. Nevertheless, the uncompromising style of the Cologne Opera house and other modern buildings has remained controversial.",
"title": "Cityscape"
},
{
"paragraph_id": 57,
"text": "Green areas account for over a quarter of Cologne, which is approximately 75 m (807.29 sq ft) of public green space for every inhabitant.",
"title": "Cityscape"
},
{
"paragraph_id": 58,
"text": "The dominant wildlife of Cologne is insects, small rodents, and several species of birds. Pigeons are the most often seen animals in Cologne, although the number of birds is augmented each year by a growing population of feral exotics, most visibly parrots such as the rose-ringed parakeet. The sheltered climate in southeast Northrhine-Westphalia allows these birds to survive through the winter, and in some cases, they are displacing native species. The plumage of Cologne's green parrots is highly visible even from a distance, and contrasts starkly with the otherwise muted colours of the cityscape.",
"title": "Wildlife"
},
{
"paragraph_id": 59,
"text": "Hedgehogs, rabbits and squirrels are common in parks and the greener parts of town. In the outer suburbs foxes and wild boar can be seen, even during the day.",
"title": "Wildlife"
},
{
"paragraph_id": 60,
"text": "Cologne had 5.8 million overnight stays booked and 3.35 million arrivals in 2016.",
"title": "Tourism"
},
{
"paragraph_id": 61,
"text": "The Cologne City Hall (Kölner Rathaus), founded in the 12th century, is the oldest city hall in Germany still in use. The Renaissance-style loggia and tower were added in the 15th century. Other famous buildings include the Gürzenich, Haus Saaleck and the Overstolzenhaus.",
"title": "Tourism"
},
{
"paragraph_id": 62,
"text": "Of the twelve medieval city gates that once existed, only the Eigelsteintorburg at Ebertplatz, the Hahnentor at Rudolfplatz and the Severinstorburg at Chlodwigplatz still stand today.",
"title": "Tourism"
},
{
"paragraph_id": 63,
"text": "Several bridges cross the Rhine in Cologne. They are (from south to north): the Cologne Rodenkirchen Bridge, South Bridge (railway), Severin Bridge, Deutz Bridge, Hohenzollern Bridge (railway), Zoo Bridge (Zoobrücke) and Cologne Mülheim Bridge. In particular the iron tied arch Hohenzollern Bridge (Hohenzollernbrücke) is a dominant landmark along the river embankment. A Rhine crossing of a special kind is provided by the Cologne Cable Car (German: Kölner Seilbahn), a cableway that runs across the Rhine between the Cologne Zoological Garden in Riehl and the Rheinpark in Deutz.",
"title": "Tourism"
},
{
"paragraph_id": 64,
"text": "Cologne's tallest structure is the Colonius telecommunication tower at 266 m or 873 ft. The observation deck has been closed since 1992. A selection of the tallest buildings in Cologne is listed below. Other tall structures include the Hansahochhaus (designed by architect Jacob Koerfer and completed in 1925 – it was at one time Europe's tallest office building), the Kranhaus buildings at Rheinauhafen, and the Messeturm Köln (\"trade fair tower\").",
"title": "Tourism"
},
{
"paragraph_id": 65,
"text": "Cologne has several museums. The famous Roman-Germanic Museum features art and architecture from the city's distant past; the Museum Ludwig houses one of the most important collections of modern art in Europe, including a Picasso collection matched only by the museums in Barcelona and Paris. The Museum Schnütgen of religious art is partly housed in St. Cecilia, one of Cologne's Twelve Romanesque churches. Many art galleries in Cologne enjoy a worldwide reputation like e.g. Galerie Karsten Greve, one of the leading galleries for postwar and contemporary art.",
"title": "Culture"
},
{
"paragraph_id": 66,
"text": "Cologne has more than 60 music venues and the third-highest density of music venues of Germany's four largest cities, after Munich and Hamburg and ahead of Berlin.",
"title": "Culture"
},
{
"paragraph_id": 67,
"text": "Several orchestras are active in the city, among them the Gürzenich Orchestra, which is also the orchestra of the Cologne Opera and the WDR Symphony Orchestra Cologne (German State Radio Orchestra), both based at the Cologne Philharmonic Orchestra Building (Kölner Philharmonie). Other orchestras are the Musica Antiqua Köln, the WDR Rundfunkorchester Köln and WDR Big Band, and several choirs, including the WDR Rundfunkchor Köln. Cologne was also an important hotbed for electronic music in the 1950s (Studio für elektronische Musik, Karlheinz Stockhausen) and again from the 1990s onward. The public radio and TV station WDR was involved in promoting musical movements such as Krautrock in the 1970s; the influential Can was formed there in 1968. There are several centres of nightlife, among them the Kwartier Latäng (the student quarter around the Zülpicher Straße) and the nightclub-studded areas around Hohenzollernring, Friesenplatz and Rudolfplatz.",
"title": "Culture"
},
{
"paragraph_id": 68,
"text": "The large annual literary festival lit.COLOGNE [de] with its Silberschweinpreis [de] features regional and international authors. The main literary figure connected with Cologne is the writer Heinrich Böll, winner of the Nobel Prize for Literature. Since 2012, there is also an annual international festival of philosophy called phil.cologne [de].",
"title": "Culture"
},
{
"paragraph_id": 69,
"text": "The city also has the most pubs per capita in Germany. Cologne is well known for its beer, called Kölsch. Kölsch is also the name of the local dialect. This has led to the common joke of Kölsch being the only language one can drink.",
"title": "Culture"
},
{
"paragraph_id": 70,
"text": "Cologne is also famous for Eau de Cologne (German: Kölnisch Wasser; lit: \"Water of Cologne\"), a perfume created by Italian expatriate Johann Maria Farina at the beginning of the 18th century. During the 18th century, this perfume became increasingly popular, was exported all over Europe by the Farina family and Farina became a household name for Eau de Cologne. In 1803 Wilhelm Mülhens entered into a contract with an unrelated person from Italy named Carlo Francesco Farina who granted him the right to use his family name and Mühlens opened a small factory at Cologne's Glockengasse. In later years, and after various court battles, his grandson Ferdinand Mülhens was forced to abandon the name Farina for the company and their product. He decided to use the house number given to the factory at Glockengasse during the French occupation in the early 19th century, 4711. Today, original Eau de Cologne is still produced in Cologne by both the Farina family, currently in the eighth generation, and by Mäurer & Wirtz who bought the 4711 brand in 2006.",
"title": "Culture"
},
{
"paragraph_id": 71,
"text": "The Cologne carnival is one of the largest street festivals in Europe. In Cologne, the carnival season officially starts on 11 November at 11 minutes past 11 a.m. with the proclamation of the new Carnival Season, and continues until Ash Wednesday. However, the so-called \"Tolle Tage\" (crazy days) do not start until Weiberfastnacht (Women's Carnival) or, in dialect, Wieverfastelovend, the Thursday before Ash Wednesday, which is the beginning of the street carnival. Zülpicher Strasse and its surroundings, Neumarkt square, Heumarkt and all bars and pubs in the city are crowded with people in costumes dancing and drinking in the streets. Hundreds of thousands of visitors flock to Cologne during this time. Generally, around a million people celebrate in the streets on the Thursday before Ash Wednesday.",
"title": "Culture"
},
{
"paragraph_id": 72,
"text": "Cologne and Düsseldorf have a \"fierce regional rivalry\", which includes carnival parades, football, and beer. People in Cologne prefer Kölsch while people in Düsseldorf prefer Altbier (\"Alt\"). Waiters and patrons will \"scorn\" and make a \"mockery\" of people who order Alt beer in Cologne or Kölsch in Düsseldorf. The rivalry has been described as a \"love–hate relationship\". The Köln Guild of Brewers was established in 1396. The Kölsch beer style first appeared in the 1800s and in 1986 the breweries established an appellation under which only breweries in the city are allowed to use the term Kölsch.",
"title": "Culture"
},
{
"paragraph_id": 73,
"text": "The city was home to the internationally famous Ringfest, and now to the C/o pop festival.",
"title": "Culture"
},
{
"paragraph_id": 74,
"text": "In addition, Cologne enjoys a thriving Christmas Market (Weihnachtsmarkt) presence with several locations in the city.",
"title": "Culture"
},
{
"paragraph_id": 75,
"text": "As the largest city in the Rhine-Ruhr metropolitan region, Cologne benefits from a large market structure. In competition with Düsseldorf, the economy of Cologne is primarily based on insurance and media industries, while the city is also an important cultural and research centre and home to a number of corporate headquarters.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "Among the largest media companies based in Cologne are Westdeutscher Rundfunk, RTL Television (with subsidiaries), n-tv, Deutschlandradio, Brainpool TV and publishing houses like J. P. Bachem, Taschen, Tandem Verlag, and M. DuMont Schauberg. Several clusters of media, arts and communications agencies, TV production studios, and state agencies work partly with private and government-funded cultural institutions. Among the insurance companies based in Cologne are Central, DEVK, DKV, Generali Deutschland, Gen Re, Gothaer, HDI Gerling and national headquarters of Axa Insurance, Mitsui Sumitomo Insurance Group and Zurich Financial Services.",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "The German flag carrier Lufthansa and its subsidiary Lufthansa CityLine have their main corporate headquarters in Cologne. The largest employer in Cologne is Ford Europe, which has its European headquarters and a factory in Niehl (Ford-Werke GmbH). Toyota Motorsport GmbH (TMG), Toyota's official motorsports team, responsible for Toyota rally cars, and then Formula One cars, has its headquarters and workshops in Cologne. Other large companies based in Cologne include the REWE Group, TÜV Rheinland, Deutz AG and a number of Kölsch breweries. The largest three Kölsch breweries of Cologne are Reissdorf, Gaffel, and Früh.",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "Historically, Cologne has always been an important trade city, with land, air, and sea connections. The city has five Rhine ports, the second largest inland port in Germany and one of the largest in Europe. Cologne Bonn Airport is the second largest freight terminal in Germany. Today, the Cologne trade fair (Koelnmesse) ranks as a major European trade fair location with over 50 trade fairs and other large cultural and sports events. In 2008 Cologne had 4.31 million overnight stays booked and 2.38 million arrivals. Cologne's largest daily newspaper is the Kölner Stadt-Anzeiger.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "Cologne shows a significant increase in startup companies, especially when considering digital business.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "Cologne has also become the first German city with a population of more than a million people to declare climate emergency.",
"title": "Economy"
},
{
"paragraph_id": 81,
"text": "Road building had been a major issue in the 1920s under the leadership of mayor Konrad Adenauer. The first German limited-access road was constructed after 1929 between Cologne and Bonn. Today, this is the Bundesautobahn 555. In 1965, Cologne became the first German city to be fully encircled by a motorway ring road. Roughly at the same time, a city centre bypass (Stadtautobahn) was planned, but only partially put into effect, due to opposition by environmental groups. The completed section became Bundesstraße (\"Federal Road\") B 55a, which begins at the Zoobrücke (\"Zoo Bridge\") and meets with A 4 and A 3 at the interchange Cologne East. Nevertheless, it is referred to as Stadtautobahn by most locals. In contrast to this, the Nord-Süd-Fahrt (\"North-South-Drive\") was actually completed, a new four/six-lane city centre through-route, which had already been anticipated by planners such as Fritz Schumacher in the 1920s. The last section south of Ebertplatz was completed in 1972.",
"title": "Transport"
},
{
"paragraph_id": 82,
"text": "In 2005, the first stretch of an eight-lane motorway in North Rhine-Westphalia was opened to traffic on Bundesautobahn 3, part of the eastern section of the Cologne Beltway between the interchanges Cologne East and Heumar.",
"title": "Transport"
},
{
"paragraph_id": 83,
"text": "Compared to other German cities, Cologne has a traffic layout that is not very bicycle-friendly. It has repeatedly ranked among the worst in an independent evaluation conducted by the Allgemeiner Deutscher Fahrrad-Club. In 2014, it ranked 36th out of 39 German cities with a population greater than 200,000.",
"title": "Transport"
},
{
"paragraph_id": 84,
"text": "Cologne has a railway service with Deutsche Bahn InterCity and ICE-trains stopping at Köln Hauptbahnhof (Cologne Main Station), Köln Messe/Deutz and Cologne/Bonn Airport. ICE and TGV Thalys high-speed trains link Cologne with Amsterdam, Brussels (in 1h47, 9 departures/day) and Paris (in 3h14, 6 departures/day). There are frequent ICE trains to other German cities, including Frankfurt am Main and Berlin. ICE Trains to London via the Channel Tunnel were planned for 2013.",
"title": "Transport"
},
{
"paragraph_id": 85,
"text": "The Cologne Stadtbahn operated by Kölner Verkehrsbetriebe (KVB) is an extensive light rail system that is partially underground and serves Cologne and a number of neighbouring cities. It evolved from the tram system. Nearby Bonn is linked by both the Stadtbahn and main line railway trains, and occasional recreational boats on the Rhine. Düsseldorf is also linked by S-Bahn trains, which are operated by Deutsche Bahn.",
"title": "Transport"
},
{
"paragraph_id": 86,
"text": "The Rhine-Ruhr S-Bahn has 5 lines which cross Cologne. The S13/S19 runs 24/7 between Cologne Hbf and Cologne/Bonn airport.",
"title": "Transport"
},
{
"paragraph_id": 87,
"text": "There are also frequent buses covering most of the city and surrounding suburbs, and Eurolines coaches to London via Brussels.",
"title": "Transport"
},
{
"paragraph_id": 88,
"text": "Häfen und Güterverkehr Köln (Ports and Goods traffic Cologne, HGK) is one of the largest operators of inland ports in Germany. Ports include Köln-Deutz, Köln-Godorf, and Köln-Niehl I and II.",
"title": "Transport"
},
{
"paragraph_id": 89,
"text": "Cologne's international airport is Cologne/Bonn Airport (CGN). It is also called Konrad Adenauer Airport after Germany's first post-war Chancellor Konrad Adenauer, who was born in the city and was mayor of Cologne from 1917 until 1933. The airport is shared with the neighbouring city of Bonn. Cologne is headquarters to the European Aviation Safety Agency (EASA).",
"title": "Transport"
},
{
"paragraph_id": 90,
"text": "Cologne is home to numerous universities and colleges, and host to some 72,000 students. Its oldest university, the University of Cologne (founded in 1388) is the largest university in Germany, as the Cologne University of Applied Sciences is the largest university of Applied Sciences in the country. The Cologne University of Music and Dance is the largest conservatory in Europe. Foreigners can have German lessons in the VHS (Adult Education Centre).",
"title": "Education"
},
{
"paragraph_id": 91,
"text": "Lauder Morijah School (German: Lauder-Morijah-Schule), a Jewish school in Cologne, previously closed. After Russian immigration increased the Jewish population, the school reopened in 2002.",
"title": "Education"
},
{
"paragraph_id": 92,
"text": "Within Germany, Cologne is known as an important media centre. Several radio and television stations, including Westdeutscher Rundfunk (WDR), RTL and VOX, have their headquarters in the city. Film and TV production is also important. The city is \"Germany's capital of TV crime stories\". A third of all German TV productions are made in the Cologne region. Furthermore, the city hosts the Cologne Comedy Festival, which is considered to be the largest comedy festival in mainland Europe.",
"title": "Media"
},
{
"paragraph_id": 93,
"text": "Cologne hosts the football club 1. FC Köln, who play in the 1. Bundesliga (first division). They play their home matches in RheinEnergieStadion which also hosted five matches of the 2006 FIFA World Cup. The International Olympic Committee and the International Association of Sports and Leisure Facilities gave RheinEnergieStadion a bronze medal for \"being one of the best sporting venues in the world\". The city also hosts the two football clubs FC Viktoria Köln and SC Fortuna Köln, who currently play in the 3. Liga (third division) and the Regionalliga West (fourth division) respectively. Cologne's oldest football club 1. FSV Köln 1899 is playing with its amateur team in the Verbandsliga (sixth division).",
"title": "Sports"
},
{
"paragraph_id": 94,
"text": "Cologne also is home of the ice hockey team Kölner Haie, which is playing in the highest ice hockey league in Germany, the Deutsche Eishockey Liga. They are based at Lanxess Arena.",
"title": "Sports"
},
{
"paragraph_id": 95,
"text": "Several horse races per year are held at Cologne-Weidenpesch Racecourse since 1897, the annual Cologne Marathon was started in 1997 and the classic cycling race Rund um Köln is organised in Cologne since 1908. The city also has a long tradition in rowing, being home of some of Germany's oldest regatta courses and boat clubs, such as the Kölner Rudergesellschaft 1891 or the Kölner Ruderverein von 1877 in the Rodenkirchen district.",
"title": "Sports"
},
{
"paragraph_id": 96,
"text": "Japanese automotive manufacturer Toyota has their major motorsport facility known by the name Toyota Motorsport GmbH, which is located in the Marsdorf district, and is responsible for Toyota's major motorsport development and operations, which in the past included the FIA Formula One World Championship, the FIA World Rally Championship and the Le Mans Series. Currently they are working on Toyota's team Toyota Gazoo Racing which competes in the FIA World Endurance Championship.",
"title": "Sports"
},
{
"paragraph_id": 97,
"text": "Cologne is considered \"the secret golf capital of Germany\". The first golf club in North Rhine-Westphalia was founded in Cologne in 1906. The city offers the most options and top events in Germany.",
"title": "Sports"
},
{
"paragraph_id": 98,
"text": "The city has hosted several athletic events which includes the 2005 FIFA Confederations Cup, 2006 FIFA World Cup, 2007 World Men's Handball Championship, 2010 and 2017 Ice Hockey World Championships and 2010 Gay Games.",
"title": "Sports"
},
{
"paragraph_id": 99,
"text": "Since 2014, the city has hosted ESL One Cologne, one of the biggest CS GO tournaments held annually in July/August at Lanxess Arena.",
"title": "Sports"
},
{
"paragraph_id": 100,
"text": "Furthermore Cologne is home of the Sport-Club Colonia 1906, Germany's oldest boxing club, and the Kölner Athleten-Club 1882, the world's oldest active weightlifting club.",
"title": "Sports"
},
{
"paragraph_id": 101,
"text": "Cologne is twinned with:",
"title": "Twin towns – sister cities"
},
{
"paragraph_id": 102,
"text": "Cologne also cooperates with:",
"title": "Twin towns – sister cities"
}
]
| Cologne is the largest city of the German state of North Rhine-Westphalia and the fourth-most populous city of Germany with nearly 1.1 million inhabitants in the city proper and over 3.1 million people in the Cologne Bonn urban region. Centered on the left (west) bank of the Rhine, Cologne is about 35 km (22 mi) southeast of the North Rhine-Westphalia state capital Düsseldorf and 25 km (16 mi) northwest of Bonn, the former capital of West Germany. The city's medieval Catholic Cologne Cathedral is the third-tallest church and tallest cathedral in the world. It was constructed to house the Shrine of the Three Kings and is a globally recognized landmark and one of the most visited sights and pilgrimage destinations in Europe. The cityscape is further shaped by the Twelve Romanesque churches of Cologne, and Cologne is famous for Eau de Cologne, which has been produced in the city since 1709; "cologne" has since come to be a generic term. Cologne was founded and established in Germanic Ubii territory in the 1st century CE as the Roman Colonia Agrippina, hence its name. Agrippina was later dropped, and Colonia became the name of the city in its own right, which developed into modern German as Köln. Cologne, the French version of the city's name, has become standard in English as well. Cologne functioned as the capital of the Roman province of Germania Inferior and as the headquarters of the Roman military in the region until occupied by the Franks in 462. During the Middle Ages the city flourished thanks to its location on one of the most important major trade routes between eastern and western Europe. Cologne was a free imperial city of the Holy Roman Empire and one of the major members of the Hanseatic League trading confederation. It was one of the largest European cities in medieval and Renaissance times. Prior to World War II, the city had undergone occupations by the French (1794–1815) and the British (1918–1926), and was part of Prussia beginning in 1815. Cologne was one of the most heavily bombed cities in Germany during World War II. The bombing reduced the population by 93%, mainly due to evacuation, and destroyed around 80% of the millennia-old city center. The post-war rebuilding has resulted in a mixed cityscape, restoring most major historic landmarks like city gates and churches. The city boasts around 9,000 historic buildings. Cologne is a major cultural center for the Rhineland; it hosts more than 30 museums and hundreds of galleries. There are many institutions of higher education, most notably the University of Cologne, one of Europe's oldest and largest universities; the Technical University of Cologne, Germany's largest university of applied sciences; and the German Sport University Cologne. It hosts three Max Planck science institutes and is a major research hub for the aerospace industry, with the German Aerospace Center and the European Astronaut Centre headquarters. It also has a significant chemical and automobile industry. Cologne Bonn Airport is a regional hub, the main airport for the region being Düsseldorf Airport. The Cologne Trade Fair hosts a number of trade shows. | 2001-11-08T14:13:07Z | 2023-12-31T14:16:19Z | [
"Template:Anchor",
"Template:Cities in Germany",
"Template:Hanseatic League",
"Template:Cite web",
"Template:Weather box",
"Template:Cite book",
"Template:For timeline",
"Template:CN",
"Template:Div col",
"Template:Cite news",
"Template:Wikisource1911Enc",
"Template:Lang-de",
"Template:See also",
"Template:About",
"Template:Decrease",
"Template:Official",
"Template:Geographic location",
"Template:Short description",
"Template:Increase",
"Template:Citation needed",
"Template:ISBN",
"Template:Use dmy dates",
"Template:Convert",
"Template:Historical populations",
"Template:Election table",
"Template:Wide image",
"Template:Portal",
"Template:Redirect",
"Template:Sister project links",
"Template:Main",
"Template:Flagicon",
"Template:Webarchive",
"Template:Citation",
"Template:Lang",
"Template:IPA-ksh",
"Template:^",
"Template:IPA-de",
"Template:Authority control",
"Template:Respell",
"Template:Lang-ksh",
"Template:Ill",
"Template:In lang",
"Template:Districts of Cologne",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Infobox German location",
"Template:Circa",
"Template:Flag",
"Template:Steady",
"Template:Colend",
"Template:Germany districts north rhine-westphalia",
"Template:IPAc-en",
"Template:Free Imperial Cities"
]
| https://en.wikipedia.org/wiki/Cologne |
6,188 | Buddhist cuisine | Buddhist cuisine is an Asian cuisine that is followed by monks and many believers from areas historically influenced by Mahayana Buddhism. It is vegetarian or vegan, and it is based on the Dharmic concept of ahimsa (non-violence). Vegetarianism is common in other Dharmic faiths such as Hinduism, Jainism and Sikhism, as well as East Asian religions like Taoism. While monks, nuns and a minority of believers are vegetarian year-round, many believers follow the Buddhist vegetarian diet for celebrations.
In Buddhism, cooking is seen as a spiritual practice that produces the nourishment which the body needs to work hard and meditate. The origin of "Buddhist food" as a distinct sub-style of cuisine is tied to monasteries, where one member of the community would have the duty of being the head cook and supplying meals that paid respect to the strictures of Buddhist precepts. Temples that were open to visitors from the general public might also serve meals to them, and a few temples effectively run functioning restaurants on the premises. In Japan, this culinary custom, known as shōjin ryōri (精進料理) or devotion cuisine, is commonly offered at numerous temples, notably in Kyoto. This centuries-old culinary tradition, primarily associated with religious contexts, is seldom encountered beyond places like temples, religious festivals, and funerals. A more recent version, more Chinese in style, is prepared by the Ōbaku school of zen, and known as fucha ryōri (普茶料理); this is served at the head temple of Manpuku-ji, as well as various subtemples. In modern times, commercial restaurants have also latched on to the style, catering both to practicing and non-practicing lay people.
Most of the dishes considered to be uniquely Buddhist are vegetarian, but not all Buddhist traditions require vegetarianism of lay followers or clergy. Vegetarian eating is primarily associated with the East and Southeast Asian tradition in China, Vietnam, Japan, and Korea where it is commonly practiced by clergy and may be observed by laity on holidays or as a devotional practice.
In the Mahayana tradition, several sutras of the Mahayana canon contain explicit prohibitions against consuming meat, including sections of the Lankavatara Sutra and Surangama Sutra. The monastic community in Chinese Buddhism, Vietnamese Buddhism and most of Korean Buddhism strictly adhere to vegetarianism.
Theravada Buddhist monks and nuns consume food by gathering alms themselves, and generally must eat whatever foods are offered to them, including meat. The exception to this alms rule is when monks and nuns have seen, heard or known that animal(s) have been specifically killed to feed the alms-seeker, in which case consumption of such meat would be karmically negative, as well as meat from certain animals, such as dogs and snakes, that were regarded as impure in ancient India. The same restriction is also followed by some lay Buddhists and is known as the consumption of "triply clean meat" (三净肉). The Pāli scriptures also indicate that the Buddha refused a proposal by his traitorous disciple Devadatta to mandate vegetarianism in the monastic precepts.
Tibetan Buddhism has long accepted that the practical difficulties in obtaining vegetables and grains within most of Tibet make it impossible to insist upon vegetarianism; however, many leading Tibetan Buddhist teachers agree upon the great worth of practicing vegetarianism whenever and wherever possible, such as Chatral Rinpoche, a lifelong advocate of vegetarianism who famously released large numbers of fish caught for food back into the ocean once a year, and who wrote about the practice of saving lives.
Both Mahayana and Theravada Buddhists consider that one may practice vegetarianism as part of cultivating the Bodhisattva's paramitas.
In addition to the ban on garlic, practically all Mahayana monastics in China, Korea, Vietnam and Japan specifically avoid eating strong-smelling plants, traditionally asafoetida, shallot, mountain leek and Allium chinense, which together with garlic are referred to as wǔ hūn (五葷, or 'Five Acrid and Strong-smelling Vegetables') or wǔ xīn (五辛 or 'Five Spices'), as they tend to excite the senses. This is based on teachings found in the Brahmajala Sutra, the Surangama Sutra and the Lankavatara Sutra (chapter eight). In modern times this rule is often interpreted to include other vegetables of the onion genus, as well as coriander. The origin of this additional restriction is from the Indic region and can still be found among some believers of Hinduism and Jainism. Some Taoists also have this additional restriction, but the list of restricted plants differs from the Buddhist list.
The food that a strict Buddhist takes, if not a vegetarian, is also specific. For many Chinese Buddhists, beef and the consumption of large animals and exotic species are avoided, in addition to the aforementioned "triply clean meat" rule. A less widely known restriction is the abstinence from eating animal offal (organ meat). This is known as xiàshui (下水), not to be confused with the term for sewage.
Alcohol and other drugs are also avoided by many Buddhists because of their effects on the mind and "mindfulness". It is part of the Five Precepts which dictate that one is not to consume "addictive materials". The definition of "addictive" depends on each individual but most Buddhists consider alcohol, tobacco and drugs other than medicine to be addictive. Although caffeine is now also known to be addictive, caffeinated drinks and especially tea are not included under this restriction; tea in particular is considered to be healthful and beneficial and its mild stimulant effect desirable. There are many legends about tea. Among meditators it is considered to keep the person alert and awake without overexcitement.
In theory and practice, many regional styles of cooking may be adapted to be "Buddhist" as long as the cook, with the above restrictions in mind, prepares the food, generally in simple preparations, with expert attention to its quality, wholesomeness and flavor. Often working on a tight budget, the monastery cook would have to make the most of whatever ingredients were available.
In Tenzo kyokun ("Instructions for the Zen Cook"), Soto Zen founder Eihei Dogen wrote the following about the Zen attitude toward food:
In preparing food, it is essential to be sincere and to respect each ingredient regardless of how coarse or fine it is. (...) A rich buttery soup is not better as such than a broth of wild herbs. In handling and preparing wild herbs, do so as you would the ingredients for a rich feast, wholeheartedly, sincerely, clearly. When you serve the monastic assembly, they and you should taste only the flavour of the Ocean of Reality, the Ocean of unobscured Awake Awareness, not whether or not the soup is creamy or made only of wild herbs. In nourishing the seeds of living in the Way, rich food and wild grass are not separate.
Reflecting its status as the staple food in most parts of East Asia where Buddhism is widely practiced, rice features heavily in the Buddhist meal, especially in the form of rice porridge or congee as the usual morning meal. Noodles and other grains may often be served as well. Vegetables of all sorts are generally either stir-fried or cooked in vegetarian broth with seasonings, and may be eaten with various sauces. Traditionally, eggs and dairy are not permitted. Seasonings are informed by whatever is common in the local region; for example, soy sauce and vegan dashi figure strongly in Japanese monastery food, while curry and tương (as a vegetarian replacement for fish sauce) may be prominent in Southeast Asia. Sweets and desserts are not often consumed, but are permitted in moderation and may be served on special occasions, such as in the context of a tea ceremony in the Zen tradition.
Buddhist vegetarian chefs have become extremely creative in imitating meat using prepared wheat gluten, also known as seitan, kao fu (烤麸) or wheat meat, soy (such as tofu or tempeh), agar, konnyaku and other plant products. Some of their recipes are the oldest and most-refined meat analogues in the world. Soy and wheat gluten are very versatile materials, because they can be manufactured into various shapes and textures, and they absorb flavorings (including, but not limited to, meat-like flavorings), while having very little flavor of their own. With the proper seasonings, they can mimic various kinds of meat quite closely.
Some of these Buddhist vegetarian chefs work in the many monasteries and temples that serve allium-free and mock-meat (also known as 'meat analogue') dishes to monks and visitors, ranging from non-Buddhists who stay for a few hours or days to lay Buddhists who stay overnight for up to weeks or months. Many Buddhist restaurants also serve vegetarian, vegan, non-alcoholic or allium-free dishes.
Some Buddhists eat vegetarian food on the 1st and 15th days of the lunar month (lenten days), on Chinese New Year's Eve, and on saint and ancestral holy days. To cater to this type of customer, as well as to full-time vegetarians, the menu of a Buddhist vegetarian restaurant usually shows no difference from that of a typical Chinese or East Asian restaurant, except that in recipes originally made to contain meat, a soy chicken substitute might be served instead.
According to cookbooks published in English, formal monastery meals in the Zen tradition generally follow a pattern of "three bowls" in descending size. The first and largest bowl is a grain-based dish such as rice, noodles or congee; the second contains the protein dish which is often some form of stew or soup; the third and smallest bowl is a vegetable dish or a salad. | [
{
"paragraph_id": 0,
"text": "Buddhist cuisine is an Asian cuisine that is followed by monks and many believers from areas historically influenced by Mahayana Buddhism. It is vegetarian or vegan, and it is based on the Dharmic concept of ahimsa (non-violence). Vegetarianism is common in other Dharmic faiths such as Hinduism, Jainism and Sikhism, as well as East Asian religions like Taoism. While monks, nuns and a minority of believers are vegetarian year-round, many believers follow the Buddhist vegetarian diet for celebrations.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Buddhists believe that cooking is seen as a spiritual practice that produces the nourishment which the body needs to work hard and meditate. The origin of \"Buddhist food\" as a distinct sub-style of cuisine is tied to monasteries, where one member of the community would have the duty of being the head cook and supplying meals that paid respect to the strictures of Buddhist precepts. Temples that were open to visitors from the general public might also serve meals to them and a few temples effectively run functioning restaurants on the premises. In Japan, this culinary custom, recognized as shōjin ryōri (精進料理) or devotion cuisine, is commonly offered at numerous temples, notably in Kyoto. This centuries-old culinary tradition, primarily associated with religious contexts, is seldom encountered beyond places like temples, religious festivals, and funerals. A more recent version, more Chinese in style, is prepared by the Ōbaku school of zen, and known as fucha ryōri (普茶料理); this is served at the head temple of Manpuku-ji, as well as various subtemples. In modern times, commercial restaurants have also latched on to the style, catering both to practicing and non-practicing lay people.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Most of the dishes considered to be uniquely Buddhist are vegetarian, but not all Buddhist traditions require vegetarianism of lay followers or clergy. Vegetarian eating is primarily associated with the East and Southeast Asian tradition in China, Vietnam, Japan, and Korea where it is commonly practiced by clergy and may be observed by laity on holidays or as a devotional practice.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 3,
"text": "In the Mahayana tradition, several sutras of the Mahayana canon contain explicit prohibitions against consuming meat, including sections of the Lankavatara Sutra and Surangama Sutra. The monastic community in Chinese Buddhism, Vietnamese Buddhism and most of Korean Buddhism strictly adhere to vegetarianism.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 4,
"text": "Theravada Buddhist monks and nuns gather their food as alms and generally must eat whatever foods are offered to them, including meat. The exception to this alms rule is when monks and nuns have seen, heard or known that an animal has been specifically killed to feed the alms-seeker, in which case consuming such meat would be karmically negative; meat from certain animals regarded as impure in ancient India, such as dogs and snakes, is also excluded. The same restriction is followed by some lay Buddhists and is known as the consumption of \"triply clean meat\" (三净肉). The Pāli scriptures also record that the Buddha refused a proposal by his treacherous disciple Devadatta to mandate vegetarianism in the monastic precepts.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 5,
"text": "Tibetan Buddhism has long accepted that the practical difficulties in obtaining vegetables and grains within most of Tibet make it impossible to insist upon vegetarianism; however, many leading Tibetan Buddhist teachers agree upon the great worth of practicing vegetarianism whenever and wherever possible, such as Chatral Rinpoche, a lifelong advocate of vegetarianism who famously released large numbers of fish caught for food back into the ocean once a year, and who wrote about the practice of saving lives.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 6,
"text": "Both Mahayana and Theravada Buddhists consider that one may practice vegetarianism as part of cultivating a Bodhisattva's pāramitās.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 7,
"text": "In addition to the ban on garlic, practically all Mahayana monastics in China, Korea, Vietnam and Japan specifically avoid eating strong-smelling plants, traditionally asafoetida, shallot, mountain leek and Allium chinense, which together with garlic are referred to as wǔ hūn (五葷, or 'Five Acrid and Strong-smelling Vegetables') or wǔ xīn (五辛 or 'Five Spices'), as they tend to excite the senses. This is based on teachings found in the Brahmajala Sutra, the Surangama Sutra and the Lankavatara Sutra (chapter eight). In modern times this rule is often interpreted to include other vegetables of the onion genus, as well as coriander. This additional restriction originated in the Indic region and can still be found among some believers of Hinduism and Jainism. Some Taoists observe a similar restriction, but their list of restricted plants differs from the Buddhist list.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 8,
"text": "The diet of a strict Buddhist who is not vegetarian is also specific. Many Chinese Buddhists avoid beef and the consumption of large animals and exotic species, in addition to observing the aforementioned \"triply clean meat\" rule. A less widely known restriction is abstinence from eating animal offal (organ meat), known as xiàshui (下水), not to be confused with the term for sewage.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 9,
"text": "Alcohol and other drugs are also avoided by many Buddhists because of their effects on the mind and \"mindfulness\". It is part of the Five Precepts which dictate that one is not to consume \"addictive materials\". The definition of \"addictive\" depends on each individual but most Buddhists consider alcohol, tobacco and drugs other than medicine to be addictive. Although caffeine is now also known to be addictive, caffeinated drinks and especially tea are not included under this restriction; tea in particular is considered to be healthful and beneficial and its mild stimulant effect desirable. There are many legends about tea. Among meditators it is considered to keep the person alert and awake without overexcitement.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 10,
"text": "In theory and practice, many regional styles of cooking may be adapted to be \"Buddhist\" as long as the cook, with the above restrictions in mind, prepares the food, generally in simple preparations, with expert attention to its quality, wholesomeness and flavor. Often working on a tight budget, the monastery cook would have to make the most of whatever ingredients were available.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 11,
"text": "In Tenzo kyokun (\"Instructions for the Zen Cook\"), Soto Zen founder Eihei Dogen wrote the following about the Zen attitude toward food:",
"title": "Philosophies governing food"
},
{
"paragraph_id": 12,
"text": "In preparing food, it is essential to be sincere and to respect each ingredient regardless of how coarse or fine it is. (...) A rich buttery soup is not better as such than a broth of wild herbs. In handling and preparing wild herbs, do so as you would the ingredients for a rich feast, wholeheartedly, sincerely, clearly. When you serve the monastic assembly, they and you should taste only the flavour of the Ocean of Reality, the Ocean of unobscured Awake Awareness, not whether or not the soup is creamy or made only of wild herbs. In nourishing the seeds of living in the Way, rich food and wild grass are not separate.",
"title": "Philosophies governing food"
},
{
"paragraph_id": 13,
"text": "Reflecting its status as the staple food in most parts of East Asia where Buddhism is widely practiced, rice features heavily in the Buddhist meal, especially in the form of rice porridge or congee as the usual morning meal. Noodles and other grains may often be served as well. Vegetables of all sorts are generally either stir-fried or cooked in vegetarian broth with seasonings, and may be eaten with various sauces. Traditionally, eggs and dairy are not permitted. Seasonings are informed by whatever is common in the local region; for example, soy sauce and vegan dashi figure strongly in Japanese monastery food, while curry and tương (as a vegetarian replacement for fish sauce) may be prominent in Southeast Asia. Sweets and desserts are not often consumed, but are permitted in moderation and may be served on special occasions, such as in the context of a tea ceremony in the Zen tradition.",
"title": "Ingredients"
},
{
"paragraph_id": 14,
"text": "Buddhist vegetarian chefs have become extremely creative in imitating meat using prepared wheat gluten, also known as seitan, kao fu (烤麸) or wheat meat, soy (such as tofu or tempeh), agar, konnyaku and other plant products. Some of their recipes are the oldest and most-refined meat analogues in the world. Soy and wheat gluten are very versatile materials, because they can be manufactured into various shapes and textures, and they absorb flavorings (including, but not limited to, meat-like flavorings), while having very little flavor of their own. With the proper seasonings, they can mimic various kinds of meat quite closely.",
"title": "Ingredients"
},
{
"paragraph_id": 15,
"text": "Some of these Buddhist vegetarian chefs work in the many monasteries and temples that serve allium-free and mock-meat (also known as 'meat analogue') dishes to monks and visitors, ranging from non-Buddhists who stay for a few hours or days to lay Buddhists who stay overnight for up to weeks or months. Many Buddhist restaurants also serve vegetarian, vegan, non-alcoholic or allium-free dishes.",
"title": "Ingredients"
},
{
"paragraph_id": 16,
"text": "Some Buddhists eat vegetarian food on the 1st and 15th days of the lunar month (lenten days), on Chinese New Year's Eve, and on saint and ancestral holy days. To cater to this type of customer, as well as to full-time vegetarians, the menu of a Buddhist vegetarian restaurant usually shows no difference from that of a typical Chinese or East Asian restaurant, except that in recipes originally made to contain meat, a soy chicken substitute might be served instead.",
"title": "Ingredients"
},
{
"paragraph_id": 17,
"text": "According to cookbooks published in English, formal monastery meals in the Zen tradition generally follow a pattern of \"three bowls\" in descending size. The first and largest bowl is a grain-based dish such as rice, noodles or congee; the second contains the protein dish which is often some form of stew or soup; the third and smallest bowl is a vegetable dish or a salad.",
"title": "Variations by sect or region"
}
]
| Buddhist cuisine is an Asian cuisine that is followed by monks and many believers from areas historically influenced by Mahayana Buddhism. It is vegetarian or vegan, and it is based on the Dharmic concept of ahimsa (non-violence). Vegetarianism is common in other Dharmic faiths such as Hinduism, Jainism and Sikhism, as well as East Asian religions like Taoism. While monks, nuns and a minority of believers are vegetarian year-round, many believers follow the Buddhist vegetarian diet for celebrations. Buddhists believe that cooking is seen as a spiritual practice that produces the nourishment which the body needs to work hard and meditate. The origin of "Buddhist food" as a distinct sub-style of cuisine is tied to monasteries, where one member of the community would have the duty of being the head cook and supplying meals that paid respect to the strictures of Buddhist precepts. Temples that were open to visitors from the general public might also serve meals to them and a few temples effectively run functioning restaurants on the premises. In Japan, this culinary custom, recognized as shōjin ryōri (精進料理) or devotion cuisine, is commonly offered at numerous temples, notably in Kyoto. This centuries-old culinary tradition, primarily associated with religious contexts, is seldom encountered beyond places like temples, religious festivals, and funerals. A more recent version, more Chinese in style, is prepared by the Ōbaku school of zen, and known as fucha ryōri (普茶料理); this is served at the head temple of Manpuku-ji, as well as various subtemples. In modern times, commercial restaurants have also latched on to the style, catering both to practicing and non-practicing lay people. | 2001-08-22T22:48:26Z | 2023-11-08T11:05:33Z | [
"Template:Cuisine of China",
"Template:Main",
"Template:Cite web",
"Template:Short description",
"Template:Citation needed",
"Template:Unreferenced section",
"Template:Div col end",
"Template:Cite book",
"Template:Buddhism topics",
"Template:Vegetarianism",
"Template:Cuisines",
"Template:More citations needed",
"Template:Nihongo",
"Template:Commons category",
"Template:Authority control",
"Template:Chinese",
"Template:Portal",
"Template:Div col",
"Template:Reflist",
"Template:Diets"
]
| https://en.wikipedia.org/wiki/Buddhist_cuisine |
6,191 | Charles V | Charles V may refer to: | [
{
"paragraph_id": 0,
"text": "Charles V may refer to:",
"title": ""
}
]
| Charles V may refer to: Charles V, Holy Roman Emperor (1500–1558)
Charles V of Naples (1661–1700), better known as Charles II of Spain
Charles V of France (1338–1380), called the Wise
Charles V, Duke of Lorraine (1643–1690)
Infante Carlos of Spain, Count of Molina (1788–1855), first Carlist pretender to the throne of Spain | 2022-01-03T13:16:43Z | [
"Template:Hndis"
]
| https://en.wikipedia.org/wiki/Charles_V |
|
6,193 | Constantin von Tischendorf | Lobegott Friedrich Constantin (von) Tischendorf (18 January 1815 – 7 December 1874) was a German biblical scholar. In 1844, he discovered the world's oldest and most complete Bible dated to around the mid-4th century and called Codex Sinaiticus after Saint Catherine's Monastery at Mount Sinai, where Tischendorf discovered it.
Tischendorf was made an honorary doctor by the University of Oxford on 16 March 1865, and by the University of Cambridge on 9 March 1865 following his discovery. While a student gaining his academic degree in the 1840s, he earned international recognition when he deciphered the Codex Ephraemi Rescriptus, a 5th-century Greek manuscript of the New Testament.
Tischendorf was born in Lengenfeld, Saxony, the son of a forensic physician. After attending primary school in Lengenfeld, he went to grammar school in nearby Plauen. From Easter 1834, having achieved excellent marks at school, he studied theology and philosophy at the University of Leipzig.
At Leipzig he was mainly influenced by JGB Winer, and he began to take special interest in New Testament criticism. Winer's influence gave him the desire to use the oldest manuscripts in order to compile the text of the New Testament as close to the original as possible. Despite his father's death in 1835 and his mother's just a year later, he was still able to achieve his doctorate in 1838, before accepting a tutoring job in the home of Reverend Ferdinand Leberecht Zehme in Grossstadeln, where he met and fell in love with the clergyman's daughter Angelika. He published a volume of poetry in 1838, Maiknospen (Buds of May), and Der junge Mystiker (The Young Mystic) was published under a pseudonym in 1839. At this time he also began his first critical edition of the New Testament in Greek, which was to become his life's work.
After a journey through southern Germany and Switzerland, and a visit to Strassburg, he returned to Leipzig to begin work on a critical study of the New Testament text.
In 1840, he qualified as university lecturer in theology with a dissertation on the recensions of the New Testament text, the main part of which reappeared the following year in the prolegomena to his first edition of the Greek New Testament. These early textual studies convinced him of the absolute necessity of new and more exact collations of manuscripts.
From October 1840 until January 1843 he was in Paris, busy with the treasures of the Bibliothèque Nationale, eking out his scanty means by making collations for other scholars, and producing for the publisher, Firmin Didot, several editions of the Greek New Testament, one of them exhibiting the form of the text corresponding most closely to the Vulgate. His second edition retracted the more precarious readings of the first, and included a statement of critical principles that is a landmark for evolving critical studies of Biblical texts.
A great triumph of these laborious months was the decipherment of the palimpsest Codex Ephraemi Syri Rescriptus, of which the New Testament part was printed before he left Paris, and the Old Testament in 1845. His success in dealing with a manuscript that, having been over-written with other works of Ephrem the Syrian, had been mostly illegible to earlier collators, made him more widely known and gained support for more extended critical expeditions. He now became professor extraordinarius at Leipzig, where he was married in 1845. He also began to publish Reise in den Orient, an account of his travels in the east (in 2 vols., 1845–46, translated as Travels in the East in 1847). Even though he was an expert in reading the text of a palimpsest (a manuscript whose original writing has been scraped off and written over), he was not able to identify the value or meaning of the Archimedes Palimpsest, a torn leaf of which he owned; after his death it was sold to Cambridge University Library.
Tischendorf briefly visited the Netherlands in 1841 and England in 1842. In 1843 he visited Italy for thirteen months, before continuing on to Egypt, Sinai, and the Levant, returning via Vienna and Munich.
In 1844 Tischendorf travelled for the first time to Saint Catherine's Monastery at the foot of Mount Sinai in Egypt, where he found a portion of what would later be hailed as the oldest complete known New Testament.
Of the many pages contained in an old wicker basket (the kind the monastery used to haul in its visitors, as was customary in unsafe territories), he was given 43 pages containing a part of the Old Testament as a present. He donated those 43 pages to King Frederick Augustus II of Saxony (reigned 1836–1854), to honour him and to recognise his patronage as the funder of Tischendorf's journey. (Tischendorf held a position as Theological Professor at Leipzig University, also under the patronage of Frederick Augustus II.) Leipzig University put two of the leaves on display in 2011.
Tischendorf reported in his 1865 book Wann wurden unsere Evangelien verfasst (translated to English in 1866 as When Were Our Gospels Written), in the section "The Discovery of the Sinaitic Manuscript", that he found, in a trash basket, forty-three sheets of parchment of an ancient copy of the Greek Old Testament, which the monks were reportedly using to start fires. Horrified, Tischendorf asked if he could have them. He deposited them at the University of Leipzig under the title of the Codex Friderico-Augustanus, a name given in honour of his patron, Frederick Augustus II, king of Saxony. The fragments were published in 1846, although Tischendorf kept the place of discovery a secret.
Many have expressed skepticism at the historical accuracy of this report of saving a 1500-year-old parchment from the flames. J. Rendel Harris referred to the story as a myth. The Tischendorf Lesebuch (see References) records that the librarian Kyrillos mentioned to Tischendorf that the contents of the basket had already twice been submitted to the fire; according to Tischendorf's own account, the basket held damaged scriptures, and what he saw was apparently its third filling. In 1853 Tischendorf made a second trip to the Sinai monastery but made no new discoveries. He returned a third time in January 1859 under the patronage of Tsar Alexander II of Russia, with the active aid of the Russian government, to find more of the Codex Frederico-Augustanus or similar ancient Biblical texts. On 4 February, the last day of his visit, he was shown a text which he recognized as significant – the Codex Sinaiticus – a Greek manuscript of the complete New Testament and parts of the Old Testament dating to the 4th century.
Tischendorf persuaded the monks to present the manuscript to Tsar Alexander II of Russia, and at the Tsar's expense it was published in 1862 (in four folio volumes). Those ignorant of the details of his discovery of the Codex Sinaiticus accused Tischendorf of buying manuscripts from unsuspecting monastery librarians at low prices. Indeed, he was never rich, but he staunchly defended the rights of the monks at Saint Catherine's Monastery when he persuaded them eventually to send the manuscript to the Tsar. This took approximately 10 years, because the abbot of St Catherine's had to be re-elected and confirmed in office in Cairo and in Jerusalem, and during those 10 years no one in the monastery had the authority to hand over any documents. The manuscript was handed over in due course following a signed and sealed letter to Tsar Alexander II (the Schenkungsurkunde, or deed of gift). Even so, the monks of Mt. Sinai still display a receipt-letter from Tischendorf promising to return the manuscript to them in the event that the donation could not be completed; this token letter was to be destroyed once the Schenkungsurkunde was finally issued. The deed of gift regulated the exchange of the Codex with the Tsar in return for 9,000 rubles and protection of the monastery's Romanian estates, the Tsar being seen as the protector of Greek Orthodox Christians. Thought lost since the Russian revolution, the Schenkungsurkunde resurfaced in St Petersburg in 2003, having long before been commented upon by other scholars such as Kurt Aland. The monastery has disputed the existence of the deed of gift since the British Library was named as the new owner of the Codex, but following its rediscovery by the National Library of Russia its existence can no longer seriously be disputed.
In 1869 the Tsar awarded Tischendorf the style of "von" Tischendorf as a Russian noble. 327 facsimile editions of the Codex were printed in Leipzig for the Tsar (instead of a salary for his three years of work, the Tsar gave Tischendorf 100 copies for reselling) in order to celebrate the 1000th anniversary of the traditional foundation of the Rus' state in 862 with the publication of this remarkable find. Producing the facsimile, which used special print characters for each of the four scribes of the Codex Sinaiticus, required shift work, and the exhausting labour, carried on for months and often through the night, contributed to Tischendorf's early death. Thus the Codex found its way to the Imperial Library at St. Petersburg.
When the 4-volume luxury edition of the Sinai Bible was completed in 1862, Tischendorf presented the original ancient manuscript to Emperor Alexander II. Meanwhile, the question of transferring the manuscript to the full possession of the Russian Sovereign remained unresolved for some years. In 1869, the new Archbishop of Sinai, Callistratus, and the monastic community signed the official certificate presenting the manuscript to the Tsar. The Russian Government, in turn, bestowed 9,000 rubles on the Monastery and decorated the Archbishop and some of the brethren with orders. In 1933 the Soviet Government sold the Codex Sinaiticus for 100,000 pounds to the British Museum in London, England. The official certificate, with signatures in Russian, French and Greek sections, was rediscovered in St Petersburg.
In the winter of 1849 appeared the first edition of his great work, now titled Novum Testamentum Graece. Ad antiquos testes recensuit. Apparatum criticum multis modis (translated as Greek New Testament. Reviewed against the ancient witnesses. With a critical apparatus of many kinds), containing his canons of criticism, with examples of their application that remain instructive for students today:
Basic rule: "The text is only to be sought from ancient evidence, and especially from Greek manuscripts, but without neglecting the testimonies of versions and fathers."
These were partly the result of the tireless travels he had begun in 1839 in search of unread manuscripts of the New Testament, "to clear up in this way," he wrote, "the history of the sacred text, and to recover if possible the genuine apostolic text which is the foundation of our faith."
In 1850 appeared his edition of the Codex Amiatinus (corrected in 1854) and of the Septuagint version of the Old Testament (7th ed., 1887); in 1852, amongst other works, his edition of the Codex Claromontanus. In 1859, he was named professor ordinarius of theology and of Biblical paleography, this latter professorship being specially created for him; and another book of travel, Aus dem heiligen Lande, appeared in 1862. Tischendorf's Eastern journeys were rich enough in other discoveries to merit the highest praise.
Besides his fame as a scholar, he was a friend of both Robert Schumann, with whom he corresponded, and Felix Mendelssohn, who dedicated a song to him. His colleague Samuel Prideaux Tregelles wrote warmly of their mutual interest in textual scholarship. His personal library, purchased after his death, eventually came to the University of Glasgow, where a commemorative exhibition of books from his library was held in 1974 and can be accessed by the public.
Lobegott Friedrich Constantin (von) Tischendorf died in Leipzig on 7 December 1874, aged 59.
The Codex Sinaiticus contains a 4th-century manuscript of New Testament texts. Two other Bibles of similar age exist, though they are less complete: Codex Vaticanus in the Vatican Library and Codex Alexandrinus, currently owned by the British Library. The Codex Sinaiticus is deemed by some to be the most important surviving New Testament manuscript, as no older manuscript is as nearly complete as the Codex. The codex can be viewed in the British Library in London, or as a digitized version on the Internet.
Throughout his life Tischendorf sought old biblical manuscripts, as he saw it as his task to give theology a Greek New Testament which was based on the oldest possible scriptures. He intended to be as close as possible to the original sources. Tischendorf's greatest discovery was in the monastery of Saint Catherine on the Sinai Peninsula, which he visited in May 1844, and again in 1853 and 1859 (as Russian envoy).
In 1862 Tischendorf published the text of the Codex Sinaiticus for the 1000th Anniversary of the Russian Monarchy in both a lavish four-volume facsimile edition and a less costly text edition, to enable all scholars to have access to the Codex.
Tischendorf pursued a constant course of editorial labours, mainly on the New Testament, until he was broken down by overwork in 1873. His motive, as explained in a publication on Tischendorf's letters by Prof. Christfried Boettrich (Leipzig University, Professor of Theology), was to prove scientifically that the words of the Bible were faithfully transmitted over the centuries.
His magnum opus was the "Critical Edition of the New Testament."
The great edition, of which the text and apparatus appeared in 1869 and 1872, he himself called the editio viii; but this number rises to twenty or twenty-one if mere reprints from stereotype plates and the minor editions of his great critical texts are included; posthumous prints bring the total to forty-one. Four main recensions of Tischendorf's text may be distinguished, dating respectively from his editions of 1841, 1849, 1859 (ed. vii), and 1869–72 (ed. viii). The edition of 1849 may be regarded as historically the most important, from the mass of new critical material it used; that of 1859 is distinguished from Tischendorf's other editions by coming nearer to the received text; in the eighth edition, the testimony of the Sinaitic manuscript received great (probably too great) weight. The readings of the Vatican manuscript were given with more exactness and certainty than had been possible in the earlier editions, and the editor had also the advantage of using the published labours of his colleague and friend Samuel Prideaux Tregelles.
Of relatively lesser importance was Tischendorf's work on the Greek Old Testament. His edition of the Roman text, with the variants of the Alexandrian manuscript, the Codex Ephraemi, and the Friderico-Augustanus, was of service when it appeared in 1850, but, being stereotyped, was not greatly improved in subsequent issues. Its imperfections, even within the limited field it covers, may be judged by the aid of Eberhard Nestle's appendix to the 6th issue (1880).
Besides this may be mentioned editions of the New Testament apocrypha, De Evangeliorum apocryphorum origine et usu (1851); Acta Apostolorum apocrypha (1851); Evangelia apocrypha (1853; 2nd ed., 1876); Apocalypses apocryphae (1866), and various minor writings, partly of an apologetic character, such as Wann wurden unsere Evangelien verfasst? (When Were Our Gospels Written?; 1865; 4th ed., 1866, digitized by Google and available for e-readers), Haben wir den echten Schrifttext der Evangelisten und Apostel? (1873), and Synopsis evangelica (7th ed., 1898). | [
{
"paragraph_id": 0,
"text": "Lobegott Friedrich Constantin (von) Tischendorf (18 January 1815 – 7 December 1874) was a German biblical scholar. In 1844, he discovered the world's oldest and most complete Bible dated to around the mid-4th century and called Codex Sinaiticus after Saint Catherine's Monastery at Mount Sinai, where Tischendorf discovered it.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Tischendorf was made an honorary doctor by the University of Oxford on 16 March 1865, and by the University of Cambridge on 9 March 1865 following his discovery. While a student gaining his academic degree in the 1840s, he earned international recognition when he deciphered the Codex Ephraemi Rescriptus, a 5th-century Greek manuscript of the New Testament.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Tischendorf was born in Lengenfeld, Saxony, the son of a forensic physician. After attending primary school in Lengenfeld, he went to grammar school in nearby Plauen. From Easter 1834, having achieved excellent marks at school, he studied theology and philosophy at the University of Leipzig.",
"title": "Early life and education"
},
{
"paragraph_id": 3,
"text": "At Leipzig he was mainly influenced by JGB Winer, and he began to take special interest in New Testament criticism. Winer's influence gave him the desire to use the oldest manuscripts in order to compile the text of the New Testament as close to the original as possible. Despite his father's death in 1835 and his mother's just a year later, he was still able to achieve his doctorate in 1838, before accepting a tutoring job in the home of Reverend Ferdinand Leberecht Zehme in Grossstadeln, where he met and fell in love with the clergyman's daughter Angelika. He published a volume of poetry in 1838, Maiknospen (Buds of May), and Der junge Mystiker (The Young Mystic) was published under a pseudonym in 1839. At this time he also began his first critical edition of the New Testament in Greek, which was to become his life's work.",
"title": "Early life and education"
},
{
"paragraph_id": 4,
"text": "After a journey through southern Germany and Switzerland, and a visit to Strassburg, he returned to Leipzig to begin work on a critical study of the New Testament text.",
"title": "Early life and education"
},
{
"paragraph_id": 5,
"text": "In 1840, he qualified as university lecturer in theology with a dissertation on the recensions of the New Testament text, the main part of which reappeared the following year in the prolegomena to his first edition of the Greek New Testament. These early textual studies convinced him of the absolute necessity of new and more exact collations of manuscripts.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "From October 1840 until January 1843 he was in Paris, busy with the treasures of the Bibliothèque Nationale, eking out his scanty means by making collations for other scholars, and producing for the publisher, Firmin Didot, several editions of the Greek New Testament, one of them exhibiting the form of the text corresponding most closely to the Vulgate. His second edition retracted the more precarious readings of the first, and included a statement of critical principles that is a landmark for evolving critical studies of Biblical texts.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "A great triumph of these laborious months was the decipherment of the palimpsest Codex Ephraemi Syri Rescriptus, of which the New Testament part was printed before he left Paris, and the Old Testament in 1845. His success in dealing with a manuscript that, having been over-written with other works of Ephrem the Syrian, had been mostly illegible to earlier collators, made him more widely known and gained support for more extended critical expeditions. He now became professor extraordinarius at Leipzig, where he was married in 1845. He also began to publish Reise in den Orient, an account of his travels in the east (in 2 vols., 1845–46, translated as Travels in the East in 1847). Even though he was an expert in reading the text of a palimpsest (a manuscript whose original writing has been scraped off and written over), he was not able to identify the value or meaning of the Archimedes Palimpsest, a torn leaf of which he owned; after his death it was sold to Cambridge University Library.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "Tischendorf briefly visited the Netherlands in 1841 and England in 1842. In 1843 he visited Italy for thirteen months, before continuing on to Egypt, Sinai, and the Levant, returning via Vienna and Munich.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "In 1844 Tischendorf travelled for the first time to Saint Catherine's Monastery at the foot of Mount Sinai in Egypt, where he found a portion of what would later be hailed as the oldest complete known New Testament.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "Of the many pages contained in an old wicker basket (the kind the monastery used to haul in its visitors, as was customary in unsafe territories), he was given 43 pages containing a part of the Old Testament as a present. He donated those 43 pages to King Frederick Augustus II of Saxony (reigned 1836–1854), to honour him and to recognise his patronage as the funder of Tischendorf's journey. (Tischendorf held a position as Theological Professor at Leipzig University, also under the patronage of Frederick Augustus II.) Leipzig University put two of the leaves on display in 2011.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "Tischendorf reported in his 1865 book Wann wurden unsere Evangelien verfasst (translated to English in 1866 as When Were Our Gospels Written), in the section \"The Discovery of the Sinaitic Manuscript\", that he found, in a trash basket, forty-three sheets of parchment of an ancient copy of the Greek Old Testament, which the monks were reportedly using to start fires. Horrified, Tischendorf asked if he could have them. He deposited them at the University of Leipzig under the title of the Codex Friderico-Augustanus, a name given in honour of his patron, Frederick Augustus II, king of Saxony. The fragments were published in 1846, although Tischendorf kept the place of discovery a secret.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "Many have expressed skepticism at the historical accuracy of this report of saving a 1500-year-old parchment from the flames. J. Rendel Harris referred to the story as a myth. The Tischendorf Lesebuch (see References) records that the librarian Kyrillos mentioned to Tischendorf that the contents of the basket had already twice been submitted to the fire; according to Tischendorf's own account, the basket held damaged scriptures, and what he saw was apparently its third filling. In 1853 Tischendorf made a second trip to the Sinai monastery but made no new discoveries. He returned a third time in January 1859 under the patronage of Tsar Alexander II of Russia, with the active aid of the Russian government, to find more of the Codex Frederico-Augustanus or similar ancient Biblical texts. On 4 February, the last day of his visit, he was shown a text which he recognized as significant – the Codex Sinaiticus – a Greek manuscript of the complete New Testament and parts of the Old Testament dating to the 4th century.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "Tischendorf persuaded the monks to present the manuscript to Tsar Alexander II of Russia, and at the Tsar's expense it was published in 1862 (in four folio volumes). Those ignorant of the details of his discovery of the Codex Sinaiticus accused Tischendorf of buying manuscripts from unsuspecting monastery librarians at low prices. Indeed, he was never rich, but he staunchly defended the rights of the monks at Saint Catherine's Monastery when he persuaded them eventually to send the manuscript to the Tsar. This took approximately 10 years, because the abbot of St Catherine's had to be re-elected and confirmed in office in Cairo and in Jerusalem, and during those 10 years no one in the monastery had the authority to hand over any documents. The manuscript was handed over in due course following a signed and sealed letter to Tsar Alexander II (the Schenkungsurkunde, or deed of gift). Even so, the monks of Mt. Sinai still display a receipt-letter from Tischendorf promising to return the manuscript to them in the event that the donation could not be completed; this token letter was to be destroyed once the Schenkungsurkunde was finally issued. The deed of gift regulated the exchange of the Codex with the Tsar in return for 9,000 rubles and protection of the monastery's Romanian estates, the Tsar being seen as the protector of Greek Orthodox Christians. Thought lost since the Russian revolution, the Schenkungsurkunde resurfaced in St Petersburg in 2003, having long before been commented upon by other scholars such as Kurt Aland. The monastery has disputed the existence of the deed of gift since the British Library was named as the new owner of the Codex, but following its rediscovery by the National Library of Russia its existence can no longer seriously be disputed.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "In 1869 the Tsar awarded Tischendorf the style of \"von\" Tischendorf as a Russian noble. 327 facsimile editions of the Codex were printed in Leipzig for the Tsar (instead of a salary for his three years of work, the Tsar gave Tischendorf 100 copies for reselling) in order to celebrate the 1000th anniversary of the traditional foundation of the Rus' state in 862 with the publication of this remarkable find. Producing the facsimile, which used special print characters for each of the four scribes of the Codex Sinaiticus, required shift work, and the exhausting labour, carried on for months and often through the night, contributed to Tischendorf's early death. Thus the Codex found its way to the Imperial Library at St. Petersburg.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "When the 4-volume luxury edition of the Sinai Bible was completed in 1862, Tischendorf presented the original ancient manuscript to Emperor Alexander II. Meanwhile, the question of transferring the manuscript to the full possession of the Russian Sovereign remained unresolved for some years. In 1869, the new Archbishop of Sinai, Callistratus, and the monastic community signed the official certificate presenting the manuscript to the Tsar. The Russian Government, in turn, bestowed 9,000 rubles on the Monastery and decorated the Archbishop and some of the brethren with orders. In 1933 the Soviet Government sold the Codex Sinaiticus for 100,000 pounds to the British Museum in London, England. The official certificate, with signatures in Russian, French and Greek sections, was rediscovered in St Petersburg.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "In the winter of 1849 appeared the first edition of his great work, now titled Novum Testamentum Graece. Ad antiquos testes recensuit. Apparatum criticum multis modis (translated as Greek New Testament. Reviewed against the ancient witnesses. With a critical apparatus of many kinds), containing his canons of criticism, with examples of their application that remain instructive for students today:",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "Basic rule: \"The text is only to be sought from ancient evidence, and especially from Greek manuscripts, but without neglecting the testimonies of versions and fathers.\"",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "These were partly the result of the tireless travels he had begun in 1839 in search of unread manuscripts of the New Testament, \"to clear up in this way,\" he wrote, \"the history of the sacred text, and to recover if possible the genuine apostolic text which is the foundation of our faith.\"",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "In 1850 appeared his edition of the Codex Amiatinus (corrected in 1854) and of the Septuagint version of the Old Testament (7th ed., 1887); in 1852, amongst other works, his edition of the Codex Claromontanus. In 1859, he was named professor ordinarius of theology and of Biblical paleography, this latter professorship being specially created for him; and another book of travel, Aus dem heiligen Lande, appeared in 1862. Tischendorf's Eastern journeys were rich enough in other discoveries to merit the highest praise.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "Besides his fame as a scholar, he was a friend of both Robert Schumann, with whom he corresponded, and Felix Mendelssohn, who dedicated a song to him. His colleague Samuel Prideaux Tregelles wrote warmly of their mutual interest in textual scholarship. His personal library, purchased after his death, eventually came to the University of Glasgow, where a commemorative exhibition of books from his library was held in 1974 and can be accessed by the public.",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "Lobegott Friedrich Constantin (von) Tischendorf died in Leipzig on 7 December 1874, aged 59.",
"title": "Death"
},
{
"paragraph_id": 22,
"text": "The Codex Sinaiticus contains a 4th-century manuscript of New Testament texts. Two other Bibles of similar age exist, though they are less complete: Codex Vaticanus in the Vatican Library and Codex Alexandrinus, currently owned by the British Library. The Codex Sinaiticus is deemed by some to be the most important surviving New Testament manuscript, as no older manuscript is as nearly complete as the Codex. The codex can be viewed in the British Library in London, or as a digitized version on the Internet.",
"title": "Codex Sinaiticus"
},
{
"paragraph_id": 23,
"text": "Throughout his life Tischendorf sought old biblical manuscripts, as he saw it as his task to give theology a Greek New Testament which was based on the oldest possible scriptures. He intended to be as close as possible to the original sources. Tischendorf's greatest discovery was in the monastery of Saint Catherine on the Sinai Peninsula, which he visited in May 1844, and again in 1853 and 1859 (as Russian envoy).",
"title": "Tischendorf's motivation"
},
{
"paragraph_id": 24,
"text": "In 1862 Tischendorf published the text of the Codex Sinaiticus for the 1000th Anniversary of the Russian Monarchy in both a lavish four-volume facsimile edition and a less costly text edition, to enable all scholars to have access to the Codex.",
"title": "Tischendorf's motivation"
},
{
"paragraph_id": 25,
"text": "Tischendorf pursued a constant course of editorial labours, mainly on the New Testament, until he was broken down by overwork in 1873. His motive, as explained in a publication on Tischendorf's letters by Prof. Christfried Boettrich (Leipzig University, Professor of Theology), was to prove scientifically that the words of the Bible were faithfully transmitted over the centuries.",
"title": "Tischendorf's motivation"
},
{
"paragraph_id": 26,
"text": "His magnum opus was the \"Critical Edition of the New Testament.\"",
"title": "Works"
},
{
"paragraph_id": 27,
"text": "The great edition, of which the text and apparatus appeared in 1869 and 1872, he himself called the editio viii; but this number rises to twenty or twenty-one if mere reprints from stereotype plates and the minor editions of his great critical texts are included; posthumous prints bring the total to forty-one. Four main recensions of Tischendorf's text may be distinguished, dating respectively from his editions of 1841, 1849, 1859 (ed. vii), and 1869–72 (ed. viii). The edition of 1849 may be regarded as historically the most important, from the mass of new critical material it used; that of 1859 is distinguished from Tischendorf's other editions by coming nearer to the received text; in the eighth edition, the testimony of the Sinaitic manuscript received great (probably too great) weight. The readings of the Vatican manuscript were given with more exactness and certainty than had been possible in the earlier editions, and the editor had also the advantage of using the published labours of his colleague and friend Samuel Prideaux Tregelles.",
"title": "Works"
},
{
"paragraph_id": 28,
"text": "Of relatively lesser importance was Tischendorf's work on the Greek Old Testament. His edition of the Roman text, with the variants of the Alexandrian manuscript, the Codex Ephraemi, and the Friderico-Augustanus, was of service when it appeared in 1850, but, being stereotyped, was not greatly improved in subsequent issues. Its imperfections, even within the limited field it covers, may be judged by the aid of Eberhard Nestle's appendix to the 6th issue (1880).",
"title": "Works"
},
{
"paragraph_id": 29,
"text": "Besides this may be mentioned editions of the New Testament apocrypha, De Evangeliorum apocryphorum origine et usu (1851); Acta Apostolorum apocrypha (1851); Evangelia apocrypha (1853; 2nd ed., 1876); Apocalypses apocryphae (1866), and various minor writings, partly of an apologetic character, such as Wann wurden unsere Evangelien verfasst? (When Were Our Gospels Written?; 1865; 4th ed., 1866, digitized by Google and available for e-readers), Haben wir den echten Schrifttext der Evangelisten und Apostel? (1873), and Synopsis evangelica (7th ed., 1898).",
"title": "Works"
}
]
| Lobegott Friedrich Constantin (von) Tischendorf was a German biblical scholar. In 1844, he discovered the world's oldest and most complete Bible dated to around the mid-4th century and called Codex Sinaiticus after Saint Catherine's Monastery at Mount Sinai, where Tischendorf discovered it. Tischendorf was made an honorary doctor by the University of Oxford on 16 March 1865, and by the University of Cambridge on 9 March 1865 following his discovery. While a student gaining his academic degree in the 1840s, he earned international recognition when he deciphered the Codex Ephraemi Rescriptus, a 5th-century Greek manuscript of the New Testament. | 2001-08-23T17:50:27Z | 2023-12-10T22:04:26Z | [
"Template:More citations needed",
"Template:Snd",
"Template:Cn",
"Template:Webarchive",
"Template:Cite web",
"Template:See also",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Sfn",
"Template:Who",
"Template:Reflist",
"Template:Infobox scholar",
"Template:By whom",
"Template:EB1911",
"Template:Dead link",
"Template:Short description",
"Template:Peacock term",
"Template:Citation needed",
"Template:Cite book",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Constantin_von_Tischendorf |
6,195 | Calvin Coolidge | Calvin Coolidge (born John Calvin Coolidge Jr.; /ˈkuːlɪdʒ/; July 4, 1872 – January 5, 1933) was an American attorney and politician who served as the 30th president of the United States from 1923 to 1929.
Born in Vermont, Coolidge was a Republican lawyer from New England who climbed the ladder of Massachusetts politics, becoming the state's 48th governor. His response to the Boston police strike of 1919 thrust him into the national spotlight as a man of decisive action. The next year, Coolidge was elected the country's 29th vice president, and he succeeded to the presidency upon the sudden death of President Warren G. Harding in 1923. Elected in his own right in 1924, Coolidge gained a reputation as a small-government conservative with a taciturn personality and dry sense of humor that earned him the nickname "Silent Cal". Though his widespread popularity enabled him to run for a second full term, Coolidge chose not to run again in 1928, remarking that ten years as president would be "longer than any other man has had it – too long!"
Throughout his gubernatorial career, Coolidge ran on the record of fiscal conservatism, strong support for women's suffrage, and a vague opposition to Prohibition. During his presidency, he restored public confidence in the White House after the many scandals of the Harding administration. He signed into law the Indian Citizenship Act of 1924, which granted U.S. citizenship to all Native Americans, and oversaw a period of rapid and expansive economic growth known as the "Roaring Twenties", leaving office with considerable popularity. He was known for his hands-off governing approach and pro-business stances; as biographer Claude Fuess wrote: "He embodied the spirit and hopes of the middle class, could interpret their longings and express their opinions. That he did represent the genius of the average is the most convincing proof of his strength."
Scholars have ranked Coolidge in the lower half of U.S. presidents. He gains nearly universal praise for his stalwart support of racial equality during a period of heightened racial tension in the United States, and is highly praised by advocates of smaller government and laissez-faire economics, while supporters of an active central government generally view him far less favorably. His critics argue that he failed to use the country's economic boom to help struggling farmers and workers in other failing industries, and there is still much debate among historians as to the extent to which Coolidge's economic policies contributed to the onset of the Great Depression.
John Calvin Coolidge Jr. was born on July 4, 1872, in Plymouth Notch, Vermont—the only U.S. president to be born on Independence Day. He was the elder of the two children of John Calvin Coolidge Sr. (1845–1926) and Victoria Josephine Moor (1846–1885). Although named for his father, from early childhood Coolidge was addressed by his middle name. The name Calvin was used in multiple generations of the Coolidge family, apparently selected in honor of John Calvin, the Protestant Reformer.
Coolidge Senior engaged in many occupations and developed a statewide reputation as a prosperous farmer, storekeeper, and public servant. He held various local offices, including justice of the peace and tax collector and served in both houses of the Vermont General Assembly. When Coolidge was 12 years old, his chronically ill mother died at the age of 39, perhaps from tuberculosis. His younger sister, Abigail Grace Coolidge (1875–1890), died at the age of 15, probably of appendicitis, when Coolidge was 18. Coolidge's father married a Plymouth schoolteacher in 1891, and lived to the age of 80.
Coolidge's family had deep roots in New England. His earliest American ancestor, John Coolidge emigrated from Cottenham, Cambridgeshire, England, around 1630 and settled in Watertown, Massachusetts. Coolidge's great-great-grandfather, also named John Coolidge, was an American military officer in the Revolutionary War and one of the first selectmen of the town of Plymouth. His grandfather Calvin Galusha Coolidge served in the Vermont House of Representatives. His cousin Park Pollard was a businessman in Cavendish, Vermont and the longtime chair of the Vermont Democratic Party. Coolidge was also a descendant of Samuel Appleton, who settled in Ipswich and led the Massachusetts Bay Colony during King Philip's War. Coolidge's mother was the daughter of Hiram Dunlap Moor, a Plymouth Notch farmer, and Abigail Franklin.
Coolidge attended the Black River Academy and then St. Johnsbury Academy before enrolling at Amherst College, where he distinguished himself in the debating class. As a senior, he joined the Phi Gamma Delta fraternity and graduated cum laude. While at Amherst, Coolidge was profoundly influenced by philosophy professor Charles Edward Garman, a Congregational mystic who had a neo-Hegelian philosophy.
Coolidge explained Garman's ethics forty years later:
[T]here is a standard of righteousness that might does not make right, that the end does not justify the means, and that expediency as a working principle is bound to fail. The only hope of perfecting human relationships is in accordance with the law of service under which men are not so solicitous about what they shall get as they are about what they shall give. Yet people are entitled to the rewards of their industry. What they earn is theirs, no matter how small or how great. But the possession of property carries the obligation to use it in a larger service...
At his father's urging after graduation, Coolidge moved to Northampton, Massachusetts, to become a lawyer. Coolidge followed the common practice of apprenticing with a local law firm, Hammond & Field, and reading law with them. John C. Hammond and Henry P. Field, both Amherst graduates, introduced Coolidge to practicing law in the county seat of Hampshire County, Massachusetts. In 1897, Coolidge was admitted to the Massachusetts bar, becoming a country lawyer. With his savings and a small inheritance from his grandfather, Coolidge opened his own law office in Northampton in 1898. He practiced commercial law, believing that he served his clients best by staying out of court. As his reputation as a hard-working and diligent attorney grew, local banks and other businesses began to retain his services.
In 1903, Coolidge met Grace Goodhue, a graduate of the University of Vermont and a teacher at Northampton's Clarke School for the Deaf. They married on October 4, 1905, at 2:30 p.m. in a small ceremony which took place in the parlor of Grace's family's house, having overcome her mother's objections to the marriage. The newlyweds went on a honeymoon trip to Montreal, originally planned for two weeks but cut short by a week at Coolidge's request. After 25 years he wrote of Grace, "for almost a quarter of a century she has borne with my infirmities and I have rejoiced in her graces".
The Coolidges had two sons: John (1906–2000) and Calvin Jr. (1908–1924). On June 30, 1924, Calvin Jr. played tennis with his brother on the White House tennis courts without putting on socks and developed a blister on one of his toes. The blister subsequently degenerated into sepsis, and Calvin Jr. died a little over a week later at the age of 16. The President never forgave himself for Calvin Jr.'s death. His eldest son John said it "hurt [Coolidge] terribly", and psychiatric biographer Robert E. Gilbert, author of The Tormented President: Calvin Coolidge, Death, and Clinical Depression, said that Coolidge "ceased to function as President after the death of his sixteen-year-old son". Gilbert explains in his book how Coolidge displayed all ten of the symptoms listed by the American Psychiatric Association as evidence of major depressive disorder following Calvin Jr.'s sudden death. John later became a railroad executive, helped to start the Coolidge Foundation, and was instrumental in creating the President Calvin Coolidge State Historic Site.
Coolidge was frugal, and when it came to securing a home, he insisted upon renting. He and his wife attended Northampton's Edwards Congregational Church before and after his presidency.
The Republican Party was dominant in New England at the time, and Coolidge followed the example of Hammond and Field by becoming active in local politics. In 1896, Coolidge campaigned for Republican presidential candidate William McKinley, and was selected to be a member of the Republican City Committee the next year. In 1898, he won election to the City Council of Northampton, placing second in a ward where the top three candidates were elected. The position offered no salary but provided Coolidge with valuable political experience. In 1899, he was chosen City Solicitor by the City Council. He was elected for a one-year term in 1900, and reelected in 1901. This position gave Coolidge more experience as a lawyer and paid a salary of $600 (equivalent to $21,106 in 2022). In 1902, the city council selected a Democrat for city solicitor, and Coolidge returned to private practice. Soon thereafter, however, the clerk of courts for the county died, and Coolidge was chosen to replace him. The position paid well, but it barred him from practicing law, so he remained at the job for only one year. In 1904, Coolidge suffered his sole defeat at the ballot box, losing an election to the Northampton school board. When told that some of his neighbors voted against him because he had no children in the schools he would govern, the recently married Coolidge replied, "Might give me time!"
In 1906, the local Republican committee nominated Coolidge for election to the Massachusetts House of Representatives. He won a close victory over the incumbent Democrat, and reported to Boston for the 1907 session of the Massachusetts General Court. In his freshman term, Coolidge served on minor committees and, although he usually voted with the party, was known as a Progressive Republican, voting in favor of such measures as women's suffrage and the direct election of Senators. While in Boston, Coolidge became an ally, and then a liegeman, of then U.S. Senator Winthrop Murray Crane who controlled the western faction of the Massachusetts Republican Party; Crane's party rival in the east of the commonwealth was U.S. Senator Henry Cabot Lodge. Coolidge forged another key strategic alliance with Guy Currier, who had served in both state houses and had the social distinction, wealth, personal charm and broad circle of friends which Coolidge lacked, and which would have a lasting impact on his political career. In 1907, he was elected to a second term, and in the 1908 session Coolidge was more outspoken, though not in a leadership position.
Instead of vying for another term in the State House, Coolidge returned home to his growing family and ran for mayor of Northampton when the incumbent Democrat retired. He was well liked in the town, and defeated his challenger by a vote of 1,597 to 1,409. During his first term (1910 to 1911), he increased teachers' salaries and retired some of the city's debt while still managing to effect a slight tax decrease. He was renominated in 1911, and defeated the same opponent by a slightly larger margin.
In 1911, the State Senator for the Hampshire County area retired and successfully encouraged Coolidge to run for his seat for the 1912 session; Coolidge defeated his Democratic opponent by a large margin. At the start of that term, he became chairman of a committee to arbitrate the "Bread and Roses" strike by the workers of the American Woolen Company in Lawrence, Massachusetts. After two tense months, the company agreed to the workers' demands, in a settlement proposed by the committee. A major issue affecting Massachusetts Republicans that year was the party split between the progressive wing, which favored Theodore Roosevelt, and the conservative wing, which favored William Howard Taft. Although he favored some progressive measures, Coolidge refused to leave the Republican party. When the new Progressive Party declined to run a candidate in his state senate district, Coolidge won reelection against his Democratic opponent by an increased margin.
In the 1913 session, Coolidge achieved a hard-won success in shepherding to passage the Western Trolley Act, which connected Northampton with a dozen similar industrial communities in Western Massachusetts. Coolidge intended to retire after his second term, as was the custom, but when the president of the state senate, Levi H. Greenwood, considered running for lieutenant governor, Coolidge decided to run again for the Senate in the hopes of being elected as its presiding officer. Although Greenwood later decided to run for reelection to the Senate, he was defeated, primarily due to his opposition to women's suffrage; Coolidge, who favored women's suffrage, won his own re-election and, with Crane's help, assumed the presidency of a closely divided Senate. After his election in January 1914, Coolidge delivered a published and frequently quoted speech entitled Have Faith in Massachusetts, which summarized his philosophy of government.
Coolidge's speech was well received, and he attracted some admirers on its account; towards the end of the term, many of them were proposing his name for nomination to lieutenant governor. After winning reelection to the Senate by an increased margin in the 1914 elections, Coolidge was reelected unanimously to be President of the Senate. Coolidge's supporters, led by fellow Amherst alumnus Frank Stearns, encouraged him again to run for lieutenant governor. Stearns, an executive with the Boston department store R. H. Stearns, became another key ally, and began a publicity campaign on Coolidge's behalf before he announced his candidacy at the end of the 1915 legislative session.
Coolidge entered the primary election for lieutenant governor and was nominated to run alongside gubernatorial candidate Samuel W. McCall. Coolidge was the leading vote-getter in the Republican primary, and balanced the Republican ticket by adding a western presence to McCall's eastern base of support. McCall and Coolidge won the 1915 election to their respective one-year terms, with Coolidge defeating his opponent by more than 50,000 votes.
Unlike in many other states, the lieutenant governor of Massachusetts does not preside over the state Senate; nevertheless, as lieutenant governor, Coolidge functioned as a deputy governor and administrative inspector and was a member of the governor's council. He was also chairman of the finance committee and the pardons committee. As a full-time elected official, Coolidge discontinued his law practice in 1916, though his family continued to live in Northampton. McCall and Coolidge were both reelected in 1916 and again in 1917. When McCall decided that he would not stand for a fourth term, Coolidge announced his intention to run for governor.
Coolidge was unopposed for the Republican nomination for Governor of Massachusetts in 1918. He and his running mate, Channing Cox, a Boston lawyer and Speaker of the Massachusetts House of Representatives, ran on the previous administration's record: fiscal conservatism, a vague opposition to Prohibition, support for women's suffrage, and support for American involvement in World War I. The issue of the war proved divisive, especially among Irish and German Americans. Coolidge was elected by a margin of 16,773 votes over his opponent, Richard H. Long, in the smallest margin of victory of any of his statewide campaigns.
In 1919, in reaction to a plan of the policemen of the Boston Police Department to register with a union, Police Commissioner Edwin U. Curtis announced that such an act would not be tolerated. In August of that year, the American Federation of Labor issued a charter to the Boston Police Union. Curtis declared the union's leaders were guilty of insubordination and would be relieved of duty, but indicated he would cancel their suspension if the union was dissolved by September 4. The mayor of Boston, Andrew Peters, convinced Curtis to delay his action for a few days, but with no results, and Curtis suspended the union leaders on September 8. The following day, about three-quarters of the policemen in Boston went on strike. Coolidge, tacitly but fully in support of Curtis' position, closely monitored the situation but initially deferred to the local authorities. He anticipated that only a resulting measure of lawlessness could sufficiently prompt the public to understand and appreciate the controlling principle – that a policeman does not strike. That night and the next, there was sporadic violence and rioting in the unruly city. Peters, concerned about sympathy strikes by the firemen and others, called up some units of the Massachusetts National Guard stationed in the Boston area pursuant to an old and obscure legal authority, and relieved Curtis of duty.
Coolidge, sensing that the severity of the circumstances now required his intervention, conferred with Crane's operative, William Butler, and then acted. He called up more units of the National Guard, restored Curtis to office, and took personal control of the police force. Curtis proclaimed that all of the strikers were fired from their jobs, and Coolidge called for a new police force to be recruited. That night Coolidge received a telegram from AFL leader Samuel Gompers. "Whatever disorder has occurred", Gompers wrote, "is due to Curtis's order in which the right of the policemen has been denied…" Coolidge publicly answered Gompers's telegram, denying any justification whatsoever for the strike – and his response launched him into the national consciousness. Newspapers across the nation picked up on Coolidge's statement and he became the newest hero to opponents of the strike. Amid the First Red Scare, many Americans were terrified of the spread of communist revolutions, like those that had taken place in Russia, Hungary, and Germany. While Coolidge had lost some friends among organized labor, conservatives across the nation had seen a rising star. Although he usually acted with deliberation, the Boston police strike gave him a national reputation as a decisive leader and a strict enforcer of law and order.
Coolidge and Cox were renominated for their respective offices in 1919. By this time Coolidge's supporters (especially Stearns) had publicized his actions in the Police Strike around the state and the nation and some of Coolidge's speeches were published in book form. He faced the same opponent as in 1918, Richard Long, but this time Coolidge defeated him by 125,101 votes, more than seven times his margin of victory from a year earlier. His actions in the police strike, combined with the massive electoral victory, led to suggestions that Coolidge run for president in 1920.
By the time Coolidge was inaugurated on January 2, 1919, the First World War had ended, and Coolidge pushed the legislature to give a $100 bonus (equivalent to $1,688 in 2022) to Massachusetts veterans. He also signed a bill reducing the work week for women and children from fifty-four hours to forty-eight, saying, "We must humanize the industry, or the system will break down." He signed into law a budget that kept the tax rates the same, while trimming $4 million from expenditures, thus allowing the state to retire some of its debt.
Coolidge also wielded the veto pen as governor. His most publicized veto prevented an increase in legislators' pay by 50%. Although he was personally opposed to Prohibition, in May 1920 he vetoed a bill that would have allowed the sale of beer or wine of 2.75% alcohol or less in Massachusetts, in violation of the Eighteenth Amendment to the United States Constitution. "Opinions and instructions do not outmatch the Constitution," he said in his veto message. "Against it, they are void."
At the 1920 Republican National Convention, most of the delegates were selected by state party caucuses, not primaries. As such, the field was divided among many local favorites. Coolidge was one such candidate, and while he placed as high as sixth in the voting, the powerful party bosses running the convention, primarily the party's U.S. Senators, never considered him seriously. After ten ballots, the bosses and then the delegates settled on Senator Warren G. Harding of Ohio as their nominee for president. When the time came to select a vice presidential nominee, the bosses also made and announced their decision on whom they wanted – Sen. Irvine Lenroot of Wisconsin – and then prematurely departed after his name was put forth, relying on the rank and file to confirm their decision. A delegate from Oregon, Wallace McCamant, having read Have Faith in Massachusetts, proposed Coolidge for vice president instead. The suggestion caught on quickly with the masses starving for an act of independence from the absent bosses, and Coolidge was unexpectedly nominated.
The Democrats nominated another Ohioan, James M. Cox, for president and the Assistant Secretary of the Navy, Franklin D. Roosevelt, for vice president. The question of the United States joining the League of Nations was a major issue in the campaign, as was the unfinished legacy of Progressivism. Harding ran a "front-porch" campaign from his home in Marion, Ohio, but Coolidge took to the campaign trail in the Upper South, New York, and New England – his audiences carefully limited to those familiar with Coolidge and those placing a premium upon concise and short speeches. On November 2, 1920, Harding and Coolidge were victorious in a landslide, winning more than 60 percent of the popular vote, including every state outside the South. They also won in Tennessee, the first time a Republican ticket had won a Southern state since Reconstruction.
The U.S. vice-presidency did not carry many official duties, but Coolidge was invited by President Harding to attend cabinet meetings, making him the first vice president to do so. He gave a number of unremarkable speeches around the country.
As vice president, Coolidge and his vivacious wife Grace were invited to quite a few parties, where the legend of "Silent Cal" was born. It is from this time that most of the jokes and anecdotes involving Coolidge originate, such as Coolidge being "silent in five languages". Although Coolidge was known to be a skilled and effective public speaker, in private he was a man of few words and was commonly referred to as "Silent Cal". An apocryphal story has it that a person seated next to him at a dinner said to him, "I made a bet today that I could get more than two words out of you." He replied, "You lose." However, on April 22, 1924, Coolidge himself said that the "You lose" quotation never occurred. The story about it was related by Frank B. Noyes, President of the Associated Press, to their membership at their annual luncheon at the Waldorf Astoria Hotel, when toasting and introducing Coolidge, who was the invited speaker. After the introduction and before his prepared remarks, Coolidge said to the membership, "Your President [referring to Noyes] has given you a perfect example of one of those rumors now current in Washington which is without any foundation."
Coolidge often seemed uncomfortable among fashionable Washington society; when asked why he continued to attend so many of their dinner parties, he replied, "Got to eat somewhere." Alice Roosevelt Longworth, a leading Republican wit, underscored Coolidge's silence and his dour personality: "When he wished he were elsewhere, he pursed his lips, folded his arms, and said nothing. He looked then precisely as though he had been weaned on a pickle." Coolidge and his wife, Grace, who was a great baseball fan, once attended a Washington Senators game and sat through all nine innings without saying a word, except once when he asked her the time.
As president, Coolidge's reputation as a quiet man continued. "The words of a President have an enormous weight," he would later write, "and ought not to be used indiscriminately." Coolidge was aware of his stiff reputation; indeed, he cultivated it. "I think the American people want a solemn ass as a President," he once told Ethel Barrymore, "and I think I will go along with them." Some historians suggest that Coolidge's image was created deliberately as a campaign tactic, while others believe his withdrawn and quiet behavior to be natural, deepening after the death of his son in 1924. Dorothy Parker, upon learning that Coolidge had died, reportedly remarked, "How can they tell?"
On August 2, 1923, President Harding died unexpectedly from a heart attack in San Francisco while on a speaking tour of the western United States. Vice President Coolidge was in Vermont visiting his family home, which had neither electricity nor a telephone, when he received word by messenger of Harding's death. Coolidge dressed, said a prayer, and came downstairs to greet the reporters who had assembled. His father, a notary public and justice of the peace, administered the oath of office in the family's parlor by the light of a kerosene lamp at 2:47 a.m. on August 3, 1923, whereupon the new President of the United States returned to bed.
Coolidge returned to Washington the next day, and was sworn in again by Justice Adolph A. Hoehling Jr. of the Supreme Court of the District of Columbia, to forestall any questions about the authority of a state official to administer a federal oath. This second oath-taking remained a secret until it was revealed by Harry M. Daugherty in 1932, and confirmed by Hoehling. When Hoehling confirmed Daugherty's story, he indicated that Daugherty, then serving as United States Attorney General, asked him to administer the oath without fanfare at the Willard Hotel. According to Hoehling, he did not question Daugherty's reason for requesting a second oath-taking but assumed it was to resolve any doubt about whether the first swearing-in was valid.
The nation initially did not know what to make of Coolidge, who had maintained a low profile in the Harding administration; many had even expected him to be replaced on the ballot in 1924. Coolidge believed that those of Harding's men under suspicion were entitled to every presumption of innocence, taking a methodical approach to the scandals, principally the Teapot Dome scandal, while others clamored for rapid punishment of those they presumed guilty. Coolidge thought the Senate investigations of the scandals would suffice; this was affirmed by the resulting resignations of those involved. He personally intervened in demanding the resignation of Attorney General Harry M. Daugherty after he refused to cooperate with the congressional probe. He then set about confirming that no loose ends remained in the administration, arranging for a full briefing on the wrongdoing. Harry A. Slattery reviewed the facts with him, Harlan F. Stone analyzed the legal aspects for him, and Senator William E. Borah assessed and presented the political factors.
Coolidge addressed Congress when it reconvened on December 6, 1923, giving a speech that supported many of Harding's policies, including Harding's formal budgeting process, the enforcement of immigration restrictions and arbitration of coal strikes ongoing in Pennsylvania. The address to Congress was the first presidential speech to be broadcast over the radio. The Washington Naval Treaty was proclaimed just one month into Coolidge's term, and was generally well received in the country. In May 1924, the World War I veterans' World War Adjusted Compensation Act or "Bonus Bill" was passed over his veto. Coolidge signed the Immigration Act later that year, which was aimed at restricting southern and eastern European immigration, but appended a signing statement expressing his unhappiness with the bill's specific exclusion of Japanese immigrants. Just before the Republican Convention began, Coolidge signed into law the Revenue Act of 1924, which reduced the top marginal tax rate from 58 percent to 46 percent, as well as personal income tax rates across the board, increased the estate tax and bolstered it with a new gift tax.
On June 2, 1924, Coolidge signed the act granting citizenship to all Native Americans born in the United States. By that time, two-thirds of them were already citizens, having gained it through marriage, military service (veterans of World War I were granted citizenship in 1919), or the land allotments that had earlier taken place.
The Republican Convention was held from June 10 to 12, 1924, in Cleveland, Ohio; Coolidge was nominated on the first ballot. The convention nominated Frank Lowden of Illinois for vice president on the second ballot, but he declined; former Brigadier General Charles G. Dawes was nominated on the third ballot and accepted.
The Democrats held their convention the next month in New York City. The convention soon deadlocked, and after 103 ballots, the delegates finally agreed on a compromise candidate, John W. Davis, with Charles W. Bryan nominated for vice president. The Democrats' hopes were buoyed when Robert M. La Follette, a Republican senator from Wisconsin, split from the GOP to form a new Progressive Party. Many believed that the split in the Republican Party, like the one in 1912, would allow a Democrat to win the presidency.
After the conventions and the death of his younger son Calvin, Coolidge became withdrawn; he later said that "when he [the son] died, the power and glory of the Presidency went with him." Even as he mourned, Coolidge ran his standard campaign, not mentioning his opponents by name or maligning them, and delivering speeches on his theory of government, including several that were broadcast over the radio. It was the most subdued campaign since 1896, partly because of Coolidge's grief, but also because of his naturally non-confrontational style. The other candidates campaigned in a more modern fashion, but despite the split in the Republican party, the results were similar to those of 1920. Coolidge and Dawes won every state outside the South except Wisconsin, La Follette's home state. Coolidge won the election with 382 electoral votes and the popular vote by 2.5 million over his opponents' combined total.
During Coolidge's presidency, the United States experienced a period of rapid economic growth known as the "Roaring Twenties". He left the administration's industrial policy in the hands of his activist Secretary of Commerce, Herbert Hoover, who energetically used government auspices to promote business efficiency and develop airlines and radio. Coolidge disdained regulation and demonstrated this by appointing commissioners to the Federal Trade Commission and the Interstate Commerce Commission who did little to restrict the activities of businesses under their jurisdiction. The regulatory state under Coolidge was, as one biographer described it, "thin to the point of invisibility".
Historian Robert Sobel offers some context of Coolidge's laissez-faire ideology, based on the prevailing understanding of federalism during his presidency: "As Governor of Massachusetts, Coolidge supported wages and hours legislation, opposed child labor, imposed economic controls during World War I, favored safety measures in factories, and even worker representation on corporate boards. Did he support these measures while president? No, because in the 1920s, such matters were considered the responsibilities of state and local governments." However, Coolidge did sign the Radio Act of 1927 into law that established the Federal Radio Commission (1927–1934), the equal-time rule for radio broadcasters in the United States, and restricted radio broadcasting licenses to stations that demonstrated that they served "the public interest, convenience, or necessity".
Coolidge adopted the taxation policies of his Secretary of the Treasury, Andrew Mellon, who advocated "scientific taxation" – the notion that lowering taxes will increase, rather than decrease, government receipts. Congress agreed, and tax rates were reduced in Coolidge's term. In addition to federal tax cuts, Coolidge proposed reductions in federal expenditures and retiring of the federal debt. Coolidge's ideas were shared by the Republicans in Congress, and in 1924, Congress passed the Revenue Act of 1924, which reduced income tax rates and eliminated all income taxation for some two million people. They reduced taxes again by passing the Revenue Acts of 1926 and 1928, all the while continuing to keep spending down so as to reduce the overall federal debt. By 1927, only the wealthiest 2% of taxpayers paid any federal income tax. Federal spending remained flat during Coolidge's administration, allowing one-fourth of the federal debt to be retired in total. State and local governments saw considerable growth, however, surpassing the federal budget in 1927. By 1929, after Coolidge's series of tax rate reductions had cut the tax rate to 24 percent on those making over $100,000, the federal government collected more than a billion dollars in income taxes, of which 65 percent was collected from those making over $100,000. In 1921, when the tax rate on people making over $100,000 a year was 73 percent, the federal government collected a little over $700 million in income taxes, of which 30 percent was paid by those making over $100,000.
Perhaps the most contentious issue of Coolidge's presidency was relief for farmers. Some in Congress proposed a bill designed to fight falling agricultural prices by allowing the federal government to purchase crops to sell abroad at lower prices. Agriculture Secretary Henry C. Wallace and other administration officials favored the bill when it was introduced in 1924, but rising prices convinced many in Congress that the bill was unnecessary, and it was defeated just before the elections that year. In 1926, with farm prices falling once more, Senator Charles L. McNary and Representative Gilbert N. Haugen – both Republicans – proposed the McNary–Haugen Farm Relief Bill. The bill proposed a federal farm board that would purchase surplus production in high-yield years and hold it (when feasible) for later sale or sell it abroad. Coolidge opposed McNary–Haugen, declaring that agriculture must stand "on an independent business basis", and said that "government control cannot be divorced from political control." Instead of manipulating prices, he favored Herbert Hoover's proposal to increase profitability by modernizing agriculture. Secretary Mellon wrote a letter denouncing the McNary–Haugen measure as unsound and likely to cause inflation, and it was defeated.
After McNary-Haugen's defeat, Coolidge supported a less radical measure, the Curtis-Crisp Act, which would have created a federal board to lend money to farm co-operatives in times of surplus; the bill did not pass. In February 1927, Congress took up the McNary-Haugen bill again, this time narrowly passing it, and Coolidge vetoed it. In his veto message, he expressed the belief that the bill would do nothing to help farmers, benefiting only exporters and expanding the federal bureaucracy. Congress did not override the veto, but it passed the bill again in May 1928 by an increased majority; again, Coolidge vetoed it. "Farmers never have made much money," said Coolidge, the Vermont farmer's son. "I do not believe we can do much about it."
Coolidge has often been criticized for his actions during the Great Mississippi Flood of 1927, the worst natural disaster to hit the Gulf Coast until Hurricane Katrina in 2005. Although he did eventually name Secretary Hoover to a commission in charge of flood relief, scholars argue that Coolidge overall showed a lack of interest in federal flood control. Coolidge did not believe that personally visiting the region after the floods would accomplish anything, and that it would be seen as mere political grandstanding. He also did not want to incur the federal spending that flood control would require; he believed property owners should bear much of the cost. On the other hand, Congress wanted a bill that would place the federal government completely in charge of flood mitigation. When Congress passed a compromise measure in 1928, Coolidge declined to take credit for it and signed the bill in private on May 15.
According to one biographer, Coolidge was "devoid of racial prejudice", but rarely took the lead on civil rights. Coolidge disliked the Ku Klux Klan and no Klansman is known to have received an appointment from him. In the 1924 presidential election his opponents (Robert La Follette and John Davis), and his running mate Charles Dawes, often attacked the Klan but Coolidge avoided the subject. During his administration, lynchings of African-Americans decreased and millions of people left the Ku Klux Klan.
Coolidge spoke in favor of the civil rights of African Americans, saying in his first State of the Union address that their rights were "just as sacred as those of any other citizen" under the U.S. Constitution and that it was a "public and a private duty to protect those rights."
Coolidge repeatedly called for laws to make lynching a federal crime (it was already a state crime, though not always enforced). Congress refused to pass any such legislation. On June 2, 1924, Coolidge signed the Indian Citizenship Act, which granted U.S. citizenship to all Native Americans living on reservations. (Those off reservations had long been citizens.) On June 6, 1924, Coolidge delivered a commencement address at historically black, non-segregated Howard University, in which he thanked and commended African Americans for their rapid advances in education and their contributions to U.S. society over the years, as well as their eagerness to render their services as soldiers in the World War, all while being faced with discrimination and prejudices at home.
In a speech in October 1924, Coolidge stressed tolerance of differences as an American value and thanked immigrants for their contributions to U.S. society, saying that they have "contributed much to making our country what it is." He stated that although the diversity of peoples had been a divisive source of conflict and tension in Europe, it was peculiar to the United States that such diversity was a "harmonious" benefit for the country. Coolidge further stated that the United States should assist and help immigrants who come to the country, and he urged immigrants to reject "race hatreds" and "prejudices".
Coolidge was neither well versed nor especially interested in world affairs. His focus was directed mainly at American business, especially trade, and at maintaining the status quo. Although not an isolationist, he was reluctant to enter into European involvements. While Coolidge believed strongly in a non-interventionist foreign policy, he did believe that the United States was exceptional.
Coolidge considered the 1920 Republican victory as a rejection of the Wilsonian position that the United States should join the League of Nations. Coolidge believed the League did not serve American interests. However, he spoke in favor of joining the Permanent Court of International Justice (World Court), provided that the nation would not be bound by advisory decisions. In 1926, the Senate eventually approved joining the Court (with reservations). The League of Nations accepted the reservations, but it suggested some modifications of its own. The Senate failed to act and so the United States did not join the World Court.
In 1924 the Coolidge administration nominated Charles Dawes to head the multi-national committee that produced the Dawes Plan. It set fixed annual amounts for Germany's World War I reparations payments and authorized a large loan, mostly from American banks, to help stabilize and stimulate the German economy. Additionally, Coolidge attempted to pursue further curbs on naval strength following the early successes of Harding's Washington Naval Conference by sponsoring the Geneva Naval Conference in 1927, which failed owing to a French and Italian boycott and the ultimate inability of Great Britain and the United States to agree on cruiser tonnages. In the wake of the conference's failure, Congress authorized increased American naval spending in 1928. The Kellogg–Briand Pact of 1928, named for Coolidge's Secretary of State, Frank B. Kellogg, and French foreign minister Aristide Briand, was also a key peacekeeping initiative. The treaty, ratified in 1929, committed signatories – the United States, the United Kingdom, France, Germany, Italy, and Japan – to "renounce war, as an instrument of national policy in their relations with one another". The treaty did not achieve its intended result – the outlawry of war – but it did provide the founding principle for international law after World War II. Coolidge also continued the previous administration's policy of withholding recognition of the Soviet Union.
Efforts were made to normalize ties with post-Revolution Mexico. Coolidge recognized Mexico's new governments under Álvaro Obregón and Plutarco Elías Calles, continued American support for the elected Mexican government against the National League for the Defense of Religious Liberty during the Cristero War, and lifted the arms embargo on that country. He also appointed Dwight Morrow as Ambassador to Mexico, who successfully worked to avert further American conflict with Mexico.
Coolidge's administration continued the occupations of Nicaragua and Haiti, and ended the occupation of the Dominican Republic in 1924 as a result of withdrawal agreements finalized during Harding's administration. In 1925, Coolidge ordered the withdrawal of Marines stationed in Nicaragua following perceived stability after the 1924 Nicaraguan general election, but redeployed them there in January 1927 following failed attempts to peacefully resolve the rapid deterioration of political stability and avert the ensuing Constitutionalist War; Henry L. Stimson was later sent by Coolidge to mediate a peace deal that ended the civil war and extended the American military presence in Nicaragua beyond Coolidge's term in office.
To extend an olive branch to Latin American leaders embittered over America's interventionist policies in Central America and the Caribbean, Coolidge led the U.S. delegation to the Sixth International Conference of American States, January 15–17, 1928, in Havana, Cuba, the only international trip Coolidge made during his presidency. He would be the last sitting American president to visit Cuba until Barack Obama in 2016.
For Canada, Coolidge authorized the St. Lawrence Seaway, a system of locks and canals that would provide large vessels passage between the Atlantic Ocean and the Great Lakes.
Although a few of Harding's cabinet appointees were scandal-tarred, Coolidge initially retained all of them, out of an ardent conviction that as successor to a deceased elected president he was obligated to retain Harding's counselors and policies until the next election. He kept Harding's able speechwriter Judson T. Welliver; Stuart Crawford replaced Welliver in November 1925. Coolidge appointed C. Bascom Slemp, a Virginia Congressman and experienced federal politician, to work jointly with Edward T. Clark, a Massachusetts Republican organizer whom he retained from his vice-presidential staff, as Secretaries to the President (a position equivalent to the modern White House Chief of Staff).
Perhaps the most powerful person in Coolidge's Cabinet was Secretary of the Treasury Andrew Mellon, who controlled the administration's financial policies and was regarded by many, including House Minority Leader John Nance Garner, as more powerful than Coolidge himself. Secretary of Commerce Herbert Hoover also held a prominent place in Coolidge's Cabinet, in part because Coolidge found value in Hoover's ability to win positive publicity with his pro-business proposals. Secretary of State Charles Evans Hughes directed Coolidge's foreign policy until he resigned in 1925 following Coolidge's re-election. He was replaced by Frank B. Kellogg, who had previously served as a senator and as the ambassador to Great Britain. Coolidge made two other appointments following his re-election, with William M. Jardine taking the position of Secretary of Agriculture and John G. Sargent becoming Attorney General. Coolidge did not have a vice president during his first term, but Charles Dawes became vice president during Coolidge's second term, and Dawes and Coolidge clashed over farm policy and other issues.
Coolidge appointed one justice to the Supreme Court of the United States, Harlan F. Stone, in 1925. Stone was Coolidge's fellow Amherst alumnus, a Wall Street lawyer, and a conservative Republican. Stone was serving as dean of Columbia Law School when Coolidge appointed him attorney general in 1924 to restore the Justice Department's reputation, which had been tarnished by Harding's attorney general, Harry M. Daugherty. It does not appear that Coolidge considered appointing anyone other than Stone, although Stone himself had urged Coolidge to appoint Benjamin N. Cardozo. Stone proved to be a firm believer in judicial restraint and was regarded as one of the court's three liberal justices who would often vote to uphold New Deal legislation. President Franklin D. Roosevelt later appointed Stone to be chief justice.
Coolidge nominated 17 judges to the United States Courts of Appeals and 61 judges to the United States district courts. He appointed judges to various specialty courts as well, including Genevieve R. Cline, who became the first woman named to the federal judiciary when Coolidge placed her on the United States Customs Court in 1928. Coolidge also signed the Judiciary Act of 1925 into law, allowing the Supreme Court more discretion over its workload.
In the summer of 1927, Coolidge vacationed in the Black Hills of South Dakota. While on vacation, Coolidge surprised observers by issuing a terse statement that he would not seek a second full term as president: "I do not choose to run for President in 1928." After allowing the reporters to take that in, Coolidge elaborated: "If I take another term, I will be in the White House till 1933 … Ten years in Washington is longer than any other man has had it – too long!" In his memoirs, Coolidge explained his decision not to run: "The Presidential office takes a heavy toll of those who occupy it and those who are dear to them. While we should not refuse to spend and be spent in the service of our country, it is hazardous to attempt what we feel is beyond our strength to accomplish." After leaving office, he and Grace returned to Northampton, where he wrote his memoirs. The Republicans retained the White House in 1928 with Herbert Hoover's landslide victory. Coolidge had been reluctant to endorse Hoover as his successor; on one occasion he remarked that "for six years that man has given me unsolicited advice – all of it bad." Even so, Coolidge had no desire to split the party by publicly opposing the nomination of the popular commerce secretary.
After his presidency, Coolidge retired to a spacious home in Northampton, "The Beeches". He kept a Hacker runabout boat on the Connecticut River and was often observed on the water by local boating enthusiasts. During this period, he also served as chairman of the Non-Partisan Railroad Commission, an entity created by several banks and corporations to survey the country's long-term transportation needs and make recommendations for improvements. He was an honorary president of the American Foundation for the Blind, a director of New York Life Insurance Company, president of the American Antiquarian Society, and a trustee of Amherst College.
Coolidge published his autobiography in 1929 and wrote a syndicated newspaper column, "Calvin Coolidge Says", from 1930 to 1931. Faced with the prospect of a Democratic landslide in the 1932 presidential election, some Republicans spoke of rejecting Hoover as their party's nominee and instead drafting Coolidge to run. Coolidge made it clear that he was not interested in running again and that he would publicly repudiate any effort to draft him. Hoover was renominated, and Coolidge made several radio addresses in support of him. Hoover then lost the general election in a landslide to Franklin D. Roosevelt, who had been the Democratic vice-presidential nominee opposing Coolidge in 1920.
Coolidge died suddenly from coronary thrombosis at "The Beeches" on January 5, 1933, at 12:45 p.m., aged 60. Shortly before his death, Coolidge confided to an old friend: "I feel I no longer fit in with these times." Coolidge is buried in Plymouth Notch Cemetery, Plymouth Notch, Vermont. The nearby family home is maintained as one of the original buildings on the Calvin Coolidge Homestead District site. The State of Vermont dedicated a new visitors' center nearby to mark Coolidge's 100th birthday on July 4, 1972.
Despite his reputation as a quiet and even reclusive politician, Coolidge made use of the new medium of radio and made radio history several times while president. He made himself available to reporters, giving 520 press conferences and meeting with reporters more regularly than any president before or since. Coolidge's second inauguration was the first presidential inauguration broadcast on radio. On December 6, 1923, his speech to Congress was broadcast on radio, the first presidential radio address. Coolidge signed the Radio Act of 1927, which assigned regulation of radio to the newly created Federal Radio Commission. On August 11, 1924, Theodore W. Case, using the Phonofilm sound-on-film process he developed for Lee de Forest, filmed Coolidge on the White House lawn, making "Silent Cal" the first president to appear in a sound film. The title of the DeForest film was President Coolidge, Taken on the White House Grounds. When Charles Lindbergh arrived in Washington on a U.S. Navy ship after his celebrated 1927 trans-Atlantic flight, President Coolidge welcomed him back to the U.S. and presented him with the Distinguished Flying Cross; the event was captured on film.
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 22,
"text": "Coolidge was unopposed for the Republican nomination for Governor of Massachusetts in 1918. He and his running mate, Channing Cox, a Boston lawyer and Speaker of the Massachusetts House of Representatives, ran on the previous administration's record: fiscal conservatism, a vague opposition to Prohibition, support for women's suffrage, and support for American involvement in World War I. The issue of the war proved divisive, especially among Irish and German Americans. Coolidge was elected by a margin of 16,773 votes over his opponent, Richard H. Long, in the smallest margin of victory of any of his statewide campaigns.",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 23,
"text": "In 1919, in reaction to a plan of the policemen of the Boston Police Department to register with a union, Police Commissioner Edwin U. Curtis announced that such an act would not be tolerated. In August of that year, the American Federation of Labor issued a charter to the Boston Police Union. Curtis declared the union's leaders were guilty of insubordination and would be relieved of duty, but indicated he would cancel their suspension if the union was dissolved by September 4. The mayor of Boston, Andrew Peters, convinced Curtis to delay his action for a few days, but with no results, and Curtis suspended the union leaders on September 8. The following day, about three-quarters of the policemen in Boston went on strike. Coolidge, tacitly but fully in support of Curtis' position, closely monitored the situation but initially deferred to the local authorities. He anticipated that only a resulting measure of lawlessness could sufficiently prompt the public to understand and appreciate the controlling principle – that a policeman does not strike. That night and the next, there was sporadic violence and rioting in the unruly city. Peters, concerned about sympathy strikes by the firemen and others, called up some units of the Massachusetts National Guard stationed in the Boston area pursuant to an old and obscure legal authority, and relieved Curtis of duty.",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 24,
"text": "Coolidge, sensing the severity of circumstances were then in need of his intervention, conferred with Crane's operative, William Butler, and then acted. He called up more units of the National Guard, restored Curtis to office, and took personal control of the police force. Curtis proclaimed that all of the strikers were fired from their jobs, and Coolidge called for a new police force to be recruited. That night Coolidge received a telegram from AFL leader Samuel Gompers. \"Whatever disorder has occurred\", Gompers wrote, \"is due to Curtis's order in which the right of the policemen has been denied…\" Coolidge publicly answered Gompers's telegram, denying any justification whatsoever for the strike – and his response launched him into the national consciousness. Newspapers across the nation picked up on Coolidge's statement and he became the newest hero to opponents of the strike. Amid of the First Red Scare, many Americans were terrified of the spread of communist revolutions, like those that had taken place in Russia, Hungary, and Germany. While Coolidge had lost some friends among organized labor, conservatives across the nation had seen a rising star. Although he usually acted with deliberation, the Boston police strike gave him a national reputation as a decisive leader, and as a strict enforcer of law and order.",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 25,
"text": "Coolidge and Cox were renominated for their respective offices in 1919. By this time Coolidge's supporters (especially Stearns) had publicized his actions in the Police Strike around the state and the nation and some of Coolidge's speeches were published in book form. He faced the same opponent as in 1918, Richard Long, but this time Coolidge defeated him by 125,101 votes, more than seven times his margin of victory from a year earlier. His actions in the police strike, combined with the massive electoral victory, led to suggestions that Coolidge run for president in 1920.",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 26,
"text": "By the time Coolidge was inaugurated on January 2, 1919, the First World War had ended, and Coolidge pushed the legislature to give a $100 bonus (equivalent to $1,688 in 2022) to Massachusetts veterans. He also signed a bill reducing the work week for women and children from fifty-four hours to forty-eight, saying, \"We must humanize the industry, or the system will break down.\" He signed into law a budget that kept the tax rates the same, while trimming $4 million from expenditures, thus allowing the state to retire some of its debt.",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 27,
"text": "Coolidge also wielded the veto pen as governor. His most publicized veto prevented an increase in legislators' pay by 50%. Although he was personally opposed to Prohibition, he vetoed a bill in May 1920 that would have allowed the sale of beer or wine of 2.75% alcohol or less, in Massachusetts in violation of the Eighteenth Amendment to the United States Constitution. \"Opinions and instructions do not outmatch the Constitution,\" he said in his veto message. \"Against it, they are void.\"",
"title": "Lieutenant Governor and Governor of Massachusetts (1916−1921)"
},
{
"paragraph_id": 28,
"text": "At the 1920 Republican National Convention, most of the delegates were selected by state party caucuses, not primaries. As such, the field was divided among many local favorites. Coolidge was one such candidate, and while he placed as high as sixth in the voting, the powerful party bosses running the convention, primarily the party's U.S. Senators, never considered him seriously. After ten ballots, the bosses and then the delegates settled on Senator Warren G. Harding of Ohio as their nominee for president. When the time came to select a vice presidential nominee, the bosses also made and announced their decision on whom they wanted – Sen. Irvine Lenroot of Wisconsin – and then prematurely departed after his name was put forth, relying on the rank and file to confirm their decision. A delegate from Oregon, Wallace McCamant, having read Have Faith in Massachusetts, proposed Coolidge for vice president instead. The suggestion caught on quickly with the masses starving for an act of independence from the absent bosses, and Coolidge was unexpectedly nominated.",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 29,
"text": "The Democrats nominated another Ohioan, James M. Cox, for president and the Assistant Secretary of the Navy, Franklin D. Roosevelt, for vice president. The question of the United States joining the League of Nations was a major issue in the campaign, as was the unfinished legacy of Progressivism. Harding ran a \"front-porch\" campaign from his home in Marion, Ohio, but Coolidge took to the campaign trail in the Upper South, New York, and New England – his audiences carefully limited to those familiar with Coolidge and those placing a premium upon concise and short speeches. On November 2, 1920, Harding and Coolidge were victorious in a landslide, winning more than 60 percent of the popular vote, including every state outside the South. They also won in Tennessee, the first time a Republican ticket had won a Southern state since Reconstruction.",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 30,
"text": "The U.S. vice-presidency did not carry many official duties, but Coolidge was invited by President Harding to attend cabinet meetings, making him the first vice president to do so. He gave a number of unremarkable speeches around the country.",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 31,
"text": "As vice president, Coolidge and his vivacious wife Grace were invited to quite a few parties, where the legend of \"Silent Cal\" was born. It is from this time that most of the jokes and anecdotes involving Coolidge originate, such as Coolidge being \"silent in five languages\". Although Coolidge was known to be a skilled and effective public speaker, in private he was a man of few words and was commonly referred to as \"Silent Cal\". An apocryphal story has it that a person seated next to him at a dinner said to him, \"I made a bet today that I could get more than two words out of you.\" He replied, \"You lose.\" However, on April 22, 1924, Coolidge himself said that the \"You lose\" quotation never occurred. The story about it was related by Frank B. Noyes, President of the Associated Press, to their membership at their annual luncheon at the Waldorf Astoria Hotel, when toasting and introducing Coolidge, who was the invited speaker. After the introduction and before his prepared remarks, Coolidge said to the membership, \"Your President [referring to Noyes] has given you a perfect example of one of those rumors now current in Washington which is without any foundation.\"",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 32,
"text": "Coolidge often seemed uncomfortable among fashionable Washington society; when asked why he continued to attend so many of their dinner parties, he replied, \"Got to eat somewhere.\" Alice Roosevelt Longworth, a leading Republican wit, underscored Coolidge's silence and his dour personality: \"When he wished he were elsewhere, he pursed his lips, folded his arms, and said nothing. He looked then precisely as though he had been weaned on a pickle.\" Coolidge and his wife, Grace, who was a great baseball fan, once attended a Washington Senators game and sat through all nine innings without saying a word, except once when he asked her the time.",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 33,
"text": "As president, Coolidge's reputation as a quiet man continued. \"The words of a President have an enormous weight,\" he would later write, \"and ought not to be used indiscriminately.\" Coolidge was aware of his stiff reputation; indeed, he cultivated it. \"I think the American people want a solemn ass as a President,\" he once told Ethel Barrymore, \"and I think I will go along with them.\" Some historians suggest that Coolidge's image was created deliberately as a campaign tactic, while others believe his withdrawn and quiet behavior to be natural, deepening after the death of his son in 1924. Dorothy Parker, upon learning that Coolidge had died, reportedly remarked, \"How can they tell?\"",
"title": "Vice presidency (1921–1923)"
},
{
"paragraph_id": 34,
"text": "On August 2, 1923, President Harding died unexpectedly from a heart attack in San Francisco while on a speaking tour of the western United States. Vice President Coolidge was in Vermont visiting his family home, which had neither electricity nor a telephone, when he received word by messenger of Harding's death. Coolidge dressed, said a prayer, and came downstairs to greet the reporters who had assembled. His father, a notary public and justice of the peace, administered the oath of office in the family's parlor by the light of a kerosene lamp at 2:47 a.m. on August 3, 1923, whereupon the new President of the United States returned to bed.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 35,
"text": "Coolidge returned to Washington the next day, and was sworn in again by Justice Adolph A. Hoehling Jr. of the Supreme Court of the District of Columbia, to forestall any questions about the authority of a state official to administer a federal oath. This second oath-taking remained a secret until it was revealed by Harry M. Daugherty in 1932, and confirmed by Hoehling. When Hoehling confirmed Daugherty's story, he indicated that Daugherty, then serving as United States Attorney General, asked him to administer the oath without fanfare at the Willard Hotel. According to Hoehling, he did not question Daugherty's reason for requesting a second oath-taking but assumed it was to resolve any doubt about whether the first swearing-in was valid.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 36,
"text": "The nation initially did not know what to make of Coolidge, who had maintained a low profile in the Harding administration; many had even expected him to be replaced on the ballot in 1924. Coolidge believed that those of Harding's men under suspicion were entitled to every presumption of innocence, taking a methodical approach to the scandals, principally the Teapot Dome scandal, while others clamored for rapid punishment of those they presumed guilty. Coolidge thought the Senate investigations of the scandals would suffice; this was affirmed by the resulting resignations of those involved. He personally intervened in demanding the resignation of Attorney General Harry M. Daugherty after he refused to cooperate with the congressional probe. He then set about to confirm that no loose ends remained in the administration, arranging for a full briefing on the wrongdoing. Harry A. Slattery reviewed the facts with him, Harlan F. Stone analyzed the legal aspects for him and Senator William E. Borah assessed and presented the political factors.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 37,
"text": "Coolidge addressed Congress when it reconvened on December 6, 1923, giving a speech that supported many of Harding's policies, including Harding's formal budgeting process, the enforcement of immigration restrictions and arbitration of coal strikes ongoing in Pennsylvania. The address to Congress was the first presidential speech to be broadcast over the radio. The Washington Naval Treaty was proclaimed just one month into Coolidge's term, and was generally well received in the country. In May 1924, the World War I veterans' World War Adjusted Compensation Act or \"Bonus Bill\" was passed over his veto. Coolidge signed the Immigration Act later that year, which was aimed at restricting southern and eastern European immigration, but appended a signing statement expressing his unhappiness with the bill's specific exclusion of Japanese immigrants. Just before the Republican Convention began, Coolidge signed into law the Revenue Act of 1924, which reduced the top marginal tax rate from 58 percent to 46 percent, as well as personal income tax rates across the board, increased the estate tax and bolstered it with a new gift tax.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 38,
"text": "On June 2, 1924, Coolidge signed the act granting citizenship to all Native Americans born in the United States. By that time, two-thirds of them were already citizens, having gained it through marriage, military service (veterans of World War I were granted citizenship in 1919), or the land allotments that had earlier taken place.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 39,
"text": "The Republican Convention was held from June 10 to 12, 1924, in Cleveland, Ohio; Coolidge was nominated on the first ballot. The convention nominated Frank Lowden of Illinois for vice president on the second ballot, but he declined; former Brigadier General Charles G. Dawes was nominated on the third ballot and accepted.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 40,
"text": "The Democrats held their convention the next month in New York City. The convention soon deadlocked, and after 103 ballots, the delegates finally agreed on a compromise candidate, John W. Davis, with Charles W. Bryan nominated for vice president. The Democrats' hopes were buoyed when Robert M. La Follette, a Republican senator from Wisconsin, split from the GOP to form a new Progressive Party. Many believed that the split in the Republican Party, like the one in 1912, would allow a Democrat to win the presidency.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 41,
"text": "After the conventions and the death of his younger son Calvin, Coolidge became withdrawn; he later said that \"when he [the son] died, the power and glory of the Presidency went with him.\" Even as he mourned, Coolidge ran his standard campaign, not mentioning his opponents by name or maligning them, and delivering speeches on his theory of government, including several that were broadcast over the radio. It was the most subdued campaign since 1896, partly because of Coolidge's grief, but also because of his naturally non-confrontational style. The other candidates campaigned in a more modern fashion, but despite the split in the Republican party, the results were similar to those of 1920. Coolidge and Dawes won every state outside the South except Wisconsin, La Follette's home state. Coolidge won the election with 382 electoral votes and the popular vote by 2.5 million over his opponents' combined total.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 42,
"text": "During Coolidge's presidency, the United States experienced a period of rapid economic growth known as the \"Roaring Twenties\". He left the administration's industrial policy in the hands of his activist Secretary of Commerce, Herbert Hoover, who energetically used government auspices to promote business efficiency and develop airlines and radio. Coolidge disdained regulation and demonstrated this by appointing commissioners to the Federal Trade Commission and the Interstate Commerce Commission who did little to restrict the activities of businesses under their jurisdiction. The regulatory state under Coolidge was, as one biographer described it, \"thin to the point of invisibility\".",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 43,
"text": "Historian Robert Sobel offers some context of Coolidge's laissez-faire ideology, based on the prevailing understanding of federalism during his presidency: \"As Governor of Massachusetts, Coolidge supported wages and hours legislation, opposed child labor, imposed economic controls during World War I, favored safety measures in factories, and even worker representation on corporate boards. Did he support these measures while president? No, because in the 1920s, such matters were considered the responsibilities of state and local governments.\" However, Coolidge did sign the Radio Act of 1927 into law that established the Federal Radio Commission (1927–1934), the equal-time rule for radio broadcasters in the United States, and restricted radio broadcasting licenses to stations that demonstrated that they served \"the public interest, convenience, or necessity\".",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 44,
"text": "Coolidge adopted the taxation policies of his Secretary of the Treasury, Andrew Mellon, who advocated \"scientific taxation\" – the notion that lowering taxes will increase, rather than decrease, government receipts. Congress agreed, and tax rates were reduced in Coolidge's term. In addition to federal tax cuts, Coolidge proposed reductions in federal expenditures and retiring of the federal debt. Coolidge's ideas were shared by the Republicans in Congress, and in 1924, Congress passed the Revenue Act of 1924, which reduced income tax rates and eliminated all income taxation for some two million people. They reduced taxes again by passing the Revenue Acts of 1926 and 1928, all the while continuing to keep spending down so as to reduce the overall federal debt. By 1927, only the wealthiest 2% of taxpayers paid any federal income tax. Federal spending remained flat during Coolidge's administration, allowing one-fourth of the federal debt to be retired in total. State and local governments saw considerable growth, however, surpassing the federal budget in 1927. By 1929, after Coolidge's series of tax rate reductions had cut the tax rate to 24 percent on those making over $100,000, the federal government collected more than a billion dollars in income taxes, of which 65 percent was collected from those making over $100,000. In 1921, when the tax rate on people making over $100,000 a year was 73 percent, the federal government collected a little over $700 million in income taxes, of which 30 percent was paid by those making over $100,000.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 45,
"text": "Perhaps the most contentious issue of Coolidge's presidency was relief for farmers. Some in Congress proposed a bill designed to fight falling agricultural prices by allowing the federal government to purchase crops to sell abroad at lower prices. Agriculture Secretary Henry C. Wallace and other administration officials favored the bill when it was introduced in 1924, but rising prices convinced many in Congress that the bill was unnecessary, and it was defeated just before the elections that year. In 1926, with farm prices falling once more, Senator Charles L. McNary and Representative Gilbert N. Haugen – both Republicans – proposed the McNary–Haugen Farm Relief Bill. The bill proposed a federal farm board that would purchase surplus production in high-yield years and hold it (when feasible) for later sale or sell it abroad. Coolidge opposed McNary-Haugen, declaring that agriculture must stand \"on an independent business basis\", and said that \"government control cannot be divorced from political control.\" Instead of manipulating prices, he favored instead Herbert Hoover's proposal to increase profitability by modernizing agriculture. Secretary Mellon wrote a letter denouncing the McNary-Haugen measure as unsound and likely to cause inflation, and it was defeated.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 46,
"text": "After McNary-Haugen's defeat, Coolidge supported a less radical measure, the Curtis-Crisp Act, which would have created a federal board to lend money to farm co-operatives in times of surplus; the bill did not pass. In February 1927, Congress took up the McNary-Haugen bill again, this time narrowly passing it, and Coolidge vetoed it. In his veto message, he expressed the belief that the bill would do nothing to help farmers, benefiting only exporters and expanding the federal bureaucracy. Congress did not override the veto, but it passed the bill again in May 1928 by an increased majority; again, Coolidge vetoed it. \"Farmers never have made much money,\" said Coolidge, the Vermont farmer's son. \"I do not believe we can do much about it.\"",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 47,
"text": "Coolidge has often been criticized for his actions during the Great Mississippi Flood of 1927, the worst natural disaster to hit the Gulf Coast until Hurricane Katrina in 2005. Although he did eventually name Secretary Hoover to a commission in charge of flood relief, scholars argue that Coolidge overall showed a lack of interest in federal flood control. Coolidge did not believe that personally visiting the region after the floods would accomplish anything, and that it would be seen as mere political grandstanding. He also did not want to incur the federal spending that flood control would require; he believed property owners should bear much of the cost. On the other hand, Congress wanted a bill that would place the federal government completely in charge of flood mitigation. When Congress passed a compromise measure in 1928, Coolidge declined to take credit for it and signed the bill in private on May 15.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 48,
"text": "According to one biographer, Coolidge was \"devoid of racial prejudice\", but rarely took the lead on civil rights. Coolidge disliked the Ku Klux Klan and no Klansman is known to have received an appointment from him. In the 1924 presidential election his opponents (Robert La Follette and John Davis), and his running mate Charles Dawes, often attacked the Klan but Coolidge avoided the subject. During his administration, lynchings of African-Americans decreased and millions of people left the Ku Klux Klan.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 49,
"text": "Coolidge spoke in favor of the civil rights of African Americans, saying in his first State of the Union address that their rights were \"just as sacred as those of any other citizen\" under the U.S. Constitution and that it was a \"public and a private duty to protect those rights.\"",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 50,
"text": "Coolidge repeatedly called for laws to make lynching a federal crime (it was already a state crime, though not always enforced). Congress refused to pass any such legislation. On June 2, 1924, Coolidge signed the Indian Citizenship Act, which granted U.S. citizenship to all Native Americans living on reservations. (Those off reservations had long been citizens.) On June 6, 1924, Coolidge delivered a commencement address at historically black, non-segregated Howard University, in which he thanked and commended African Americans for their rapid advances in education and their contributions to U.S. society over the years, as well as their eagerness to render their services as soldiers in the World War, all while being faced with discrimination and prejudices at home.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 51,
"text": "In a speech in October 1924, Coolidge stressed tolerance of differences as an American value and thanked immigrants for their contributions to U.S. society, saying that they have \"contributed much to making our country what it is.\" He stated that although the diversity of peoples was a detrimental source of conflict and tension in Europe, it was peculiar for the United States that it was a \"harmonious\" benefit for the country. Coolidge further stated the United States should assist and help immigrants who come to the country and urged immigrants to reject \"race hatreds\" and \"prejudices\".",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 52,
"text": "Coolidge was neither well versed nor very interested in world affairs. His focus was directed mainly at American business, especially pertaining to trade, and \"Maintaining the Status Quo\". Although not an isolationist, he was reluctant to enter into European involvements. While Coolidge believed strongly in a non-interventionist foreign policy, he did believe that the United States was exceptional.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 53,
"text": "Coolidge considered the 1920 Republican victory as a rejection of the Wilsonian position that the United States should join the League of Nations. Coolidge believed the League did not serve American interests. However, he spoke in favor of joining the Permanent Court of International Justice (World Court), provided that the nation would not be bound by advisory decisions. In 1926, the Senate eventually approved joining the Court (with reservations). The League of Nations accepted the reservations, but it suggested some modifications of its own. The Senate failed to act and so the United States did not join the World Court.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 54,
"text": "In 1924 the Coolidge administration nominated Charles Dawes to head the multi-national committee that produced the Dawes Plan. It set fixed annual amounts for Germany's World War I reparations payments and authorized a large loan, mostly from American banks, to help stabilize and stimulate the German economy. Additionally, Coolidge attempted to pursue further curbs on naval strength following the early successes of Harding's Washington Naval Conference by sponsoring the Geneva Naval Conference in 1927, which failed owing to a French and Italian boycott and ultimate failure of Great Britain and the United States to agree on cruiser tonnages. As a result, the conference was a failure and Congress eventually authorized for increased American naval spending in 1928. The Kellogg–Briand Pact of 1928, named for Coolidge's Secretary of State, Frank B. Kellogg, and French foreign minister Aristide Briand, was also a key peacekeeping initiative. The treaty, ratified in 1929, committed signatories – the United States, the United Kingdom, France, Germany, Italy, and Japan – to \"renounce war, as an instrument of national policy in their relations with one another\". The treaty did not achieve its intended result – the outlawry of war – but it did provide the founding principle for international law after World War II. Coolidge also continued the previous administration's policy of withholding recognition of the Soviet Union.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 55,
"text": "Efforts were made to normalize ties with post-Revolution Mexico. Coolidge recognized Mexico's new governments under Álvaro Obregón and Plutarco Elías Calles, and continued American support for the elected Mexican government against the National League for the Defense of Religious Liberty during the Cristero War, lifting the arms embargo on that country; he also appointed Dwight Morrow as Ambassador to Mexico with the successful objective to avoid further American conflict with Mexico.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 56,
"text": "Coolidge's administration would see continuity in the occupation of Nicaragua and Haiti, and an end to the occupation of the Dominican Republic in 1924 as a result of withdrawal agreements finalized during Harding's administration. In 1925, Coolidge ordered the withdrawal of Marines stationed in Nicaragua following perceived stability after the 1924 Nicaraguan general election, but redeployed them there in January 1927 following failed attempts to peacefully resolve the rapid deterioration of political stability and avert the ensuing Constitutionalist War; Henry L. Stimson was later sent by Coolidge to mediate a peace deal that would end the civil war and extend American military presence in Nicaragua beyond Coolidge's term in office.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 57,
"text": "To extend an olive branch to Latin American leaders embittered over America's interventionist policies in Central America and the Caribbean, Coolidge led the U.S. delegation to the Sixth International Conference of American States, January 15–17, 1928, in Havana, Cuba, the only international trip Coolidge made during his presidency. He would be the last sitting American president to visit Cuba until Barack Obama in 2016.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 58,
"text": "For Canada, Coolidge authorized the St. Lawrence Seaway, a system of locks and canals that would provide large vessels passage between the Atlantic Ocean and the Great Lakes.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 59,
"text": "Although a few of Harding's cabinet appointees were scandal-tarred, Coolidge initially retained all of them, out of an ardent conviction that as successor to a deceased elected president he was obligated to retain Harding's counselors and policies until the next election. He kept Harding's able speechwriter Judson T. Welliver; Stuart Crawford replaced Welliver in November 1925. Coolidge appointed C. Bascom Slemp, a Virginia Congressman and experienced federal politician, to work jointly with Edward T. Clark, a Massachusetts Republican organizer whom he retained from his vice-presidential staff, as Secretaries to the President (a position equivalent to the modern White House Chief of Staff).",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 60,
"text": "Perhaps the most powerful person in Coolidge's Cabinet was Secretary of the Treasury Andrew Mellon, who controlled the administration's financial policies and was regarded by many, including House Minority Leader John Nance Garner, as more powerful than Coolidge himself. Secretary of Commerce Herbert Hoover also held a prominent place in Coolidge's Cabinet, in part because Coolidge found value in Hoover's ability to win positive publicity with his pro-business proposals. Secretary of State Charles Evans Hughes directed Coolidge's foreign policy until he resigned in 1925 following Coolidge's re-election. He was replaced by Frank B. Kellogg, who had previously served as a senator and as the ambassador to Great Britain. Coolidge made two other appointments following his re-election, with William M. Jardine taking the position of Secretary of Agriculture and John G. Sargent becoming Attorney General. Coolidge did not have a vice president during his first term, but Charles Dawes became vice president during Coolidge's second term, and Dawes and Coolidge clashed over farm policy and other issues.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 61,
"text": "Coolidge appointed one justice to the Supreme Court of the United States, Harlan F. Stone in 1925. Stone was Coolidge's fellow Amherst alumnus, a Wall Street lawyer and conservative Republican. Stone was serving as dean of Columbia Law School when Coolidge appointed him to be attorney general in 1924 to restore the reputation tarnished by Harding's Attorney General, Harry M. Daugherty. It does not appear that Coolidge considered appointing anyone other than Stone, although Stone himself had urged Coolidge to appoint Benjamin N. Cardozo. Stone proved to be a firm believer in judicial restraint and was regarded as one of the court's three liberal justices who would often vote to uphold New Deal legislation. President Franklin D. Roosevelt later appointed Stone to be chief justice.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 62,
"text": "Coolidge nominated 17 judges to the United States Courts of Appeals and 61 judges to the United States district courts. He appointed judges to various specialty courts as well, including Genevieve R. Cline, who became the first woman named to the federal judiciary when Coolidge placed her on the United States Customs Court in 1928. Coolidge also signed the Judiciary Act of 1925 into law, allowing the Supreme Court more discretion over its workload.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 63,
"text": "In the summer of 1927, Coolidge vacationed in the Black Hills of South Dakota. While on vacation, Coolidge surprisingly issued a terse statement that he would not seek a second full term as president: \"I do not choose to run for President in 1928.\" After allowing the reporters to take that in, Coolidge elaborated. \"If I take another term, I will be in the White House till 1933 … Ten years in Washington is longer than any other man has had it – too long!\" In his memoirs, Coolidge explained his decision not to run: \"The Presidential office takes a heavy toll of those who occupy it and those who are dear to them. While we should not refuse to spend and be spent in the service of our country, it is hazardous to attempt what we feel is beyond our strength to accomplish.\" After leaving office, he and Grace returned to Northampton, where he wrote his memoirs. The Republicans retained the White House in 1928 with a landslide by Herbert Hoover. Coolidge had been reluctant to endorse Hoover as his successor; on one occasion he remarked that \"for six years that man has given me unsolicited advice – all of it bad.\" Even so, Coolidge had no desire to split the party by publicly opposing the nomination of the popular commerce secretary.",
"title": "Presidency (1923–1929)"
},
{
"paragraph_id": 64,
"text": "After his presidency, Coolidge retired to a spacious home in Northampton, \"The Beeches\". He kept a Hacker runabout boat on the Connecticut River and was often observed on the water by local boating enthusiasts. During this period, he also served as chairman of the Non-Partisan Railroad Commission, an entity created by several banks and corporations to survey the country's long-term transportation needs and make recommendations for improvements. He was an honorary president of the American Foundation for the Blind, a director of New York Life Insurance Company, president of the American Antiquarian Society, and a trustee of Amherst College.",
"title": "Post-presidency (1929–1933)"
},
{
"paragraph_id": 65,
"text": "Coolidge published his autobiography in 1929 and wrote a syndicated newspaper column, \"Calvin Coolidge Says\", from 1930 to 1931. Faced with a Democratic landslide in the 1932 presidential election, some Republicans spoke of rejecting Hoover as their party's nominee, and instead drafting Coolidge to run. Coolidge made it clear that he was not interested in running again, and that he would publicly repudiate any effort to draft him. Hoover was renominated, and Coolidge made several radio addresses in support of him. Hoover then lost the general election to Coolidge's 1920 vice presidential Democratic opponent Franklin D. Roosevelt in a landslide.",
"title": "Post-presidency (1929–1933)"
},
{
"paragraph_id": 66,
"text": "Coolidge died suddenly from coronary thrombosis at \"The Beeches\" on January 5, 1933, at 12:45 p.m., aged 60. Shortly before his death, Coolidge confided to an old friend: \"I feel I no longer fit in with these times.\" Coolidge is buried in Plymouth Notch Cemetery, Plymouth Notch, Vermont. The nearby family home is maintained as one of the original buildings on the Calvin Coolidge Homestead District site. The State of Vermont dedicated a new visitors' center nearby to mark Coolidge's 100th birthday on July 4, 1972.",
"title": "Death"
},
{
"paragraph_id": 67,
"text": "Despite his reputation as a quiet and even reclusive politician, Coolidge made use of the new medium of radio and made radio history several times while president. He made himself available to reporters, giving 520 press conferences, meeting with reporters more regularly than any president before or since. Coolidge's second inauguration was the first presidential inauguration broadcast on radio. On December 6, 1923, his speech to Congress was broadcast on radio, the first presidential radio address. Coolidge signed the Radio Act of 1927, which assigned regulation of radio to the newly created Federal Radio Commission. On August 11, 1924, Theodore W. Case, using the Phonofilm sound-on-film process he developed for Lee de Forest, filmed Coolidge on the White House lawn, making \"Silent Cal\" the first president to appear in a sound film. The title of the DeForest film was President Coolidge, Taken on the White House Grounds. When Charles Lindbergh arrived in Washington on a U.S. Navy ship after his celebrated 1927 trans-Atlantic flight, President Coolidge welcomed him back to the U.S. and presented him with the Medal of Honor; the event was captured on film.",
"title": "Radio, film, and commemorations"
}
]
| Calvin Coolidge was an American attorney and politician who served as the 30th president of the United States from 1923 to 1929. Born in Vermont, Coolidge was a Republican lawyer from New England who climbed the ladder of Massachusetts politics, becoming the state's 48th governor. His response to the Boston police strike of 1919 thrust him into the national spotlight as a man of decisive action. The next year, Coolidge was elected the country's 29th vice president and succeeded to the presidency upon the sudden death of President Warren G. Harding in 1923. Elected in his own right in 1924, Coolidge gained a reputation as a small-government conservative with a taciturn personality and dry sense of humor that earned him the nickname "Silent Cal". Though his widespread popularity enabled him to run for a second full term, Coolidge chose not to run again in 1928, remarking that ten years as president would be "longer than any other man has had it – too long!" Throughout his gubernatorial career, Coolidge ran on the record of fiscal conservatism, strong support for women's suffrage, and a vague opposition to Prohibition. During his presidency, he restored public confidence in the White House after the many scandals of the Harding administration. He signed into law the Indian Citizenship Act of 1924, which granted U.S. citizenship to all Native Americans, and oversaw a period of rapid and expansive economic growth known as the "Roaring Twenties", leaving office with considerable popularity. He was known for his hands-off governing approach and pro-business stances; as biographer Claude Fuess wrote: "He embodied the spirit and hopes of the middle class, could interpret their longings and express their opinions. That he did represent the genius of the average is the most convincing proof of his strength." Scholars have ranked Coolidge in the lower half of U.S. presidents. He gains nearly universal praise for his stalwart support of racial equality during a period of heightened racial tension in the United States, and is highly praised by advocates of smaller government and laissez-faire economics, while supporters of an active central government generally view him far less favorably. His critics argue that he failed to use the country's economic boom to help struggling farmers and workers in other flailing industries, and there is still much debate among historians as to the extent to which Coolidge's economic policies contributed to the onset of the Great Depression. | 2001-08-23T20:58:42Z | 2023-12-07T12:29:17Z | [
"Template:Infobox officeholder",
"Template:Efn",
"Template:See also",
"Template:For timeline",
"Template:Refend",
"Template:External media",
"Template:Pp-move",
"Template:Snd",
"Template:Sfnm",
"Template:Reflist",
"Template:Cite journal",
"Template:Refbegin",
"Template:Library resources box",
"Template:Curlie",
"Template:Redirect",
"Template:Use mdy dates",
"Template:Sfn",
"Template:Inflation",
"Template:Cite encyclopedia",
"Template:Cbignore",
"Template:Authority control",
"Template:Internet Archive author",
"Template:About",
"Template:Pp",
"Template:Notelist",
"Template:Cite web",
"Template:Cite book",
"Template:Citation",
"Template:NYT topic",
"Template:C-SPAN",
"Template:IMDb name",
"Template:Calvin Coolidge",
"Template:Featured article",
"Template:Use American English",
"Template:Main",
"Template:Clear",
"Template:Portal bar",
"Template:Short description",
"Template:Circa",
"Template:SS",
"Template:Cite news",
"Template:Sister project links",
"Template:Gutenberg author",
"Template:PM20",
"Template:IPAc-en",
"Template:CongBio",
"Template:Navboxes"
]
| https://en.wikipedia.org/wiki/Calvin_Coolidge |
6,198 | Convention on Biological Diversity | The Convention on Biological Diversity (CBD), known informally as the Biodiversity Convention, is a multilateral treaty. The Convention has three main goals: the conservation of biological diversity (or biodiversity); the sustainable use of its components; and the fair and equitable sharing of benefits arising from genetic resources. Its objective is to develop national strategies for the conservation and sustainable use of biological diversity, and it is often seen as the key document regarding sustainable development.
The Convention was opened for signature at the Earth Summit in Rio de Janeiro on 5 June 1992 and entered into force on 29 December 1993. The United States is the only UN member state which has not ratified the Convention. It has two supplementary agreements, the Cartagena Protocol and Nagoya Protocol.
The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international treaty governing the movements of living modified organisms (LMOs) resulting from modern biotechnology from one country to another. It was adopted on 29 January 2000 as a supplementary agreement to the CBD and entered into force on 11 September 2003.
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization (ABS) to the Convention on Biological Diversity is another supplementary agreement to the CBD. It provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. The Nagoya Protocol was adopted on 29 October 2010 in Nagoya, Japan, and entered into force on 12 October 2014.
2010 was also the International Year of Biodiversity, and the Secretariat of the CBD was its focal point. Following a recommendation of CBD signatories at Nagoya, the UN declared 2011 to 2020 as the United Nations Decade on Biodiversity in December 2010. The Convention's Strategic Plan for Biodiversity 2011-2020, created in 2010, includes the Aichi Biodiversity Targets.
The meetings of the Parties to the Convention are known as Conferences of the Parties (COP), with the first one (COP 1) held in Nassau, Bahamas, in 1994 and the most recent one (COP 15) in 2021/2022 in Kunming, China and Montreal, Canada.
In the area of marine and coastal biodiversity, the CBD's focus at present is to identify Ecologically or Biologically Significant Marine Areas (EBSAs) in specific ocean locations based on scientific criteria. The aim is to create an international legally binding instrument (ILBI) involving area-based planning and decision-making under UNCLOS to support the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ).
The notion of an international convention on biodiversity was conceived at a United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity in November 1988. The subsequent year, the Ad Hoc Working Group of Technical and Legal Experts was established for the drafting of a legal text which addressed the conservation and sustainable use of biological diversity, as well as the sharing of benefits arising from their utilization with sovereign states and local communities. In 1991, an intergovernmental negotiating committee was established, tasked with finalizing the Convention's text.
A Conference for the Adoption of the Agreed Text of the Convention on Biological Diversity was held in Nairobi, Kenya, in 1992, and its conclusions were distilled in the Nairobi Final Act. The Convention's text was opened for signature on 5 June 1992 at the United Nations Conference on Environment and Development (the Rio "Earth Summit"). By its closing date, 4 June 1993, the Convention had received 168 signatures. It entered into force on 29 December 1993.
The Convention recognized for the first time in international law that the conservation of biodiversity is "a common concern of humankind" and is an integral part of the development process. The agreement covers all ecosystems, species, and genetic resources. It links traditional conservation efforts to the economic goal of using biological resources sustainably. It sets principles for the fair and equitable sharing of the benefits arising from the use of genetic resources, notably those destined for commercial use. It also covers the rapidly expanding field of biotechnology through its Cartagena Protocol on Biosafety, addressing technology development and transfer, benefit-sharing and biosafety issues. Importantly, the Convention is legally binding; countries that join it ('Parties') are obliged to implement its provisions.
The Convention reminds decision-makers that natural resources are not infinite and sets out a philosophy of sustainable use. While past conservation efforts were aimed at protecting particular species and habitats, the Convention recognizes that ecosystems, species and genes must be used for the benefit of humans. However, this should be done in a way and at a rate that does not lead to the long-term decline of biological diversity.
The Convention also offers decision-makers guidance based on the precautionary principle which demands that where there is a threat of significant reduction or loss of biological diversity, lack of full scientific certainty should not be used as a reason for postponing measures to avoid or minimize such a threat. The Convention acknowledges that substantial investments are required to conserve biological diversity. It argues, however, that conservation will bring us significant environmental, economic and social benefits in return.
In 2010, the Parties to the Convention on Biological Diversity adopted a decision placing a moratorium on some forms of geoengineering.
As of 1 December 2019, the acting executive secretary is Elizabeth Maruma Mrema.
The previous executive secretaries were: Cristiana Pașca Palmer (2017–2019), Braulio Ferreira de Souza Dias (2012–2017), Ahmed Djoghlaf (2006–2012), Hamdallah Zedan (1998–2005), Calestous Juma (1995–1998), and Angela Cropper (1993–1995).
Some of the many issues dealt with under the Convention include incentives for the conservation and sustainable use of biological diversity, access to genetic resources and traditional knowledge, technology transfer, technical and scientific cooperation, impact assessment, education and public awareness, and the provision of financial resources.
The Convention's governing body is the Conference of the Parties (COP), consisting of all governments (and regional economic integration organizations) that have ratified the treaty. This ultimate authority reviews progress under the Convention, identifies new priorities, and sets work plans for members. The COP can also make amendments to the Convention, create expert advisory bodies, review progress reports by member nations, and collaborate with other international organizations and agreements.
The Conference of the Parties uses expertise and support from several other bodies that are established by the Convention. In addition to committees or mechanisms established on an ad hoc basis, the main organs are the Secretariat, the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA), and the Subsidiary Body on Implementation (SBI).
The CBD Secretariat, based in Montreal, Quebec, Canada, operates under UNEP, the United Nations Environment Programme. Its main functions are to organize meetings, draft documents, assist member governments in the implementation of the programme of work, coordinate with other international organizations, and collect and disseminate information.
The SBSTTA is a committee composed of experts from member governments competent in relevant fields. It plays a key role in making recommendations to the COP on scientific and technical issues. It provides assessments of the status of biological diversity and of various measures taken in accordance with the Convention, and also gives recommendations to the Conference of the Parties, which may be endorsed in whole, in part or in modified form by the COPs. As of 2020, SBSTTA had met 23 times, with a 24th meeting taking place in Geneva, Switzerland, in 2022.
In 2014, the Conference of the Parties to the Convention on Biological Diversity established the Subsidiary Body on Implementation (SBI) to replace the Ad Hoc Open-ended Working Group on Review of Implementation of the Convention. The four functions and core areas of work of SBI are: (a) review of progress in implementation; (b) strategic actions to enhance implementation; (c) strengthening means of implementation; and (d) operations of the Convention and the Protocols. The first meeting of the SBI was held on 2–6 May 2016 and the second meeting was held on 9–13 July 2018, both in Montreal, Canada. The third meeting of the SBI will be held in March 2022 in Geneva, Switzerland. The Bureau of the Conference of the Parties serves as the Bureau of the SBI. The current chair of the SBI is Ms. Charlotta Sörqvist of Sweden.
As of 2016, the Convention has 196 Parties, which includes 195 states and the European Union. All UN member states—with the exception of the United States—have ratified the treaty. Non-UN member states that have ratified are the Cook Islands, Niue, and the State of Palestine. The Holy See and the states with limited recognition are non-Parties. The US has signed but not ratified the treaty, because ratification requires a two-thirds majority in the Senate and is blocked by Republican Party senators.
The European Union created the Cartagena Protocol (see below) in 2000 to enhance biosafety regulation and propagate the "precautionary principle" over the "sound science principle" defended by the United States. Whereas the impact of the Cartagena Protocol on domestic regulations has been substantial, its impact on international trade law remains uncertain. In 2006, the World Trade Organization (WTO) ruled that the European Union had violated international trade law between 1999 and 2003 by imposing a moratorium on the approval of genetically modified organisms (GMO) imports. Disappointing the United States, the panel nevertheless "decided not to decide" by not invalidating the stringent European biosafety regulations.
Implementation by the Parties to the Convention is achieved using two means: national biodiversity strategies and action plans, and national reports on implementation.
National Biodiversity Strategies and Action Plans (NBSAP) are the principal instruments for implementing the Convention at the national level. The Convention requires countries to prepare a national biodiversity strategy and to ensure that this strategy is included in planning for activities in all sectors where diversity may be impacted. As of early 2012, 173 Parties had developed NBSAPs.
The United Kingdom, New Zealand and Tanzania have carried out elaborate responses to conserve individual species and specific habitats. The United States of America, a signatory that had not yet ratified the treaty by 2010, produced one of the most thorough implementation programs through species recovery programs and other mechanisms long in place in the US for species conservation.
Singapore established a detailed National Biodiversity Strategy and Action Plan. The National Biodiversity Centre of Singapore represents Singapore in the Convention on Biological Diversity.
In accordance with Article 26 of the Convention, Parties prepare national reports on the status of implementation of the Convention.
The Cartagena Protocol on Biosafety, also known as the Biosafety Protocol, was adopted in January 2000, after a CBD Open-ended Ad Hoc Working Group on Biosafety had met six times between July 1996 and February 1999. The Working Group submitted a draft text of the Protocol for consideration by the Conference of the Parties at its first extraordinary meeting, which was convened for the express purpose of adopting a protocol on biosafety to the Convention on Biological Diversity. After a few delays, the Cartagena Protocol was eventually adopted on 29 January 2000. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by living modified organisms resulting from modern biotechnology.
The Biosafety Protocol makes clear that decisions on products from new technologies must be based on the precautionary principle, and it allows developing nations to balance public health against economic benefits. It will, for example, let countries ban imports of a genetically modified organism if they feel there is not enough scientific evidence that the product is safe, and it requires exporters to label shipments containing genetically modified commodities such as corn or cotton.
The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003.
In April 2002, the Parties of the UN CBD adopted the recommendations of the Gran Canaria Declaration Calling for a Global Plant Conservation Strategy, and adopted a 16-point plan aiming to slow the rate of plant extinctions around the world by 2010.
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was adopted on 29 October 2010 in Nagoya, Aichi Prefecture, Japan, at the tenth meeting of the Conference of the Parties, and entered into force on 12 October 2014. The protocol is a supplementary agreement to the Convention on Biological Diversity, and provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. It thereby contributes to the conservation and sustainable use of biodiversity.
Also at the tenth meeting of the Conference of the Parties, held from 18 to 29 October 2010 in Nagoya, a revised and updated "Strategic Plan for Biodiversity, 2011–2020" was agreed and published. This document included the "Aichi Biodiversity Targets", comprising 20 targets organized under five strategic goals: (A) address the underlying causes of biodiversity loss by mainstreaming biodiversity across government and society; (B) reduce the direct pressures on biodiversity and promote sustainable use; (C) improve the status of biodiversity by safeguarding ecosystems, species and genetic diversity; (D) enhance the benefits to all from biodiversity and ecosystem services; and (E) enhance implementation through participatory planning, knowledge management and capacity building.
Upon the launch of Agenda 2030, the CBD released a technical note mapping and identifying synergies between the 17 Sustainable Development Goals (SDGs) and the 20 Aichi Biodiversity Targets. The mapping helps clarify the contributions of biodiversity to achieving the SDGs.
A new plan, known as the post-2020 Global Biodiversity Framework (GBF), was developed to guide action through 2030. A first draft of this framework was released in July 2021, and its final content was discussed and negotiated as part of the COP 15 meetings. Reducing agricultural pollution and sharing the benefits of digital sequence information arose as key points of contention among Parties during development of the framework. A final version was adopted by the Convention on 19 December 2022. The framework includes a number of ambitious goals, including a commitment to designate at least 30 percent of global land and sea as protected areas (known as the "30 by 30" initiative).
The CBD has a significant focus on marine and coastal biodiversity. A series of expert workshops was held between 2018 and 2022 to identify options for modifying the descriptions of Ecologically or Biologically Significant Marine Areas (EBSAs) and for describing new areas. These have focused on the North-East, North-West and South-Eastern Atlantic Ocean, Baltic Sea, Caspian Sea, Black Sea, Seas of East Asia, North-West Indian Ocean and Adjacent Gulf Areas, Southern and North-East Indian Ocean, Mediterranean Sea, North and South Pacific, Eastern Tropical and Temperate Pacific, Wider Caribbean and Western Mid-Atlantic. The workshop meetings have followed the EBSA process based on internationally agreed scientific criteria. This work is aimed at supporting an international legally binding instrument (ILBI) under UNCLOS for the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ). The central mechanism is area-based planning and decision-making, which integrates EBSAs, Vulnerable Marine Ecosystems (VMEs) and high-seas marine protected areas with Blue Growth scenarios. There is also linkage with the EU Marine Strategy Framework Directive.
The CBD has been criticized on the grounds that its implementation has been weakened by the resistance of Western countries to the Convention's pro-South provisions. The CBD is also regarded as a case of a hard treaty gone soft over the course of implementation. The argument for enforcing the treaty as a legally binding multilateral instrument, with the Conference of the Parties reviewing infractions and non-compliance, is also gaining strength.
Although the Convention explicitly states that all forms of life are covered by its provisions, examination of reports and of national biodiversity strategies and action plans submitted by participating countries shows that in practice this is not happening. The fifth report of the European Union, for example, makes frequent reference to animals (particularly fish) and plants, but does not mention bacteria, fungi or protists at all. The International Society for Fungal Conservation has assessed more than 100 of these CBD documents for their coverage of fungi using defined criteria to place each in one of six categories. No documents were assessed as good or adequate, less than 10% as nearly adequate or poor, and the rest as deficient, seriously deficient or totally deficient.
Scientists working with biodiversity and medical research are expressing fears that the Nagoya Protocol is counterproductive, and will hamper disease prevention and conservation efforts, and that the threat of imprisonment of scientists will have a chilling effect on research. Non-commercial researchers and institutions such as natural history museums fear maintaining biological reference collections and exchanging material between institutions will become difficult, and medical researchers have expressed alarm at plans to expand the protocol to make it illegal to publicly share genetic information, e.g. via GenBank.
William Yancey Brown, when with the Brookings Institution, suggested that the Convention on Biological Diversity should include the preservation of intact genomes and viable cells for every known species and for new species as they are discovered.
A Conference of the Parties (COP) was held annually for the first three years after 1994, and biennially in even-numbered years thereafter.
The first ordinary meeting of the Parties to the Convention took place in November and December 1994, in Nassau, Bahamas.
The second ordinary meeting of the Parties to the Convention took place in November 1995, in Jakarta, Indonesia.
The third ordinary meeting of the Parties to the Convention took place in November 1996, in Buenos Aires, Argentina.
The fourth ordinary meeting of the Parties to the Convention took place in May 1998, in Bratislava, Slovakia.
The First Extraordinary Meeting of the Conference of the Parties took place in February 1999, in Cartagena, Colombia. A series of meetings led to the adoption of the Cartagena Protocol on Biosafety in January 2000, effective from 2003.
The fifth ordinary meeting of the Parties to the Convention took place in May 2000, in Nairobi, Kenya.
The sixth ordinary meeting of the Parties to the Convention took place in April 2002, in The Hague, Netherlands.
The seventh ordinary meeting of the Parties to the Convention took place in February 2004, in Kuala Lumpur, Malaysia.
The eighth ordinary meeting of the Parties to the Convention took place in March 2006, in Curitiba, Brazil.
The ninth ordinary meeting of the Parties to the Convention took place in May 2008, in Bonn, Germany.
The tenth ordinary meeting of the Parties to the Convention took place in October 2010, in Nagoya, Japan. It was at this meeting that the Nagoya Protocol was adopted.
2010 was the International Year of Biodiversity and the Secretariat of the CBD was its focal point. Following a recommendation of CBD signatories during COP 10 at Nagoya, the UN, on 22 December 2010, declared 2011 to 2020 as the United Nations Decade on Biodiversity.
Leading up to the Conference of the Parties (COP 11) meeting on biodiversity in Hyderabad, India, in 2012, preparations for a World Wide Views on Biodiversity had begun, involving old and new partners and building on the experiences from the World Wide Views on Global Warming.
Under the theme, "Biodiversity for Sustainable Development", thousands of representatives of governments, NGOs, indigenous peoples, scientists and the private sector gathered in Pyeongchang, Republic of Korea in October 2014 for the 12th meeting of the Conference of the Parties to the Convention on Biological Diversity (COP 12).
From 6 to 17 October 2014, Parties discussed the implementation of the Strategic Plan for Biodiversity 2011–2020 and its Aichi Biodiversity Targets, which were to be achieved by 2020. The results of Global Biodiversity Outlook 4, the flagship assessment report of the CBD, informed the discussions.
The conference gave a mid-term evaluation of the UN Decade on Biodiversity (2011–2020) initiative, which aims to promote the conservation and sustainable use of nature. The meeting adopted a total of 35 decisions, including a decision on "Mainstreaming gender considerations" to incorporate a gender perspective into the analysis of biodiversity.
At its close, the meeting adopted the "Pyeongchang Road Map", which addresses ways to achieve the Convention's biodiversity goals through technology cooperation, funding and strengthening the capacity of developing countries.
The thirteenth ordinary meeting of the Parties to the Convention took place between 2 and 17 December 2016 in Cancún, Mexico.
The 14th ordinary meeting of the Parties to the Convention took place on 17–29 November 2018, in Sharm El-Sheikh, Egypt. The 2018 UN Biodiversity Conference closed on 29 November 2018 with broad international agreement on reversing the global destruction of nature and the biodiversity loss threatening all forms of life on Earth. Parties adopted the Voluntary Guidelines for the design and effective implementation of ecosystem-based approaches to climate change adaptation and disaster risk reduction. Governments also agreed to accelerate action through 2020 to achieve the Aichi Biodiversity Targets agreed in 2010. Work to achieve these targets would take place at the global, regional, national and subnational levels.
The 15th meeting of the Parties was originally scheduled to take place in Kunming, China in 2020, but was postponed several times due to the COVID-19 pandemic. After the start date was delayed for a third time, the meeting was split into two sessions. A mostly online session took place in October 2021, during which over 100 nations signed the Kunming Declaration on biodiversity. The theme of the declaration was "Ecological Civilization: Building a Shared Future for All Life on Earth". Twenty-one action-oriented draft targets were provisionally agreed at the October session, to be further discussed in the second session: an in-person event originally scheduled to start in April 2022 but rescheduled to later in 2022. The second part of COP 15 ultimately took place in Montreal, Canada, from 5 to 17 December 2022. At the meeting, the Parties to the Convention adopted a new action plan, the Kunming-Montreal Global Biodiversity Framework.
The 16th meeting of the Parties is scheduled to be held in Colombia in 2024. Turkey was originally to host the meeting, but withdrew after a series of earthquakes in February 2023.
Convention on Fishing and Conservation of the Living Resources of the High Seas

The Convention on Fishing and Conservation of Living Resources of the High Seas is an agreement designed to solve, through international cooperation, the problems involved in conserving the living resources of the high seas, given that the development of modern technology has placed some of these resources in danger of overexploitation. The convention opened for signature on 29 April 1958 and entered into force on 20 March 1966.
Parties – (39): Australia, Belgium, Bosnia and Herzegovina, Burkina Faso, Cambodia, Colombia, Republic of the Congo, Denmark, Dominican Republic, Fiji, Finland, France, Haiti, Jamaica, Kenya, Lesotho, Madagascar, Malawi, Malaysia, Mauritius, Mexico, Montenegro, Netherlands, Nigeria, Portugal, Senegal, Serbia, Sierra Leone, Solomon Islands, South Africa, Spain, Switzerland, Thailand, Tonga, Trinidad and Tobago, Uganda, United Kingdom, United States, Venezuela.
Countries that have signed, but not yet ratified – (21): Afghanistan, Argentina, Bolivia, Canada, Costa Rica, Cuba, Ghana, Iceland, Indonesia, Iran, Ireland, Israel, Lebanon, Liberia, Nepal, New Zealand, Pakistan, Panama, Sri Lanka, Tunisia, Uruguay.
Convention on Long-Range Transboundary Air Pollution

The Convention on Long-Range Transboundary Air Pollution, often abbreviated as Air Convention or CLRTAP, is intended to protect the human environment against air pollution and to gradually reduce and prevent air pollution, including long-range transboundary air pollution. It is implemented by the European Monitoring and Evaluation Programme (EMEP), directed by the United Nations Economic Commission for Europe (UNECE).
The convention opened for signature on November 13, 1979, and entered into force on March 16, 1983.
The Convention, which now has 51 Parties, identifies the Executive Secretary of the UNECE as its secretariat.
The Convention is implemented by the European Monitoring and Evaluation Programme (EMEP), short for the Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe. Results of the EMEP programme are published on the EMEP website, www.emep.int.
The aim of the Convention is that Parties shall endeavour to limit and, as far as possible, gradually reduce and prevent air pollution including long-range transboundary air pollution. Parties develop policies and strategies to combat the discharge of air pollutants through exchanges of information, consultation, research and monitoring.
The Parties meet annually at sessions of the Executive Body to review ongoing work and plan future activities including a workplan for the coming year. The three main subsidiary bodies – the Working Group on Effects, the Steering Body to EMEP and the Working Group on Strategies and Review – as well as the Convention's Implementation Committee, report to the Executive Body each year.
Currently, the Convention's priority activities include review and possible revision of its most recent protocols, implementation of the Convention and its protocols across the entire UNECE region (with special focus on Eastern Europe, the Caucasus and Central Asia and South-East Europe) and sharing its knowledge and information with other regions of the world.
Since 1979 the Convention on Long-range Transboundary Air Pollution has addressed some of the major environmental problems of the UNECE region through scientific collaboration and policy negotiation. The Convention has been extended by eight protocols that identify specific measures to be taken by Parties to cut their emissions of air pollutants: the 1984 Geneva Protocol on Long-term Financing of EMEP; the 1985 Helsinki Protocol on the Reduction of Sulphur Emissions; the 1988 Sofia Protocol concerning the Control of Emissions of Nitrogen Oxides; the 1991 Geneva Protocol concerning the Control of Emissions of Volatile Organic Compounds; the 1994 Oslo Protocol on Further Reduction of Sulphur Emissions; the 1998 Aarhus Protocol on Heavy Metals; the 1998 Aarhus Protocol on Persistent Organic Pollutants; and the 1999 Gothenburg Protocol to Abate Acidification, Eutrophication and Ground-level Ozone.
CITES

CITES (short for the Convention on International Trade in Endangered Species of Wild Fauna and Flora, and also known as the Washington Convention) is a multilateral treaty to protect endangered plants and animals from the threats of international trade. It was drafted as a result of a resolution adopted in 1963 at a meeting of members of the International Union for Conservation of Nature (IUCN). The convention was opened for signature in 1973 and entered into force on 1 July 1975.
Its aim is to ensure that international trade (import and export) in specimens of the animals and plants included under CITES does not threaten the survival of the species in the wild. This is achieved via a system of permits and certificates. CITES affords varying degrees of protection to more than 38,000 species.
As of April 2022, the Secretary-General of CITES is Ivonne Higuero.
CITES is one of the largest and oldest conservation and sustainable use agreements in existence. There are three working languages of the Convention (English, French and Spanish) in which all documents are made available. Participation is voluntary and countries that have agreed to be bound by the convention are known as Parties. Although CITES is legally binding on the Parties, it does not take the place of national laws. Rather it provides a framework respected by each Party, which must adopt their own domestic legislation to implement CITES at the national level.
Originally, CITES addressed depletion resulting from demand for luxury goods such as furs in Western countries, but with the rising wealth of Asia, particularly in China, the focus changed to products demanded there, particularly those used for luxury goods such as elephant ivory or rhinoceros horn. As of 2022, CITES has expanded to include thousands of species previously considered unremarkable and in no danger of extinction, such as manta rays or pangolins.
The text of the convention was finalized at a meeting of representatives of 80 countries in Washington, D.C., United States, on 3 March 1973. It was then open for signature until 31 December 1974. It entered into force after the 10th ratification by a signatory country, on 1 July 1975. Countries that signed the Convention become Parties by ratifying, accepting or approving it. By the end of 2003, all signatory countries had become Parties. States that were not signatories may become Parties by acceding to the convention. As of August 2022, the convention has 184 parties, including 183 states and the European Union.
The CITES Convention includes provisions and rules for trade with non-Parties. All member states of the United Nations are party to the treaty, with the exception of North Korea, Federated States of Micronesia, Haiti, Kiribati, Marshall Islands, Nauru, South Sudan, East Timor, Turkmenistan, and Tuvalu. UN observer the Holy See is also not a member. The Faroe Islands, an autonomous region in the Kingdom of Denmark, is also treated as a non-Party to CITES (both the Danish mainland and Greenland are part of CITES).
An amendment to the text of the convention, known as the Gaborone Amendment allows regional economic integration organizations (REIO), such as the European Union, to have the status of a member state and to be a Party to the convention. The REIO can vote at CITES meetings with the number of votes representing the number of members in the REIO, but it does not have an additional vote.
In accordance with Article XVII, paragraph 3, of the CITES Convention, the Gaborone Amendment entered into force on 29 November 2013, 60 days after 54 (two-thirds) of the 80 States that were party to CITES on 30 April 1983 deposited their instrument of acceptance of the amendment. At that time it entered into force only for those States that had accepted the amendment. The amended text of the convention will apply automatically to any State that becomes a Party after 29 November 2013. For States that became party to the convention before that date and have not accepted the amendment, it will enter into force 60 days after they accept it.
CITES works by subjecting international trade in specimens of listed taxa to controls as they move across international borders. CITES specimens can include a wide range of items including the whole animal/plant (whether alive or dead), or a product that contains a part or derivative of the listed taxa such as cosmetics or traditional medicines.
Four types of trade are recognised by CITES: import, export, re-export (export of any specimen that has previously been imported) and introduction from the sea (transportation into a state of specimens of any species taken in the marine environment not under the jurisdiction of any state). The CITES definition of "trade" does not require a financial transaction to occur. All trade in specimens of species covered by CITES must be authorized through a system of permits and certificates before the trade takes place. CITES permits and certificates are issued by one or more Management Authorities in charge of administering the CITES system in each country. Management Authorities are advised by one or more Scientific Authorities on the effects of trade in the specimen on the status of CITES-listed species. CITES permits and certificates must be presented to the relevant border authorities in each country in order to authorise the trade.
Each party must enact their own domestic legislation to bring the provisions of CITES into effect in their territories. Parties may choose to take stricter domestic measures than CITES provides (for example by requiring permits/certificates in cases where they would not normally be needed or by prohibiting trade in some specimens).
Over 40,900 species, subspecies and populations are protected under CITES. Each protected taxon or population is included in one of three lists, called Appendices. The Appendix that lists a taxon or population reflects the level of threat posed by international trade and the CITES controls that apply.
Taxa may be split-listed meaning that some populations of a species are on one Appendix, while some are on another. The African bush elephant (Loxodonta africana) is currently split-listed, with all populations except those of Botswana, Namibia, South Africa and Zimbabwe listed in Appendix I. Those of Botswana, Namibia, South Africa and Zimbabwe are listed in Appendix II. There are also species that have only some populations listed in an Appendix. One example is the pronghorn (Antilocapra americana), a ruminant native to North America. Its Mexican population is listed in Appendix I, but its U.S. and Canadian populations are not listed (though certain U.S. populations in Arizona are nonetheless protected under other domestic legislation, in this case the Endangered Species Act).
Taxa are proposed for inclusion, amendment or deletion in Appendices I and II at meetings of the Conference of the Parties (CoP), which are held approximately once every three years. Amendments to listing in Appendix III may be made unilaterally by individual parties.
Appendix I taxa are those that are threatened with extinction and to which the highest level of CITES protection is afforded. Commercial trade in wild-sourced specimens of these taxa is not permitted and non-commercial trade is strictly controlled by requiring an import permit and export permit to be granted by the relevant Management Authorities in each country before the trade occurs.
Notable taxa listed in Appendix I include the red panda (Ailurus fulgens), western gorilla (Gorilla gorilla), the chimpanzee species (Pan spp.), tigers (Panthera tigris subspecies), Asian elephant (Elephas maximus), some populations of African bush elephant (Loxodonta africana), and the monkey puzzle tree (Araucaria araucana).
Appendix II taxa are those that are not necessarily threatened with extinction, but trade must be controlled in order to avoid utilization incompatible with their survival. Appendix II taxa may also include species similar in appearance to species already listed in the Appendices. The vast majority of taxa listed under CITES are listed in Appendix II. Any trade in Appendix II taxa standardly requires a CITES export permit or re-export certificate to be granted by the Management Authority of the exporting country before the trade occurs.
Examples of taxa listed on Appendix II are the great white shark (Carcharodon carcharias), the American black bear (Ursus americanus), Hartmann's mountain zebra (Equus zebra hartmannae), green iguana (Iguana iguana), queen conch (Strombus gigas), emperor scorpion (Pandinus imperator), Mertens' water monitor (Varanus mertensi), bigleaf mahogany (Swietenia macrophylla), lignum vitae (Guaiacum officinale), the chambered nautilus (Nautilus pompilius), and all stony corals (Scleractinia spp.).
Appendix III species are those that are protected in at least one country, and that country has asked other CITES Parties for assistance in controlling the trade. Any trade in Appendix III species standardly requires a CITES export permit (if sourced from the country that listed the species) or a certificate of origin (from any other country) to be granted before the trade occurs.
Examples of species listed on Appendix III and the countries that listed them are Hoffmann's two-toed sloth (Choloepus hoffmanni) by Costa Rica, sitatunga (Tragelaphus spekii) by Ghana and African civet (Civettictis civetta) by Botswana.
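Taken together, the three Appendices define a small decision table: the Appendix, the nature of the trade, and (for Appendix III) the source country determine which documents must be granted before the trade occurs. The following minimal Python sketch renders only the standard cases described above; the function name and data layout are hypothetical illustrations, it ignores the Article VII exemptions and stricter domestic measures discussed just below, and it is not an implementation of any official CITES tool.

```python
# Illustrative sketch of the standard CITES document requirements per
# Appendix, as described in this article. Hypothetical code: real cases
# also involve Article VII exemptions, split-listed populations, and
# stricter domestic measures taken by individual Parties.

def required_documents(appendix: int, commercial: bool = False,
                       from_listing_country: bool = True) -> list[str]:
    """Return the standard documents needed before the trade occurs."""
    if appendix == 1:
        if commercial:
            # Commercial trade in wild-sourced Appendix I specimens
            # is not permitted at all.
            raise ValueError("commercial trade in wild-sourced "
                             "Appendix I specimens is not permitted")
        return ["import permit", "export permit"]
    if appendix == 2:
        return ["export permit or re-export certificate"]
    if appendix == 3:
        # Export permit if sourced from the country that listed the
        # species; otherwise a certificate of origin.
        return (["export permit"] if from_listing_country
                else ["certificate of origin"])
    raise ValueError("appendix must be 1, 2 or 3")

print(required_documents(appendix=3, from_listing_country=False))
# ['certificate of origin']
```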
Under Article VII, the Convention allows for certain exceptions to the general trade requirements described above.
CITES provides for a special process for specimens that were acquired before the provisions of the Convention applied to that specimen. These are known as "pre-Convention" specimens and must be granted a CITES pre-Convention certificate before the trade occurs. Only specimens legally acquired before the date on which the species concerned was first included in the Appendices qualify for this exemption.
CITES provides that the standard permit/certificate requirements for trade in CITES specimens do not generally apply if a specimen is a personal or household effect. However there are a number of situations where permits/certificates for personal or household effects are required and some countries choose to take stricter domestic measures by requiring permits/certificates for some or all personal or household effects.
CITES allows trade in specimens to follow special procedures if Management Authorities are satisfied that they are sourced from captive bred animals or artificially propagated plants. In the case of commercial trade of Appendix I taxa, captive bred or artificially propagated specimens may be traded as if they were Appendix II. This reduces the permit requirements from two permits (import/export) to one (export only). In the case of non-commercial trade, specimens may be traded with a certificate of captive breeding/artificial propagation issued by the Management Authority of the state of export in lieu of standard permits.
Standard CITES permits and certificates are not required for the non-commercial loan, donation or exchange of specimens between scientific or forensic institutions that have been registered by a Management Authority of their State. Consignments containing the specimens must carry a label issued or approved by that Management Authority (in some cases Customs Declaration labels may be used). Specimens that may be included under this provision include museum, herbarium, diagnostic and forensic research specimens. Registered institutions are listed on the CITES website.
Amendments to the Convention must be supported by a two-thirds majority of Parties "present and voting" and can be made during an extraordinary meeting of the CoP if one-third of the Parties are interested in such a meeting. The Gaborone Amendment (1983) allows regional economic blocs to accede to the treaty. Trade with non-Party states is allowed, although it is recommended that permits and certificates be issued by exporters and sought by importers.
Species in the Appendices may be proposed for addition, change of Appendix, or de-listing (i.e., deletion) by any Party, whether or not it is a range State, and changes may be made despite objections by range States if there is sufficient (two-thirds majority) support for the listing. Species listings are made at the Conference of the Parties.
Upon acceding to the Convention or within 90 days of a species listing being amended, Parties may make reservations. In these cases, the party is treated as being a state that is not a Party to CITES with respect to trade in the species concerned. Notable reservations include those by Iceland, Japan, and Norway on various baleen whale species and those on Falconiformes by Saudi Arabia.
As of 2002, 50% of Parties lacked one or more of the four major CITES requirements: designation of Management and Scientific Authorities; laws prohibiting trade in violation of CITES; penalties for such trade; and laws providing for the confiscation of specimens.
Although the Convention itself does not provide for arbitration or dispute resolution in the case of noncompliance, 36 years of CITES in practice has resulted in several strategies to deal with infractions by Parties. The Secretariat, when informed of an infraction by a Party, will notify all other Parties. The Secretariat will give the Party time to respond to the allegations and may provide technical assistance to prevent further infractions. Other actions, which the Convention itself does not provide for but which derive from subsequent CoP resolutions, may be taken against the offending Party. These include:
Bilateral sanctions have been imposed on the basis of national legislation (e.g. the USA used certification under the Pelly Amendment to get Japan to revoke its reservation to hawksbill turtle products in 1991, thus reducing the volume of its exports).
Infractions may include negligence with respect to permit issuing, excessive trade, lax enforcement, and failing to produce annual reports (the most common).
General limitations of the structure and philosophy of CITES include: by design and intent, it focuses on trade at the species level and does not address habitat loss, ecosystem approaches to conservation, or poverty; it seeks to prevent unsustainable use rather than promote sustainable use (which generally conflicts with the Convention on Biological Diversity), although this has been changing (see the Nile crocodile, African elephant, and South African white rhino case studies in Hutton and Dickinson 2000). It does not explicitly address market demand; in fact, CITES listings have been demonstrated to increase financial speculation in certain markets for high-value species. Its funding does not provide for increased on-the-ground enforcement (it must apply for bilateral aid for most projects of this nature).
There has been increasing willingness within the Parties to allow for trade in products from well-managed populations. For instance, sales of the South African white rhino have generated revenues that helped pay for protection. Listing the species on Appendix I increased the price of rhino horn (which fueled more poaching), but the species survived wherever there was adequate on-the-ground protection. Thus field protection may be the primary mechanism that saved the population, but it is likely that field protection would not have been increased without CITES protection. In another instance, the United States initially stopped exports of bobcat and lynx hides in 1977, when it first implemented CITES, for lack of data to support no-detriment findings. However, in a Federal Register notice issued by William Yancey Brown, the U.S. Endangered Species Scientific Authority (ESSA) established a framework of no-detriment findings for each state and the Navajo Nation and indicated that approval would be forthcoming if the states and Navajo Nation provided evidence that their furbearer management programs assured the species would be conserved. Management programs for these species expanded rapidly, including tagging for export, and are currently recognized in program approvals under regulations of the U.S. Fish and Wildlife Service.
By design, CITES regulates and monitors trade in the manner of a "negative list", such that trade in all species is permitted and unregulated unless the species in question appears on the Appendices or looks very much like one of those taxa. Only then is trade regulated or constrained. Because the remit of the Convention covers millions of species of plants and animals, and tens of thousands of these taxa are potentially of economic value, in practice this negative-list approach effectively forces CITES signatories to expend limited resources on just a select few, leaving many species to be traded with neither constraint nor review. For example, several bird species classified as threatened with extinction have recently appeared in the legal wild bird trade because the CITES process never considered their status. If a "positive list" approach were taken, only species evaluated and approved for the positive list would be permitted in trade, thus lightening the review burden for member states and the Secretariat, and also preventing inadvertent legal trade threats to poorly known species.
Specific weaknesses in the text include: it does not stipulate guidelines for the 'non-detriment' finding required of national Scientific Authorities; non-detriment findings require copious amounts of information; the 'household effects' clause is often not rigid or specific enough to prevent CITES violations by means of this Article (VII); non-reporting from Parties means Secretariat monitoring is incomplete; and it has no capacity to address domestic trade in listed species.
In order to ensure that the General Agreement on Tariffs and Trade (GATT) was not violated, the Secretariat of GATT was consulted during the drafting process.
During the coronavirus pandemic in 2020, Secretary-General Ivonne Higuero noted that illegal wildlife trade not only helps to destroy habitats, but that these habitats otherwise create a safety barrier for humans that can prevent pathogens from passing from animals to people.
Suggestions for improvement in the operation of CITES include: more regular missions by the Secretariat (not reserved just for high-profile species); improvement of national legislation and enforcement; better reporting by Parties (and the consolidation of information from all sources: NGOs, TRAFFIC (the wildlife trade monitoring network) and Parties); more emphasis on enforcement, including a technical committee enforcement officer; the development of CITES Action Plans (akin to the Biodiversity Action Plans related to the Convention on Biological Diversity), including designation of Scientific/Management Authorities and national enforcement strategies; and incentives for reporting, with timelines for both Action Plans and reporting. CITES would benefit from access to Global Environment Facility (GEF) funds, although this is difficult given the GEF's more ecosystem-oriented approach, or from other more regular funds. Development of a mechanism similar to that of the Montreal Protocol (developed nations contribute to a fund for developing nations) could allow more funds for non-Secretariat activities.
From 2005 to 2009 the legal trade corresponded with these numbers:
In the 1990s, the legal trade in animal products was worth about $160 billion annually. By 2009 the estimated value had almost doubled, to $300 billion.
Additional information about the documented trade can be extracted through queries on the CITES website.
The Conference of the Parties (CoP) is held once every three years. The location of the next CoP is chosen at the close of each CoP by a secret ballot vote.
The CITES Committees (Animals Committee, Plants Committee and Standing Committee) hold meetings during each year that does not have a CoP, while the Standing Committee also meets in years with a CoP. The Committee meetings take place in Geneva, Switzerland (where the Secretariat of the CITES Convention is located), unless another country offers to host the meeting. The Secretariat is administered by UNEP. The Animals and Plants Committees have sometimes held joint meetings; the previous joint meeting was held in March 2012 in Dublin, Ireland, and the latest one was held in Veracruz, Mexico, in May 2014.
A current list of upcoming meetings appears on the CITES calendar.
At the seventeenth Conference of the Parties (CoP 17), Namibia and Zimbabwe introduced proposals to amend the listing of their elephant populations in Appendix II: they wished to establish controlled trade in all elephant specimens, including ivory. They argued that revenue from regulated trade could be used for elephant conservation and rural communities' development. However, both proposals were opposed by the US and other countries.
| 2001-08-24T13:11:36Z | 2023-12-19T10:46:33Z |
"Template:Flag",
"Template:Cite news",
"Template:ISBN",
"Template:Short description",
"Template:Infobox Treaty",
"Template:Reflist",
"Template:Cite web",
"Template:Threatened species",
"Template:About",
"Template:Redirect",
"Template:Use dmy dates",
"Template:As of",
"Template:Efn",
"Template:Notelist",
"Template:Cbignore",
"Template:Cite book",
"Template:Cite journal",
"Template:Wikisource",
"Template:Official",
"Template:Citation needed",
"Template:Wikidata property",
"Template:Commons category",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/CITES |
6,203 | Environmental Modification Convention | The Environmental Modification Convention (ENMOD), formally the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, is an international treaty prohibiting the military or other hostile use of environmental modification techniques having widespread, long-lasting or severe effects. It opened for signature on 18 May 1977 in Geneva and entered into force on 5 October 1978.
The Convention bans weather warfare, which is the use of weather modification techniques for the purposes of inducing damage or destruction. The Convention on Biological Diversity of 2010 would also ban some forms of weather modification or geoengineering.
Many states do not regard this as a complete ban on the use of herbicides in warfare, such as Agent Orange, but it does require case-by-case consideration.
The convention was signed by 48 states; 16 of the signatories have not ratified. As of 2022, the convention has 78 state parties.
The problem of artificial modification of the environment for military or other hostile purposes was brought to the international agenda in the early 1970s. Following the US decision of July 1972 to renounce the use of climate modification techniques for hostile purposes, the 1973 resolution by the US Senate calling for an international agreement "prohibiting the use of any environmental or geophysical modification activity as a weapon of war", and an in-depth review by the Department of Defense of the military aspects of weather and other environmental modification techniques, the US decided to seek agreement with the Soviet Union to explore the possibilities of an international agreement.
In July 1974, the US and USSR agreed to hold bilateral discussions on measures to overcome the danger of the use of environmental modification techniques for military purposes, followed by three subsequent rounds of discussions in 1974 and 1975. In August 1975, the US and USSR tabled identical draft texts of a convention at the Conference of the Committee on Disarmament (CCD), where intensive negotiations resulted in a modified text and understandings regarding four articles of this Convention in 1976.
The convention was approved by Resolution 31/72 of the General Assembly of the United Nations on 10 December 1976, by 96 to 8 votes with 30 abstentions.
Environmental Modification Technique includes any technique for changing – through the deliberate manipulation of natural processes – the dynamics, composition or structure of the earth, including its biota, lithosphere, hydrosphere and atmosphere, or of outer space.
The Convention contains ten articles and one Annex on the Consultative Committee of Experts. The Understandings relating to Articles I, II, III and VIII are also an integral part of the convention. These Understandings are not incorporated into the convention but are part of the negotiating record and were included in the report transmitted by the Conference of the Committee on Disarmament to the United Nations General Assembly in September 1976 (Report of the Conference of the Committee on Disarmament, Volume I, General Assembly Official Records: Thirty-first Session, Supplement No. 27 (A/31/27), New York, United Nations, 1976, pp. 91–92).
ENMOD treaty members are responsible for 83% of carbon dioxide emissions since the treaty entered into force in 1978. The ENMOD treaty could potentially be used by ENMOD member states seeking climate-change loss and damage compensation from other ENMOD member states at the International Court of Justice. With the knowledge that carbon dioxide emissions can enhance extreme weather events, the continued unmitigated greenhouse gas emissions from some ENMOD member states could be viewed as 'reckless' in the context of deliberately declining emissions from other ENMOD member states. It is unclear whether the International Court of Justice will consider the ENMOD treaty when it issues a legal opinion on international climate change obligations requested by the United Nations General Assembly on 29 March 2023.
| 2002-02-25T15:51:15Z | 2023-08-11T09:36:16Z |
"Template:Use dmy dates",
"Template:Infobox treaty",
"Template:Main",
"Template:Reflist",
"Template:Cite web",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Environmental_Modification_Convention |
6,205 | Chaitin's constant | In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
Although there are infinitely many halting probabilities, one for each method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction when not referring to any specific encoding.
Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.
The definition of a halting probability relies on the existence of a prefix-free universal computable function. Such a function, intuitively, represents a programming language with the property that no valid program can be obtained as a proper extension of another valid program.
Suppose that F is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function F is called computable if there is a Turing machine that computes it, in the sense that for any finite binary strings x and y, F(x) = y if and only if the Turing machine halts with y on its tape when given the input x.
The function F is called universal if the following property holds: for every computable function f of a single variable there is a string w such that for all x, F(w x) = f(x); here w x represents the concatenation of the two strings w and x. This means that F can be used to simulate any computable function of one variable. Informally, w represents a "script" for the computable function f, and F represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of input.
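To make the "script plus interpreter" picture concrete, the toy Python sketch below fixes a two-entry menu of functions and lets a fixed-length prefix w select among them, so that F(wx) = f(x) holds for those two functions. This is purely illustrative and deliberately non-universal: a genuine universal F interprets w as an arbitrary program, which no finite menu can capture, and every name here is an assumption of the sketch.

```python
# Toy illustration of F(wx) = f(x): the prefix w acts as a "script"
# selecting a function, and F applies it to the rest of the input.
# Non-universal by construction: only two scripts exist here.

SCRIPTS = {
    "00": lambda x: x[::-1],     # w = "00": reverse the input string
    "01": lambda x: x + x,       # w = "01": double the input string
}

def F(s: str) -> str:
    w, x = s[:2], s[2:]          # parse a fixed-length script header...
    return SCRIPTS[w](x)         # ...then run the chosen f on the rest

assert F("00" + "110") == "011"      # F(wx) = f(x) with f = reverse
assert F("01" + "10") == "1010"      # F(wx) = f(x) with f = doubling
```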
The domain of F is the set of all inputs p on which it is defined. For F that are universal, such a p can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function F.
The function F is called prefix-free if there are no two elements p, p′ in its domain such that p′ is a proper extension of p. This can be rephrased as: the domain of F is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-freeness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits, and the remaining bits are not considered part of the accepted string. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear: one is easily recognized by some grammar, while the other requires arbitrary computation to recognize.
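For a finite set of strings the prefix-free condition is easy to check directly, and it is what makes the sum defining Ω below converge, via Kraft's inequality. The short self-contained Python sketch below is an illustration for finite sets only; the function names are ad hoc, and a real universal machine's domain is infinite, so no such direct check applies to it.

```python
# Illustrative check of the prefix-free condition on a finite set of
# binary strings, together with its Kraft sum: sum over p of 2**-len(p).

def is_prefix_free(words: set[str]) -> bool:
    """True iff no word is a proper prefix of another word."""
    return not any(a != b and b.startswith(a)
                   for a in words for b in words)

def kraft_sum(words: set[str]) -> float:
    return sum(2.0 ** -len(p) for p in words)

domain = {"0", "10", "110"}       # prefix-free
print(is_prefix_free(domain))     # True
print(kraft_sum(domain))          # 0.875, at most 1 by Kraft's inequality

bad = {"0", "01"}                 # "0" is a proper prefix of "01"
print(is_prefix_free(bad))        # False
```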
The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem.
Let PF be the domain of a prefix-free universal computable function F. The constant ΩF is then defined as

$$\Omega_F = \sum_{p \in P_F} 2^{-|p|},$$

where $|p|$ denotes the length of a string p. This is an infinite sum which has one summand for every p in the domain of F. The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If F is clear from context then ΩF may be denoted simply Ω, although different prefix-free universal computable functions lead to different values of Ω.
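As a toy illustration (not a genuine Chaitin construction, since the machine involved is not universal), if some machine's domain were exactly the finite prefix-free set {0, 10, 110} from the sketch above, its halting probability would be

$$2^{-1} + 2^{-2} + 2^{-3} = \frac{7}{8}.$$

For a universal F the domain is infinite and only computably enumerable, so ΩF can be approximated from below but never computed exactly, as discussed below.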
Knowing the first N bits of Ω, one could decide the halting problem for all programs of size up to N. Let the program p for which the halting problem is to be solved be N bits long. In dovetailing fashion, all programs of all lengths are run, until enough have halted to jointly contribute enough probability to match these first N bits. If the program p has not halted by then, it never will, since its contribution to the halting probability would otherwise affect the first N bits. Thus, the halting problem would be solved for p.
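The dovetailing idea can be illustrated on a toy machine whose halting behaviour is fixed by hand, so that the "true" halting probability is known in advance. In the hypothetical Python sketch below, the program 1^k 0 halts after k steps when k is even and loops forever when k is odd; the domain {0, 110, 11110, ...} is prefix-free and the halting probability is the sum over j of 2^(-(2j+1)) = 2/3 = 0.101010... in binary. A real universal machine admits no such hand-written description, which is exactly why its Ω is uncomputable; every name below is illustrative.

```python
from fractions import Fraction

# Hand-specified toy machine: program "1"*k + "0" halts after k steps
# when k is even and loops forever when k is odd. Its halting
# probability is the sum over even k of 2**-(k+1) = 2/3. Illustrative only.

def steps_to_halt(program: str, budget: int):
    """Number of steps if the program halts within budget, else None."""
    if not (program.endswith("0") and set(program[:-1]) <= {"1"}):
        return None                    # not in the machine's domain
    k = len(program) - 1               # number of leading 1s
    if k % 2 == 1:
        return None                    # loops forever
    return k if k <= budget else None  # halts, if the budget suffices

def omega_lower_bound(max_len: int, budget: int) -> Fraction:
    """Dovetail: run every candidate program for up to budget steps."""
    total = Fraction(0)
    for k in range(max_len):
        program = "1" * k + "0"
        if steps_to_halt(program, budget) is not None:
            total += Fraction(1, 2 ** len(program))
    return total

print(omega_lower_bound(max_len=12, budget=12))  # 1365/2048, about 0.6665
```

Raising max_len and budget drives the bound up toward 2/3; knowing the first N bits of the true Ω would tell us when the bound is already close enough that every still-running program of length at most N must loop forever.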
Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would search for counterexamples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answers to these problems. But as the halting problem is not generally solvable, and therefore calculating any but the first few bits of Chaitin's constant is not possible, this just reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would.
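For instance, here is a sketch of such a "special program" for Goldbach's conjecture: it halts exactly when it finds an even number that is not the sum of two primes, so deciding whether it halts would settle the conjecture. The function names are illustrative only.

```python
# This program halts if and only if Goldbach's conjecture is false.
# Do not call it expecting a result: it runs forever unless a
# counterexample exists.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def goldbach_searcher():
    n = 4
    while True:
        if not any(is_prime(a) and is_prime(n - a) for a in range(2, n - 1)):
            return n        # halt: n is a counterexample
        n += 2              # otherwise keep searching forever
```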
The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name.
The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string x the set of sequences that begin with x has measure 2^{−|x|}. This implies that for each natural number n, the set of sequences f in Cantor space such that f(n) = 1 has measure 1/2, and the set of sequences whose nth element is 0 also has measure 1/2.
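As a quick empirical illustration (not part of the theory), one can estimate the measure of the set of sequences beginning with a fixed string x by sampling random bits: membership in that set depends only on the first |x| bits, so finite sampling suffices, and the hit rate should approach 2^{−|x|}.

```python
import random

# Monte Carlo estimate of the fair-coin measure of the cylinder set
# of sequences beginning with x; expected value is 2^(-|x|).

x = "101"
trials = 100_000
hits = sum(
    all(random.randint(0, 1) == int(b) for b in x)  # first |x| random bits match x?
    for _ in range(trials)
)
print(hits / trials, 2 ** -len(x))  # both should be near 0.125
```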
Let F be a prefix-free universal computable function. The domain P of F consists of an infinite set of binary strings

P = {p_1, p_2, …}.
Each of these strings p_i determines a subset S_i of Cantor space; the set S_i contains all sequences in Cantor space that begin with p_i. These sets are disjoint because P is a prefix-free set. The sum

Σ_i 2^{−|p_i|}

represents the measure of the set

⋃_i S_i.
In this way, Ω_F represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of F. It is for this reason that Ω_F is called a halting probability.
Each Chaitin constant Ω has the following properties:

- It is algorithmically random (Martin-Löf random): the shortest program that outputs the first n bits of Ω must have size at least n − O(1).
- It is a normal number: in its binary expansion, every string of length k occurs with limiting frequency 2^{−k}.
- It is not computable: no algorithm produces its digits.
- It is left-c.e.: Ω is the limit of a computable, increasing sequence of rational numbers.
- It is Turing equivalent to the halting problem.
- It is a transcendental number.
Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals. One can show that a real number in [0,1] is a Chaitin constant (i.e. the halting probability of some prefix-free universal computable function) if and only if it is left-c.e. and algorithmically random. Ω is among the few definable algorithmically random numbers, and is the best-known one, but it is not at all typical of algorithmically random numbers in general.
A real number is called computable if there is an algorithm which, given n, returns the first n digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number.
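For example, 1/3 is computable in this sense: its binary expansion is 0.010101…, and a short algorithm returns any requested number of its digits. A minimal sketch:

```python
# A computable real in the sense just defined: given n, return the
# first n binary digits of 1/3 after the point.

def binary_digits_of_one_third(n):
    return "".join("01"[i % 2] for i in range(n))

print(binary_digits_of_one_third(8))  # "01010101"
```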
No halting probability is computable. The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n. Since the halting problem is undecidable, Ω cannot be computed.
The algorithm proceeds as follows. Given the first n digits of Ω and a k ≤ n, the algorithm enumerates the domain of F until enough elements of the domain have been found so that the probability they represent is within 2^{−k} of Ω. After this point, no additional program of length k can be in the domain, because each of these would add 2^{−k} to the measure, which is impossible. Thus the set of strings of length k in the domain is exactly the set of such strings already enumerated.
A real number is random if the binary sequence representing it is an algorithmically random sequence. Calude, Hertling, Khoussainov, and Wang showed that a recursively enumerable real number is algorithmically random if and only if it is a Chaitin Ω number.
For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant N such that no bit of Ω after the Nth can be proven to be 1 or 0 within that system. The constant N depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete.
As mentioned above, the first n bits of Gregory Chaitin's constant Ω are random or incompressible in the sense that we cannot compute them by a halting algorithm with fewer than n − O(1) bits. However, consider the short but never-halting algorithm which systematically lists and runs all possible programs; whenever one of them halts, its probability gets added to the output (initialized to zero). After finite time the first n bits of the output will never change again (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first n bits of Ω. In other words, the enumerable first n bits of Ω are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber (2000) constructed a limit-computable "Super Ω" which in a sense is much more random than the original limit-computable Ω, as one cannot significantly compress the Super Ω by any enumerating non-halting algorithm.
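A sketch of this never-halting enumerator, under the same hypothetical helpers used earlier (all_programs and halts_within are assumptions, not real APIs): it runs forever, and its output is a non-decreasing sequence converging to Ω from below.

```python
# Never-halting enumerator: dovetails over all programs and step bounds,
# yielding lower bounds that converge to Omega. The helpers are
# hypothetical stand-ins for a real universal machine.

def approximate_omega(all_programs, halts_within):
    halted, approximation, t = set(), 0.0, 0
    while True:                                  # runs forever by design
        t += 1
        for p in all_programs(max_length=t):     # dovetail over all programs
            if p not in halted and halts_within(p, t):
                halted.add(p)
                approximation += 2 ** -len(p)    # add this program's weight
        yield approximation                      # non-decreasing, -> Omega
```

After some finite but uncomputable time, the first n bits of the yielded values stop changing, which is exactly the limit-computability described above.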
For an alternative "Super Ω", the universality probability of a prefix-free universal Turing machine (UTM) – namely, the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string – can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem (i.e., O^{(3)} in Turing jump notation).