Urgesellschaft
Urgesellschaft (meaning "primal society" in German) is a term that, according to Friedrich Engels, refers to the original coexistence of humans in prehistoric times, before recorded history. A distinction is drawn here between Homo sapiens, who hardly differed from modern humans biologically (an assertion disputed by anthropology), and other representatives of the genus Homo such as Homo erectus or the Neanderthals. Engels claimed "that animal family dynamics and human primitive society are incompatible things" because "the primitive humans that developed out of animalism either knew no family at all or at most one that does not occur among animals". The U.S. anthropologist Lewis Henry Morgan and translations of his books also make use of the term. This long period is not directly accessible through historical sources. Nevertheless, the study of material cultures in archaeology provides a variety of opportunities to gain a better understanding of it, as does work in sociobiology and social anthropology, and in religious studies through the analysis of prehistoric mythologies.

Archaeological classification

The so-called primitive society, or more appropriately the primitive societies, probably span by far the longest period in the history of mankind to date, more than three million years, while other forms of society have existed and continue to exist for only a comparatively short time (less than 1 percent of that period). The Stone Age is an archaeological term for the period in which stone tools (hand axes) are the oldest chronologically classifiable and roughly datable finds. Other, even older tools and objects made of plant or animal materials (wood, bones, skins) decayed and did not survive. The Stone Age also encompasses the development of new social structures about 20,000 to 6,000 years ago. Generally, the advent of arable farming and livestock rearing is considered to mark the transition to the New Stone Age and the end of this phase. The Neolithic Revolution was followed in some areas by the Bronze Age (around 2200 to 800 BC), though in some cases the two ran in parallel.

Theoretical assumptions

A society is formed by social groups of different sizes acting together. At different times in history, as well as in different climates and ecozones, human societies were quite different. The gradual dispersal of early human groups (estimated at 1 to 10 kilometres per year) initially placed few demands on them and their generational succession; especially in equatorial regions, they did not perceive any changes. However, drastic environmental changes such as glacial and warm periods, to which the migrants were exposed in the areas they reached, forced new forms of adaptation with corresponding social structures. Food gathering, protection from the weather and the use of fire proved socially successful. A high degree of social differentiation in these primitive forms of organization cannot be assumed, however. The earliest societies we can grasp, like comparable present-day groups, appear relatively equal (egalitarian). The isolation of individual groups, for example during the glacial periods or in insular settlement areas, led to culturally distinct traditions as well as to phenotypic differences, which were later also interpreted in terms of racial theory. The comparatively rare contacts of these pedestrian, largely stationary societies took place in their immediate vicinity.
Whether exogamy (marrying outside the group) indicates that people had become aware of reproductive biology (procreation) is doubted; sociologically, exogamy is seen rather as a proven safeguard of the (re-)integration of diverging groupings (for example, in lineage or clan alliances with intermarriage). Some religious traditions also speak of a primal society, referring to the precursors of later religions found among all hunter-gatherer groupings and derived from the social practices of their members. In written cultures, the distinction between herders and cultivators that persists to this day is evident, for example, in the biblical story of Cain and Abel. Modern macrosociological theories still contain elaborate assumptions about common features of a primitive society, for instance in Thomas Hobbes, Jean-Jacques Rousseau and Friedrich Engels. Whether early humans lived free of domination or anarchically, or already formed consolidated leadership positions (chiefs), remains in each case no more than a defensible assumption; the same goes for whether they organized themselves as social hordes, cultivated religious cults (with ancestor worship or totemism?), already knew storytellers, or already lived in nuclear families. Economically, this society rested on a foraging (appropriation) economy; depending on the geological period and vegetation zone, its members lived as hunters, fishers or gatherers. During the Ice Age, for example, the focus in Central Europe and North America was on hunting, while elsewhere gathering and fishing also became important, as in Central Europe after the migration of the large animal fauna in the Middle Stone Age (compare the Scandinavian kitchen middens). In Marxist theory on the social development of mankind, especially in historical materialism, primitive society is also called classless primitive communism because, just as in the communism expected to follow capitalism, there was no private property in the means of production.

See also
Lewis Henry Morgan: Die Urgesellschaft (Ancient Society, USA 1877)
Primitive communism

Literature
Dieter Claessens: Das Konkrete und das Abstrakte. Soziologische Skizzen zur Anthropologie. Suhrkamp, Frankfurt 1993, ISBN 3-518-28708-7.
Friedrich Engels: Anteil der Arbeit an der Menschwerdung des Affen. SAV, Berlin 2009 (original: 1876).
Lewis Henry Morgan: Die Urgesellschaft oder Untersuchung über den Fortschritt der Menschheit aus der Wildheit durch die Barbarei zur Zivilisation. 1891 (reprint: Achenbach, Lahn 1979; US original 1877: Ancient Society, Or: Researches in the Lines of Human Progress from Savagery through Barbarism to Civilisation).
Hansjürgen Müller-Beck: Die Steinzeit. Der Weg der Menschen in die Geschichte. 4th, revised and updated edition. Beck, München 2004, ISBN 978-3-406-47719-5.
Joachim Herrmann, Irmgard Sellnow (eds.): Produktivkräfte und Gesellschaftsformationen in vorkapitalistischer Zeit. Akademie, Berlin 1982 (= Veröffentlichungen des Zentralinstituts für Alte Geschichte und Archäologie der Akademie der Wissenschaften der DDR, vol. 12).
Brenner debate
The Brenner debate was a major debate amongst Marxist historians during the late 1970s and early 1980s, regarding the origins of capitalism. The debate began with Robert Brenner's 1976 journal article "Agrarian class structure and economic development in pre-industrial Europe", published in the influential historical journal Past & Present. It has been seen as a successor to the so-called "transition debate" (or Dobb-Sweezy debate) that followed Maurice Dobb's 1946 Studies in the Development of Capitalism and Paul Sweezy's 1950 article "The transition from feudalism to capitalism" in the journal Science & Society. These articles were subsequently collected and published as a book, also entitled The Transition from Feudalism to Capitalism, in 1976. Historians Trevor Aston and C. H. E. Philpin (1985) characterised the Brenner debate as "one of the most important historical debates of recent years."

Brenner's thesis

Brenner's main argument in "Agrarian class structure and economic development in pre-industrial Europe" is to challenge traditional explanations for economic development in late-medieval and early-modern Europe. In particular, Brenner critiques the two overarching interpretations of economic change in this period, which he calls the demographic and commercialization models. In the demographic model, long-term economic changes are primarily attributed to changes in population, while the commercialization model attributes changes primarily to the growth of trade and the market. Contrary to these explanations, Brenner argues that class relations, or class power, determine the degree to which demographic or commercial changes affect long-run trends in the distribution of income and economic growth. He further argues that class structures tend to be resilient to the impact of economic forces.

Response

Brenner's thesis was the focus of a symposium around 1977, several contributions to which also appeared in the pages of Past & Present. Brenner's article and the discussions that followed it have a broad significance for understanding the origins of capitalism, and were foundational to so-called "Political Marxism". In 1978, Michael Postan and John Hatcher characterised the debate as attempting to determine whether Malthusian cyclic explanations of population and development or social class explanations governed demographic and economic change in Europe. The debate challenged the prevalent views of class relations in the economy of England in the Middle Ages in particular – and agricultural societies with serfdom in general – as well as engaging the broader historiography of the economics of feudalism in the 20th century (in both the West and the Soviet Union). Even though Brenner's key ideas have not achieved consensus, the debate has remained influential in 21st-century scholarship. In the view of Shami Ghosh, Brenner's thesis proposed an explanatory framework for the evolution of what he called "agrarian capitalism" in England during the 15th and 16th centuries:

"[A] transformation of relationships between landlords and cultivators led to the creation of a largely free and competitive market in land and labour, while simultaneously dispossessing most of the peasants.
Thus from the old class divisions of owners of land on the one hand, and an unfree peasantry with customary rights of use to land on the other, a new tripartite structure came into being, comprising landlords, free tenant farmers on relatively short-term market-determined leases and wage labourers; this Brenner defines as 'agrarian capitalism'. Wage labourers were completely market-dependent – a rural proletariat – and tenant farmers had to compete on the land market in order to retain their access to land. This last fact was the principal motor of innovation leading to a rise in productivity, which, coupled with the growth of a now-free labour market, was essential for the development of modern (industrial) capitalism. Thus the transformations of agrarian class structures lay at the root of the development of capitalism in England."

Publications

Brenner's original article, and the symposium on it, led to a series of publications in Past & Present:
Brenner, Robert (1976). 'Agrarian Class Structure and Economic Development in Pre-Industrial Europe', Past & Present, 70, February, pp. 30–75.
Postan, M. M. & Hatcher, John (1978). 'Population and Class Relations in Feudal Society', Past & Present, 78, February, pp. 24–37.
Croot, Patricia & Parker, David (1978). 'Agrarian Class Structure and the Development of Capitalism: France and England Compared', Past & Present, 78, February, pp. 37–47.
Wunder, Heide (1978). 'Peasant Organization and Class Conflict in Eastern and Western Germany', Past & Present, 78, February, pp. 48–55.
Le Roy Ladurie, Emmanuel (1978). 'A Reply to Robert Brenner', Past & Present, 79, May, pp. 55–59.
Bois, Guy (1978). 'Against the Neo-Malthusian Orthodoxy', Past & Present, 79, May, pp. 60–69.
Hilton, R. H. (1978). 'A Crisis of Feudalism', Past & Present, 80, August, pp. 3–19.
Cooper, J. P. (1978). 'In Search of Agrarian Capitalism', Past & Present, 80, August, pp. 20–65.
Klíma, Arnošt (1979). 'Agrarian Class Structure and Economic Development in Pre-Industrial Bohemia', Past & Present, 85, November, pp. 49–67.
Brenner, Robert (1982). 'The Agrarian Roots of European Capitalism', Past & Present, 97, November, pp. 16–113.

These studies were republished with some additional material in The Brenner Debate: Agrarian Class Structure and Economic Development in Pre-Industrial Europe, ed. by Trevor Aston and C. H. E. Philpin, Past and Present Publications (Cambridge: Cambridge University Press, 1985), which was to be reprinted many times.

A related and parallel debate also took place in the pages of the New Left Review:
Brenner, Robert (1977). 'The Origins of Capitalist Development: A Critique of Neo-Smithian Marxism', New Left Review, I/104, July–August, pp. 25–92.
Sweezy, Paul (1978). 'Comment on Brenner', New Left Review, I/108, March–April, pp. 94–95.
Brenner, Robert (1978). 'Reply to Sweezy', New Left Review, I/108, March–April, pp. 95–96.
Fine, Ben (1978). 'On the Origins of Capitalist Development', New Left Review, I/109, May–June, pp. 88–95.

As of 2016, Brenner's most recent statements of his ideas, making some small modifications to his earlier claims, were:
Brenner, R. (1985). 'The Social Basis of Economic Development'. In Analytical Marxism, ed. J. Roemer, 25–53. Cambridge, UK: Cambridge University Press.
Brenner, R. (2001). 'The Low Countries in the Transition to Capitalism'. Journal of Agrarian Change, 1: 169–241.
Brenner, R. (2007). 'Property and Progress: Where Adam Smith Went Wrong'. In Marxist History-Writing for the Twenty-First Century, ed. C. Wickham, 49–111.
Oxford: Oxford University Press.
Ancient Rome
In modern historiography, ancient Rome is the Roman civilisation from the founding of the Italian city of Rome in the 8th century BC to the collapse of the Western Roman Empire in the 5th century AD. It encompasses the Roman Kingdom (753–509 BC), the Roman Republic (509–27 BC), and the Roman Empire (27 BC–476 AD) until the fall of the western empire. Ancient Rome began as an Italic settlement, traditionally dated to 753 BC, beside the River Tiber in the Italian Peninsula. The settlement grew into the city and polity of Rome, and came to control its neighbours through a combination of treaties and military strength. It eventually controlled the Italian Peninsula, assimilating the Greek culture of southern Italy (Magna Graecia) and the Etruscan culture, and then became the dominant power in the Mediterranean region and parts of Europe. At its height it controlled the North African coast, Egypt, Southern Europe, and most of Western Europe, the Balkans, Crimea, and much of the Middle East, including Anatolia, the Levant, and parts of Mesopotamia and Arabia. That empire was among the largest empires in the ancient world, covering some five million square kilometres in AD 117, with an estimated 50 to 90 million inhabitants, roughly 20% of the world's population at the time. The Roman state evolved from an elective monarchy to a classical republic and then to an increasingly autocratic military dictatorship during the Empire. Ancient Rome is often grouped into classical antiquity together with ancient Greece, and their similar cultures and societies are known as the Greco-Roman world. Ancient Roman civilisation has contributed to modern language, religion, society, technology, law, politics, government, warfare, art, literature, architecture, and engineering. Rome professionalised and expanded its military and created a system of government called res publica, the inspiration for modern republics such as the United States and France. It achieved impressive technological and architectural feats, such as the empire-wide construction of aqueducts and roads, as well as more grandiose monuments and facilities. History Early Italy and the founding of Rome Archaeological evidence of settlement around Rome begins to emerge well before the city's traditional founding date. Large-scale organisation appears only later, with the first graves in the Esquiline Hill's necropolis, along with a clay and timber wall at the bottom of the Palatine Hill dating to the middle of the 8th century BC. The Romans subsequently began to drain the valley between the Capitoline and Palatine Hills, where today sits the Roman Forum. By the sixth century BC, the Romans were constructing the Temple of Jupiter Optimus Maximus on the Capitoline and expanding to the Forum Boarium located between the Capitoline and Aventine Hills. The Romans themselves had a founding myth, attributing their city to Romulus and Remus, offspring of Mars and a princess of the mythical city of Alba Longa. The sons, sentenced to death, were rescued by a she-wolf and returned to restore the Alban king and found a city. After a dispute, Romulus killed Remus and became the city's sole founder. The area of his initial settlement on the Palatine Hill was later known as Roma Quadrata ("Square Rome"). The story dates at least to the third century BC, and the later Roman antiquarian Marcus Terentius Varro placed the city's foundation in 753 BC. Another legend, recorded by the Greek historian Dionysius of Halicarnassus, says that Prince Aeneas led a group of Trojans on a sea voyage to found a new Troy after the Trojan War.
They landed on the banks of the Tiber River and a woman travelling with them, Roma, torched their ships to prevent them leaving again. They named the settlement after her. The Roman poet Virgil recounted this legend in his classical epic poem the Aeneid, where the Trojan prince Aeneas is destined to found a new Troy. Kingdom Literary and archaeological evidence makes clear that there were kings in Rome, attested in fragmentary 6th-century BC texts. Long after the abolition of the Roman monarchy, a vestigial rex sacrorum was retained to exercise the monarch's former priestly functions. The Romans believed that their monarchy was elective, with seven legendary kings who were largely unrelated by blood. Evidence of Roman expansion is clear in the sixth century BC; by its end, Rome controlled a considerable territory, with a population perhaps as high as 35,000. A palace, the Regia, was constructed during this period; the Romans also attributed the creation of their first popular organisations and of the Senate to the regal period. Rome also started to extend its control over its Latin neighbours. While later Roman stories like the Aeneid asserted that all Latins descended from the titular character Aeneas, a common culture is attested archaeologically. Attested reciprocal rights of marriage and citizenship between Latin cities, along with shared religious festivals, further indicate a shared culture. By the end of the 6th century, most of this area had become dominated by the Romans. Republic By the end of the sixth century, Rome and many of its Italian neighbours entered a period of turbulence. Archaeological evidence implies some degree of large-scale warfare. According to tradition and later writers such as Livy, the Roman Republic was established in 509 BC, when the last of the seven kings of Rome, Tarquin the Proud, was deposed and a system based on annually elected magistrates and various representative assemblies was put in place. A constitution set out a series of checks and balances and a separation of powers. The most important magistrates were the two consuls, who together exercised executive authority such as imperium, or military command. The consuls had to work with the Senate, which was initially an advisory council of the ranking nobility, or patricians, but grew in size and power. Other magistrates of the Republic included tribunes, quaestors, aediles, praetors and censors. The magistracies were originally restricted to patricians, but were later opened to common people, or plebeians. Republican voting assemblies included the comitia centuriata (centuriate assembly), which voted on matters of war and peace and elected men to the most important offices, and the comitia tributa (tribal assembly), which elected less important offices. In the 4th century BC, Rome came under attack by the Gauls, who had extended their power in the Italian peninsula beyond the Po Valley and through Etruria. On 16 July 390 BC, a Gallic army under the leadership of the tribal chieftain Brennus defeated the Romans at the Battle of the Allia and marched on Rome. The Gauls looted and burned the city, then laid siege to the Capitoline Hill, where some Romans had barricaded themselves, for seven months. The Gauls then agreed to give the Romans peace in exchange for 1,000 pounds of gold. According to later legend, the Roman supervising the weighing noticed that the Gauls were using false scales. The Romans then took up arms and defeated the Gauls.
Their victorious general Camillus remarked "With iron, not with gold, Rome buys her freedom." The Romans gradually subdued the other peoples on the Italian peninsula, including the Etruscans. The last threat to Roman hegemony in Italy came when Tarentum, a major Greek colony, enlisted the aid of Pyrrhus of Epirus in 281 BC, but this effort failed as well. The Romans secured their conquests by founding Roman colonies in strategic areas, thereby establishing stable control over the region. Punic Wars In the 3rd century BC Rome faced a new and formidable opponent: Carthage, the other major power in the Western Mediterranean. The First Punic War began in 264 BC, when the city of Messana asked for Carthage's help in their conflicts with Hiero II of Syracuse. After the Carthaginian intercession, Messana asked Rome to expel the Carthaginians. Rome entered this war because Syracuse and Messana were too close to the newly conquered Greek cities of Southern Italy and Carthage was now able to make an offensive through Roman territory; along with this, Rome could extend its domain over Sicily. Carthage was a maritime power, and the Roman lack of ships and naval experience made the path to the victory a long and difficult one for the Roman Republic. Despite this, after more than 20 years of war, Rome defeated Carthage and a peace treaty was signed. Among the reasons for the Second Punic War was the subsequent war reparations Carthage acquiesced to at the end of the First Punic War. The war began with the audacious invasion of Hispania by Hannibal, who marched through Hispania to the Italian Alps, causing panic among Rome's Italian allies. The best way found to defeat Hannibal's purpose of causing the Italians to abandon Rome was to delay the Carthaginians with a guerrilla war of attrition, a strategy propounded by Quintus Fabius Maximus Verrucosus. Hannibal's invasion lasted over 16 years, ravaging Italy, but ultimately Carthage was defeated in the decisive Battle of Zama in October 202 BC. More than a half century after these events, Carthage was left humiliated and the Republic's focus was now directed towards the Hellenistic kingdoms of Greece and revolts in Hispania. However, Carthage, having paid the war indemnity, felt that its commitments and submission to Rome had ceased, a vision not shared by the Roman Senate. The Third Punic War began when Rome declared war against Carthage in 149 BC. Carthage resisted well at the first strike but could not withstand the attack of Scipio Aemilianus, who entirely destroyed the city, enslaved all the citizens and gained control of that region, which became the province of Africa. All these wars resulted in Rome's first overseas conquests (Sicily, Hispania and Africa) and the rise of Rome as a significant imperial power. Late Republic After defeating the Macedonian and Seleucid Empires in the 2nd century BC, the Romans became the dominant people of the Mediterranean Sea. The conquest of the Hellenistic kingdoms brought the Roman and Greek cultures in closer contact and the Roman elite, once rural, became cosmopolitan. At this time Rome was a consolidated empire—in the military view—and had no major enemies. Foreign dominance led to internal strife. Senators became rich at the provinces' expense; soldiers, who were mostly small-scale farmers, were away from home longer and could not maintain their land; and the increased reliance on foreign slaves and the growth of latifundia reduced the availability of paid work. 
Income from war booty, mercantilism in the new provinces, and tax farming created new economic opportunities for the wealthy, forming a new class of merchants, called the equestrians. The lex Claudia forbade members of the Senate from engaging in commerce, so while the equestrians could theoretically join the Senate, they were severely restricted in political power. The Senate squabbled perpetually, repeatedly blocked important land reforms and refused to give the equestrian class a larger say in the government. Gangs of the urban unemployed, controlled by rival senators, intimidated the electorate through violence. The situation came to a head in the late 2nd century BC under the Gracchi brothers, a pair of tribunes who attempted to pass land reform legislation that would redistribute the major patrician landholdings among the plebeians. Both brothers were killed and the Senate passed reforms reversing the Gracchi brothers' actions. This led to a growing divide between the plebeian groups (populares) and the equestrian classes (optimates). Gaius Marius soon became a leader of the Republic, holding the first of his seven consulships (an unprecedented number) in 107 BC by arguing that his former patron Quintus Caecilius Metellus Numidicus was not able to defeat and capture the Numidian king Jugurtha. Marius then started his military reform: in his recruitment to fight Jugurtha, he levied the very poor (an innovation), and many landless men entered the army. Marius was elected for five consecutive consulships from 104 to 100 BC, as Rome needed a military leader to defeat the Cimbri and the Teutones, who were threatening Rome. After Marius's retirement, Rome had a brief peace, during which the Italian socii ("allies" in Latin) requested Roman citizenship and voting rights. The reformist Marcus Livius Drusus supported their legal process but was assassinated, and the socii revolted against the Romans in the Social War. At one point both consuls were killed; Marius was appointed to command the army together with Lucius Julius Caesar and Lucius Cornelius Sulla. By the end of the Social War, Marius and Sulla were the premier military men in Rome and their partisans were in conflict, both sides jostling for power. In 88 BC, Sulla was elected for his first consulship and his first assignment was to defeat Mithridates VI of Pontus, whose intentions were to conquer the Eastern part of the Roman territories. However, Marius's partisans secured his installation in the military command, defying Sulla and the Senate. To consolidate his own power, Sulla conducted a surprising and illegal action: he marched to Rome with his legions, killing all those who showed support for Marius's cause. In the following year, 87 BC, Marius, who had fled at Sulla's march, returned to Rome while Sulla was campaigning in Greece. He seized power along with the consul Lucius Cornelius Cinna and killed the other consul, Gnaeus Octavius, achieving his seventh consulship. Marius and Cinna avenged their partisans by conducting a massacre. Marius died in 86 BC, due to age and poor health, just a few months after seizing power. Cinna exercised absolute power until his death in 84 BC. After returning from his Eastern campaigns, Sulla had a free path to reestablish his own power. In 83 BC he made his second march on Rome and began a time of terror: thousands of nobles, knights and senators were executed. Sulla held two dictatorships and one more consulship, which began the crisis and decline of the Roman Republic.
Caesar and the First Triumvirate In the mid-1st century BC, Roman politics were restless. Political divisions in Rome split into one of two groups, populares (who hoped for the support of the people) and optimates (the "best", who wanted to maintain exclusive aristocratic control). Sulla overthrew all populist leaders and his constitutional reforms removed powers (such as those of the tribune of the plebs) that had supported populist approaches. Meanwhile, social and economic stresses continued to build; Rome had become a metropolis with a super-rich aristocracy, debt-ridden aspirants, and a large proletariat often of impoverished farmers. The latter groups supported the Catilinarian conspiracy—a resounding failure since the consul Marcus Tullius Cicero quickly arrested and executed the main leaders. Gaius Julius Caesar reconciled the two most powerful men in Rome: Marcus Licinius Crassus, who had financed much of his earlier career, and Crassus' rival, Gnaeus Pompeius Magnus (anglicised as Pompey), to whom he married his daughter. He formed them into a new informal alliance including himself, the First Triumvirate ("three men"). Caesar's daughter died in childbirth in 54 BC, and in 53 BC, Crassus invaded Parthia and was killed in the Battle of Carrhae; the Triumvirate disintegrated. Caesar conquered Gaul, obtained immense wealth, respect in Rome and the loyalty of battle-hardened legions. He became a threat to Pompey and was loathed by many optimates. Confident that Caesar could be stopped by legal means, Pompey's party tried to strip Caesar of his legions, a prelude to Caesar's trial, impoverishment, and exile. To avoid this fate, Caesar crossed the Rubicon River and invaded Rome in 49 BC. The Battle of Pharsalus was a brilliant victory for Caesar and in this and other campaigns, he destroyed all of the optimates leaders: Metellus Scipio, Cato the Younger, and Pompey's son, Gnaeus Pompeius. Pompey was murdered in Egypt in 48 BC. Caesar was now pre-eminent over Rome: in five years he held four consulships, two ordinary dictatorships, and two special dictatorships, one for perpetuity. He was murdered in 44 BC, on the Ides of March by the Liberatores. Octavian and the Second Triumvirate Caesar's assassination caused political and social turmoil in Rome; the city was ruled by his friend and colleague, Marcus Antonius. Soon afterward, Octavius, whom Caesar adopted through his will, arrived in Rome. Octavian (historians regard Octavius as Octavian due to the Roman naming conventions) tried to align himself with the Caesarian faction. In 43 BC, along with Antony and Marcus Aemilius Lepidus, Caesar's best friend, he legally established the Second Triumvirate. Upon its formation, 130–300 senators were executed, and their property was confiscated, due to their supposed support for the Liberatores. In 42 BC, the Senate deified Caesar as Divus Iulius; Octavian thus became Divi filius, the son of the deified. In the same year, Octavian and Antony defeated both Caesar's assassins and the leaders of the Liberatores, Marcus Junius Brutus and Gaius Cassius Longinus, in the Battle of Philippi. The Second Triumvirate was marked by the proscriptions of many senators and equites: after a revolt led by Antony's brother Lucius Antonius, more than 300 senators and equites involved were executed, although Lucius was spared. The Triumvirate divided the Empire among the triumvirs: Lepidus was given charge of Africa, Antony, the eastern provinces, and Octavian remained in Italia and controlled Hispania and Gaul. 
The Second Triumvirate expired in 38 BC but was renewed for five more years. However, the relationship between Octavian and Antony had deteriorated, and Lepidus was forced to retire in 36 BC after betraying Octavian in Sicily. By the end of the Triumvirate, Antony was living in Ptolemaic Egypt, ruled by his lover, Cleopatra VII. Antony's affair with Cleopatra was seen as an act of treason, since she was queen of another country. Additionally, Antony adopted a lifestyle considered too extravagant and Hellenistic for a Roman statesman. Following Antony's Donations of Alexandria, which gave to Cleopatra the title of "Queen of Kings", and to Antony's and Cleopatra's children the regal titles to the newly conquered Eastern territories, war between Octavian and Antony broke out. Octavian annihilated Egyptian forces in the Battle of Actium in 31 BC. Antony and Cleopatra committed suicide. Now Egypt was conquered by the Roman Empire. Empire – the Principate In 27 BC and at the age of 36, Octavian was the sole Roman leader. In that year, he took the name Augustus. That event is usually taken by historians as the beginning of Roman Empire. Officially, the government was republican, but Augustus assumed absolute powers. His reform of the government brought about a two-century period colloquially referred to by Romans as the Pax Romana. Julio-Claudian dynasty The Julio-Claudian dynasty was established by Augustus. The emperors of this dynasty were Augustus, Tiberius, Caligula, Claudius and Nero. The Julio-Claudians started the destruction of republican values, but on the other hand, they boosted Rome's status as the central power in the Mediterranean region. While Caligula and Nero are usually remembered in popular culture as dysfunctional emperors, Augustus and Claudius are remembered as successful in politics and the military. This dynasty instituted imperial tradition in Rome and frustrated any attempt to reestablish a Republic. Augustus gathered almost all the republican powers under his official title, princeps, and diminished the political influence of the senatorial class by boosting the equestrian class. The senators lost their right to rule certain provinces, like Egypt, since the governor of that province was directly nominated by the emperor. The creation of the Praetorian Guard and his reforms in the military, creating a standing army with a fixed size of 28 legions, ensured his total control over the army. Compared with the Second Triumvirate's epoch, Augustus' reign as princeps was very peaceful, which led the people and the nobles of Rome to support Augustus, increasing his strength in political affairs. His generals were responsible for the field command, gaining such commanders as Marcus Vipsanius Agrippa, Nero Claudius Drusus and Germanicus much respect from the populace and the legions. Augustus intended to extend the Roman Empire to the whole known world, and in his reign, Rome conquered Cantabria, Aquitania, Raetia, Dalmatia, Illyricum and Pannonia. Under Augustus' reign, Roman literature grew steadily in what is known as the Golden Age of Latin Literature. Poets like Virgil, Horace, Ovid and Rufus developed a rich literature, and were close friends of Augustus. Along with Maecenas, he sponsored patriotic poems, such as Virgil's epic Aeneid and historiographical works like those of Livy. Augustus continued the changes to the calendar promoted by Caesar, and the month of August is named after him. 
Augustus brought a peaceful and thriving era to Rome, known as Pax Augusta or Pax Romana. Augustus died in 14 AD, but the empire's glory continued after his era. The Julio-Claudians continued to rule Rome after Augustus' death and remained in power until the death of Nero in 68 AD. Influenced by his wife, Livia Drusilla, Augustus appointed her son from another marriage, Tiberius, as his heir. The Senate agreed with the succession, and granted to Tiberius the same titles and honours once granted to Augustus: the title of princeps and Pater patriae, and the Civic Crown. However, Tiberius was not an enthusiast for political affairs: after agreement with the Senate, he retired to Capri in 26 AD, and left control of the city of Rome in the hands of the praetorian prefect Sejanus (until 31 AD) and Macro (from 31 to 37 AD). Tiberius died (or was killed) in 37 AD. The male line of the Julio-Claudians was limited to Tiberius' nephew Claudius, his grandson Tiberius Gemellus and his grand-nephew Caligula. As Gemellus was still a child, Caligula was chosen to rule the empire. He was a popular leader in the first half of his reign, but became a crude and insane tyrant in his years controlling government. The Praetorian Guard murdered Caligula four years after the death of Tiberius, and, with belated support from the senators, proclaimed his uncle Claudius as the new emperor. Claudius was not as authoritarian as Tiberius and Caligula. Claudius conquered Lycia and Thrace; his most important deed was the beginning of the conquest of Britannia. Claudius was poisoned by his wife, Agrippina the Younger, in 54 AD. His heir was Nero, son of Agrippina and her former husband, since Claudius' son Britannicus had not reached manhood upon his father's death. Nero sent his general, Suetonius Paulinus, to invade modern-day Wales, where he encountered stiff resistance. The Celts there were independent, tough, resistant to tax collectors, and fought Paulinus as he battled his way across from east to west. It took him a long time to reach the north west coast, and in 60 AD he finally crossed the Menai Strait to the sacred island of Mona (Anglesey), the last stronghold of the druids. His soldiers attacked the island and massacred the druids: men, women and children, destroyed the shrine and the sacred groves and threw many of the sacred standing stones into the sea. While Paulinus and his troops were massacring druids in Mona, the tribes of modern-day East Anglia staged a revolt led by queen Boadicea of the Iceni. The rebels sacked and burned Camulodunum, Londinium and Verulamium (modern-day Colchester, London and St Albans respectively) before they were crushed by Paulinus. Boadicea, like Cleopatra before her, committed suicide to avoid the disgrace of being paraded in triumph in Rome. Nero is widely known as the first persecutor of Christians and for the Great Fire of Rome, rumoured to have been started by the emperor himself. A conspiracy against Nero in 65 AD under Calpurnius Piso failed, but in 68 AD the armies under Julius Vindex in Gaul and Servius Sulpicius Galba in modern-day Spain revolted. Deserted by the Praetorian Guards and condemned to death by the Senate, Nero killed himself. As Roman provinces were being established throughout the Mediterranean, Italy maintained a special status which made it the "ruler of the provinces" and – especially in relation to the first centuries of imperial stability – the "governor of the world" and "parent of all lands".
Flavian dynasty The Flavians were the second dynasty to rule Rome. By 68 AD, the year of Nero's death, there was no chance of a return to the Roman Republic, and so a new emperor had to arise. After the turmoil in the Year of the Four Emperors, Titus Flavius Vespasianus (anglicised as Vespasian) took control of the empire and established a new dynasty. Under the Flavians, Rome continued its expansion, and the state remained secure. Under Trajan, the Roman Empire reached the peak of its territorial expansion; Rome's dominion then spanned some five million square kilometres. The most significant military campaign undertaken during the Flavian period was the siege and destruction of Jerusalem in 70 AD by Titus. The destruction of the city was the culmination of the Roman campaign in Judea following the Jewish uprising of 66 AD. The Second Temple was completely demolished, after which Titus' soldiers proclaimed him imperator in honour of the victory. Jerusalem was sacked and much of the population killed or dispersed. Josephus claims that 1,100,000 people were killed during the siege, of whom a majority were Jewish. 97,000 were captured and enslaved, including Simon bar Giora and John of Giscala. Many fled to areas around the Mediterranean. Vespasian was a general under Claudius and Nero and fought as a commander in the First Jewish-Roman War. Following the turmoil of the Year of the Four Emperors, in 69 AD, four emperors were enthroned in turn: Galba, Otho, Vitellius, and, lastly, Vespasian, who crushed Vitellius' forces and became emperor. He reconstructed many buildings which had been left unfinished, like a statue of Apollo and the temple of Divus Claudius ("the deified Claudius"), both initiated by Nero. Buildings destroyed by the Great Fire of Rome were rebuilt, and he revitalised the Capitol. Vespasian started the construction of the Flavian Amphitheater, commonly known as the Colosseum. The historians Josephus and Pliny the Elder wrote their works during Vespasian's reign. Vespasian was Josephus' sponsor and Pliny dedicated his Naturalis Historia to Titus, son of Vespasian. Vespasian sent legions to defend the eastern frontier in Cappadocia, extended the occupation in Britannia (modern-day England, Wales and southern Scotland) and reformed the tax system. He died in 79 AD. Titus became emperor in 79. He finished the Flavian Amphitheater, using war spoils from the First Jewish-Roman War, and hosted victory games that lasted for a hundred days. These games included gladiatorial combats, horse races and a sensational mock naval battle on the flooded grounds of the Colosseum. Titus died of fever in 81 AD, and was succeeded by his brother Domitian. As emperor, Domitian showed the characteristics of a tyrant. He ruled for fifteen years, during which time he acquired a reputation for self-promotion as a living god. He constructed at least two temples in honour of Jupiter, the supreme deity in Roman religion. He was murdered following a plot within his own household. Nerva–Antonine dynasty Following Domitian's murder, the Senate rapidly appointed Nerva as Emperor. Nerva had noble ancestry, and he had served as an advisor to Nero and the Flavians. His rule restored many of the traditional liberties of Rome's upper classes, which Domitian had overridden. The Nerva–Antonine dynasty from 96 AD to 192 AD included the "five good emperors" Nerva, Trajan, Hadrian, Antoninus Pius and Marcus Aurelius.
Trajan, Hadrian, Antoninus Pius and Marcus Aurelius were part of Italic families settled in Roman colonies outside of Italy: the families of Trajan and Hadrian had settled in Italica (Hispania Baetica), that of Antoninus Pius in Colonia Agusta Nemausensis (Gallia Narbonensis), and that of Marcus Aurelius in Colonia Claritas Iulia Ucubi (Hispania Baetica). The Nerva-Antonine dynasty came to an end with Commodus, son of Marcus Aurelius. Nerva abdicated and died in 98 AD, and was succeeded by the general Trajan. Trajan is credited with the restoration of traditional privileges and rights of commoner and senatorial classes, which later Roman historians claim to have been eroded during Domitian's autocracy. Trajan fought three Dacian wars, winning territories roughly equivalent to modern-day Romania and Moldova. He undertook an ambitious public building program in Rome, including Trajan's Forum, Trajan's Market and Trajan's Column, with the architect Apollodorus of Damascus. He remodelled the Pantheon and extended the Circus Maximus. When Parthia appointed a king for Armenia without consulting Rome, Trajan declared war on Parthia and deposed the king of Armenia. In 115 he took the Northern Mesopotamian cities of Nisibis and Batnae, organised a province of Mesopotamia (116), and issued coins that claimed Armenia and Mesopotamia were under the authority of the Roman people. In that same year, he captured Seleucia and the Parthian capital Ctesiphon (near modern Baghdad). After defeating a Parthian revolt and a Jewish revolt, he withdrew due to health issues, and in 117, he died of edema. Trajan's successor Hadrian withdrew all the troops stationed in Parthia, Armenia and Mesopotamia (modern-day Iraq), abandoning Trajan's conquests. Hadrian's army crushed a revolt in Mauretania and the Bar Kokhba revolt in Judea. This was the last large-scale Jewish revolt against the Romans, and was suppressed with massive repercussions in Judea. Hundreds of thousands of Jews were killed. Hadrian renamed the province of Judea "Provincia Syria Palaestina", after one of Judea's most hated enemies. He constructed fortifications and walls, like the celebrated Hadrian's Wall which separated Roman Britannia and the tribes of modern-day Scotland. Hadrian promoted culture, especially the Greek. He forbade torture and humanised the laws. His many building projects included aqueducts, baths, libraries and theatres; additionally, he travelled nearly every province in the Empire to review military and infrastructural conditions. Following Hadrian's death in 138 AD, his successor Antoninus Pius built temples, theatres, and mausoleums, promoted the arts and sciences, and bestowed honours and financial rewards upon the teachers of rhetoric and philosophy. On becoming emperor, Antoninus made few initial changes, leaving intact as far as possible the arrangements instituted by his predecessor. Antoninus expanded Roman Britannia by invading what is now southern Scotland and building the Antonine Wall. He also continued Hadrian's policy of humanising the laws. He died in 161 AD. Marcus Aurelius, known as the Philosopher, was the last of the Five Good Emperors. He was a stoic philosopher and wrote the Meditations. He defeated barbarian tribes in the Marcomannic Wars as well as the Parthian Empire. His co-emperor, Lucius Verus, died in 169 AD, probably from the Antonine Plague, a pandemic that killed nearly five million people through the Empire in 165–180 AD. From Nerva to Marcus Aurelius, the empire achieved an unprecedented status. 
The powerful influence of laws and manners had gradually cemented the union of the provinces. All the citizens enjoyed and abused the advantages of wealth. The image of a free constitution was preserved with decent reverence. The Roman senate appeared to possess the sovereign authority, and devolved on the emperors all the executive powers of government. Gibbon declared the rule of these "Five Good Emperors" the golden era of the Empire. During this time, Rome reached its greatest territorial extent. Commodus, son of Marcus Aurelius, became emperor after his father's death. He is not counted as one of the Five Good Emperors, due to his direct kinship with the latter emperor; in addition, he was militarily passive. Cassius Dio identifies his reign as the beginning of Roman decadence: "(Rome has transformed) from a kingdom of gold to one of iron and rust." Severan dynasty Commodus was killed by a conspiracy involving Quintus Aemilius Laetus and his wife Marcia in late 192 AD. The following year is known as the Year of the Five Emperors, during which Helvius Pertinax, Didius Julianus, Pescennius Niger, Clodius Albinus and Septimius Severus held the imperial dignity. Pertinax, a member of the senate who had been one of Marcus Aurelius's right-hand men, was the choice of Laetus, and he ruled vigorously and judiciously. Laetus soon became jealous and instigated Pertinax's murder by the Praetorian Guard, who then auctioned the empire to the highest bidder, Didius Julianus, for 25,000 sesterces per man. The people of Rome were appalled and appealed to the frontier legions to save them. The legions of three frontier provinces—Britannia, Pannonia Superior, and Syria—resented being excluded from the "donative" and replied by declaring their individual generals to be emperor. Lucius Septimius Severus Geta, the Pannonian commander, bribed the opposing forces, pardoned the Praetorian Guards and installed himself as emperor. He and his successors governed with the legions' support. The changes on coinage and military expenditures were the root of the financial crisis that marked the Crisis of the Third Century. Severus was enthroned after invading Rome and having Didius Julianus killed. Severus attempted to revive totalitarianism and, addressing the Roman people and Senate, praised the severity and cruelty of Marius and Sulla, which worried the senators. When Parthia invaded Roman territory, Severus successfully waged war against that country. Notwithstanding this military success, Severus failed in invading Hatra, a rich Arabian city. Severus killed his legate, who was gaining respect from the legions; and his soldiers fell victim to famine. After this disastrous campaign, he withdrew. Severus also intended to vanquish the whole of Britannia. To achieve this, he waged war against the Caledonians. After many casualties in the army due to the terrain and the barbarians' ambushes, Severus himself went to the field. However, he became ill and died in 211 AD, at the age of 65. Upon the death of Severus, his sons Caracalla and Geta were made emperors. Caracalla had his brother, a youth, assassinated in his mother's arms, and may have murdered 20,000 of Geta's followers. Like his father, Caracalla was warlike. He continued Severus' policy and gained respect from the legions. Knowing that the citizens of Alexandria disliked him and were denigrating his character, Caracalla served a banquet for its notable citizens, after which his soldiers killed all the guests. 
From the security of the temple of Sarapis, he then directed an indiscriminate slaughter of Alexandria's people. In 212, he issued the Edict of Caracalla, giving full Roman citizenship to all free men living in the Empire, with the exception of the dediticii, people who had become subject to Rome through surrender in war, and freed slaves. Mary Beard points to the edict as a fundamental turning point, after which Rome was "effectively a new state masquerading under an old name". Macrinus conspired to have Caracalla assassinated by one of his soldiers during a pilgrimage to the Temple of the Moon in Carrhae, in 217 AD. Macrinus assumed power, but soon removed himself from Rome to the east and Antioch. His brief reign ended in 218, when the youngster Bassianus, high priest of the temple of the Sun at Emesa, and supposedly illegitimate son of Caracalla, was declared Emperor by the disaffected soldiers of Macrinus. He adopted the name of Antoninus but history has named him after his Sun god Elagabalus, represented on Earth in the form of a large black stone. An incompetent and lascivious ruler, Elagabalus offended all but his favourites. Cassius Dio, Herodian and the Historia Augusta give many accounts of his notorious extravagance. Elagabalus adopted his cousin Severus Alexander, as Caesar, but subsequently grew jealous and attempted to assassinate him. However, the Praetorian guard preferred Alexander, murdered Elagabalus, dragged his mutilated corpse through the streets of Rome, and threw it into the Tiber. Severus Alexander then succeeded him. Alexander waged war against many foes, including the revitalised Persia and also the Germanic peoples, who invaded Gaul. His losses generated dissatisfaction among his soldiers, and some of them murdered him during his Germanic campaign in 235 AD. Crisis of the Third Century A disastrous scenario emerged after the death of Alexander Severus: the Roman state was plagued by civil wars, external invasions, political chaos, pandemics and economic depression. The old Roman values had fallen, and Mithraism and Christianity had begun to spread through the populace. Emperors were no longer men linked with nobility; they usually were born in lower-classes of distant parts of the Empire. These men rose to prominence through military ranks, and became emperors through civil wars. There were 26 emperors in a 49-year period, a signal of political instability. Maximinus Thrax was the first ruler of that time, governing for just three years. Others ruled just for a few months, like Gordian I, Gordian II, Balbinus and Hostilian. The population and the frontiers were abandoned, since the emperors were mostly concerned with defeating rivals and establishing their power. The economy also suffered: massive military expenditures from the Severi caused a devaluation of Roman coins. Hyperinflation came at this time as well. The Plague of Cyprian broke out in 250 and killed a huge portion of the population. In 260 AD, the provinces of Syria Palaestina, Asia Minor and Egypt separated from the rest of the Roman state to form the Palmyrene Empire, ruled by Queen Zenobia and centered on Palmyra. In that same year the Gallic Empire was created by Postumus, retaining Britannia and Gaul. These countries separated from Rome after the capture of emperor Valerian by the Sassanids of Persia, the first Roman ruler to be captured by his enemies; it was a humiliating fact for the Romans. 
The crisis began to recede during the reigns of Claudius Gothicus (268–270), who defeated the Gothic invaders, and Aurelian (271–275), who reconquered both the Gallic and Palmyrene Empires. The crisis was overcome during the reign of Diocletian. Empire – The Tetrarchy Diocletian In 284 AD, Diocletian was hailed as Imperator by the eastern army. Diocletian healed the empire from the crisis, by political and economic shifts. A new form of government was established: the Tetrarchy. The Empire was divided among four emperors, two in the West and two in the East. The first tetrarchs were Diocletian (in the East), Maximian (in the West), and two junior emperors, Galerius (in the East) and Flavius Constantius (in the West). To adjust the economy, Diocletian made several tax reforms. Diocletian expelled the Persians who plundered Syria and conquered some barbarian tribes with Maximian. He adopted many behaviours of Eastern monarchs. Anyone in the presence of the emperor had now to prostrate himself—a common act in the East, but never practised in Rome before. Diocletian did not use a disguised form of Republic, as the other emperors since Augustus had done. Between 290 and 330, half a dozen new capitals had been established by the members of the Tetrarchy, officially or not: Antioch, Nicomedia, Thessalonike, Sirmium, Milan, and Trier. Diocletian was also responsible for a significant Christian persecution. In 303 he and Galerius started the persecution and ordered the destruction of all the Christian churches and scripts and forbade Christian worship. Diocletian abdicated in 305 AD together with Maximian, thus, he was the first Roman emperor to resign. His reign ended the traditional form of imperial rule, the Principate (from princeps) and started the Tetrarchy. Constantine and Christianity Constantine assumed the empire as a tetrarch in 306. He conducted many wars against the other tetrarchs. Firstly he defeated Maxentius in 312. In 313, he issued the Edict of Milan, which granted liberty for Christians to profess their religion. Constantine was converted to Christianity, enforcing the Christian faith. He began the Christianization of the Empire and of Europe—a process concluded by the Catholic Church in the Middle Ages. He was defeated by the Franks and the Alamanni during 306–308. In 324 he defeated another tetrarch, Licinius, and controlled all the empire, as it was before Diocletian. To celebrate his victories and Christianity's relevance, he rebuilt Byzantium and renamed it Nova Roma ("New Rome"); but the city soon gained the informal name of Constantinople ("City of Constantine"). The reign of Julian, who under the influence of his adviser Mardonius attempted to restore Classical Roman and Hellenistic religion, only briefly interrupted the succession of Christian emperors. Constantinople served as a new capital for the Empire. In fact, Rome had lost its central importance since the Crisis of the Third Century—Mediolanum was the western capital from 286 to 330, until the reign of Honorius, when Ravenna was made capital, in the 5th century. Constantine's administrative and monetary reforms, that reunited the Empire under one emperor, and rebuilt the city of Byzantium, as Constantinopolis Nova Roma, changed the high period of the ancient world. Fall of the Western Roman Empire In the late 4th and 5th centuries the Western Empire entered a critical stage which terminated with the fall of the Western Roman Empire. 
Under the last emperors of the Constantinian dynasty and the Valentinianic dynasty, Rome lost decisive battles against the Sasanian Empire and Germanic barbarians: in 363, emperor Julian the Apostate was killed in the Battle of Samarra, against the Persians and the Battle of Adrianople cost the life of emperor Valens (364–378); the victorious Goths were never expelled from the Empire nor assimilated. The next emperor, Theodosius I (379–395), gave even more force to the Christian faith, and after his death, the Empire was divided into the Eastern Roman Empire, ruled by Arcadius and the Western Roman Empire, commanded by Honorius, both of which were Theodosius' sons. The situation became more critical in 408, after the death of Stilicho, a general who tried to reunite the Empire and repel barbarian invasion in the early years of the 5th century. The professional field army collapsed. In 410, the Theodosian dynasty saw the Visigoths sack Rome. During the 5th century, the Western Empire experienced a significant reduction of its territory. The Vandals conquered North Africa, the Visigoths claimed the southern part of Gaul, Gallaecia was taken by the Suebi, Britannia was abandoned by the central government, and the Empire suffered further from the invasions of Attila, chief of the Huns. General Orestes refused to meet the demands of the barbarian "allies" who now formed the army, and tried to expel them from Italy. Unhappy with this, their chieftain Odoacer defeated and killed Orestes, invaded Ravenna and dethroned Romulus Augustus, son of Orestes. This event of 476, usually marks the end of Classical antiquity and beginning of the Middle Ages. The Roman noble and former emperor Julius Nepos continued to rule as emperor from Dalmatia even after the deposition of Romulus Augustus until his death in 480. Some historians consider him to be the last emperor of the Western Empire instead of Romulus Augustus. After 1200 years of independence and nearly 700 years as a great power, the rule of Rome in the West ended. Various reasons for Rome's fall have been proposed ever since, including loss of Republicanism, moral decay, military tyranny, class war, slavery, economic stagnation, environmental change, disease, the decline of the Roman race, as well as the inevitable ebb and flow that all civilisations experience. The Eastern Empire survived for almost 1000 years after the fall of its Western counterpart and became the most stable Christian realm during the Middle Ages. During the 6th century, Justinian reconquered the Italian peninsula from the Ostrogoths, North Africa from the Vandals, and southern Hispania from the Visigoths. But within a few years of Justinian's death, Eastern Roman (Byzantine) possessions in Italy were greatly reduced by the Lombards who settled in the peninsula. In the east, partially due to the weakening effect of the Plague of Justinian as well as a series of mutually destructive wars against the Persian Sassanian Empire, the Byzantine Romans were threatened by the rise of Islam. Its followers rapidly brought about the conquest of the Levant, the conquest of Armenia and the conquest of Egypt during the Arab–Byzantine wars, and soon presented a direct threat to Constantinople. In the following century, the Arabs captured southern Italy and Sicily. In the west, Slavic populations penetrated deep into the Balkans. 
The Byzantine Romans, however, managed to stop further Islamic expansion into their lands during the 8th century and, beginning in the 9th century, reclaimed parts of the conquered lands. In 1000 AD, the Eastern Empire was at its height: Basil II reconquered Bulgaria and Armenia, and culture and trade flourished. However, soon after, this expansion was abruptly stopped in 1071 with the Byzantine defeat in the Battle of Manzikert. The aftermath of this battle sent the empire into a protracted period of decline. Two decades of internal strife and Turkic invasions ultimately led Emperor Alexios I Komnenos to send a call for help to the Western European kingdoms in 1095. The West responded with the Crusades, eventually resulting in the Sack of Constantinople by participants of the Fourth Crusade. The conquest of Constantinople in 1204 fragmented what remained of the Empire into successor states; the ultimate victor was the Empire of Nicaea. After the recapture of Constantinople by Imperial forces, the Empire was little more than a Greek state confined to the Aegean coast. The Eastern Roman (Byzantine) Empire collapsed when Mehmed the Conqueror conquered Constantinople on 29 May 1453. Society The imperial city of Rome was the largest urban center in the empire, with a population variously estimated from 450,000 to close to one million. Around 20 per cent of the population under jurisdiction of ancient Rome (25–40%, depending on the standards used, in Roman Italy) lived in innumerable urban centers, with population of 10,000 and more and several military settlements, a very high rate of urbanisation by pre-industrial standards. Most of those centers had a forum, temples, and other buildings similar to Rome's. The average life expectancy in the Middle Empire was about 26–28 years. Law The roots of the legal principles and practices of the ancient Romans may be traced to the Law of the Twelve Tables promulgated in 449 BC and to the codification of law issued by order of Emperor Justinian I around 530 AD (see Corpus Juris Civilis). Roman law as preserved in Justinian's codes continued into the Byzantine Roman Empire, and formed the basis of similar codifications in continental Western Europe. Roman law continued, in a broader sense, to be applied throughout most of Europe until the end of the 17th century. The major divisions of the law of ancient Rome, as contained within the Justinian and Theodosian law codes, consisted of Jus civile, Jus gentium, and Jus naturale. The Jus civile ("Citizen Law") was the body of common laws that applied to Roman citizens. The praetores urbani (sg. Praetor Urbanus) were the people who had jurisdiction over cases involving citizens. The Jus gentium ("Law of nations") was the body of common laws that applied to foreigners, and their dealings with Roman citizens. The praetores peregrini (sg. Praetor Peregrinus) were the people who had jurisdiction over cases involving citizens and foreigners. Jus naturale encompassed natural law, the body of laws that were considered common to all beings. Class structure Roman society is largely viewed as hierarchical, with slaves (servi) at the bottom, freedmen (liberti) above them, and free-born citizens (cives) at the top. Free citizens were subdivided by class. The broadest, and earliest, division was between the patricians, who could trace their ancestry to one of the 100 patriarchs at the founding of the city, and the plebeians, who could not. 
This became less important in the later Republic, as some plebeian families became wealthy and entered politics, and some patrician families fell economically. Anyone, patrician or plebeian, who could count a consul as his ancestor was a noble (nobilis); a man who was the first of his family to hold the consulship, such as Marius or Cicero, was known as a novus homo ("new man") and ennobled his descendants. Patrician ancestry, however, still conferred considerable prestige, and many religious offices remained restricted to patricians. A class division originally based on military service became more important. Membership of these classes was determined periodically by the censors, according to property. The wealthiest were the Senatorial class, who dominated politics and command of the army. Next came the equestrians (equites, sometimes translated "knights"), originally those who could afford a warhorse, and who formed a powerful mercantile class. Several further classes, originally based on the military equipment their members could afford, followed, with the proletarii, citizens who had no property other than their children, at the bottom. Before the reforms of Marius they were ineligible for military service and are often described as being just above freed slaves in wealth and prestige. Voting power in the Republic depended on class. Citizens were enrolled in voting "tribes", but the tribes of the richer classes had fewer members than the poorer ones, all the proletarii being enrolled in a single tribe. Voting was done in class order, from top down, and stopped as soon as most of the tribes had been reached, so the poorer classes were often unable to cast their votes. Women in ancient Rome shared some basic rights with their male counterparts, but were not fully regarded as citizens and were thus not allowed to vote or take part in politics. At the same time the limited rights of women were gradually expanded (due to emancipation) and women reached freedom from pater familias, gained property rights and even had more juridical rights than their husbands, but still no voting rights, and were absent from politics. Allied foreign cities were often given the Latin Rights, an intermediary level between full citizens and foreigners (peregrini), which gave their citizens rights under Roman law and allowed their leading magistrates to become full Roman citizens. While there were varying degrees of Latin rights, the main division was between those cum suffragio ("with vote"; enrolled in a Roman tribe and able to take part in the comitia tributa) and sine suffragio ("without vote"; could not take part in Roman politics). Most of Rome's Italian allies were given full citizenship after the Social War of 91–88 BC, and full Roman citizenship was extended to all free-born men in the Empire by Caracalla in 212, with the exception of the dediticii, people who had become subject to Rome through surrender in war, and freed slaves. Education In the early Republic, there were no public schools, so boys were taught to read and write by their parents, or by educated slaves, called paedagogi, usually of Greek origin. The primary aim of education during this period was to train young men in agriculture, warfare, Roman traditions, and public affairs. Young boys learned much about civic life by accompanying their fathers to religious and political functions, including the Senate for the sons of nobles. 
The sons of nobles were apprenticed to a prominent political figure at the age of 16, and campaigned with the army from the age of 17. Educational practices were modified after the conquest of the Hellenistic kingdoms in the 3rd century BC and the resulting Greek influence, although Roman educational practices were still much different from Greek ones. If their parents could afford it, boys and some girls at the age of 7 were sent to a private school outside the home called a ludus, where a teacher (called a litterator or a magister ludi, and often of Greek origin) taught them basic reading, writing, arithmetic, and sometimes Greek, until the age of 11. Beginning at age 12, students went to secondary schools, where the teacher (now called a grammaticus) taught them about Greek and Roman literature. At the age of 16, some students went on to rhetoric school (where the teacher, usually Greek, was called a rhetor). Education at this level prepared students for legal careers, and required that the students memorise the laws of Rome. Government Initially, Rome was ruled by kings, who were elected from each of Rome's major tribes in turn. The exact nature of the king's power is uncertain. He may have held near-absolute power, or may have merely been the chief executive of the Senate and the people. In military matters, the king's authority (Imperium) was likely absolute. He was also the head of the state religion. In addition to the authority of the King, there were three administrative assemblies: the Senate, which acted as an advisory body for the King; the Comitia Curiata, which could endorse and ratify laws suggested by the King; and the Comitia Calata, which was an assembly of the priestly college that could assemble the people to bear witness to certain acts, hear proclamations, and declare the feast and holiday schedule for the next month. The class struggles of the Roman Republic resulted in an unusual mixture of democracy and oligarchy. The word republic comes from the Latin res publica, which literally translates to "public business". Roman laws traditionally could only be passed by a vote of the Popular assembly (Comitia Tributa). Likewise, candidates for public positions had to run for election by the people. However, the Roman Senate represented an oligarchic institution, which acted as an advisory body. In the Republic, the Senate held actual authority (auctoritas), but no real legislative power; it was technically only an advisory council. However, as the Senators were individually very influential, it was difficult to accomplish anything against the collective will of the Senate. New senators were chosen from among the most accomplished patricians by censors (Censura), who could also remove a senator from his office if he was found "morally corrupt"; a charge that could include bribery or, as under Cato the Elder, embracing one's wife in public. Later, under the reforms of the dictator Sulla, quaestors were made automatic members of the Senate, though most of his reforms did not survive. The Republic had no fixed bureaucracy, and collected taxes through the practice of tax farming. Government positions such as quaestor, aedile, or praefect were funded by the office-holder. To prevent any citizen from gaining too much power, new magistrates were elected annually and had to share power with a colleague. For example, under normal conditions, the highest authority was held by two consuls. In an emergency, a temporary dictator could be appointed. 
Throughout the Republic, the administrative system was revised several times to comply with new demands. In the end, it proved inefficient for controlling the ever-expanding dominion of Rome, contributing to the establishment of the Roman Empire. In the early Empire, the pretense of a republican form of government was maintained. The Roman Emperor was portrayed as only a princeps, or "first citizen", and the Senate gained legislative power and all legal authority previously held by the popular assemblies. However, the rule of the Emperors became increasingly autocratic, and the Senate was reduced to an advisory body appointed by the Emperor. The Empire did not inherit a set bureaucracy from the Republic, since the Republic did not have any permanent governmental structures apart from the Senate. The Emperor appointed assistants and advisers, but the state lacked many institutions, such as a centrally planned budget. Some historians have cited this as a significant reason for the decline of the Roman Empire. Military The early Roman army was, like those of other contemporary city-states influenced by Greek civilisation, a citizen militia that practised hoplite tactics. It was small and organised in five classes (in parallel to the comitia centuriata, the body of citizens organised politically), with three providing hoplites and two providing light infantry. The early Roman army was tactically limited and its stance during this period was essentially defensive. By the 3rd century BC, the Romans abandoned the hoplite formation in favour of a more flexible system in which smaller groups of 120 (or sometimes 60) men called maniples could manoeuvre more independently on the battlefield. Thirty maniples arranged in three lines with supporting troops constituted a legion, totalling between 4,000 and 5,000 men. The early Republican legion consisted of five sections: the three lines of manipular heavy infantry (hastati, principes and triarii), a force of light infantry (velites), and the cavalry (equites). With the new organisation came a new orientation toward the offensive and a much more aggressive posture toward adjoining city-states. At nominal full strength, an early Republican legion included 3,600 to 4,800 heavy infantry, several hundred light infantry, and several hundred cavalrymen. Until the late Republican period, the typical legionary was a property-owning citizen farmer from a rural area (an adsiduus) who served for particular (often annual) campaigns, and who supplied his own equipment. After 200 BC, economic conditions in rural areas deteriorated as manpower needs increased, so that the property qualifications for compulsory service were gradually reduced. Beginning in the 3rd century BC, legionaries were paid a stipend (stipendium). By the time of Augustus, the ideal of the citizen-soldier had been abandoned and the legions had become fully professional. At the end of the Civil War, Augustus reorganised Roman military forces, discharging soldiers and disbanding legions. He retained 28 legions, distributed through the provinces of the Empire. During the Principate, the tactical organisation of the Army continued to evolve. The auxilia remained independent cohorts, and legionary troops often operated as groups of cohorts rather than as full legions. A new and versatile type of unit, the cohortes equitatae, combined cavalry and legionaries in a single formation. 
They could be stationed at garrisons or outposts and could fight on their own as balanced small forces or combine with similar units as a larger, legion-sized force. This increase in organizational flexibility helped ensure the long-term success of Roman military forces. The Emperor Gallienus (253–268 AD) began a reorganisation that created the last military structure of the late Empire. Withdrawing some legionaries from the fixed bases on the border, Gallienus created mobile forces (the comitatenses or field armies) and stationed them behind and at some distance from the borders as a strategic reserve. The border troops (limitanei) stationed at fixed bases continued to be the first line of defence. The basic units of the field army were regimental: legiones or auxilia for infantry and vexillationes for cavalry. Nominal strengths may have been 1,200 men for infantry regiments and 600 for cavalry, but actual troop levels could have been much lower—800 infantry and 400 cavalry. Many infantry and cavalry regiments operated in pairs under the command of a comes. Field armies included regiments recruited from allied tribes and known as foederati. By 400 AD, foederati regiments had become permanently established units of the Roman army, paid and equipped by the Empire, led by a Roman tribune and used just as Roman units were used. The Empire also used groups of barbarians to fight along with the legions as allies without integration into the field armies, under overall command of a Roman general, but led by their own officers. Military leadership evolved over the course of the history of Rome. Under the monarchy, the hoplite armies were led by the kings. During the early and middle Roman Republic, military forces were under the command of one of the two elected consuls for the year. During the later Republic, members of the Roman Senatorial elite, as part of the normal sequence of elected public offices known as the cursus honorum, would have served first as quaestor (often posted as deputies to field commanders), then as praetor. Following the end of a term as praetor or consul, a Senator might be appointed by the Senate as a propraetor or proconsul (depending on the highest office held before) to govern a foreign province. Under Augustus, whose most important political priority was to place the military under a permanent and unitary command, the Emperor was the legal commander of each legion but exercised that command through a legatus (legate) he appointed from the Senatorial elite. In a province with a single legion, the legate commanded the legion (legatus legionis) and served as provincial governor, while in a province with more than one legion, each legion was commanded by a legate and the legates were commanded by the provincial governor (also a legate but of higher rank). During the later stages of the Imperial period (beginning perhaps with Diocletian), the Augustan model was abandoned. Provincial governors were stripped of military authority, and command of the armies in a group of provinces was given to generals (duces) appointed by the Emperor. These were no longer members of the Roman elite but men who came up through the ranks and had seen much practical soldiering. With increasing frequency, these men attempted (sometimes successfully) to usurp the positions of the Emperors. Decreased resources, increasing political chaos and civil war eventually left the Western Empire vulnerable to attack and takeover by neighbouring barbarian peoples. 
Roman navy Less is known about the Roman navy than the Roman army. Prior to the middle of the 3rd century BC, officials known as duumviri navales commanded a fleet of twenty ships used mainly to control piracy. This fleet was given up in 278 BC and replaced by allied forces. The First Punic War required that Rome build large fleets, and it did so largely with the assistance of and financing from allies. This reliance on allies continued to the end of the Roman Republic. The quinquereme was the main warship on both sides of the Punic Wars and remained the mainstay of Roman naval forces until replaced by the time of Caesar Augustus by lighter and more manoeuvrable vessels. As compared with a trireme, the quinquereme permitted the use of a mix of experienced and inexperienced crewmen (an advantage for a primarily land-based power), and its lesser manoeuvrability permitted the Romans to adopt and perfect boarding tactics using a troop of about 40 marines in lieu of the ram. Ships were commanded by a navarch, a rank equal to a centurion, who was usually not a citizen. Potter suggests that because the fleet was dominated by non-Romans, the navy was considered non-Roman and allowed to atrophy in times of peace. Information suggests that by the time of the late Empire (350 AD), the Roman navy comprised several fleets including warships and merchant vessels for transportation and supply. Warships were oared sailing galleys with three to five banks of oarsmen. Fleet bases included such ports as Ravenna, Arles, Aquileia, Misenum and the mouth of the Somme River in the West and Alexandria and Rhodes in the East. Flotillas of small river craft (classes) were part of the limitanei (border troops) during this period, based at fortified river harbours along the Rhine and the Danube. That prominent generals commanded both armies and fleets suggests that naval forces were treated as auxiliaries to the army and not as an independent service. The details of command structure and fleet strengths during this period are not well known, although fleets were commanded by prefects. Economy Ancient Rome commanded a vast area of land, with tremendous natural and human resources. As such, Rome's economy remained focused on farming and trade. Agricultural free trade changed the Italian landscape, and by the 1st century BC, vast grape and olive estates had supplanted the yeoman farmers, who were unable to match the imported grain price. The annexation of Egypt, Sicily and Tunisia in North Africa provided a continuous supply of grain. In turn, olive oil and wine were Italy's main exports. Two-tier crop rotation was practised, but farm productivity was low, around 1 ton per hectare. Industrial and manufacturing activities were small. The largest such activities were the mining and quarrying of stones, which provided basic construction materials for the buildings of that period. In manufacturing, production was on a relatively small scale, and generally consisted of workshops and small factories that employed at most dozens of workers. However, some brick factories employed hundreds of workers. The economy of the early Republic was largely based on smallholding and paid labour. However, foreign wars and conquests made slaves increasingly cheap and plentiful, and by the late Republic, the economy was largely dependent on slave labour for both skilled and unskilled work. Slaves are estimated to have constituted around 20% of the Roman Empire's population at this time and 40% in the city of Rome. 
Only in the Roman Empire, when the conquests stopped and the prices of slaves increased, did hired labour become more economical than slave ownership. Although barter was used in ancient Rome, and often used in tax collection, Rome had a very developed coinage system, with brass, bronze, and precious metal coins in circulation throughout the Empire and beyond—some have even been discovered in India. Before the 3rd century BC, copper was traded by weight, measured in unmarked lumps, across central Italy. The original copper coins (as) had a face value of one Roman pound of copper, but weighed less. Thus, Roman money's utility as a unit of exchange consistently exceeded its intrinsic value as metal. After Nero began debasing the silver denarius, its legal value was an estimated one-third greater than its intrinsic value. Horses were expensive and other pack animals were slower. Mass trade on the Roman roads connected military posts, where Roman markets were centered. These roads were designed for wheels. As a result, commodities were transported between Roman regions by road, but the volume of trade increased greatly with the rise of Roman maritime trade in the 2nd century BC. During that period, a trading vessel took less than a month to complete a trip from Gades to Alexandria via Ostia, spanning the entire length of the Mediterranean. Transport by sea was around 60 times cheaper than by land, so the volume for such trips was much larger. Some economists consider the Roman Empire a market economy, similar in its degree of capitalistic practices to 17th century Netherlands and 18th century England. Family The basic units of Roman society were households and families. Groups of households connected through the male line formed a family (gens), based on blood ties, a common ancestry or adoption. During the Roman Republic, some powerful families, or Gentes Maiores, came to dominate political life. Families were headed by their oldest male citizen, the pater familias (father of the family), who held lawful authority (patria potestas, "father's power") over wives, sons, daughters, and slaves of the household, and the family's wealth. The extreme expressions of this power—the selling or killing of family members for moral or civil offences, including simple disobedience—were very rarely exercised, and were forbidden in the Imperial era. A pater familias had moral and legal duties towards all family members. Even the most despotic pater familias was expected to consult senior members of his household and gens over matters that affected the family's well-being and reputation. Traditionally, such matters were regarded as outside the purview of the state and its magistrates; under the emperors, they were increasingly subject to state interference and legislation. Once accepted into their birth family by their fathers, children were potential heirs. They could not be lawfully given away, or sold into slavery. If parents were unable to care for their child, or if its paternity was in doubt, they could resort to infant exposure (Boswell translates this as being "offered" up to care by the gods or strangers). If a deformed or sickly newborn was patently "unfit to live", killing it was a duty of the pater familias. A citizen father who exposed a healthy freeborn child was not punished, but automatically lost his potestas over that child. Abandoned children were sometimes adopted; some would have been sold into slavery. Slavery was near-ubiquitous and almost universally accepted. 
In the early Republic, citizens in debt were allowed to sell their labour, and perhaps their sons, to their creditor in a limited form of slavery called nexum, but this was abolished in the middle Republic. Freedom was considered a natural and proper state for citizens; slaves could be lawfully freed, with consent and support of their owners, and still serve their owners' family and financial interests, as freedmen or freed women. This was the basis of the client-patron relationship, one of the most important features of Rome's economy and society. In law, a pater familias held potestas over his adult sons with their own households. This could give rise to legal anomalies, such as adult sons also having the status of minors. No man could be considered a pater familias, nor could he truly hold property under law, while his own father lived. During Rome's early history, married daughters came under the control (manus) of their husbands' pater familias. By the late Republic, most married women retained lawful connection to their birth family, though any children from the marriage belonged to their husbands' families. The mother or an elderly relative often raised both boys and girls. Roman moralists held that marriage and child-raising fulfilled a basic duty to family, gens, and the state. Multiple remarriages were not uncommon. Fathers usually began seeking husbands for their daughters when these reached an age between twelve and fourteen, but most commoner-class women stayed single until their twenties, and in general seem to have been far more independent than wives of the elite. Divorce required the consent of one party, along with the return of any dowry. Both parents had power over their children during their minority and adulthood, but husbands had much less control over their wives. Roman citizen women held a restricted form of citizenship; they could not vote but were protected by law. They ran families, could own and run businesses, own and cultivate land, write their own wills, and plead in court on their own behalf, or on behalf of others, all under dispensation of the courts and the nominal supervision of a senior male relative. Throughout the late Republican and Imperial eras, a declining birthrate among the elite, and a corresponding increase among commoners, was a cause of concern for many gentes; Augustus tried to address this through state intervention, offering rewards to any woman who gave birth to three or more children, and penalising the childless. The latter was much resented, and the former had seemingly negligible results. Aristocratic women seem to have been increasingly disinclined to childbearing; it carried a high risk of mortality to mothers, and a great deal of inconvenience thereafter. Time and dates Roman hours were counted ordinally from dawn to dawn. Thus, if sunrise was at 6 am, then 6 to 7 am was called the "first hour". Midday was called meridies and it is from this word that the terms am (ante meridiem) and pm (post meridiem) stem. The English word "noon" comes from nona ("ninth (hour)"), which referred to 3 pm in ancient Rome. The Romans had clocks (horologia), which included giant public sundials (solaria) and water clocks (clepsydrae). The ancient Roman week originally had eight days, which were identified by letters A to H, with the eighth day being the nundinum or market day, a kind of weekend when farmers sold their produce on the streets. 
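The hour-counting scheme described above lends itself to a short worked example. The Python sketch below is purely illustrative and not drawn from any ancient source: it assumes equinoctial conditions with sunrise at 6 am and sunset at 6 pm (in reality the twelve daylight hours stretched or shrank with the season), and the function name roman_hour is simply a convenient label.

```python
from datetime import date, datetime, time

# Roman daylight was divided into twelve equal hours between sunrise and
# sunset, so the length of an "hour" varied with the season.  Given assumed
# sunrise and sunset times, return the ordinal daylight hour for a modern
# clock time (hora prima, secunda, ... duodecima).
ORDINALS = ["prima", "secunda", "tertia", "quarta", "quinta", "sexta",
            "septima", "octava", "nona", "decima", "undecima", "duodecima"]

def roman_hour(now: time, sunrise: time = time(6, 0), sunset: time = time(18, 0)) -> str:
    day = date(2000, 1, 1)                   # arbitrary date, used only for time arithmetic
    start = datetime.combine(day, sunrise)
    end = datetime.combine(day, sunset)
    t = datetime.combine(day, now)
    if not (start <= t < end):
        return "night (not a daylight hour)"
    hour_length = (end - start) / 12         # one seasonal Roman hour
    index = int((t - start) / hour_length)   # 0 .. 11
    return f"hora {ORDINALS[index]}"

print(roman_hour(time(6, 30)))    # hora prima  -> "6 to 7 am was called the first hour"
print(roman_hour(time(14, 30)))   # hora nona   -> the ninth hour, which ends around 3 pm
```

Under these assumptions the first hour runs from 6 to 7 am and the ninth hour (nona) ends around 3 pm, matching the derivation of the English word "noon" noted above.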
The seven-day week, first introduced from the East during the early Empire, was officially adopted during the reign of Constantine. Romans named week days after celestial bodies from at least the 1st century AD. Roman months had three important days: the calends (first day of each month, always in plural), the ides (13th or 15th of the month), and the nones (ninth day before the ides, inclusive, i.e. 5th or 7th of the month). Other days were counted backwards from the next one of these days. The Roman year originally had ten months from Martius (March) to December, with the winter period not included in the calendar. The first four months were named after gods (Martius, Aprilis, Maius, Junius) and the others were numbered (Quintilis, Sextilis, September, October, November, and December). Numa Pompilius, the second king of Rome (716–673 BC), is said to have introduced the months of January and February, both also named after gods, beginning the 12-month calendar still in use today. In 44 BC, the month Quintilis was renamed to Julius (July) after Julius Caesar and in 8 BC, Sextilis was renamed to Augustus (August) after Augustus Caesar. The Romans had several ways of tracking years. One widespread way was the consular dating, which identified years by the two consuls who ruled each year. Another way, introduced in the late 3rd century AD, was counting years from the indictio, a 15-year period based on the announcement of the delivery of food and other goods to the government. Another way, less popular but more similar to present day, was ab urbe condita, which counted years from the mythical foundation of Rome in 753 BC. Culture Life in ancient Rome revolved around the city of Rome, located on seven hills. The city had a vast number of monumental structures like the Colosseum, the Trajan's Forum and the Pantheon. It had theatres, gymnasiums, marketplaces, functional sewers, bath complexes complete with libraries and shops, and fountains with fresh drinking water supplied by hundreds of miles of aqueducts. Throughout the territory under the control of ancient Rome, residential architecture ranged from modest houses to country villas. In the capital city of Rome, there were imperial residences on the elegant Palatine Hill, from which the word palace derives. The low plebeian and middle equestrian classes lived in the city center, packed into apartments, or insulae, which were almost like modern ghettos. These areas, often built by upper class property owners to rent, were often centred upon collegia or taberna. These people, provided with a free supply of grain, and entertained by gladiatorial games, were enrolled as clients of patrons among the upper class patricians, whose assistance they sought and whose interests they upheld. Language The native language of the Romans was Latin, an Italic language the grammar of which relies little on word order, conveying meaning through a system of affixes attached to word stems. Its alphabet was based on the Etruscan alphabet, which was in turn based on the Greek alphabet. Although surviving Latin literature consists almost entirely of Classical Latin, an artificial and highly stylised and polished literary language from the 1st century BC, the spoken language of the Roman Empire was Vulgar Latin, which significantly differed from Classical Latin in grammar and vocabulary, and eventually in pronunciation. 
Speakers of Latin could understand both until the 7th century when spoken Latin began to diverge so much that 'Classical' or 'Good Latin' had to be learned as a second language. While Latin remained the main written language of the Roman Empire, Greek came to be the language spoken by the well-educated elite, as most of the literature studied by Romans was written in Greek. Most of the emperors were bilingual but had a preference for Latin in the public sphere for political reasons, a practice that first started during the Punic Wars. In the eastern part of the Roman Empire (and later the Eastern Roman Empire), Latin was never able to replace Greek, a legacy of the Hellenistic period. Justinian was the last emperor to use Latin as the language of government; after his reign, Greek officially took over. The expansion of the Roman Empire spread Latin throughout Europe, and Vulgar Latin evolved into many distinct Romance languages. Religion Archaic Roman religion, at least concerning the gods, was made up not of written narratives, but rather of complex interrelations between gods and humans. Unlike in Greek mythology, the gods were not personified, but were vaguely defined sacred spirits called numina. Romans also believed that every person, place or thing had its own genius, or divine soul. During the Roman Republic, Roman religion was organised under a strict system of priestly offices, which were held by men of senatorial rank. The College of Pontifices was the uppermost body in this hierarchy, and its chief priest, the Pontifex Maximus, was the head of the state religion. Flamens took care of the cults of various gods, while augurs were trusted with taking the auspices. The sacred king took on the religious responsibilities of the deposed kings. In the Roman Empire, deceased emperors who had ruled well were deified by their successors and the Senate, and the formalised imperial cult became increasingly prominent. As contact with the Greeks increased, the old Roman gods became increasingly associated with Greek gods. Under the Empire, the Romans absorbed the mythologies of their conquered subjects, often leading to situations in which the temples and priests of traditional Italian deities existed side by side with those of foreign gods. Beginning with Emperor Nero in the 1st century AD, Roman official policy towards Christianity was negative, and at some point, being a Christian could be punishable by death. Under Emperor Diocletian, the persecution of Christians reached its peak. However, it became an officially supported religion in the Roman state under Diocletian's successor, Constantine I, with the signing of the Edict of Milan in 313, and quickly became dominant. All religions except Christianity were prohibited in 391 AD by an edict of Emperor Theodosius I. Ethics and morality As in many ancient cultures, concepts of ethics and morality in Rome, while sharing some commonalities with modern society, differed greatly in several important ways. Because ancient civilisations like Rome were under constant threat of attack from marauding tribes, their culture was necessarily militaristic with martial skills being a prized attribute. Whereas modern societies consider compassion a virtue, Roman society considered compassion a vice, a moral defect. Indeed, one of the primary purposes of the gladiatorial games was to inoculate Roman citizens against this weakness. 
Romans instead prized virtues such as courage and conviction (virtus), a sense of duty to one's people, moderation and avoiding excess (moderatio), forgiveness and understanding (clementia), fairness (severitas), and loyalty (pietas). Roman society had well-established and restrictive norms related to sexuality, though as with many societies, the lion's share of the responsibilities fell on women. Women were generally expected to be monogamous having only a single husband during their life (univira), though this was much less regarded by the elite, especially under the empire. Women were expected to be modest in public avoiding any provocative appearance and to demonstrate absolute fidelity to their husbands (pudicitia). Indeed, wearing a veil was a common expectation to preserve modesty. Sex outside of marriage was generally frowned upon for men and women and indeed was made illegal during the imperial period. Nevertheless, prostitution was an accepted and regulated practice. Public demonstrations of death, violence, and brutality were used as a source of entertainment in Roman communities; however it was also a way to maintain social order, demonstrate power, and signify communal unity. Art, music and literature Roman painting styles show Greek influences, and surviving examples are primarily frescoes used to adorn the walls and ceilings of country villas, though Roman literature includes mentions of paintings on wood, ivory, and other materials. Several examples of Roman painting have been found at Pompeii, and from these art historians divide the history of Roman painting into four periods. The first style of Roman painting was practised from the early 2nd century BC to the early- or mid-1st century BC. It was mainly composed of imitations of marble and masonry, though sometimes including depictions of mythological characters. The second style began during the early 1st century BC and attempted to depict realistically three-dimensional architectural features and landscapes. The third style occurred during the reign of Augustus (27 BC – 14 AD), and rejected the realism of the second style in favour of simple ornamentation. A small architectural scene, landscape, or abstract design was placed in the center with a monochrome background. The fourth style, which began in the 1st century AD, depicted scenes from mythology, while retaining architectural details and abstract patterns. Portrait sculpture used youthful and classical proportions, evolving later into a mixture of realism and idealism. During the Antonine and Severan periods, ornate hair and bearding, with deep cutting and drilling, became popular. Advancements were also made in relief sculptures, usually depicting Roman victories. Roman music was largely based on Greek music, and played an important part in many aspects of Roman life. In the Roman military, musical instruments such as the tuba (a long trumpet) or the cornu were used to give various commands, while the buccina (possibly a trumpet or horn) and the lituus (probably an elongated J-shaped instrument), were used in ceremonial capacities. Music was used in the Roman amphitheatres between fights and in the odea, and in these settings is known to have featured the cornu and the hydraulis (a type of water organ). Most religious rituals featured musical performances. Some music historians believe that music was used at almost all public ceremonies. 
The graffiti, brothels, paintings, and sculptures found in Pompeii and Herculaneum suggest that the Romans had a sex-saturated culture. Literature and Libraries Latin literature was, from its start, influenced heavily by Greek authors. Some of the earliest extant works are of historical epics telling the early military history of Rome. As the Republic expanded, authors began to produce poetry, comedy, history, and tragedy. Ancient Rome's literary contributions are still recognized today and the works by ancient Roman authors were available in bookshops as well as in public and private libraries. Many scholars and statesmen of ancient Rome cultivated private libraries that were used both as demonstrations of knowledge and displays of wealth and power. Private libraries were so commonly encountered that Vitruvius wrote about where libraries should be situated within a villa. In addition to numerous private libraries, the Roman Empire saw the establishment of early public libraries. Although Julius Caesar had intended to establish public libraries to further establish Rome as a great cultural center like Athens and Alexandria, he died before this was accomplished. Caesar's former lieutenant, Gaius Asinius Pollio, took up the project and opened the first public library in Rome in the Atrium Libertatis. Emperors Augustus, Tiberius, Vespasian, Domitian, and Trajan also founded or expanded public libraries in Rome during their reigns. These included the Ulpian Library in Trajan's Forum and libraries in the Temple of Apollo Palatinus, the Temple of Peace in the Roman Forum, the Temple of Divus Augustus, which was dedicated to Minerva when it was  rebuilt under Emperor Domitian's orders. Some of these, including the library at the Temple of Divus Augustus also served as archives. By the fall of the Western Roman Empire, the city of Rome had more than two dozen public libraries. Rome was not the only city to benefit from such institutions. As the Roman Empire spread, public libraries were established in other major cities and cultural centers including Ephesos, Athens, and Timgad. Most public libraries of this time were not built expressly for that purpose, instead sharing space in temples, baths, and other community buildings. In addition to serving as repositories for books, public libraries hosted orations by authors. These recitations served as social gatherings and allowed those who may not be literate to be entertained by poetry, epics, philosophical treatises, and other works. Cuisine Ancient Roman cuisine changed over the long duration of this ancient civilisation. Dietary habits were affected by the influence of Greek culture, the political changes from Kingdom to Republic to Empire, and the Empire's enormous expansion, which exposed Romans to many new, provincial culinary habits and cooking techniques. In the beginning the differences between social classes were relatively small, but disparities evolved with the Empire's growth. Men and women drank wine with their meals. The ancient Roman diet included many items that are staples of modern Italian cooking. 
Pliny the Elder discussed more than 30 varieties of olive, 40 kinds of pear, figs (native and imported from Africa and the eastern provinces), and a wide variety of vegetables, including carrots (of different colours, but not orange) as well as celery, garlic, some flower bulbs, cabbage and other brassicas (such as kale and broccoli), lettuce, endive, onion, leek, asparagus, radishes, turnips, parsnips, beets, green peas, chard, cardoons, olives, and cucumber. However, some foods now considered characteristic of modern Italian cuisine were not used. In particular, spinach and eggplant (aubergine) were introduced later from the Arab world, and tomatoes, potatoes, capsicum peppers, and maize (the modern source of polenta) only appeared in Europe following the discovery of the New World and the Columbian Exchange. The Romans knew of rice, but it was very rarely available to them. There were also few citrus fruits. Butcher's meat such as beef was an uncommon luxury. The most popular meat was pork, especially sausages. Fish was more common than meat, with a sophisticated aquaculture and large-scale industries devoted to oyster farming. The Romans also engaged in snail farming and oak grub farming. Some fish were greatly esteemed and fetched high prices, such as mullet raised in the fishery at Cosa, and "elaborate means were invented to assure its freshness". Traditionally, a breakfast called ientaculum was served at dawn. At mid-day to early afternoon, Romans ate cena, the main meal of the day, and at nightfall a light supper called vesperna. With the increased importation of foreign foods, the cena grew larger in size and included a wider range of foods. Thus, it gradually shifted to the evening, while the vesperna was abandoned completely over the course of the years. The mid-day meal prandium became a light meal to hold one over until cena. Fashion The toga, a common garment during the era of Julius Caesar, was gradually abandoned by all social classes of the Empire. By the early 4th century, the toga had become just a garment worn by senators in the Senate and at ceremonial events. During the 4th century, the toga was replaced by the paenula (a garment similar to a poncho) as the everyday garment of the Romans, from the lower classes to the upper classes. Another garment that was popular among the Romans in the later years of the Western Roman Empire was the pallium, which was mostly worn by philosophers and scholars in general. Due to external influences, mainly from the Germanic peoples, the Romans adopted tunics very similar to those used by the Germanic peoples with whom they interacted in the final years of the Western Empire, and also adopted trousers and hats like the pileus pannonicus. In the Late Empire the paludamentum (a type of military clothing) was used only by the Emperor of Rome (since the reign of Augustus, the first emperor), while the dalmatic (also used by the Christian clergy) began to spread throughout the empire. Games and recreation The youth of Rome had several forms of athletic play and exercise. Play for boys, such as jumping, wrestling, boxing, and racing, was supposed to prepare them for active military service. In the countryside, pastimes for the wealthy also included fishing and hunting. The Romans also had several forms of ball playing, including one resembling handball. Dice games, board games, and gambling games were popular pastimes. For the wealthy, dinner parties presented an opportunity for entertainment, sometimes featuring music, dancing, and poetry readings. 
The majority, less well-off, sometimes enjoyed similar parties through clubs or associations, but for most Romans, recreational dining usually meant patronising taverns. Children entertained themselves with toys and such games as leapfrog. Public games and spectacles were sponsored by leading Romans who wished to advertise their generosity and court popular approval; in Rome or its provinces, this usually meant the emperor or his governors. Venues in Rome and the provinces were developed specifically for public games. Rome's Colosseum was built in 70 AD under the Roman emperor Vespasian and opened in 80 AD to host gladiatorial combats and other events. Gladiators had an exotic and inventive variety of arms and armour. They sometimes fought to the death, but more often to an adjudicated victory, usually in keeping with the mood of the watching crowd. Shows of exotic animals were popular in their own right, but sometimes animals were pitted against human beings, either armed professionals or unarmed criminals who had been condemned to public death. Chariot racing was extremely popular among all classes. In Rome, these races were usually held at the Circus Maximus, which had been purpose-built for chariot and horse-racing and, as Rome's largest public place, was also used for festivals and animal shows. It could seat around 150,000 people. The charioteers raced in teams, identified by their colours; some aficionados were members of extremely, even violently, partisan circus factions. Technology Ancient Rome boasted impressive technological feats, using many advancements that were lost in the Middle Ages and not rivalled again until the 19th and 20th centuries. An example of this is insulated glazing, which was not invented again until the 1930s. Many practical Roman innovations were adopted from earlier Greek designs. Advancements were often divided and based on craft. Artisans guarded technologies as trade secrets. Roman civil engineering and military engineering constituted a large part of Rome's technological superiority and legacy, and contributed to the construction of hundreds of roads, bridges, aqueducts, public baths, theatres and arenas. Many monuments, such as the Colosseum, Pont du Gard, and Pantheon, remain as testaments to Roman engineering and culture. The Romans were renowned for their architecture, which is grouped with Greek traditions into "Classical architecture". Although there were many differences from Greek architecture, Rome borrowed heavily from Greece in adhering to strict, formulaic building designs and proportions. Aside from two new orders of columns, composite and Tuscan, and from the dome, which was derived from the Etruscan arch, Rome had relatively few architectural innovations until the end of the Republic. In the 1st century BC, Romans started to use Roman concrete widely. Concrete had been invented in the late 3rd century BC. It was a powerful cement derived from pozzolana, and soon supplanted marble as the chief Roman building material and allowed many daring architectural forms. Also in the 1st century BC, Vitruvius wrote De architectura, possibly the first complete treatise on architecture in history. In the late 1st century BC, Rome also began to use glassblowing soon after its invention in Syria about 50 BC. Mosaics took the Empire by storm after samples were retrieved during Lucius Cornelius Sulla's campaigns in Greece. 
The Romans also largely built using timber, causing a rapid decline of the woodlands surrounding Rome and in much of the Apennine Mountains due to the demand for wood for construction, shipbuilding and fuel. The first evidence of long-distance wood trading comes from the discovery of wood planks, felled between AD 40 and 60, coming from the Jura mountains in northeastern France and ending up more than away, in the foundations of a lavish portico that was part of a vast wealthy patrician villa, in Central Rome. It is suggested that timber, around long, came up to Rome via the Tiber River on ships travelling across the Mediterranean Sea from the confluence of the Saône and Rhône rivers in what is now the city of Lyon in present-day France. With solid foundations and good drainage, Roman roads were known for their durability and many segments of the Roman road system were still in use a thousand years after the fall of Rome. The construction of a vast and efficient travel network throughout the Empire dramatically increased Rome's power and influence. These roads allowed Roman legions to be deployed rapidly, with predictable marching times between key points of the empire, no matter the season. These highways also had enormous economic significance, solidifying Rome's role as a trading crossroads—the origin of the saying "all roads lead to Rome". The Roman government maintained a system of way stations, known as the cursus publicus, and established a system of horse relays allowing a dispatch to travel up to a day. The Romans constructed numerous aqueducts to supply water to cities and industrial sites and to aid in their agriculture. By the third century, the city of Rome was supplied by 11 aqueducts with a combined length of . The Romans also made major advancements in sanitation. Romans were particularly famous for their public baths, called thermae, which were used for both hygienic and social purposes. Many Roman houses had flush toilets and indoor plumbing, and a complex sewer system, the Cloaca Maxima, was used to drain the local marshes and carry waste into the Tiber. Some historians have speculated that lead pipes in the sewer and plumbing systems led to widespread lead poisoning, which contributed to the fall of Rome; however, lead content would have been minimised. Legacy Ancient Rome is the progenitor of Western civilisation. The customs, religion, law, technology, architecture, political system, military, literature, languages, alphabet, government and many other factors and aspects of Western civilisation are all inherited from Roman advancements. The rediscovery of Roman culture revitalised Western civilisation, playing a role in the Renaissance and the Age of Enlightenment. Historiography Primary and secondary sources The two longest ancient accounts of Roman history, the histories of Livy and Dionysius of Halicarnassus, were composed about 500 years after the traditional date of the founding of the Republic and 200 years after the defeat of Hannibal. Although there has been a diversity of works on ancient Roman history, many of them are lost. As a result of this loss, there are gaps in Roman history, which are filled by unreliable works, such as the Historia Augusta and other books from obscure authors. Historians used their works to laud Roman culture and customs and to flatter their patrons. Caesar wrote his own accounts of his military campaigns in Gaul and during the Civil War in part to impress his contemporaries. 
In the Empire, the biographies of famous men and early emperors flourished, examples being The Twelve Caesars of Suetonius and Plutarch's Parallel Lives. Other major works of Imperial times were those of Livy and Tacitus.
Polybius – The Histories
Sallust – Bellum Catilinae and Bellum Jugurthinum
Julius Caesar – De Bello Gallico and De Bello Civili
Livy – Ab urbe condita
Dionysius of Halicarnassus – Roman Antiquities
Pliny the Elder – Naturalis Historia
Josephus – The Jewish War
Suetonius – The Twelve Caesars (De Vita Caesarum)
Tacitus – Annales and Histories
Plutarch – Parallel Lives (a series of biographies of famous Roman and Greek men)
Cassius Dio – Historia Romana
Herodian – History of the Roman Empire since Marcus Aurelius
Ammianus Marcellinus – Res Gestae
Interest in studying, and idealising, ancient Rome became prevalent during the Italian Renaissance. Edward Gibbon's The History of the Decline and Fall of the Roman Empire "began the modern study of Roman history in the English-speaking world". Barthold Georg Niebuhr was a founder of the critical examination of ancient Roman history and wrote The Roman History, tracing the period until the First Punic War. During the Napoleonic era, The History of Romans by Victor Duruy highlighted the Caesarean period popular at the time. History of Rome, Roman constitutional law and Corpus Inscriptionum Latinarum, all by Theodor Mommsen, became milestones.
Edward Gibbon – The History of the Decline and Fall of the Roman Empire
John Bagnall Bury – History of the Later Roman Empire
Michael Grant – The Roman World
Barbara Levick – Claudius
Barthold Georg Niebuhr
Michael Rostovtzeff
Howard Hayes Scullard – The History of the Roman World
Ronald Syme – The Roman Revolution
Adrian Goldsworthy – Caesar: The Life of a Colossus and How Rome Fell
Mary Beard – SPQR: A History of Ancient Rome
See also
Outline of classical studies
Outline of ancient Rome
Timeline of Roman history
Regions in Greco-Roman antiquity
List of ancient Romans
List of Roman emperors
List of Roman civil wars and revolts
Byzantine Empire
Roman army
List of archaeologically attested women from the ancient Mediterranean region
Human population planning
Human population planning is the practice of managing the growth rate of a human population. The practice, traditionally referred to as population control, had historically been implemented mainly with the goal of increasing population growth, though from the 1950s to the 1980s, concerns about overpopulation and its effects on poverty, the environment and political stability led to efforts to reduce population growth rates in many countries. More recently, however, several countries such as China, Japan, South Korea, Russia, Iran, Italy, Spain, Finland, Hungary and Estonia have begun efforts to boost birth rates once again, generally as a response to looming demographic crises. While population planning can involve measures that improve people's lives by giving them greater control of their reproduction, a few programs, such as the Chinese government's "one-child policy and two-child policy", have employed coercive measures. Types Three types of population planning policies pursued by governments can be identified: Increasing or decreasing the overall population growth rate. Increasing or decreasing the relative population growth of a subgroup of people, such as those of high or low intelligence or those with special abilities or disabilities. Policies that aim to boost relative growth rates are known as positive eugenics; those that aim to reduce relative growth rates are known as negative eugenics. Attempts to ensure that all population groups of a certain type (e.g. all social classes within a society) have the same average rate of population growth. Methods While a specific population planning practice may be legal/mandated in one country, it may be illegal or restricted in another, indicative of the controversy surrounding this topic. Increasing population growth Population policies that are intended to increase a population or subpopulation growth rates may use practices such as: Higher taxation of married couples who have no, or too few, children Politicians imploring the populace to have bigger families Tax breaks and subsidies for families with children Loosening of immigration restrictions, and/or mass recruitment of foreign workers by the government History Ancient times through Middle Ages A number of ancient writers have reflected on the issue of population. At about 300 BC, the Indian political philosopher Chanakya (c. 350-283 BC) considered population a source of political, economic, and military strength. Though a given region can house too many or too few people, he considered the latter possibility to be the greater evil. Chanakya favored the remarriage of widows (which at the time was forbidden in India), opposed taxes encouraging emigration, and believed in restricting asceticism to the aged. In ancient Greece, Plato (427-347 BC) and Aristotle (384-322 BC) discussed the best population size for Greek city-states such as Sparta, and concluded that cities should be small enough for efficient administration and direct citizen participation in public affairs, but at the same time needed to be large enough to defend themselves against hostile neighbors. In order to maintain a desired population size, the philosophers advised that procreation, and if necessary, immigration, should be encouraged if the population size was too small. Emigration to colonies would be encouraged should the population become too large. Aristotle concluded that a large increase in population would bring, "certain poverty on the citizenry and poverty is the cause of sedition and evil." 
To halt rapid population increase, Aristotle advocated the use of abortion and the exposure of newborns (that is, infanticide). Confucius (551–478 BC) and other Chinese writers cautioned that "excessive growth may reduce output per worker, repress levels of living for the masses and engender strife." Some Chinese writers may also have observed that "mortality increases when food supply is insufficient; that premature marriage makes for high infantile mortality rates, that war checks population growth." It is particularly noteworthy that Han Fei (281–233 BC), long before Malthus, had already noted the conflict between a population growing at an exponential rate and a food supply growing at an arithmetic rate. Not only did he conclude that overpopulation was the root cause of the intensification of political and social conflict, but he also reduced traditional morality to an evolutionary product of material surplus rather than something with objective value. Nevertheless, during the Han Dynasty, the emperors enacted a large number of laws to encourage early marriage and childbirth.

Ancient Rome, especially in the time of Augustus (63 BC–AD 14), needed manpower to acquire and administer the vast Roman Empire. A series of laws were instituted to encourage early marriage and frequent childbirth. The Lex Julia (18 BC) and the Lex Papia Poppaea (AD 9) are two well-known examples of such laws, which, among other things, provided tax breaks and preferential treatment when applying for public office for those who complied with the laws. Severe limitations were imposed on those who did not. For example, the surviving spouse of a childless couple could only inherit one-tenth of the deceased's fortune, while the rest was taken by the state. These laws encountered resistance from the population, which led to the disregard of their provisions and to their eventual abolition.

Tertullian, an early Christian author (c. AD 160–220), was one of the first to describe famine and war as factors that can prevent overpopulation. He wrote: "The strongest witness is the vast population of the earth to which we are a burden and she scarcely can provide for our needs; as our demands grow greater, our complaints against Nature's inadequacy are heard by all. The scourges of pestilence, famine, wars, and earthquakes have come to be regarded as a blessing to overcrowded nations since they serve to prune away the luxuriant growth of the human race."

Ibn Khaldun, a North African polymath (1332–1406), considered population changes to be connected to economic development, linking high birth rates and low death rates to times of economic upswing, and low birth rates and high death rates to economic downswing. Khaldun concluded that high population density, rather than high absolute population numbers, was desirable to achieve a more efficient division of labour and cheap administration.

During the Middle Ages in Christian Europe, population issues were rarely discussed in isolation. Attitudes were generally pro-natalist, in line with the Biblical command, "Be ye fruitful and multiply." When Russian explorer Otto von Kotzebue visited the Marshall Islands in Micronesia in 1817, he noted that Marshallese families practiced infanticide after the birth of a third child as a form of population planning due to frequent famines.

16th and 17th centuries
European cities grew more rapidly than before, and throughout the 16th century and early 17th century discussions on the advantages and disadvantages of population growth were frequent.
Niccolò Machiavelli, an Italian Renaissance political philosopher, wrote, "When every province of the world so teems with inhabitants that they can neither subsist where they are nor remove themselves elsewhere... the world will purge itself in one or another of these three ways," listing floods, plague and famine. Martin Luther concluded, "God makes children. He is also going to feed them." Jean Bodin, a French jurist and political philosopher (1530–1596), argued that larger populations meant more production and more exports, increasing the wealth of a country. Giovanni Botero, an Italian priest and diplomat (1540–1617), emphasized that "the greatness of a city rests on the multitude of its inhabitants and their power," but pointed out that a population cannot increase beyond its food supply. If this limit was approached, late marriage, emigration, and war would serve to restore the balance. Richard Hakluyt, an English writer (1527–1616), observed that, "Through our longe peace and seldom sickness... we are grown more populous than ever heretofore;... many thousands of idle persons are within this realme, which, having no way to be sett on work, be either mutinous and seek alteration in the state, or at least very burdensome to the commonwealth." Hakluyt believed that this led to crime and full jails, and in A Discourse on Western Planting (1584) he advocated for the emigration of the surplus population. With the onset of the Thirty Years' War (1618–48), characterized by widespread devastation and deaths brought on by hunger and disease in Europe, concerns about depopulation returned.

Population planning movement
In the 20th century, population planning proponents drew on the insights of Thomas Malthus, a British clergyman and economist who published An Essay on the Principle of Population in 1798. Malthus argued that, "Population, when unchecked, increases in a geometrical ratio. Subsistence only increases in an arithmetical ratio." He also outlined the idea of "positive checks" and "preventative checks". "Positive checks", such as diseases, wars, disasters, famines, and genocides, are factors which Malthus believed could increase the death rate. "Preventative checks" were factors which Malthus believed could affect the birth rate, such as moral restraint, abstinence and birth control. He predicted that "positive checks" on exponential population growth would ultimately save humanity from itself, and he also believed that human misery was an "absolute necessary consequence". Malthus went on to explain why he believed that this misery affected the poor in a disproportionate manner. Finally, Malthus advocated for the education of the lower class about the use of "moral restraint", or voluntary abstinence, which he believed would slow the growth rate.

Paul R. Ehrlich, a US biologist and environmentalist, published The Population Bomb in 1968, advocating stringent population planning policies. His central argument was that unchecked population growth would outstrip food supplies and lead to mass famine. In his concluding chapter, Ehrlich offered a partial solution to the "population problem": "[We need] compulsory birth regulation... [through] the addition of temporary sterilants to water supplies or staple food. Doses of the antidote would be carefully rationed by the government to produce the desired family size". Ehrlich's views came to be accepted by many population planning advocates in the United States and Europe in the 1960s and 1970s.
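Malthus's contrast between geometric and arithmetic growth, quoted above and echoed in Ehrlich's argument, can be written compactly. The notation below is illustrative and not taken from either author:

P_t = P_0 \cdot r^t, \quad r > 1 \qquad (\text{population: geometric growth})
S_t = S_0 + c \cdot t \qquad (\text{subsistence: arithmetic growth})

Because an exponential function eventually exceeds any linear one, P_t outgrows S_t for sufficiently large t; this widening gap is what Malthus expected his "positive checks" to close.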
Since Ehrlich introduced his idea of the "population bomb", overpopulation has been blamed for a variety of issues, including increasing poverty, high unemployment rates, environmental degradation, famine and genocide. In a 2004 interview, Ehrlich reviewed the predictions in his book and found that while the specific dates within his predictions may have been wrong, his predictions about climate change and disease were valid. Ehrlich continued to advocate for population planning and co-authored the book The Population Explosion, released in 1990 with his wife Anne Ehrlich.

However, it is controversial whether human population stabilization will avert environmental risks. A 2014 study published in the Proceedings of the National Academy of Sciences of the United States of America found that, given the "inexorable demographic momentum of the global human population", even mass mortality events and draconian one-child policies implemented on a global scale would still likely result in a population of 5 to 10 billion by 2100. Therefore, while reduced fertility rates are positive for society and the environment, the short-term focus should be on mitigating the human impact on the environment through technological and social innovations, along with reducing overconsumption, with population planning being a long-term goal. A letter in response, published in the same journal, argued that a reduction in population by 1 billion people in 2100 could help reduce the risk of catastrophic climate disruption. A 2021 article published in Sustainability Science said that sensible population policies could advance social justice (such as by abolishing child marriage, expanding family planning services and introducing reforms that improve education for women and girls) and avoid the abusive and coercive population control schemes of the past, while at the same time mitigating the human impact on the climate, biodiversity and ecosystems by slowing fertility rates.

Paige Whaley Eager argues that the shift in perception that occurred in the 1960s must be understood in the context of the demographic changes that took place at the time. It was only in the first decade of the 19th century that the world's population reached one billion. The second billion was added in the 1930s, and the next billion in the 1960s. Ninety percent of this net increase occurred in developing countries. Eager also argues that, at the time, the United States recognised that these demographic changes could significantly affect global geopolitics. Large increases occurred in China, Mexico and Nigeria, and demographers warned of a "population explosion", particularly in developing countries from the mid-1950s onwards. In the 1980s, tension grew between population planning advocates and women's health activists who advanced women's reproductive rights as part of a human rights-based approach. Growing opposition to the narrow population planning focus led to a significant change in population planning policies in the early 1990s.

Population planning and economics
Opinions vary among economists about the effects of population change on a nation's economic health. US scientific research in 2009 concluded that raising a child cost about $16,000 yearly (roughly $291,570 in total to the child's 18th birthday). In the US, multiplying this figure by the yearly net population growth gives the overall cost of population growth. Costs for other developed countries are usually of a similar order of magnitude.
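The cost arithmetic described above is simple multiplication; the sketch below makes it explicit. Only the roughly $16,000-per-year figure comes from the text; the net population growth value is a hypothetical placeholder. Note that $16,000 × 18 years is $288,000, so the quoted $291,570 total implies an annual figure slightly above $16,000.

```python
# Minimal sketch of the cost calculation described above.
# The annual cost is taken from the text; the net growth figure is a
# hypothetical placeholder, not a statistic from the cited 2009 research.

annual_cost_per_child = 16_000       # USD per child per year (approximate, from the text)
years_of_support = 18                # raising a child to its 18th birthday
net_population_growth = 1_700_000    # hypothetical net yearly increase in persons

cost_per_child = annual_cost_per_child * years_of_support
overall_cost_of_growth = cost_per_child * net_population_growth

print(f"Approximate cost to raise one child to 18: ${cost_per_child:,}")
print(f"Approximate overall cost of one year's population growth: ${overall_cost_of_growth:,}")
```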
Some economists, such as Thomas Sowell and Walter E. Williams, have argued that poverty and famine are caused by bad government and bad economic policies, not by overpopulation. In his book The Ultimate Resource, economist Julian Simon argued that higher population density leads to more specialization and technological innovation, which in turn leads to a higher standard of living. He claimed that human beings are the ultimate resource since we possess "productive and inventive minds that help find creative solutions to man's problems, thus leaving us better off over the long run". Simon also claimed that when considering a list of countries ranked in order by population density, there is no correlation between population density and poverty and starvation. Instead, if a list of countries is ranked by the corruption of their respective governments, there is a significant correlation between government corruption, poverty and famine.

Views on population planning

Birth rate reductions

Support
As early as 1798, Thomas Malthus argued in his Essay on the Principle of Population for the implementation of population planning. Around the year 1900, Sir Francis Galton said in his publication Hereditary Improvement: "The unfit could become enemies to the State if they continue to propagate." In 1968, Paul Ehrlich noted in The Population Bomb, "We must cut the cancer of population growth", and, "if this was not done, there would be only one other solution, namely the 'death rate solution' in which we raise the death rate through war-famine-pestilence, etc." In the same year, another prominent modern advocate for mandatory population planning was Garrett Hardin, who proposed in his landmark 1968 essay "The Tragedy of the Commons" that society must relinquish the "freedom to breed" through "mutual coercion, mutually agreed upon". Later, in 1972, he reaffirmed his support in Exploring New Ethics for Survival, stating, "We are breeding ourselves into oblivion." Many prominent personalities, such as Bertrand Russell, Margaret Sanger (1939), John D. Rockefeller, Frederick Osborn (1952), Isaac Asimov, Arne Næss and Jacques Cousteau, have also advocated for population planning.

Today, a number of influential people advocate population planning, among them:
David Attenborough
Christian de Duve, Nobel laureate
Sara Parkin
Jonathon Porritt, UK sustainable development commissioner
William J. Ripple, lead author of the 2017 World Scientists' Warning to Humanity: A Second Notice
Crispin Tickell

The head of the UN Millennium Project, Jeffrey Sachs, is also a strong proponent of decreasing the effects of overpopulation. In 2007, Jeffrey Sachs gave a number of lectures (the 2007 Reith Lectures) about population planning and overpopulation. In his lectures, called "Bursting at the Seams", he presented an integrated approach that would deal with a number of problems associated with overpopulation and poverty reduction. For example, when criticized for advocating mosquito nets, he argued that child survival was "by far one of the most powerful ways" to achieve fertility reduction, as this would assure poor families that the smaller number of children they had would survive.

Opposition
Critics of human population planning point out that attempts to curb human population growth have resulted in violations of human rights such as forced sterilization, particularly in China and India.
In the latter half of the twentieth century, India's population reduction program received substantial funds and powerful incentives from Western countries and international population planning organizations to reduce India's growing population. This culminated in "the Emergency", a period in the mid-1970s during which millions of people were forcibly sterilized. Violent resistance to forced sterilization led to police brutality and some instances of mass shootings of civilians by police. Critics also argue that supposedly voluntary population planning is often coerced. Some also believe that the environmental problems attributed to overpopulation are better explained by other factors, and that the goal of human population reduction does not justify the threat to human rights posed by population planning policies. Other grounds for opposition concern the feasibility of substantially affecting the size of the human population. According to some researchers, even rapid global adoption of a one-child policy would result in a world population exceeding 8 billion in 2050, and in a scenario involving the catastrophic mass death of 2 billion people, world population would still exceed 8 billion by 2100.

The Catholic Church has opposed abortion, sterilization, and artificial contraception as a general practice, but especially in regard to population planning policies. Pope Benedict XVI has stated, "The extermination of millions of unborn children, in the name of the fight against poverty, actually constitutes the destruction of the poorest of all human beings." The Reformed theology pastor Dr. Stephen Tong also opposes the planning of human population.

Pro-natalist policies
In 1946, Poland introduced a tax on childlessness, discontinued in the 1970s, as part of the natalist policies of the Communist government. From 1941 to the 1990s, the Soviet Union had a similar tax to replenish the population losses incurred during the Second World War. The Socialist Republic of Romania under Nicolae Ceaușescu severely restricted abortion (the most common birth control method at the time) in 1966, and imposed forced gynecological examinations and penalties on unmarried women and childless couples. The resulting surge in the birth rate strained the public services received by the decreţei 770 ("Scions of Decree 770") generation. A consequence of Ceaușescu's natalist policy is that large numbers of children ended up living in orphanages, because their parents could not cope. The vast majority of children who lived in the communist orphanages were not actually orphans, but were simply children whose parents could not afford to raise them. The Romanian Revolution of 1989 was followed by a fall in population growth.

Balanced birth policies
Birth rates in the Western world dropped during the interwar period. Swedish sociologists Alva and Gunnar Myrdal published Crisis in the Population Question in 1934, suggesting an extensive welfare state with universal healthcare and childcare, in order to increase overall Swedish birth rates and to level the number of children at a reproductive level for all social classes in Sweden. Swedish fertility rose throughout World War II (as Sweden was largely unharmed by the war) and peaked in 1946.

Modern practice by country

Australia
Australia currently offers fortnightly Family Tax Benefit payments plus a free immunization scheme, and recently proposed to pay all child care costs for women who want to work.
China

One-child era (1979–2015)
The most significant population planning system in the world was China's one-child policy, in which, with various exceptions, having more than one child was discouraged. Unauthorized births were punished by fines, although there were also allegations of illegal forced abortions and forced sterilization. As part of China's planned birth policy, (work) unit supervisors monitored the fertility of married women and could decide whose turn it was to have a baby. The Chinese government introduced the policy in 1978 to alleviate the social and environmental problems of China. According to government officials, the policy has helped prevent 400 million births. The success of the policy has been questioned, and the reduction in fertility has also been attributed to the modernization of China. The policy is controversial both within and outside of China because of its manner of implementation and because of concerns about negative economic and social consequences, e.g. female infanticide. In Asian cultures, the oldest male child has responsibility for caring for the parents in their old age. Therefore, it is common for Asian families to invest most heavily in the oldest male child, such as providing college, steering them into the most lucrative careers, and so on. To these families, having an oldest male child is paramount, so under a one-child policy daughters bring no economic benefit and, especially as a first child, are often targeted for abortion or infanticide. China introduced several government reforms to increase retirement payments to coincide with the one-child policy. During that time, couples could request permission to have more than one child.

According to Tibetologist Melvyn Goldstein, natalist feelings run high in China's Tibet Autonomous Region, among both ordinary people and government officials. Seeing population control "as a matter of power and ethnic survival" rather than in terms of ecological sustainability, Tibetans successfully argued for an exemption of Tibetan people from the usual family planning policies in China such as the one-child policy.

Two-child era (2016–2021)
In November 2014, the Chinese government allowed its people to conceive a second child under the supervision of government regulation. On 29 October 2015, the ruling Chinese Communist Party announced that all one-child policies would be scrapped, allowing all couples to have two children. The change was needed to allow a better balance of male and female children, and to grow the young population to ease the problem of paying for the aging population. The law enacting the two-child policy took effect on 1 January 2016, and replaced the previous one-child policy.

Three-child era (2021–)
In May 2021, the Chinese government allowed its people to conceive a third child, in a move accompanied by "supportive measures" it regarded as "conducive" to improving its "population structure, fulfilling the country's strategy of actively coping with an ageing population and maintaining the advantage, endowment of human resources" after declining birth rates were recorded in the 2020 Chinese census.

Hungary
During the Second Orbán Government, Hungary increased its family benefits spending from one of the lowest rates in the OECD to one of the highest. In 2015, it amounted to nearly 4% of GDP.

India
Only those with two or fewer children are eligible for election to a local government.
Us two, our two ("Hum do, hamare do" in Hindi) is a slogan meaning one family, two children, and is intended to reinforce the message of family planning, thereby aiding population planning. Facilities offered by the government to its employees are limited to two children. The government offers incentives for families accepted for sterilization. Moreover, India was the first country to take measures for family planning, back in 1952. The Population Control Bill, 2019 was introduced in the Rajya Sabha in July 2019 by Rakesh Sinha. The purpose of the bill is to control the population growth of India.

Iran
After the Iran–Iraq War, Iran encouraged married couples to produce as many children as possible to replace population lost to the war. Iran then succeeded in sharply reducing its birth rate from the late 1980s to 2010. Mandatory contraceptive courses were required for both males and females before a marriage license could be obtained, and the government emphasized the benefits of smaller families and the use of contraception. This changed in 2012, when a major policy shift back towards increasing birth rates was announced. In 2014, permanent contraception and the advertising of birth control were to be outlawed.

Israel
In Israel, Haredi families with many children receive economic support through generous governmental child allowances, government assistance in housing young religious couples, as well as specific funds from their own community institutions. Haredi women have an average of 6.7 children, while the average Jewish Israeli woman has 3 children.

Japan
Japan has experienced a shrinking population for many years. The government is trying to encourage women to have children or to have more children – many Japanese women do not have children, or even remain single. The population is culturally opposed to immigration. Some Japanese localities, facing significant population loss, are offering economic incentives. Yamatsuri, a town of 7,000 just north of Tokyo, offers parents $4,600 for the birth of a child and $460 a year for 10 years.

Myanmar
In Myanmar, the Population Planning Health Care Bill requires some parents to space each child three years apart. The Economist, in 2015, stated that the measure was expected to be used against the persecuted Muslim Rohingya minority.

Pakistan

Russia
Russian President Vladimir Putin directed Parliament in 2006 to adopt a 10-year program to stop the sharp decline in Russia's population, principally by offering financial incentives and subsidies to encourage women to have children.

Singapore
Singapore has undergone two major phases in its population planning: first to slow and reverse the baby boom of the post-World War II era; then, from the 1980s onwards, to encourage couples to have more children as the birth rate had fallen below the replacement-level fertility. In addition, during the interim period, eugenics policies were adopted. The anti-natalist policies flourished in the 1960s and 1970s: initiatives advocating small families were launched and developed into the Stop at Two programme, pushing for two-child families and promoting sterilisation. In 1984, the government announced the Graduate Mothers' Scheme, which favoured children of more well-educated mothers; the policy was, however, soon abandoned due to the outcry in the general election of the same year. Eventually, the government became pro-natalist in the late 1980s, marked by its Have Three or More plan in 1987.
Singapore pays $3,000 for the first child, $9,000 in cash and savings for the second, and up to $18,000 each for the third and fourth.

Spain
In 2017, the government of Spain appointed Edelmira Barreira as "Government Commissioner facing the Demographic Challenge", in a pro-natalist attempt to reverse a negative population growth rate.

Turkey
In May 2012, Turkey's Prime Minister Recep Tayyip Erdogan argued that abortion is murder and announced that legislative preparations to severely limit the practice were underway. Erdogan also argued that abortion and C-section deliveries are plots to stall Turkey's economic growth. Prior to this move, Erdogan had repeatedly demanded that each couple have at least three children.

United States
Enacted in 1970, Title X of the Public Health Service Act provides access to contraceptive services, supplies and information to those in need. Priority for services is given to people with low incomes. The Title X Family Planning program is administered through the Office of Population Affairs under the Office of Public Health and Science. It is directed by the Office of Family Planning. In 2007, Congress appropriated roughly $283 million for family planning under Title X, at least 90 percent of which was used for services in family planning clinics. Title X is a vital source of funding for family planning clinics throughout the nation, which provide reproductive health care, including abortion. The education and services supplied by the Title X-funded clinics support young individuals and low-income families. The goals of developing healthy families are accomplished by helping individuals and couples decide whether to have children and when the appropriate time to do so would be. Title X has made the prevention of unintended pregnancies possible. It has allowed millions of American women to receive necessary reproductive health care, plan their pregnancies and prevent abortions. Title X is dedicated exclusively to funding family planning and reproductive health care services. Title X as a percentage of total public funding to family planning client services has steadily declined, from 44% of total expenditures in 1980 to 12% in 2006. Medicaid's share increased from 20% to 71% over the same period. In 2006, Medicaid contributed $1.3 billion to public family planning.

In the early 1970s, the United States Congress established the Commission on Population Growth and the American Future (chaired by John D. Rockefeller III), which was created to provide recommendations regarding population growth and its social consequences. The Commission submitted its final recommendations in 1972, which included promoting contraceptives and liberalizing abortion regulations, for example.

Natalism in the United States
In a 2004 editorial in The New York Times, David Brooks expressed the opinion that the relatively high birth rate of the United States in comparison with Europe could be attributed to social groups with "natalist" attitudes. The article is referred to in an analysis of the Quiverfull movement. However, the figures identified for the demographic are extremely low. Former US Senator Rick Santorum made natalism part of his platform for his 2012 presidential campaign. Many of those categorized in the General Social Survey as "Fundamentalist Protestant" are more or less natalist, and have a higher birth rate than "Moderate" and "Liberal" Protestants. However, Rick Santorum is not a Protestant but a practicing Catholic.
Uzbekistan
It is reported that Uzbekistan has been pursuing a policy of forced sterilizations, hysterectomies and IUD insertions since the late 1990s in order to impose population planning.

See also

Fiction
Logan's Run (Book) - State-mandated euthanasia at 21 for all people (30 in the film) to conserve resources.
Make Room! Make Room! (Book) - Novel exploring the consequences of overpopulation.
Ishmael (Quinn novel) - Explores the biological and ecological causes of overpopulation, which is a result of increased carrying capacity for humans. The planning proposal is to limit that capacity (see Food Race).
Avengers: Infinity War (Movie) - The antagonist Thanos kills half of all living things throughout the universe in order to maintain ecological balance.
Inferno (Movie) - A billionaire has created a virus that will kill 50% of the world's population to save the other 50%. His followers try to release the virus after his suicide.
Shadow Children (Book series) - Families are allowed two children at most, and "shadow children" (third children and beyond) are subject to being killed.
2 B R 0 2 B (Book) - Aging is cured, and each new life requires the sacrifice of another in order to maintain a stable population.
2BR02B: To Be or Naught to Be (Movie) - Based on the above book.
The Thinning and The Thinning: New World Order (Film Series) - Involves a dystopian United States enforcing population control via an aptitude test and an authoritarian police force known as the Department of Population Control.
Nation
A nation is a type of social organization where a collective identity, a national identity, has emerged from a combination of shared features across a given population, such as language, history, ethnicity, culture, territory or society. Some nations are constructed around ethnicity (see ethnic nationalism) while others are bound by political constitutions (see civic nationalism). A nation is generally more overtly political than an ethnic group. Benedict Anderson defines a nation as "an imagined political community […] imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion", while Anthony D. Smith defines nations as cultural-political communities that have become conscious of their autonomy, unity and particular interests.

The consensus among scholars is that nations are socially constructed, historically contingent, organizationally flexible, and a distinctly modern phenomenon. Throughout history, people have had an attachment to their kin group and traditions, territorial authorities and their homeland, but nationalism – the belief that state and nation should align as a nation state – did not become a prominent ideology until the end of the 18th century.

Etymology and terminology
The English word nation comes from Middle English c. 1300, nacioun, "a race of people, large group of people with common ancestry and language," from Old French nacion, "birth (naissance), rank; descendants, relatives; country, homeland" (12c.), and directly from Latin nationem (nominative natio (nātĭō), related to the verb nascor, « to be born », supine natum), "birth, origin; breed, stock, kind, species; race of people, tribe," literally "that which has been born," from natus, past participle of nasci, "be born" (Old Latin gnasci), from the PIE root *gene-, "give birth, beget," with derivatives referring to procreation and familial and tribal groups. In Latin, natio represents the children of the same birth and also a human group of the same origin. In Cicero, natio is used for "people".

Black's Law Dictionary defines a nation as follows:
nation, n. (14c) 1. A large group of people having a common origin, language, and tradition, and usually constituting a political entity. • When a nation is coincident with a state, the term nation-state is often used.... ... 2. A community of people inhabiting a defined territory and organized under an independent government; a sovereign political state....

The word "nation" is sometimes used as a synonym for:
State (polity) or sovereign state: a government that controls a specific territory, which may or may not be associated with any particular ethnic group
Country: a geographic territory, which may or may not have an affiliation with a government or ethnic group
Ethnic group: used in older texts due to the word's original meaning and etymology

Thus the phrase "nations of the world" could be referring to the top-level governments (as in the name for the United Nations), various large geographical territories, or various large ethnic groups of the planet. Depending on the meaning of "nation" used, the term "nation state" could be used to distinguish larger states from small city-states, or could be used to distinguish multinational states from those with a single ethnic group.

Medieval nations

The existence of Medieval nations
The broad consensus amongst scholars of nationalism is that nations are a recent phenomenon.
However, some historians argue that their existence can be traced to the medieval period. Adrian Hastings argued that nations and nationalism are predominantly Christian phenomena, with the Jews being the sole exception. He viewed them as the "true proto-nation" that provided the original model of nationhood through the foundational example of ancient Israel in the Hebrew Bible, despite losing their political sovereignty for nearly two millennia. The Jews, however, maintained a cohesive national identity throughout this period, which ultimately culminated in the emergence of Zionism and the establishment of modern Israel. Anthony D. Smith wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation ... perhaps anywhere else in the ancient world."

Susan Reynolds has argued that many European medieval kingdoms were nations in the modern sense, except that political participation in nationalism was available only to a limited prosperous and literate class, while Hastings claims England's Anglo-Saxon kings mobilized mass nationalism in their struggle to repel Norse invasions. He argues that Alfred the Great, in particular, drew on biblical language in his law code, and that during his reign selected books of the Bible were translated into Old English to inspire Englishmen to fight to turn back the Norse invaders. Hastings argues for a strong renewal of English nationalism (following a hiatus after the Norman conquest) beginning with the translation of the complete Bible into English by the Wycliffe circle in the 1380s, positing that the frequency and consistency in usage of the word nation from the early fourteenth century onward strongly suggest that English nationalism and the English nation have been continuous since that time. However, John Breuilly criticizes Hastings's assumption that continued usage of a term such as "English" means continuity in its meaning. Patrick J. Geary agrees, arguing that names were adapted to different circumstances by different powers and could convince people of continuity, even if radical discontinuity was the lived reality.

Florin Curta cites the medieval Bulgarian nation as another possible example. Danubian Bulgaria was founded in 680–681 as a continuation of Great Bulgaria. After the adoption of Orthodox Christianity in 864 it became one of the cultural centres of Slavic Europe. Its leading cultural position was consolidated with the invention of the Cyrillic script in its capital Preslav on the eve of the 10th century. Hugh Poulton argues that the development of Old Church Slavonic literacy in the country had the effect of preventing the assimilation of the South Slavs into neighboring cultures and stimulated the development of a distinct ethnic identity. A symbiosis was carried out between the numerically weak Bulgars and the numerous Slavic tribes in that broad area from the Danube to the north, to the Aegean Sea to the south, and from the Adriatic Sea to the west, to the Black Sea to the east, who accepted the common ethnonym "Bulgarians". During the 10th century the Bulgarians established a form of national identity that was far from modern nationalism but helped them to survive as a distinct entity through the centuries.

Anthony Kaldellis asserts in Hellenism in Byzantium (2008) that what is called the Byzantine Empire was the Roman Empire transformed into a nation-state in the Middle Ages. Azar Gat also argues that China, Korea and Japan were nations by the time of the European Middle Ages.
Criticisms
In contrast, Geary rejects the conflation of early medieval and contemporary group identities as a myth, arguing that it is a mistake to conclude continuity based on the recurrence of names. He criticizes historians for failing to recognize the differences between earlier ways of perceiving group identities and more contemporary attitudes, stating they are "trapped in the very historical process we are attempting to study". Similarly, Sami Zubaida notes that many states and empires in history ruled over ethnically diverse populations, and "shared ethnicity between ruler and ruled did not always constitute grounds for favour or mutual support". He goes on to argue that ethnicity was never the primary basis of identification for the members of these multinational empires. Paul Lawrence criticises Hastings's reading of Bede, observing that those writing so-called "national" histories may have "been working with a rather different notion of 'the nation' to those writing history in the modern period". Lawrence goes on to argue that such documents do not demonstrate how ordinary people identified themselves, pointing out that, while they serve as texts in which an elite defines itself, "their significance in relation to what the majority thought and felt was likely to have been minor".

Use of the term nationes by medieval universities and other medieval institutions
A significant early use of the term nation, as natio, occurred at medieval universities to describe the colleagues in a college or students, above all at the University of Paris, who were all born within a pays, spoke the same language and expected to be ruled by their own familiar law. In 1383 and 1384, while studying theology at Paris, Jean Gerson was elected twice as a procurator for the French natio. The University of Prague adopted the division of students into nationes: from its opening in 1349, the studium generale consisted of Bohemian, Bavarian, Saxon and Polish nations. In a similar way, the nationes were segregated by the Knights Hospitaller of Jerusalem, who maintained at Rhodes the hostels from which they took their name, "where foreigners eat and have their places of meeting, each nation apart from the others, and a Knight has charge of each one of these hostels, and provides for the necessities of the inmates according to their religion", as the Spanish traveller Pedro Tafur noted in 1436.

Early modern nations
In his article "The Mosaic Moment: An Early Modernist Critique of the Modernist Theory of Nationalism", Philip S. Gorski argues that the first modern nation-state was the Dutch Republic, created by a fully modern political nationalism rooted in the model of biblical nationalism. In a 2013 article, "Biblical nationalism and the sixteenth-century states", Diana Muir Appelbaum expands Gorski's argument to apply to a series of new, Protestant, sixteenth-century nation states. A similar, albeit broader, argument was made by Anthony D. Smith in his books Chosen Peoples: Sacred Sources of National Identity and Myths and Memories of the Nation. In her book Nationalism: Five Roads to Modernity, Liah Greenfeld argued that nationalism was invented in England by 1600. According to Greenfeld, England was "the first nation in the world". For Smith, creating a "world of nations" has had profound consequences for the global state system, as a nation comprises both a cultural and political identity.
Therefore, he argues, "any attempt to forge a national identity is also a political action with political consequences, like the need to redraw the geopolitical map or alter the composition of political regimes and states".

Social science
There are three notable perspectives on how nations developed. Primordialism (perennialism), which reflects popular conceptions of nationalism but has largely fallen out of favour among academics, proposes that there have always been nations and that nationalism is a natural phenomenon. Ethnosymbolism explains nationalism as a dynamic, evolving phenomenon and stresses the importance of symbols, myths and traditions in the development of nations and nationalism. Modernization theory, which has superseded primordialism as the dominant explanation of nationalism, adopts a constructivist approach and proposes that nationalism emerged due to processes of modernization, such as industrialization, urbanization, and mass education, which made national consciousness possible.

Proponents of modernization theory describe nations as "imagined communities", a term coined by Benedict Anderson. A nation is an imagined community in the sense that the material conditions exist for imagining extended and shared connections and that it is objectively impersonal, even if each individual in the nation experiences themselves as subjectively part of an embodied unity with others. For the most part, members of a nation remain strangers to each other and will likely never meet. Nationalism is consequently seen as an "invented tradition" in which shared sentiment provides a form of collective identity and binds individuals together in political solidarity. A nation's foundational "story" may be built around a combination of ethnic attributes, values and principles, and may be closely connected to narratives of belonging.

Scholars in the 19th and early 20th century offered constructivist criticisms of primordial theories about nations. A prominent lecture by Ernest Renan, "What is a Nation?", argues that a nation is "a daily referendum", and that nations are based as much on what the people jointly forget as on what they remember. Carl Darling Buck argued in a 1916 study, "Nationality is essentially subjective, an active sentiment of unity, within a fairly extensive group, a sentiment based upon real but diverse factors, political, geographical, physical, and social, any or all of which may be present in this or that case, but no one of which must be present in all cases."

In the late 20th century, many social scientists argued that there were two types of nations: the civic nation, of which French republican society was the principal example, and the ethnic nation, exemplified by the German peoples. The German tradition was conceptualized as originating with early 19th-century philosophers, like Johann Gottlieb Fichte, and referred to people sharing a common language, religion, culture, history, and ethnic origins, which differentiate them from people of other nations. On the other hand, the civic nation was traced to the French Revolution and ideas deriving from 18th-century French philosophers. It was understood as being centred in a willingness to "live together", this producing a nation that results from an act of affirmation. This is the vision, among others, of Ernest Renan.

Debate about a potential future of nations
There is an ongoing debate about the future of nations − about whether this framework will persist as is and whether there are viable or developing alternatives.
The theory of the clash of civilizations lies in direct contrast to cosmopolitan theories about an ever more-connected world that no longer requires nation states. According to political scientist Samuel P. Huntington, people's cultural and religious identities will be the primary source of conflict in the post–Cold War world. The theory was originally formulated in a 1992 lecture at the American Enterprise Institute, which was then developed in a 1993 Foreign Affairs article titled "The Clash of Civilizations?", in response to Francis Fukuyama's 1992 book, The End of History and the Last Man. Huntington later expanded his thesis in a 1996 book, The Clash of Civilizations and the Remaking of World Order.

Huntington began his thinking by surveying the diverse theories about the nature of global politics in the post–Cold War period. Some theorists and writers argued that human rights, liberal democracy and capitalist free market economics had become the only remaining ideological alternative for nations in the post–Cold War world. Specifically, Francis Fukuyama, in The End of History and the Last Man, argued that the world had reached a Hegelian "end of history". Huntington believed that while the age of ideology had ended, the world had reverted only to a normal state of affairs characterized by cultural conflict. In his thesis, he argued that the primary axis of conflict in the future will be along cultural and religious lines.

Postnationalism is the process or trend by which nation states and national identities lose their importance relative to supranational and global entities. Several factors contribute to this trend, including economic globalization, a rise in the importance of multinational corporations, the internationalization of financial markets, the transfer of socio-political power from national authorities to supranational entities such as multinational corporations, the United Nations and the European Union, and the advent of new information and culture technologies such as the Internet. However, attachment to citizenship and national identities often remains important.

Jan Zielonka of the University of Oxford states that "the future structure and exercise of political power will resemble the medieval model more than the Westphalian one", with the latter being about "concentration of power, sovereignty and clear-cut identity" and neo-medievalism meaning "overlapping authorities, divided sovereignty, multiple identities and governing institutions, and fuzzy borders".

See also
Citizenship
City network
Country
Government
Identity (social science)
Imagined Communities
Invented tradition
Lists of people by nationality
Meta-ethnicity
Minzu (anthropology)
Multinational state
National emblem
National god
National memory
Nationalism
Nationality
People
Polity
Race (human categorization)
Separatism
Irredentism
Society
Sovereign state
Stateless nation
Tribe
Republic
Republicanism

Sources
Mylonas, Harris; Tudor, Maya (11 May 2021). "Nationalism: What We Know and What We Still Need to Know". Annual Review of Political Science. 24 (1): 109–132.

Further reading
Manent, Pierre (2007). "What is a Nation?", The Intercollegiate Review, Vol. XLII, No. 2, pp. 23–31.
Renan, Ernest (1896). "What is a Nation?" In: The Poetry of the Celtic Races, and Other Essays. London: The Walter Scott Publishing Co., pp. 61–83.
Mesolithic
The Mesolithic (Greek: μέσος, mesos 'middle' + λίθος, lithos 'stone') or Middle Stone Age is the Old World archaeological period between the Upper Paleolithic and the Neolithic. The term Epipaleolithic is often used synonymously, especially outside northern Europe, and for the corresponding period in the Levant and Caucasus. The Mesolithic has different time spans in different parts of Eurasia. It refers to the final period of hunter-gatherer cultures in Europe and the Middle East, between the end of the Last Glacial Maximum and the Neolithic Revolution. In Europe it spans roughly 15,000 to 5,000 BP; in the Middle East (the Epipalaeolithic Near East) roughly 20,000 to 10,000 BP. The term is less used of areas farther east, and not at all beyond Eurasia and North Africa.

The type of culture associated with the Mesolithic varies between areas, but it is associated with a decline in the group hunting of large animals in favour of a broader hunter-gatherer way of life, and the development of more sophisticated and typically smaller lithic tools and weapons than the heavy-chipped equivalents typical of the Paleolithic. Depending on the region, some use of pottery and textiles may be found in sites allocated to the Mesolithic, but generally indications of agriculture are taken as marking the transition into the Neolithic. The more permanent settlements tend to be close to the sea or inland waters offering a good supply of food. Mesolithic societies are not seen as very complex, and burials are fairly simple; in contrast, grandiose burial mounds are a mark of the Neolithic.

Terminology
The terms "Paleolithic" and "Neolithic" were introduced by John Lubbock in his work Pre-historic Times in 1865. The additional "Mesolithic" category was added as an intermediate category by Hodder Westropp in 1866. Westropp's suggestion was immediately controversial. A British school led by John Evans denied any need for an intermediate: the ages blended together like the colors of a rainbow, he said. A European school led by Gabriel de Mortillet asserted that there was a gap between the earlier and later. Edouard Piette claimed to have filled the gap with his naming of the Azilian Culture. Knut Stjerna offered an alternative in the "Epipaleolithic", suggesting a final phase of the Paleolithic rather than an intermediate age in its own right inserted between the Paleolithic and Neolithic. By the time of Vere Gordon Childe's work The Dawn of Europe (1947), which affirms the Mesolithic, sufficient data had been collected to determine that a transitional period between the Paleolithic and the Neolithic was indeed a useful concept.

However, the terms "Mesolithic" and "Epipalaeolithic" remain in competition, with varying conventions of usage. In the archaeology of Northern Europe, for example for archaeological sites in Great Britain, Germany, Scandinavia, Ukraine, and Russia, the term "Mesolithic" is almost always used. In the archaeology of other areas, the term "Epipaleolithic" may be preferred by most authors, or there may be divergences between authors over which term to use or what meaning to assign to each. In the New World, neither term is used (except provisionally in the Arctic). "Epipaleolithic" is sometimes also used alongside "Mesolithic" for the final end of the Upper Paleolithic immediately followed by the Mesolithic.
As "Mesolithic" suggests an intermediate period, followed by the Neolithic, some authors prefer the term "Epipaleolithic" for hunter-gatherer cultures who are not succeeded by agricultural traditions, reserving "Mesolithic" for cultures who are clearly succeeded by the Neolithic Revolution, such as the Natufian culture. Other authors use "Mesolithic" as a generic term for hunter-gatherer cultures after the Last Glacial Maximum, whether they are transitional towards agriculture or not. In addition, terminology appears to differ between archaeological sub-disciplines, with "Mesolithic" being widely used in European archaeology, while "Epipalaeolithic" is more common in Near Eastern archaeology. Europe The Balkan Mesolithic begins around 15,000 years ago. In Western Europe, the Early Mesolithic, or Azilian, begins about 14,000 years ago, in the Franco-Cantabrian region of northern Spain and Southern France. In other parts of Europe, the Mesolithic begins by 11,500 years ago (the beginning of the Holocene), and it ends with the introduction of farming, depending on the region between and 5,500 years ago. Regions that experienced greater environmental effects as the last glacial period ended have a much more apparent Mesolithic era, lasting millennia. In northern Europe, for example, societies were able to live well on rich food supplies from the marshlands created by the warmer climate. Such conditions produced distinctive human behaviors that are preserved in the material record, such as the Maglemosian and Azilian cultures. Such conditions also delayed the coming of the Neolithic until some 5,500 BP in northern Europe. The type of stone toolkit remains one of the most diagnostic features: the Mesolithic used a microlithic technology – composite devices manufactured with Mode V chipped stone tools (microliths), while the Paleolithic had utilized Modes I–IV. In some areas, however, such as Ireland, parts of Portugal, the Isle of Man and the Tyrrhenian Islands, a macrolithic technology was used in the Mesolithic. In the Neolithic, the microlithic technology was replaced by a macrolithic technology, with an increased use of polished stone tools such as stone axes. There is some evidence for the beginning of construction at sites with a ritual or astronomical significance, including Stonehenge, with a short row of large post holes aligned east–west, and a possible "lunar calendar" at Warren Field in Scotland, with pits of post holes of varying sizes, thought to reflect the lunar phases. Both are dated to before (the 8th millennium BC). An ancient chewed gum made from the pitch of birch bark revealed that a woman enjoyed a meal of hazelnuts and duck about 5,700 years ago in southern Denmark. Mesolithic people influenced Europe's forests by bringing favored plants like hazel with them. As the "Neolithic package" (including farming, herding, polished stone axes, timber longhouses and pottery) spread into Europe, the Mesolithic way of life was marginalized and eventually disappeared. Mesolithic adaptations such as sedentism, population size and use of plant foods are cited as evidence of the transition to agriculture. Other Mesolithic communities rejected the Neolithic package likely as a result of ideological reluctance, different worldviews and an active rejection of the sedentary-farming lifestyle. 
In one sample from the Blätterhöhle in Hagen, it seems that the descendants of Mesolithic people maintained a foraging lifestyle for more than 2,000 years after the arrival of farming societies in the area; such societies may be called "Subneolithic". For hunter-gatherer communities, long-term close contact with and integration into existing farming communities facilitated the adoption of a farming lifestyle. The integration of these hunter-gatherers into farming communities was made possible by their socially open character towards new members. In north-eastern Europe, the hunting and fishing lifestyle continued into the Medieval period in regions less suited to agriculture, and in Scandinavia no Mesolithic period may be accepted, with the locally preferred "Older Stone Age" moving into the "Younger Stone Age".

Art
Compared to the preceding Upper Paleolithic and the following Neolithic, there is rather less surviving art from the Mesolithic. The Rock art of the Iberian Mediterranean Basin, which probably continues from the Upper Paleolithic, is a widespread phenomenon, much less well known than the cave-paintings of the Upper Paleolithic, with which it makes an interesting contrast. The sites are now mostly cliff faces in the open air, and the subjects are now mostly human rather than animal, with large groups of small figures; there are 45 figures at Roca dels Moros. Clothing is shown, and scenes of dancing, fighting, hunting and food-gathering. The figures are much smaller than the animals of Paleolithic art, and depicted much more schematically, though often in energetic poses. A few small engraved pendants with suspension holes and simple engraved designs are known, some from northern Europe in amber, and one from Star Carr in Britain in shale. The Elk's Head of Huittinen is a rare Mesolithic animal carving in soapstone from Finland. The rock art in the Urals appears to show similar changes after the Paleolithic, and the wooden Shigir Idol is a rare survival of what may well have been a very common material for sculpture. It is a plank of larch carved with geometric motifs, but topped with a human head. Now in fragments, it would apparently have been over 5 metres tall when made. The Ain Sakhri figurine from Palestine is a Natufian carving in calcite.

A total of 33 antler frontlets have been discovered at Star Carr. These are red deer skulls modified to be worn by humans. Modified frontlets have also been discovered at Bedburg-Königshoven, Hohen Viecheln, Plau, and Berlin-Biesdorf.

Weaving
Weaving techniques were deployed to create shoes and baskets, the latter being of fine construction and decorated with dyes. Examples found in the Cueva de los Murciélagos in southern Spain were dated in 2023 to 9,500 years ago.

Ceramic Mesolithic
In north-eastern Europe, Siberia, and certain southern European and North African sites, a "ceramic Mesolithic" can be distinguished, lasting until about 5,850 BP. Russian archaeologists prefer to describe such pottery-making cultures as Neolithic, even though farming is absent. This pottery-making Mesolithic culture can be found peripheral to the sedentary Neolithic cultures. It created a distinctive type of pottery, with point or knob base and flared rims, manufactured by methods not used by the Neolithic farmers. Though each area of Mesolithic ceramic developed an individual style, common features suggest a single point of origin. The earliest manifestation of this type of pottery may be in the region around Lake Baikal in Siberia.
It appears in the Yelshanka culture on the Volga in Russia 9,000 years ago, and from there spread via the Dnieper-Donets culture to the Narva culture of the Eastern Baltic. Spreading westward along the coastline it is found in the Ertebølle culture of Denmark and Ellerbek of Northern Germany, and the related Swifterbant culture of the Low Countries. A 2012 publication in the journal Science announced that the earliest pottery yet known anywhere in the world was found in Xianrendong cave in China, dating by radiocarbon to between 20,000 and 19,000 years before present, at the end of the Last Glacial Period. The carbon-14 dating was established by carefully dating surrounding sediments. Many of the pottery fragments had scorch marks, suggesting that the pottery was used for cooking. These early pottery containers were made well before the invention of agriculture (dated to 10,000 to 8,000 BC), by mobile foragers who hunted and gathered their food during the Late Glacial Maximum. Cultures "Mesolithic" outside the Old World While Paleolithic and Neolithic have been found useful terms and concepts in the archaeology of China, and can be mostly regarded as happily naturalized, Mesolithic was introduced later, mostly after 1945, and does not appear to be a necessary or useful term in the context of China. Chinese sites that have been regarded as Mesolithic are better considered as "Early Neolithic". In the archaeology of India, the Mesolithic, dated roughly between 12,000 and 8,000 BP, remains a concept in use. In the archaeology of the Americas, an Archaic or Meso-Indian period, following the Lithic stage, somewhat equates to the Mesolithic. The Saharan rock paintings found at Tassili n'Ajjer in the central Sahara and at other locations depict vivid scenes of everyday life in central North Africa. Some of these paintings were executed by a hunting people who lived in a savanna region teeming with water-dependent species such as the hippopotamus, animals that no longer exist in the now-desert area. See also Caucasus hunter-gatherer History of archery#Prehistory List of Stone Age art List of Mesolithic settlements Mammoth extinction Eastern Hunter-Gatherer Scandinavian Hunter-Gatherer Western Hunter-Gatherer Anatolian hunter-gatherers Younger Dryas
First World
The concept of the First World was originally one of the "Three Worlds" formed by the global political landscape of the Cold War, as it grouped together those countries that were aligned with the Western Bloc of the United States. This grouping was directly opposed to the Second World, which similarly grouped together those countries that were aligned with the Eastern Bloc of the Soviet Union. However, after the Cold War ended with the dissolution of the Soviet Union in 1991, the definition largely shifted to instead refer to any country with a well-functioning democratic system with little prospects of political risk, in addition to a strong rule of law, a capitalist economy with economic stability, and a relatively high mean standard of living. Various ways in which these metrics are assessed are through the examination of a country's GDP, GNP, literacy rate, life expectancy, and Human Development Index. In colloquial usage, "First World" typically refers to "the highly developed industrialized nations often considered the Westernized countries of the world". History After World War II, the world split into two large geopolitical blocs, separating into spheres of communism and capitalism. This led to the Cold War, during which the term First World was often used because of its political, social, and economic relevance. The term itself was first introduced in the late 1940s by the United Nations. Today, the terms are slightly outdated and have no official definition. However, the "First World" is generally thought of as the capitalist, industrial, wealthy, and developed countries. This definition includes the countries of North America and Western Europe, Japan, South Korea, Australia, and New Zealand. In contemporary society, the First World is viewed as countries that have the most advanced economies, the greatest influence, the highest standards of living, and the greatest technology. After the Cold War, these countries of the First World included member states of NATO, U.S.-aligned states, neutral countries that were developed and industrialized, and the former British Colonies that were considered developed. According to Nations Online, the member countries of NATO during the Cold War included: Belgium, Canada, Denmark, France, Germany, Greece, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, Spain, Turkey, the United Kingdom, and the United States. The US Aligned States included: Israel, Japan, and South Korea. Former British Colonies included: Australia and New Zealand. Neutral and more or less industrialized capitalist countries included: Austria, Ireland, Sweden, and Switzerland. Shifting in definitions Since the end of the Cold War, the original definition of the term "First World" is no longer necessarily applicable. There are varying definitions of the First World; however, they follow the same idea. John D. Daniels, past president of the Academy of International Business, defines the First World to be consisting of "high-income industrial countries". Scholar and Professor George J. Bryjak defines the First World to be the "modern, industrial, capitalist countries of North America and Europe". L. Robert Kohls, former director of training for the U.S. Information Agency and the Meridian International Center in Washington, D.C., uses First World and "fully developed" as synonyms. Other indicators Varying definitions of the term First World and the uncertainty of the term in today's world leads to different indicators of First World status. 
In 1945, the United Nations used the terms first, second, third, and fourth worlds to define the relative wealth of nations (although popular use of the term fourth world did not come about until later). There are some references to culture in the definition. They were defined in terms of Gross National Product (GNP), measured in U.S. dollars, along with other socio-political factors. The first world included the large industrialized, democratic (free elections, etc.) nations. The second world included modern, wealthy, industrialized nations, but they were all under communist control. Most of the rest of the world was deemed part of the third world, while the fourth world was considered to be those nations whose people were living on less than US$100 annually. If we use the term to mean high-income industrialized economies, then the World Bank classifies countries according to their GNI, or gross national income, per capita. The World Bank separates countries into four categories: high-income, upper-middle-income, lower-middle-income, and low-income economies. The First World is considered to be countries with high-income economies. The high-income economies are equated with developed and industrialized countries. Three-world model The terms "First World", "Second World", and "Third World" were originally used to divide the world's nations into three categories. The model did not emerge in its final form all at once. The complete overthrow of the pre–World War II status quo, known as the Cold War, left two superpowers (the United States and the Soviet Union) vying for ultimate global supremacy. They created two camps, known as blocs. These blocs formed the basis of the concepts of the First and Second Worlds. Early in the Cold War era, NATO and the Warsaw Pact were created by the United States and the Soviet Union, respectively. They were also referred to as the Western Bloc and the Eastern Bloc. The circumstances of these two blocs were so different that they were essentially two worlds; however, they were not numbered first and second. The onset of the Cold War is marked by Winston Churchill's famous "Iron Curtain" speech, in which Churchill described the division between West and East as so solid that it could be called an iron curtain. In 1952, the French demographer Alfred Sauvy coined the term Third World in reference to the three estates in pre-revolutionary France: the first two estates were the nobility and the clergy, with everybody else comprising the third estate. He compared the capitalist world (i.e., First World) to the nobility and the communist world (i.e., Second World) to the clergy. Just as the third estate comprised everybody else, Sauvy called the Third World all the countries that were not in this Cold War division, i.e., the unaligned and uninvolved states in the "East-West Conflict". With the coining of the term Third World, the first two groups came to be known as the "First World" and "Second World", respectively. Here the three-world system emerged. Post–Cold War With the fall of the Soviet Union in 1991, the Eastern Bloc ceased to exist, and with it the perfect applicability of the term Second World. The definitions of the First World, Second World, and Third World changed slightly, yet generally describe the same concepts. Relationships with the other worlds Historic During the Cold War era, the relationships between the First World, Second World and the Third World were very rigid. 
The First World and Second World were at constant odds with one another through the tensions between their two cores, the United States and the Soviet Union, respectively. The Cold War, as its name suggests, was a primarily ideological struggle between the First and Second Worlds, or more specifically, between the U.S. and the Soviet Union. Multiple doctrines and plans dominated Cold War dynamics, including the Truman Doctrine and Marshall Plan (from the U.S.) and the Molotov Plan (from the Soviet Union). The extent of the tension between the two worlds was evident in Berlin, which was then split into East and West. To stop citizens in East Berlin from having too much exposure to the capitalist West, the Soviet Union put up the Berlin Wall within the city itself. The relationship between the First World and the Third World is characterized by the very definition of the Third World. Because countries of the Third World were noncommittal and non-aligned with both the First World and the Second World, they were targets for recruitment. In the quest to expand its sphere of influence, the United States (core of the First World) tried to establish pro-U.S. regimes in the Third World. In addition, because the Soviet Union (core of the Second World) also wanted to expand, the Third World often became a site for conflict. Some examples include Vietnam and Korea. Success lay with the First World if, at the end of the war, the country became capitalist and democratic, and with the Second World if the country became communist. While Vietnam as a whole was eventually communized, only the northern half of Korea remained communist. The Domino Theory largely governed United States policy regarding the Third World and its rivalry with the Second World. In light of the Domino Theory, the U.S. saw winning the proxy wars in the Third World as a measure of the "credibility of U.S. commitments all over the world". Present The movement of people and information largely characterizes the inter-world relationships in the present day. A majority of breakthroughs and innovations originate in Western Europe and the U.S., and their effects later permeate globally. As judged by the Wharton School of Business at the University of Pennsylvania, most of the Top 30 Innovations of the Last 30 Years were from former First World countries (e.g., the U.S. and countries in Western Europe). The disparity between knowledge in the First World as compared to the Third World is evident in healthcare and medical advancements. Deaths from water-related illnesses have largely been eliminated in "wealthier nations", while they are still a "major concern in the developing world". Diseases such as malaria and tuberculosis, widely treatable in the developed countries of the First World, needlessly claim many lives in the developing countries of the Third World. Each year 900,000 people die from malaria, and combating the disease accounts for 40% of health spending in many African countries. The Internet Corporation for Assigned Names and Numbers (ICANN) announced that the first Internationalized Domain Names (IDNs) would be available in the summer of 2010. These include non-Latin domains such as Chinese, Arabic, and Russian. This is one way that the flow of information between the First and Third Worlds may become more even. The movement of information and technology from the First World to various Third World countries has created a general "aspir(ation) to First World living standards". 
The Third World has lower living standards than the First World. Information about the comparatively higher living standards of the First World comes through television, commercial advertisements and foreign visitors to their countries. This exposure causes two changes: living standards in some Third World countries rise, and the exposure raises hopes, so that many people from Third World countries emigrate—both legally and illegally—to First World countries in hopes of attaining that living standard and prosperity. In fact, this emigration is the "main contributor to the increasing populations of U.S. and Europe". While these emigrations have greatly contributed to globalization, they have also precipitated trends like brain drain and problems with repatriation. They have also created immigration and governmental burden problems for the countries (i.e., First World) that people emigrate to. Environmental footprint Some have argued that the most important human population problem for the world is not the high rate of population increase in certain Third World countries per se, but rather the "increase in total human impact". The per-capita footprint—the resources consumed and the waste created by each person—varies globally. The highest per-person impact occurs in the First World and the lowest in the Third World: each inhabitant of the United States, Western Europe and Japan consumes 32 times as many resources and puts out 32 times as much waste as each person in the Third World. China leads the world in total emissions, but its large population means that its per-capita figure is lower than those of more developed nations. As large consumers of fossil fuels, First World countries drew attention to environmental pollution. The Kyoto Protocol is a treaty that is based on the United Nations Framework Convention on Climate Change, which was finalized in 1992 at the Earth Summit in Rio. It proposed to place the burden of protecting the climate on the United States and other First World countries. Countries that were considered to be developing, such as China and India, were not required to approve the treaty because they were more concerned that restricting emissions would further restrain their development. International relations Until the recent past, little attention was paid to the interests of Third World countries. This is because most international relations scholars have come from the industrialized, First World nations. As more countries have continued to become more developed, the interests of the world have slowly started to shift. However, First World nations still have many more universities, professors, journals, and conferences, which has made it very difficult for Third World countries to gain legitimacy and respect with their new ideas and methods of looking at the world. Development theory During the Cold War, modernization theory and development theory developed in Europe as a result of its economic, political, social, and cultural response to the management of former colonial territories. European scholars and practitioners of international politics hoped to theorize ideas and then create policies based on those ideas that would cause newly independent colonies to change into politically developed sovereign nation-states. However, most of the theorists were from the United States, and they were not interested in Third World countries achieving development by any model. 
They wanted those countries to develop through liberal processes of politics, economics, and socialization; that is to say, they wanted them to follow the liberal capitalist example of a so-called "First World state". Therefore, the modernization and development tradition consciously originated as a (mostly U.S.) alternative to the Marxist and neo-Marxist strategies promoted by the "Second World states" like the Soviet Union. It was used to explain how developing Third World states would naturally evolve into developed First World states, and it was partially grounded in liberal economic theory and a form of Talcott Parsons' sociological theory. Globalization The United Nations' ESCWA has written that globalization "is a widely-used term that can be defined in a number of different ways". Joyce Osland from San Jose State University wrote: "Globalization has become an increasingly controversial topic, and the growing number of protests around the world has focused more attention on the basic assumptions of globalization and its effects." "Globalization is not new, though. For thousands of years, people—and, later, corporations—have been buying from and selling to each other in lands at great distances, such as through the famed Silk Road across Central Asia that connected China and Europe during the Middle Ages. Likewise, for centuries, people and corporations have invested in enterprises in other countries. In fact, many of the features of the current wave of globalization are similar to those prevailing before the outbreak of the First World War in 1914." European Union The most prominent example of globalization in the First World is the European Union (EU). The European Union is an agreement in which countries voluntarily decide to build common governmental institutions to which they delegate some individual national sovereignty so that decisions can be made democratically on a higher level of common interest for Europe as a whole. The result is a union of 27 Member States with roughly 450 million people. In total, the European Union produces almost a third of the world's gross national product, and the member states speak more than 23 languages. All of the European Union countries are joined together by a hope to promote and extend peace, democracy, cooperation, stability, prosperity, and the rule of law. In a 2007 speech, Benita Ferrero-Waldner, the European Commissioner for External Relations, said, "The future of the EU is linked to globalization...the EU has a crucial role to play in making globalization work properly...". In a 2014 speech at the European Parliament, the Italian PM Matteo Renzi stated, "We are the ones who can bring civilization to globalization". Just as the concept of the First World came about as a result of World War II, so did the European Union. The beginnings of the EU date to 1951, with the creation of the European Coal and Steel Community (ECSC). From its inception, countries in the EU were judged by many standards, including economic ones. This is where the relation between globalization, the EU, and First World countries arises, especially during the 1990s, when the EU focused on economic policies such as the creation and circulation of the Euro, the creation of the European Monetary Institute, and the opening of the European Central Bank. 
In 1993, at the Copenhagen European Council, the European Union took a decisive step towards expansion, in what came to be called the Fifth Enlargement, agreeing that "the associated countries in Central and Eastern Europe that so desire shall become members of the European Union". Thus, enlargement was no longer a question of if, but when and how. The European Council stated that accession could occur once the prospective country is able to assume the obligations of membership, that is, once all the required economic and political conditions are met. Furthermore, it defined the membership criteria, which are regarded as the Copenhagen criteria, as follows: stability of institutions guaranteeing democracy, the rule of law, human rights and respect for and protection of minorities; the existence of a functioning market economy as well as the capacity to cope with competitive pressure and market forces within the Union; and the ability to take on the obligations of membership, including adherence to the aims of political, economic and monetary union. It is clear that all these criteria are characteristics of developed countries. Therefore, there is a direct link between globalization, developed nations, and the European Union. Multinational corporations A majority of multinational corporations find their origins in First World countries. After the collapse of the Soviet Union, multinational corporations proliferated as more countries focused on global trade. The General Agreement on Tariffs and Trade (GATT) and, later, the World Trade Organization (WTO) essentially ended the protectionist measures that were discouraging global trade. The eradication of these protectionist measures, while creating avenues for economic interconnection, mostly benefited developed countries, which, by using their power at GATT summits, forced developing and underdeveloped countries to open their economies to Western goods. As the world globalizes, it is accompanied by criticism of the current forms of globalization, which are feared to be overly corporate-led. As corporations grow larger and become multinational, their influence and interests extend accordingly. Because corporations are able to influence and own most media companies, it is hard to publicly debate the notions and ideals that they pursue. Some choices that corporations make in pursuit of profit can affect people all over the world, sometimes fatally. The third industrial revolution is spreading from the developed world to some, but not all, parts of the developing world. To participate in this new global economy, developing countries must be seen as attractive offshore production bases for multinational corporations. To be such bases, developing countries must provide relatively well-educated workforces, good infrastructure (electricity, telecommunications, transportation), political stability, and a willingness to play by market rules. If these conditions are in place, multinational corporations will transfer, via their offshore subsidiaries or to their offshore suppliers, the specific production technologies and market linkages necessary to participate in the global economy. By themselves, developing countries, even with well-educated workforces, cannot produce at the quality levels demanded in high-value-added industries and cannot market what they produce even in low-value-added industries such as textiles or shoes. 
Put bluntly, multinational companies possess a variety of factors that developing countries must have if they are to participate in the global economy. Outsourcing Outsourcing, according to Grossman and Helpman, refers to the process of "subcontracting an ever expanding set of activities, ranging from product design to assembly, from research and development to marketing, distribution and after-sales service". Many companies have moved to outsourcing services that they no longer specifically need or are no longer able to handle themselves. This is driven by considerations of control: activities that companies have little control over, or do not need to control, are outsourced to firms that they consider "less competing". According to SourcingMag.com, the process of outsourcing can take the following four phases: strategic thinking, evaluation and selection, contract development, and outsourcing management. Outsourcing is among the many reasons for increased competition within developing countries. Aside from being a source of competition, many First World countries see outsourcing, in particular offshore outsourcing, as an opportunity for increased income. As a consequence, the skill level of production in the foreign countries handling the outsourced services increases, while the skill level within the domestic economy can decrease. It is because of competition (including outsourcing) that Robert Feenstra and Gordon Hanson predict a rise of 15–33 percent in inequality amongst these countries. See also Developed country Developing country Digital divide East–West dichotomy First World privilege First World problem Fourth World Global North and Global South Globalization List of countries by total wealth Multinational corporation Second World Third World Three-world model World-systems theory
New Imperialism
In historical contexts, New Imperialism characterizes a period of colonial expansion by European powers, the United States, and Japan during the late 19th and early 20th centuries. The period featured an unprecedented pursuit of overseas territorial acquisitions. At the time, states focused on building their empires with new technological advances and developments, expanding their territory through conquest, and exploiting the resources of the subjugated countries. During the era of New Imperialism, the European powers (and Japan) individually conquered almost all of Africa and parts of Asia. The new wave of imperialism reflected ongoing rivalries among the great powers, the economic desire for new resources and markets, and a "civilizing mission" ethos. Many of the colonies established during this era gained independence during the era of decolonization that followed World War II. The qualifier "new" is used to differentiate modern imperialism from earlier imperial activity, such as the formation of ancient empires and the first wave of European colonization. Rise The American Revolutionary War (1775–1783) and the collapse of the Spanish Empire in Latin America in the 1820s ended the first era of European imperialism. Especially in Great Britain these revolutions helped show the deficiencies of mercantilism, the doctrine of economic competition for finite wealth which had supported earlier imperial expansion. In 1846, the Corn Laws were repealed and manufacturers grew, as the regulations enforced by the Corn Laws had slowed their businesses. With the repeal in place, the manufacturers were able to trade more freely. Thus, Britain began to adopt the concept of free trade. During this period, between the 1815 Congress of Vienna after the defeat of Napoleonic France and Imperial Germany's victory in the Franco-Prussian War in 1871, Great Britain reaped the benefits of being Europe's dominant military and economic power. As the "workshop of the world", Britain could produce finished goods so efficiently that they could usually undersell comparable, locally manufactured goods in foreign markets, supplying a large share of the manufactured goods consumed by such nations as the German states, France, Belgium, and the United States. The erosion of British hegemony after the Franco-Prussian War, in which a coalition of German states led by Prussia soundly defeated the Second French Empire, was occasioned by changes in the European and world economies and in the continental balance of power following the breakdown of the Concert of Europe, established by the Congress of Vienna. The establishment of nation-states in Germany and Italy resolved territorial issues that had kept potential rivals embroiled in internal affairs at the heart of Europe to Britain's advantage. The years from 1871 to 1914 would be marked by an extremely unstable peace. France's determination to recover Alsace-Lorraine, annexed by Germany as a result of the Franco-Prussian War, and Germany's mounting imperialist ambitions would keep the two nations constantly poised for conflict. This competition was sharpened by the Long Depression of 1873–1896, a prolonged period of price deflation punctuated by severe business downturns, which put pressure on governments to promote home industry, leading to the widespread abandonment of free trade among Europe's powers (in Germany from 1879 and in France from 1881). 
Berlin Conference The Berlin Conference of 1884–1885 sought to destroy the competition between the powers by defining "effective occupation" as the criterion for international recognition of a territory claim, specifically in Africa. The imposition of direct rule in terms of "effective occupation" necessitated routine recourse to armed force against indigenous states and peoples. Uprisings against imperial rule were put down ruthlessly, most brutally in the Herero Wars in German South-West Africa from 1904 to 1907 and the Maji Maji Rebellion in German East Africa from 1905 to 1907. One of the goals of the conference was to reach agreements over trade, navigation, and boundaries of Central Africa. However, of all of the 15 nations in attendance of the Berlin Conference, none of the countries represented were African. The main dominating powers of the conference were France, Germany, Britain, and Portugal. They remapped Africa without considering the cultural and linguistic borders that were already established. At the end of the conference, Africa was divided into 50 different colonies. The attendants established who was in control of each of these newly divided colonies. They also planned, noncommittally, to end the slave trade in Africa. Britain during the era In Britain, the age of new imperialism marked a time for significant economic changes. Because the country was the first to industrialize, Britain was technologically ahead of many other countries throughout the majority of the nineteenth century. By the end of the nineteenth century, however, other countries, chiefly Germany and the United States, began to challenge Britain's technological and economic power. After several decades of monopoly, the country was battling to maintain a dominant economic position while other powers became more involved in international markets. In 1870, Britain contained 31.8% of the world's manufacturing capacity while the United States contained 23.3% and Germany contained 13.2%. By 1910, Britain's manufacturing capacity had dropped to 14.7%, while that of the United States had risen to 35.3% and that of Germany to 15.9%. As countries like Germany and America became more economically successful, they began to become more involved with imperialism, resulting in the British struggling to maintain the volume of British trade and investment overseas. Britain further faced strained international relations with three expansionist powers (Japan, Germany, and Italy) during the early twentieth century. Before 1939, these three powers never directly threatened Britain itself, but the dangers to the Empire were clear. By the 1930s, Britain was worried that Japan would threaten its holdings in the Far East as well as territories in India, Australia and New Zealand. Italy held an interest in North Africa, which threatened British Egypt, and German dominance of the European continent held some danger for Britain's security. Britain worried that the expansionist powers would cause the breakdown of international stability; as such, British foreign policy attempted to protect the stability in a rapidly changing world. With its stability and holdings threatened, Britain decided to adopt a policy of concession rather than resistance, a policy that became known as appeasement. In Britain, the era of new imperialism affected public attitudes toward the idea of imperialism itself. Most of the public believed that if imperialism was going to exist, it was best if Britain was the driving force behind it. 
The same people further thought that British imperialism was a force for good in the world. In 1940, the Fabian Colonial Research Bureau argued that Africa could be developed both economically and socially, but until this development could happen, Africa was best off remaining with the British Empire. Rudyard Kipling's 1891 poem, "The English Flag," contains the stanza: Winds of the World, give answer! They are whimpering to and fro-- And what should they know of England who only England know?-- The poor little street-bred people that vapour and fume and brag, They are lifting their heads in the stillness to yelp at the English Flag! These lines show Kipling's belief that the British who actively took part in imperialism knew more about British national identity than the ones whose entire lives were spent solely in the imperial metropolis. While there were pockets of anti-imperialist opposition in Britain in the late nineteenth and early twentieth centuries, resistance to imperialism was nearly nonexistent in the country as a whole. In many ways, this new form of imperialism formed a part of the British identity until the end of the era of new imperialism with the Second World War. Socioeconomic implications While Social Darwinism became popular throughout Western Europe and the United States, the paternalistic French and Portuguese "civilizing mission" (in French: ; in Portuguese: ) appealed to many European statesmen both in and outside France. Despite apparent benevolence existing in the notion of the "White Man's Burden", the unintended consequences of imperialism might have greatly outweighed the potential benefits. Governments became increasingly paternalistic at home and neglected the individual liberties of their citizens. Military spending expanded, usually leading to an "imperial overreach", and imperialism created clients of ruling elites abroad that were brutal and corrupt, consolidating power through imperial rents and impeding social change and economic development that ran against their ambitions. Furthermore, "nation building" oftentimes created cultural sentiments of racism and xenophobia. Many of Europe's major elites also found advantages in formal, overseas expansion: large financial and industrial monopolies wanted imperial support to protect their overseas investments against competition and domestic political tensions abroad, bureaucrats sought government offices, military officers desired promotion, and the traditional but waning landed gentries sought increased profits for their investments, formal titles, and high office. Such special interests have perpetuated empire-building throughout history. The enforcement of mercantilist policies played a role in sustaining New Imperialism. This restricted colonies to trade only with respective metropoles, which strengthened home-country economies. At first through growing chartered companies and later through imperial states themselves, New Imperialism shifted towards the use of free trade, the reduction of market restrictions and tariffs, and the coercion of foreign markets to open up, often through gunboat diplomacy or concerted interventionism, such as police actions. Observing the rise of trade unionism, socialism, and other protest movements during an era of mass society both in Europe and later in North America, elites sought to use imperial jingoism to co-opt the support of part of the industrial working class. 
The new mass media promoted jingoism in the Spanish–American War (1898), the Second Boer War (1899–1902), and the Boxer Rebellion (1900). The left-wing German historian Hans-Ulrich Wehler has defined social imperialism as "the diversions outwards of internal tensions and forces of change in order to preserve the social and political status quo", and as a "defensive ideology" to counter the "disruptive effects of industrialization on the social and economic structure of Germany". In Wehler's opinion, social imperialism was a device that allowed the German government to distract public attention from domestic problems and preserve the existing social and political order. The dominant elites used social imperialism as the glue to hold together a fractured society and to maintain popular support for the social status quo. According to Wehler, German colonial policy in the 1880s was the first example of social imperialism in action, and was followed up by the 1897 Tirpitz Plan for expanding the German Navy. In this point of view, groups such as the Colonial Society and the Navy League are seen as instruments for the government to mobilize public support. The demands for annexing most of Europe and Africa in World War I are seen by Wehler as the pinnacle of social imperialism. South Asia India In the 17th century, the British businessmen arrived in India and, after taking a small portion of land, formed the East India Company. The British East India Company annexed most of the subcontinent of India, starting with Bengal in 1757 and ending with Punjab in 1849. Many princely states remained independent. This was aided by a power vacuum formed by the collapse of the Mughal Empire in India and the death of Mughal Emperor Aurangzeb and increased British forces in India because of colonial conflicts with France. The invention of clipper ships in the early 1800s cut the trip to India from Europe in half from 6 months to 3 months; the British also laid cables on the floor of the ocean allowing telegrams to be sent from India and China. In 1818, the British controlled most of the Indian subcontinent and began imposing their ideas and ways on its residents, including different succession laws that allowed the British to take over a state with no successor and gain its land and armies, new taxes, and monopolistic control of industry. The British also collaborated with Indian officials to increase their influence in the region. Some Hindu and Muslim sepoys rebelled in 1857, resulting in the Indian Rebellion. After this revolt was suppressed by the British, India came under the direct control of the British crown. After the British had gained more control over India, they began changing around the financial state of India. Previously, Europe had to pay for Indian textiles and spices in bullion; with political control, Britain directed farmers to grow cash crops for the company for exports to Europe while India became a market for textiles from Britain. In addition, the British collected huge revenues from land rent and taxes on its acquired monopoly on salt production. Indian weavers were replaced by new spinning and weaving machines and Indian food crops were replaced by cash crops like cotton and tea. The British also began connecting Indian cities by railroad and telegraph to make travel and communication easier as well as building an irrigation system for increasing agricultural production. 
When Western education was introduced in India, Indians were quite influenced by it, but the inequalities between the British ideals of governance and their treatment of Indians became clear. In response to this discriminatory treatment, a group of educated Indians established the Indian National Congress, demanding equal treatment and self-governance. John Robert Seeley, a Cambridge Professor of History, said, "Our acquisition of India was made blindly. Nothing great that has ever been done by Englishmen was done so unintentionally or accidentally as the conquest of India". According to him, the political control of India was not a conquest in the usual sense because it was not an act of a state. The new administrative arrangement, crowned with Queen Victoria's proclamation as Empress of India in 1876, effectively replaced the rule of a monopolistic enterprise with that of a trained civil service headed by graduates of Britain's top universities. The administration retained and increased the monopolies held by the company. The India Salt Act of 1882 included regulations enforcing a government monopoly on the collection and manufacture of salt; in 1923 a bill was passed doubling the salt tax. Southeast Asia After taking control of much of India, the British expanded further into Burma, Malaya, Singapore and Borneo, with these colonies becoming further sources of trade and raw materials for British goods. France annexed all of Vietnam and Cambodia in the 1880s; in the following decade, France completed its Indochinese empire with the annexation of Laos, leaving the kingdom of Siam (now Thailand) with an uneasy independence as a neutral buffer between British and French-ruled lands. The United States laid claim to the Philippines, and after the Spanish–American War, took control of the archipelago as one of its overseas possessions. Indonesia Formal colonization of the Dutch East Indies (now Indonesia) commenced at the dawn of the 19th century when the Dutch state took possession of all Dutch East India Company (VOC) assets. Before that time the VOC merchants were in principle just another trading power among many, establishing trading posts and settlements (colonies) in strategic places around the archipelago. The Dutch gradually extended their sovereignty over most of the islands in the East Indies. Dutch expansion paused for several years during an interregnum of British rule between 1806 and 1816, when the Dutch Republic was occupied by the French forces of Napoleon. The Dutch government-in-exile in England ceded rule of all its colonies to Great Britain. However, Jan Willem Janssens, the governor of the Dutch East Indies at the time, fought the British before surrendering the colony; he was eventually replaced by Stamford Raffles. The Dutch East Indies became the prize possession of the Dutch Empire. It was not the typical settler colony founded through massive emigration from the mother countries (such as the USA or Australia) and hardly involved displacement of the indigenous islanders, with a notable and dramatic exception in the island of Banda during the VOC era. Neither was it a plantation colony built on the import of slaves (such as Haiti or Jamaica) or a pure trade post colony (such as Singapore or Macau). It was more of an expansion of the existing chain of VOC trading posts. Instead of mass emigration from the homeland, the sizeable indigenous populations were controlled through effective political manipulation supported by military force. 
The servitude of the indigenous masses was enabled through a structure of indirect governance, keeping existing indigenous rulers in place. This strategy was already established by the VOC, which independently acted as a semi-sovereign state within the Dutch state, using the Indo Eurasian population as an intermediary buffer. In 1869, British anthropologist Alfred Russel Wallace described the colonial governing structure in his book "The Malay Archipelago": "The mode of government now adopted in Java is to retain the whole series of native rulers, from the village chief up to princes, who, under the name of Regents, are the heads of districts about the size of a small English county. With each Regent is placed a Dutch Resident, or Assistant Resident, who is considered to be his "elder brother," and whose "orders" take the form of "recommendations," which are, however, implicitly obeyed. Along with each Assistant, Resident is a Controller, a kind of inspector of all the lower native rulers, who periodically visits every village in the district, examines the proceedings of the native courts, hears complaints against the head-men or other native chiefs, and superintends the Government plantations." East Asia China In 1839, China found itself fighting the First Opium War with Great Britain after the governor-general of Hunan and Hubei, Lin Zexu, seized the illegally traded opium. China was defeated, and in 1842 agreed to the provisions of the Treaty of Nanking. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out; the Chinese were again defeated and forced to the terms of the 1858 Treaty of Tientsin and the 1860 Convention of Peking. The treaty opened new ports to trade and allowed foreigners to travel in the interior. Missionaries gained the right to propagate Christianity, another means of Western penetration. The United States and Russia obtained the same prerogatives in separate treaties. Towards the end of the 19th century, China appeared on the way to territorial dismemberment and economic vassalage, the fate of India's rulers that had played out much earlier. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters. In 1904, the British invaded Tibet, a pre-emptive strike against Russian intrigues and secret meetings between the 13th Dalai Lama's envoy and Tsar Nicholas II. The Dalai Lama fled into exile to China and Mongolia. The British were greatly concerned at the prospect of a Russian invasion of the Crown colony of India, though Russia – badly defeated by Japan in the Russo-Japanese War and weakened by internal revolution – could not realistically afford a military conflict against Britain. China under the Qing dynasty, however, was another matter. Natural disasters, famine and internal rebellions had enfeebled China in the late Qing. In the late 19th century, Japan and the Great Powers easily carved out trade and territorial concessions. These were humiliating submissions for the once-powerful China. 
Still, the central lesson of the war with Japan was not lost on the Russian General Staff: an Asian country using Western technology and industrial production methods could defeat a great European power. Jane E. Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing their own at arsenals, such as the Hanyang Arsenal during the Boxer Rebellion. In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions continued about their daily lives, uninterrupted and without any feeling of "humiliation". The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia. During the Ili crisis when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise China on military options against Russia should a potential war break out between China and Russia. The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis, the Chinese bought thousands of rifles from Germany. In 1880 massive amounts of military equipment and rifles were shipped via boats to China from Antwerp as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe. The Russian military observer D. V. Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at "European tactics" under certain circumstances, and the Chinese soldiers were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles. Compared to Russian controlled areas, more benefits were given to the Muslim Kirghiz on the Chinese controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war, that China would defeat Russia. The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen by the west as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. Mass media in the west during this era portrayed China as a rising military power due to its modernization programs and as major threat to the western world, invoking fears that China would successfully conquer western colonies like Australia. Russian sinologists, the Russian media, threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St Petersburg, and return most of Ili to China. Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness while it achieved military success against westerners on land, the historian Edward L. Dreyer said that "China’s nineteenth-century humiliations were strongly related to her weakness and failure at sea. 
At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go. ... In the Arrow War (1856–60), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884–1885). But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms." The British and Russian consuls schemed and plotted against each other at Kashgar. In 1906, Tsar Nicholas II sent a secret agent to China to collect intelligence on the reform and modernization of the Qing dynasty. The task was given to Carl Gustaf Emil Mannerheim, at the time a colonel in the Russian army, who travelled to China with French Sinologist Paul Pelliot. Mannerheim was disguised as an ethnographic collector, using a Finnish passport. Finland was, at the time, a Grand Duchy. For two years, Mannerheim proceeded through Xinjiang, Gansu, Shaanxi, Henan, Shanxi and Inner Mongolia to Beijing. At the sacred Buddhist mountain of Wutai Shan he even met the 13th Dalai Lama. However, while Mannerheim was in China in 1907, Russia and Britain brokered the Anglo-Russian Agreement, ending the classical period of the Great Game. The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill. The rise of Japan as an imperial power after the Meiji Restoration led to further subjugation of China. In a dispute over regional suzerainty, war broke out between China and Japan, resulting in another humiliating defeat for the Chinese. By the Treaty of Shimonoseki in 1895, China was forced to recognize Korea's exit from the Imperial Chinese tributary system, leading to the proclamation of the Korean Empire, and the island of Taiwan was ceded to Japan. In 1897, taking advantage of the murder of two missionaries, Germany demanded and was given a set of mining and railroad rights around Jiaozhou Bay in Shandong province. In 1898, Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeast China. The United Kingdom, France, and Japan also received a number of concessions later that year. The erosion of Chinese sovereignty contributed to a spectacular anti-foreign outbreak in June 1900, when the "Boxers" (properly the society of the "righteous and harmonious fists") attacked foreign legations in Beijing. This Boxer Rebellion provoked a rare display of unity among the colonial powers, who formed the Eight-Nation Alliance. Troops landed at Tianjin and marched on the capital, which they took on 14 August; the foreign soldiers then looted and occupied Beijing for several months. German forces were particularly severe in exacting revenge for the killing of their ambassador, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the Russo-Japanese War of 1904–1905. Extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943. Mainland Chinese historians refer to this period as the century of humiliation. 
Central Asia The "Great Game" (Also called the Tournament of Shadows (, Turniry Teney) in Russia) was the strategic, economic and political rivalry, emanating to conflict between the British Empire and the Russian Empire for supremacy in Central Asia at the expense of Afghanistan, Persia and the Central Asian Khanates/Emirates. The classic Great Game period is generally regarded as running approximately from the Russo-Persian Treaty of 1813 to the Anglo-Russian Convention of 1907, in which nations like Emirate of Bukhara fell. A less intensive phase followed the Bolshevik Revolution of 1917, causing some trouble with Persia and Afghanistan until the mid-1920s. In the post-Second World War post-colonial period, the term has informally continued in its usage to describe the geopolitical machinations of the great powers and regional powers as they vie for geopolitical power as well as influence in the area, especially in Afghanistan and Iran/Persia. Africa Prelude Between 1850 and 1914, Britain brought nearly 30% of Africa's population under its control, to 15% for France, 9% for Germany, 7% for Belgium and 1% for Italy: Nigeria alone contributed 15 million subjects to Britain, more than in the whole of French West Africa, or the entire German colonial empire. The only nations that were not under European control by 1914 were Liberia and Ethiopia. British colonies Britain's formal occupation of Egypt in 1882, triggered by concern over the Suez Canal, contributed to a preoccupation over securing control of the Nile River, leading to the conquest of neighboring Sudan in 1896–1898, which in turn led to confrontation with a French military expedition at Fashoda in September 1898. In 1899, Britain set out to complete its takeover of the future South Africa, which it had begun in 1814 with the annexation of the Cape Colony, by invading the gold-rich Afrikaner republics of Transvaal and the neighboring Orange Free State. The chartered British South Africa Company had already seized the land to the north, renamed Rhodesia after its head, the Cape tycoon Cecil Rhodes. British gains in southern and East Africa prompted Rhodes and Alfred Milner, Britain's High Commissioner in South Africa, to urge a "Cape to Cairo" empire: linked by rail, the strategically important Canal would be firmly connected to the mineral-rich South, though Belgian control of the Congo Free State and German control of German East Africa prevented such an outcome until the end of World War I, when Great Britain acquired the latter territory. Britain's quest for southern Africa and its diamonds led to social complications and fallouts that lasted for years. To work for their prosperous company, British businessmen hired both white and black South Africans. But when it came to jobs, the white South Africans received the higher paid and less dangerous ones, leaving the black South Africans to risk their lives in the mines for limited pay. This process of separating the two groups of South Africans, whites and blacks, was the beginning of segregation between the two that lasted until 1990. Paradoxically, the United Kingdom, a staunch advocate of free trade, emerged in 1913 with not only the largest overseas empire, thanks to its long-standing presence in India, but also the greatest gains in the conquest of Africa, reflecting its advantageous position at its inception. Congo Free State Until 1876, Belgium had no colonial presence in Africa. It was then that its king, Leopold II created the International African Society. 
Operating under the pretense of an international scientific and philanthropic association, it was actually a private holding company owned by Leopold. Henry Morton Stanley was employed to explore and colonize the Congo River basin area of equatorial Africa in order to capitalize on the plentiful resources such as ivory, rubber, diamonds, and metals. Up until this point, Africa was known as "the Dark Continent" because of the difficulties Europeans had with exploration. Over the next few years, Stanley overpowered and made treaties with over 450 native tribes, acquiring him over of land, nearly 67 times the size of Belgium. Neither the Belgian government nor the Belgian people had any interest in imperialism at the time, and the land came to be personally owned by King Leopold II. At the Berlin Conference in 1884, he was allowed to have land named the Congo Free State. The other European countries at the conference allowed this to happen on the conditions that he suppress the East African slave trade, promote humanitarian policies, guarantee free trade, and encourage missions to Christianize the people of the Congo. However, Leopold II's primary focus was to make a large profit on the natural resources, particularly ivory and rubber. In order to make this profit, he passed several cruel decrees that can be considered to be genocide. He forced the natives to supply him with rubber and ivory without any sort of payment in return. Their wives and children were held hostage until the workers returned with enough rubber or ivory to fill their quota, and if they could not, their family would be killed. When villages refused, they were burned down; the children of the village were murdered and the men had their hands cut off. These policies led to uprisings, but they were feeble compared to European military and technological might, and were consequently crushed. The forced labor was opposed in other ways: fleeing into the forests to seek refuge or setting the rubber forests on fire, preventing the Europeans from harvesting the rubber. No population figures exist from before or after the period, but it is estimated that as many as 10 million people died from violence, famine and disease. However, some sources point to a total population of 16 million people. King Leopold II profited from the enterprise with a 700% profit ratio for the rubber he took from Congo and exported. He used propaganda to keep the other European nations at bay, for he broke almost all of the parts of the agreement he made at the Berlin Conference. For example, he had some Congolese pygmies sing and dance at the 1897 World Fair in Belgium, showing how he was supposedly civilizing and educating the natives of the Congo. Under significant international pressure, the Belgian government annexed the territory in 1908 and renamed it the Belgian Congo, removing it from the personal power of the king. Of all the colonies that were conquered during the wave of New Imperialism, the human rights abuses of the Congo Free State were considered the worst. Oceania France gained a leading position as an imperial power in the Pacific after making Tahiti and New Caledonia protectorates in 1842 and 1853 respectively. Tahiti was later annexed entirely into the French colonial empire in 1880, along with the rest of the Society Islands. 
The United States made several territorial gains during this period, particularly with the overthrow and annexation of the Kingdom of Hawaiʻi and the acquisition of most of Spain's colonial outposts following the 1898 Spanish–American War, as well as the partition of the Samoan Islands into American Samoa and German Samoa. By 1900, nearly all islands in the Pacific Ocean were under the control of Britain, France, the United States, Germany, Japan, Mexico, Ecuador, and Chile.
Chilean expansion
Chile's interest in expanding into the islands of the Pacific Ocean dates to the presidency of José Joaquín Prieto (1831–1841) and the ideology of Diego Portales, who considered that Chile's expansion into Polynesia was a natural consequence of its maritime destiny. Nonetheless, the first stage of the country's expansionism into the Pacific began only a decade later, in 1851, when—in response to an American incursion into the Juan Fernández Islands—Chile's government formally organized the islands into a subdelegation of Valparaíso. That same year, Chile's economic interests in the Pacific were renewed after its merchant fleet briefly succeeded in creating an agricultural goods exchange market that connected the Californian port of San Francisco with Australia. By 1861, Chile had established a lucrative enterprise across the Pacific, its national currency circulating abundantly throughout Polynesia and its merchants trading in the markets of Tahiti, New Zealand, Tasmania, and Shanghai; negotiations were also made with the Spanish Philippines, and altercations reportedly occurred between Chilean and American whalers in the Sea of Japan. This period ended with the destruction of the Chilean merchant fleet by Spanish forces in 1866, during the Chincha Islands War. Chile's Polynesian aspirations were awakened again in the aftermath of the country's decisive victory against Peru in the War of the Pacific, which left the Chilean fleet as the dominant maritime force on the Pacific coast of the Americas. Valparaíso had also become the most important port on the Pacific coast of South America, giving Chilean merchants the capacity to find markets in the Pacific for the country's new mineral wealth acquired from the Atacama. During this period, the Chilean intellectual and politician Benjamín Vicuña Mackenna (who served as senator in the National Congress from 1876 to 1885) was an influential voice in favor of Chilean expansionism into the Pacific; he considered that Spain's discoveries in the Pacific had been stolen by the British, and envisioned that Chile's duty was to create an empire in the Pacific that would reach Asia. In the context of this imperialist fervor, in 1886 Captain Policarpo Toro of the Chilean Navy proposed to his superiors the annexation of Easter Island, a proposal supported by President José Manuel Balmaceda because of the island's apparent strategic location and economic value. After Toro transferred the rights to the island's sheep ranching operations from Tahiti-based businesses to the Chilean-based Williamson-Balfour Company in 1887, Easter Island's annexation culminated in the signing of the "Agreement of Wills" between Rapa Nui chieftains and Toro, in the name of the Chilean government, in 1888. By occupying Easter Island, Chile joined the imperial nations.
Imperial rivalries
The extension of European control over Africa and Asia added a further dimension to the rivalry and mutual suspicion which characterized international diplomacy in the decades preceding World War I. France's seizure of Tunisia in 1881 initiated fifteen years of tension with Italy, which had hoped to take the country itself and which retaliated by allying with Germany and waging a decade-long tariff war with France. Britain's takeover of Egypt a year later caused a marked cooling of its relations with France. The most striking conflicts of the era were the Spanish–American War of 1898 and the Russo-Japanese War of 1904–05, each signaling the advent of a new imperial great power: the United States and Japan, respectively. The Fashoda incident of 1898 represented the worst Anglo-French crisis in decades, but France's buckling in the face of British demands foreshadowed improved relations as the two countries set about resolving their overseas claims. British policy in South Africa and German actions in the Far East contributed to dramatic policy shifts which, in the 1900s, aligned hitherto isolationist Britain first with Japan as an ally, and then with France and Russia in the looser Triple Entente. German efforts to break the Entente by challenging French hegemony in Morocco resulted in the Tangier Crisis of 1905 and the Agadir Crisis of 1911, adding to tension and anti-German sentiment in the years preceding World War I. In the Pacific, conflicts between Germany, the United States, and the United Kingdom contributed to the First and Second Samoan Civil Wars. Another crisis occurred in 1902–03, when there was a stand-off between Venezuela, diplomatically backed by Argentina and the United States (see Drago Doctrine and Monroe Doctrine), and a coalition of European countries.
Motivation
Humanitarianism
One of the biggest motivations behind New Imperialism was the idea of humanitarianism and of "civilizing" the supposedly "lower" peoples of Africa and other undeveloped places. This was a religious motive for many Christian missionaries, who sought to save the souls of the "uncivilized" and acted on the belief that European Christians were morally superior. Most of the missionaries who supported imperialism did so because they felt the only "true" religion was their own. French, Spanish and Italian Catholic missionaries, for their part, opposed the Protestant British, German, and American missionaries. At times, imperialism did benefit the people of the colonies, since the missionaries helped end slavery in some areas. Europeans accordingly claimed that they were there only to protect the weaker tribal groups they conquered. The missionaries and other leaders suggested that they should stop such "savage" practices as cannibalism, idolatry and child marriage. This humanitarian ideal was expressed in poems such as "The White Man's Burden" and other literature. In many instances the humanitarianism was sincere, but the choices that flowed from it were often misguided and not in the best interests of the conquered areas and the people living there. As a result, some modern historical revisionists have suggested that new imperialism was driven more by the idea of racial and cultural supremacism, and that claims of "humanitarianism" were either insincere or used as pretexts for territorial expansion.
Dutch Ethical Policy
The Dutch Ethical Policy was the dominant reformist and liberal strand of colonial policy in the Dutch East Indies during the 20th century. In 1901, the Dutch Queen Wilhelmina announced that the Netherlands accepted an ethical responsibility for the welfare of its colonial subjects, even though the policy remained clearly discriminatory towards the colonised peoples. This announcement was a sharp contrast with the former official doctrine that Indonesia was mainly a wingewest (region for making profit). It marked the start of modern development policy, implemented and practised by Alexander Willem Frederik Idenburg; other colonial powers, by contrast, usually spoke of a civilizing mission, which mainly involved spreading their culture to colonized peoples. The Dutch Ethical Policy emphasised improvement in material living conditions. The policy suffered, however, from serious underfunding, inflated expectations and lack of acceptance in the Dutch colonial establishment, and it had largely ceased to exist by the onset of the Great Depression in 1929. It did, however, create an educated indigenous elite able to articulate, and eventually establish, independence from the Netherlands.
Theories
The "accumulation theory" adopted by Karl Kautsky and John A. Hobson and popularized by Vladimir Lenin centered on the accumulation of surplus capital during and after the Industrial Revolution: restricted opportunities at home, the argument goes, drove financial interests to seek more profitable investments in less-developed lands with lower labor costs, unexploited raw materials and little competition. Hobson's analysis fails to explain colonial expansion on the part of less industrialized nations with little surplus capital, such as Italy, or the great powers of the next century—the United States and Russia—which were in fact net borrowers of foreign capital. Also, military and bureaucratic costs of occupation frequently exceeded financial returns. In Africa (exclusive of what would become the Union of South Africa in 1909) the amount of capital investment by Europeans was relatively small before and after the 1880s, and the companies involved in tropical African commerce exerted limited political influence. The "World-Systems theory" approach of Immanuel Wallerstein sees imperialism as part of a general, gradual extension of capital investment from the "core" of the industrial countries to a less developed "periphery." Protectionism and formal empire were the major tools of "semi-peripheral," newly industrialized states, such as Germany, seeking to usurp Britain's position at the "core" of the global capitalist system. Echoing Wallerstein's global perspective to an extent, the imperial historian Bernard Porter views Britain's adoption of formal imperialism as a symptom and an effect of her relative decline in the world, not of strength: "Stuck with outmoded physical plants and outmoded forms of business organization, [Britain] now felt the less favorable effects of being the first to modernize."
Timeline
1832: Britain annexed the Falkland Islands.
1837: Britain annexed the Pitcairn Islands.
1839: Britain conquered Aden from the Sultanate of Lahej.
1840: Britain established the Colony of New Zealand.
1843: Britain received Hong Kong Island from China.
1848: Britain annexed the Sikh Empire in Punjab.
1853: France annexed New Caledonia.
1854: Division of the Kuril Islands and Sakhalin between Russia and Japan.
1857: Britain suppressed the Indian Rebellion.
1859: Britain annexed the Cocos (Keeling) Islands and Perim. Completion of the French conquest of Algeria.
1862: Creation of French Cochinchina. British Honduras declared a colony.
1867: United States purchased Alaska from Russia.
1869: Japan annexed Hokkaido.
1870: Russia annexed Novaya Zemlya.
1874: Britain established the Colony of Fiji.
1875: Japan annexed the Bonin Islands.
1879: Japan annexed the Ryukyu Islands.
1881: France annexed Tunisia.
1882: Britain occupied Egypt.
1884: Argentina completed the Conquest of the Desert in Patagonia.
1885: Britain completed the conquest of Myanmar. Belgian king established the Congo Free State. German protectorate over the Marshall Islands.
1887: British protectorate over the Maldives. Creation of French Somaliland.
1888: Britain annexed Christmas Island. Creation of British Somaliland. Germany annexed Nauru. Chile annexed Easter Island.
1889: Creation of French Polynesia.
1890: British protectorate over Zanzibar. Creation of Italian Eritrea.
1892: Britain annexed Banaba Island and the Gilbert Islands.
1895: China ceded Taiwan and Penghu to Japan.
1898: United States annexed Hawaii and acquired Puerto Rico, Guam, and the Philippines from Spain; Cuba came under US occupation.
1898: Division of the Samoan Islands into German Samoa and American Samoa.
1897: France annexed Madagascar.
1900: British protectorate over Tonga.
1906: Britain and France established the New Hebrides condominium.
1908: France annexed the Comoro Islands.
1910: Japan annexed the Korean Empire.
1915: Britain annexed Cyprus.
See also
Afro-Asia
Dollar diplomacy, US about 1910
Historiography of the British Empire
Imperialism
International relations (1814–1919)
Russian Empire
Soviet Empire
Timeline of European imperialism
Imperialism in Asia
People
Otto von Bismarck, Germany
Cecil Rhodes, Britain
Joseph Chamberlain, Britain
Jules Ferry, France
Napoléon III, France
Victor Emmanuel III of Italy
Theodore Roosevelt, United States
Emperor Meiji, Japan
Julio Argentino Roca, Argentina
Porfirio Díaz, Mexico
Notes
References
Further reading
Albrecht-Carrié, René. A Diplomatic History of Europe Since the Congress of Vienna (1958), 736pp; basic survey.
Aldrich, Robert. Greater France: A History of French Overseas Expansion (1996).
Anderson, Frank Maloy, and Amos Shartle Hershey, eds. Handbook for the Diplomatic History of Europe, Asia, and Africa, 1870–1914 (1918); highly detailed summary prepared for use by the American delegation to the Paris peace conference of 1919. full text
Baumgart, W. Imperialism: The Idea and Reality of British and French Colonial Expansion 1880–1914 (1982).
Betts, Raymond F. Europe Overseas: Phases of Imperialism (1968), 206pp; basic survey.
Cady, John Frank. The Roots of French Imperialism in Eastern Asia (1967).
Cain, Peter J., and Anthony G. Hopkins. "Gentlemanly capitalism and British expansion overseas II: new imperialism, 1850–1945." The Economic History Review 40.1 (1987): 1–26.
Hinsley, F.H., ed. The New Cambridge Modern History, vol. 11, Material Progress and World-Wide Problems 1870–1898 (1979).
Hodge, Carl Cavanagh. Encyclopedia of the Age of Imperialism, 1800–1914 (2 vol., 2007); online.
Langer, William. An Encyclopedia of World History (5th ed. 1973); highly detailed outline of events; 1948 edition online.
Langer, William. The Diplomacy of Imperialism 1890–1902 (1950); advanced comprehensive history; online copy free to borrow; also see online review.
Manning, Patrick. Francophone Sub-Saharan Africa, 1880–1995 (1998); online.
Imperialism & World Politics (1926), Comprehensive coverage; online Mowat, C. L., ed. The New Cambridge Modern History, Vol. 12: The Shifting Balance of World Forces, 1898–1945 (1968); online Page, Melvin E. et al. eds. Colonialism: An International Social, Cultural, and Political Encyclopedia (2 vol 2003) Pakenham, Thomas. The Scramble for Africa: White Man's Conquest of the Dark Continent from 1876–1912 (1992) Stuchtey, Benedikt, ed. Colonialism and Imperialism, 1450–1950, European History Online, Mainz: Institute of European History, 2011 Taylor, A.J.P. The Struggle for Mastery in Europe 1848–1918 (1954) 638pp; advanced history and analysis of major diplomacy; online External links J.A. Hobson's Imperialism: A Study: A Centennial Retrospective by Professor Peter Cain Extensive information on the British Empire British Empire The Empire Strikes Out: The "New Imperialism" and Its Fatal Flaws by Ivan Eland, director of defense policy studies at the Cato Institute. (an article comparing contemporary defense policy with those of New Imperialism (1870–1914) The Martian Chronicles: History Behind the Chronicles New Imperialism 1870–1914 1- Coyne, Christopher J. and Steve Davies. "Empire: Public Goods and Bads" (Jan 2007). Wayback Machine Imperialism – Internet History Sourcebooks – Fordham University The New Imperialism (a course syllabus) The 19th Century: The New Imperialism 2- Coyne, Christopher J. and Steve Davies. "Empire: Public Goods and Bads" (Jan 2007). Wayback Machine 19th century European colonisation of Africa Western culture
The Lessons of History
The Lessons of History is a 1968 book by historians Will Durant and Ariel Durant. The book provides a summary of periods and trends in history they had noted upon completion of the 10th volume of their momentous eleven-volume The Story of Civilization. Will Durant stated that he and Ariel "made note of events and comments that might illuminate present affairs, future probabilities, the nature of man, and the conduct of states." Thus, the book presents an overview of the themes and lessons observed from 5,000 years of human history, examined from 12 perspectives: geography, biology, race, character, morals, religion, economics, socialism, government, war, growth and decay, and progress. Reception John Barkham called the work a "masterpiece of distillation", praising the authors' balanced treatment of such concepts as the trade-offs between liberty and equality and the tensions between religion and secularism in modern societies. See also Historic recurrence Notes References Will and Ariel Durant, The Lessons of History, 1st ed., New York, Simon & Schuster, 1968. External links Books by Will Durant History books 1968 non-fiction books Interdisciplinary historical research Works about the theory of history World history
Unilineal evolution
Unilineal evolution, also referred to as classical social evolution, is a 19th-century social theory about the evolution of societies and cultures. It was composed of many competing theories by various anthropologists and sociologists, who believed that Western culture is the contemporary pinnacle of social evolution. Different social status is aligned in a single line that moves from most primitive to most civilized. This theory is now generally considered obsolete in academic circles. Intellectual thought Theories of social and cultural evolution are common in modern European thought. Prior to the 18th century, Europeans predominantly believed that societies on Earth were in a state of decline. European society held up the world of antiquity as a standard to aspire to, and ancient Greece and ancient Rome produced levels of technical accomplishment which Europeans of the Middle Ages sought to emulate. At the same time, Christianity taught that people lived in a debased world fundamentally inferior to the Garden of Eden and Heaven. During the Age of Enlightenment, however, European self-confidence grew and the notion of progress became increasingly popular. It was during this period that what would later become known as 'sociological and cultural evolution' would have its roots. The Enlightenment thinkers often speculated that societies progressed through stages of increasing development and looked for the logic, order and the set of scientific truths that determined the course of human history. Georg Wilhelm Friedrich Hegel, for example, argued that social development was an inevitable and determined process, similar to an acorn which has no choice but to become an oak tree. Likewise, it was assumed that societies start out primitive, perhaps in a Hobbesian state of nature, and naturally progress toward something resembling industrial Europe. Scottish thinkers While earlier authors such as Michel de Montaigne discussed how societies change through time, it was truly the Scottish Enlightenment which proved key in the development of cultural evolution. After Scotland's union with England in 1707, several Scottish thinkers pondered on the relationship between progress and the 'decadence' brought about by increased trade with England and the affluence it produced. The result was a series of conjectural histories. Authors such as Adam Ferguson, John Millar, and Adam Smith argued that all societies pass through a series of four stages: hunting and gathering, pastoralism and nomadism, agricultural, and finally a stage of commerce. These thinkers thus understood the changes Scotland was undergoing as a transition from an agricultural to a mercantile society. Philosophical concepts of progress (such as those expounded by the German philosopher G.W.F. Hegel) developed as well during this period. In France authors such as Claude Adrien Helvétius and other philosophers were influenced by this Scottish tradition. Later thinkers such as Comte de Saint-Simon developed these ideas. Auguste Comte in particular presented a coherent view of social progress and a new discipline to study it: sociology. Rising interests These developments took place in a wider context. The first process was colonialism. Although Imperial powers settled most differences of opinion with their colonial subjects with force, increased awareness of non-Western peoples raised new questions for European scholars about the nature of society and culture. 
Similarly, effective administration required some degree of understanding of other cultures. Emerging theories of social evolution allowed Europeans to organize their new knowledge in a way that reflected and justified their increasing political and economic domination of others: colonized people were less-evolved, colonizing people were more evolved. The second process was the Industrial Revolution and the rise of capitalism which allowed and promoted continual revolutions in the means of production. Emerging theories of social evolution reflected a belief that the changes in Europe wrought by the Industrial Revolution and capitalism were obvious improvements. Industrialization, combined with the intense political change brought about by the French Revolution and US Constitution which were paving the way for the dominance of democracy, forced European thinkers to reconsider some of their assumptions about how society was organized. Eventually, in the 19th century, three great classical theories of social and historical change were created: the social evolutionism theory, the social cycle theory and the Marxist historical materialism theory. Those theories had one common factor: they all agreed that the history of humanity is pursuing a certain fixed path, most likely that of the social progress. Thus, each past event is not only chronologically, but causally tied to the present and future events. Those theories postulated that by recreating the sequence of those events, sociology could discover the laws of history. Birth and development While social evolutionists agree that the evolution-like process leads to social progress, classical social evolutionists have developed many different theories, known as theories of unilineal evolution. Social evolutionism was the prevailing theory of early socio-cultural anthropology and social commentary, and is associated with scholars like Auguste Comte, Edward Burnett Tylor, Lewis Henry Morgan, and Herbert Spencer. Social evolutionism represented an attempt to formalize social thinking along scientific lines, later influenced by the biological theory of evolution. If organisms could develop over time according to discernible, deterministic laws, then it seemed reasonable that societies could as well. This really marks the beginning of Anthropology as a scientific discipline and a departure from traditional religious views of "primitive" cultures. The term "classical social evolutionism" is most closely associated with the 19th-century writings of Auguste Comte, Herbert Spencer (who coined the phrase "survival of the fittest") and William Graham Sumner. In many ways Spencer's theory of 'cosmic evolution' has much more in common with the works of Jean-Baptiste Lamarck and Auguste Comte than with contemporary works of Charles Darwin. Spencer also developed and published his theories several years earlier than Darwin. In regard to social institutions, however, there is a good case that Spencer's writings might be classified as 'Social Evolutionism'. Although he wrote that societies over time progressed, and that progress was accomplished through competition, he stressed that the individual (rather than the collectivity) is the unit of analysis that evolves, that evolution takes place through natural selection and that it affects social as well as biological phenomenon. 
Progressivism
Both Spencer and Comte viewed society as a kind of organism subject to a process of growth—from simplicity to complexity, from chaos to order, from generalization to specialization, from flexibility to organization. They agreed that the growth of societies can be divided into stages with a beginning and an eventual end, and that this growth is in fact social progress: each newer, more evolved society is better. Thus progressivism became one of the basic ideas underlying the theory of social evolutionism.
Auguste Comte
Auguste Comte, known as the father of sociology, formulated the law of three stages: human development progresses from the theological stage, in which nature is mythically conceived and man seeks the explanation of natural phenomena in supernatural beings; through the metaphysical stage, in which nature is conceived of as the result of obscure forces and man seeks the explanation of natural phenomena in them; to the final positive stage, in which all abstract and obscure forces are discarded and natural phenomena are explained by their constant relationships. This progress is driven by the development of the human mind and the increasing application of thought, reasoning and logic to the understanding of the world.
Herbert Spencer
Herbert Spencer, who believed that society was evolving toward increasing freedom for individuals and therefore held that government intervention ought to be minimal in social and political life, differentiated between two phases of development, focusing on the type of internal regulation within societies. Thus he distinguished between military and industrial societies. The earlier, more primitive military society has the goal of conquest and defence; it is centralised, economically self-sufficient and collectivistic, puts the good of the group over the good of the individual, uses compulsion, force and repression, and rewards loyalty, obedience and discipline. The industrial society has the goal of production and trade; it is decentralised, interconnected with other societies via economic relations, achieves its goals through voluntary cooperation and individual self-restraint, treats the good of the individual as the highest value, regulates social life via voluntary relations, and values initiative, independence and innovation. Regardless of how scholars of Spencer interpret his relation to Darwin, Spencer proved to be an incredibly popular figure in the 1870s, particularly in the United States. Authors such as Edward L. Youmans, William Graham Sumner, John Fiske, John W. Burgess, Lester Frank Ward, Lewis H. Morgan and other thinkers of the Gilded Age all developed theories of social evolutionism as a result of their exposure to Spencer as well as Darwin.
Lewis H. Morgan
In his 1877 classic Ancient Society, the lawyer and anthropologist Lewis H. Morgan followed Montesquieu and Tylor in distinguishing three eras: savagery, barbarism and civilisation, with Morgan introducing further subdivisions into the first two stages. Morgan attempted to assign particular cultures to one of his stages on the basis of their level of technological development, which for him correlated in each case with patterns of subsistence, kinship and political structures. Thus Morgan introduced a link between social progress and technological progress. He viewed technological progress as a force behind social progress, and any social change—in social institutions, organisations or ideologies—as having its beginnings in changes in technology.
Morgan disagreed with the accusation of unilinealism, writing: "In speaking thus positively of the several forms of the family in their relative order, there is a danger of being misunderstood. I do not mean to imply that one form rises complete in a certain status in society, flourishes universally and exclusively wherever tribes are found in the same status, and then disappears in another, which is the next higher form..." Morgan thus argued that the forms evolved unevenly and in different combinations of elements. Morgan's theories were popularised by Friedrich Engels, who based his The Origin of the Family, Private Property and the State on them. For Engels and other Marxists, this theory was important as it supported their conviction that materialistic factors—economic and technological—are decisive in shaping the fate of humanity.
Émile Durkheim
Émile Durkheim, another of the 'fathers' of sociology, developed a similar, dichotomous view of social progress. His key concept was social solidarity, and he defined social evolution as a progression from mechanical solidarity to organic solidarity. In mechanical solidarity, people are self-sufficient, there is little integration, and force and repression are therefore needed to keep society together. In organic solidarity, people are much more integrated and interdependent, and specialisation and cooperation are extensive. Progress from mechanical to organic solidarity is based first on population growth and increasing population density, second on increasing 'morality density' (the development of more complex social interactions) and third on increasing specialisation in the workplace. To Durkheim, the most important factor in social progress is the division of labor.
Edward Burnett Tylor and Lewis H. Morgan
The anthropologists Edward Burnett Tylor in England and Lewis H. Morgan in the United States worked with data from indigenous people, whom they claimed represented earlier stages of cultural evolution, giving insight into its process and progression. Morgan would later have a significant influence on Karl Marx and Friedrich Engels, who developed a theory of cultural evolution in which the internal contradictions in society created a series of escalating stages that ended in a socialist society (see Marxism). Tylor and Morgan elaborated upon, modified and expanded the theory of unilinear evolution, specifying criteria for categorizing cultures according to their standing within a fixed system of growth of humanity as a whole while examining the modes and mechanisms of this growth. Their analysis of cross-cultural data was based on three assumptions: that contemporary societies may be classified and ranked as more "primitive" or more "civilized"; that there is a determinate number of stages between "primitive" and "civilized" (e.g. band, tribe, chiefdom, and state); and that all societies progress through these stages in the same sequence, but at different rates. Theorists usually measured progression (that is, the difference between one stage and the next) in terms of increasing social complexity (including class differentiation and a complex division of labor), or an increase in intellectual, theological, and aesthetic sophistication. These 19th-century ethnologists used these principles primarily to explain differences in religious beliefs and kinship terminologies among various societies.
Lester Frank Ward
There were, however, notable differences between Lester Frank Ward's and Tylor's approaches.
Lester Frank Ward developed Spencer's theory, but unlike Spencer, who considered evolution to be a general process applicable to the entire world, physical and sociological, Ward differentiated sociological evolution from biological evolution. He stressed that humans create goals for themselves and strive to realise them, whereas there is no such intelligence and awareness guiding the non-human world, which develops more or less at random. He created a hierarchy of evolutionary processes. First there is cosmogenesis, the creation and evolution of the world. Then, after life develops, there is biogenesis. The development of humanity leads to anthropogenesis, which is influenced by the human mind. Finally, when society develops, there is sociogenesis, the shaping of society to fit various political, cultural and ideological goals. Edward Burnett Tylor, a pioneer of anthropology, focused on the evolution of culture worldwide, noting that culture is an important part of every society and that it too is subject to the process of evolution. He believed that societies were at different stages of cultural development and that the purpose of anthropology was to reconstruct the evolution of culture, from primitive beginnings to the modern state.
Ferdinand Tönnies
Ferdinand Tönnies describes the evolution as a development from informal society, in which people have many liberties and there are few laws and obligations, to modern, formal, rational society, dominated by traditions and laws, in which people are restricted from acting as they wish. He also notes a tendency towards standardization and unification, in which all smaller societies are absorbed into a single, large, modern society. Thus Tönnies can be said to describe part of the process known today as globalisation. Tönnies was also one of the first sociologists to claim that the evolution of society is not necessarily going in the right direction: social progress is not perfect, and can even be called a regress, as newer, more evolved societies are obtained only at a high cost, resulting in decreasing satisfaction among the individuals who make up that society. Tönnies' work became the foundation of neo-evolutionism.
Critique and impact
Franz Boas
The early 20th century inaugurated a period of systematic critical examination and rejection of unilineal theories of cultural evolution. Cultural anthropologists such as Franz Boas, typically regarded as the leader of anthropology's rejection of classical social evolutionism, used sophisticated ethnography and more rigorous empirical methods to argue that Spencer, Tylor, and Morgan's theories were speculative and systematically misrepresented ethnographic data. Additionally, they rejected the distinction between "primitive" and "civilized" (or "modern"), pointing out that so-called primitive contemporary societies have just as much history, and are just as evolved, as so-called civilized societies. They therefore argued that any attempt to use this theory to reconstruct the histories of non-literate (i.e. leaving no historical documents) peoples is entirely speculative and unscientific. They observed that the postulated progression, which typically ended at a stage of civilization identical to that of modern Europe, is ethnocentric. They also pointed out that the theory assumes that societies are clearly bounded and distinct, when in fact cultural traits and forms often cross social boundaries and diffuse among many different societies, diffusion thus being an important mechanism of change.
Boas, in his culture history approach, focused on anthropological fieldwork in an attempt to identify factual processes instead of what he criticized as speculative stages of growth.
Global change
Later critics observed that this assumption of firmly bounded societies was proposed precisely at the time when European powers were colonizing non-Western societies, and was thus self-serving. Many anthropologists and social theorists now consider unilineal cultural and social evolution a Western myth seldom based on solid empirical grounds. Critical theorists argue that notions of social evolution are simply justifications for power by the elites of society. Finally, the devastating World Wars that occurred between 1914 and 1945 crippled Europe's internal confidence, shaking the remaining belief in Western civilization's superiority. After millions of deaths, genocide, and the destruction of Europe's industrial infrastructure, the idea of linear progress with Western civilization furthest along seemed dubious at best.
Major objections and concerns
Modern socio-cultural evolutionism thus rejects most of classical social evolutionism because of various theoretical problems:
The theory was deeply ethnocentric: it made heavy value judgements on different societies, with Western civilization seen as the most valuable.
It assumed all cultures follow the same path or progression and have the same goals.
It equated civilization with material culture (technology, cities, etc.).
It equated evolution with progress or fitness, based on deep misunderstandings of evolutionary theory.
It is contradicted by evidence: some (but not all) supposedly primitive societies are arguably more peaceful and equitable/democratic than many modern societies.
Because social evolution was posited as a scientific theory, it was often used to support unjust and often racist social practices—particularly colonialism, slavery, and the unequal economic conditions present within industrialized Europe.
See also
Multilineal evolution
Social Darwinism
World-systems theory
Orthogenesis
References
Anthropology Sociocultural evolution theory
History of Western civilization before AD 500
Western civilization describes the development of human civilization beginning in Ancient Greece, and generally spreading westwards. It can be strongly associated with nations linked to the former Western Roman Empire and with Medieval Western Christendom. The civilizations of Classical Greece (Hellenic) and Roman Empire (Latin) as well as Ancient Israel (Hebraism) and early Christendom are considered seminal periods in Western history;. From Ancient Greece sprang belief in democracy, and the pursuit of intellectual inquiry into such subjects as truth and beauty; from Rome came lessons in government administration, martial organization, engineering and law; and from Ancient Israel sprang Christianity with its ideals of the brotherhood of humanity. Strong cultural contributions also emerged from the pagan Germanic, Celtic, Wendic, Finnic, Baltic and Nordic peoples of pre-Christian Europe. Following the 5th-century "Fall of Rome", Europe entered the Middle Ages, during which period the Catholic Church filled the power vacuum left in the West by the fallen Roman Empire, while the Eastern Roman Empire (Byzantine Empire) endured for centuries. Origins of the notion of "East" and "West" The opposition of a European "West" to an Asiatic "East" has its roots in Classical Antiquity, with the Persian Wars where the Greek city states (depicted as the west) were opposing the expansion of the Achaemenid Empire (depicted as the east). The Biblical opposition of Israel and Assyria from a European perspective was recast into these terms by early Christian authors such as Jerome, who compared it to the "barbarian" invasions of his own time (see also Assyria and Germany in Anglo-Israelism). The "East" in the Hellenistic period was the Seleucid Empire, with Greek influence stretching as far as Bactria and India, besides Scythia in the Pontic steppe to the north. In this period, there was significant cultural contact between the Mediterranean and the East, giving rise to syncretisms like Greco-Buddhism. The establishment of the Byzantine Empire around the 4th century established a political division of Europe into East and West and laid the foundations for divergent cultural directions, confirmed centuries later with the Great Schism between Eastern Orthodox Christianity and Roman Catholic Christianity. The Mediterranean and the Ancient West The earliest civilizations which influenced the development of the West were those of Mesopotamia, the area of the Tigris–Euphrates river system, largely corresponding to modern-day Iraq, northeastern Syria, southeastern Turkey and southwestern Iran: the cradle of civilization. An agricultural revolution began here around 10,000 years ago with the domestication of animals like sheep and goats and the appearance of new wheat hybrids, notably bread wheat, at the completion of the last Ice Age, which allowed for a transition from nomadism to village settlements and then cities like Jericho. The Sumerians, Akkadians, Babylonians and Assyrians all flourished in this region. Soon after the Sumerian civilization began, the Nile River valley of ancient Egypt was unified under the Pharaohs in the 4th millennium BC, and civilization quickly spread through the Fertile Crescent to the eastern coast of the Mediterranean Sea and throughout the Levant. The Phoenicians, Israelites and others later built important states in this region. The ancient peoples of the Mediterranean heavily influenced the origins of Western civilisation. 
The Mediterranean Sea provided reliable shipping routes linking Asia, Africa and Europe along which political and religious ideas could be traded along with raw materials such as timber, copper, tin, gold and silver as well as agricultural produce and necessities such as wine, olive oil, grain and livestock. By 3100 BC, the Egyptians were employing sails on boats on the Nile River and the subsequent development of the technology, coupled with knowledge of the wind and stars allowed naval powers such as the Phoenicians, Greeks, Carthaginians and Romans to navigate long distances and control large areas by commanding the sea. Cargo galleys often also employed slave oarsmen to power their ships and slavery was an important feature of the ancient Western economy. Thus, the great ancient capitals were linked — cities such as: Athens, home to Athenian democracy, and the Greek philosophers Aristotle, Plato and Socrates; the city of Jerusalem, the Jewish capital, where Jesus of Nazareth preached and was executed around AD 30; and the city of Rome, which gave rise to the Roman Empire which encompassed much of Europe and the Mediterranean. Knowledge of Greek, Roman and Judeo-Christian influence on the development of Western civilization is well documented because it attached to literate cultures, however, Western history was also strongly influenced by less literate groups such as the Germanic, Scandinavian and Celtic peoples who lived in Western and Northern Europe beyond the borders of the Roman world. Nevertheless, the Mediterranean was the centre of power and creativity in the development of ancient Western civilisation. Around 1500 BC, metallurgists learned to smelt iron ore, and by around 800BC, iron tools and weapons were common along the Aegean Sea, representing a major advance for warfare, agriculture and crafts in Greece. The earliest urban civilizations of Europe belong to the Bronze Age Minoans of Crete and Mycenaean Greece, which ended around the 11th century BC as Greece entered the Greek Dark Ages. Ancient Greece was the civilization belonging to the period of Greek history lasting from the Archaic period of the 8th to 6th centuries BC to 146 BC and the Roman conquest of Greece after the Battle of Corinth. Classical Greece flourished during the 5th to 4th centuries BC. Under Athenian leadership, Greece successfully repelled the military threat of Persian invasion at the battles of Marathon and Thermopylae. The Athenian Golden Age ended with the defeat of Athens at the hands of Sparta in the Peloponnesian War in 404 BC. By the 6th century BC, Greek colonists had spread far and wide — from the Russian Black Sea coast to the Spanish Mediterranean and through modern Italy, North Africa, Crete, Cyprus and Turkey. The Ancient Olympic Games are said to have begun in 776 BC and grew to be a major cultural event for the citizens of the Greek diaspora, who met every four years to compete in such sporting events as running, throwing, wrestling and chariot driving. Trade flourished and by 670 BC the barter economy was being replaced by a money economy, with Greeks minting coins in such places as the island of Aegina. Poultry arrived from India around 600 BC and would grow to be a European staple. 
The Hippocratic Oath, historically taken by doctors swearing to practice medicine ethically, is said to have been written by the Greek Hippocrates, often regarded as the father of western medicine, in Ionic Greek (late 5th century BC), The Greek city states competed and warred with each other, with Athens rising to be the most impressive. Learning from the Egyptians, Athenian art and architecture shone from 520 to 420 BC and the city completed the Parthenon around 447 BC to house a statue of their city goddess Athena. The Athenians also experimented with democracy. Property owners assembled almost weekly to make speeches and instruct their temporary rulers: a council of 500, chosen by lot or lottery, whose members could only serve a total of 2 years in a lifetime, and a smaller, high council from whom one man was selected by lottery to preside from sunset to the following sunset. Thus, the citizens' assembly shared power and prevented lifetime rulers from taking control. Military chiefs were exempt from the short term requirement however and were elected, rather than chosen by lot. Eloquent oratory became a Greek art form as speakers sought to sway large crowds of voters. Athenians believed in 'democracy' but not in equality and excluded women, slaves, the poor and foreigners from the assembly. Notions of a general "brotherhood of man" were yet to emerge. Ancient Greek philosophy arose in the 6th century BC and continued through the Hellenistic period, at which point Ancient Greece was incorporated into the Roman Empire. It dealt with a wide variety of subjects, including political philosophy, ethics, metaphysics, ontology, logic, biology, rhetoric, and aesthetics. Plato was a Classical Greek philosopher, mathematician and writer of philosophical dialogues. He was the founder of the Academy in Athens which was the first institution of higher learning in the Western world. Inspired by the admonition of his mentor, Socrates, prior to his unjust execution that "the unexamined life is not worth living", Plato and his student, the political scientist Aristotle, helped lay the foundations of Western philosophy and science. Plato's sophistication as a writer is evident in his Socratic dialogues. In classical tradition, Homer is the ancient Greek epic poet, author of the Iliad, the Odyssey and other works. Homer's epics stand at the beginning of the western canon of literature, exerting enormous influence on the history of fiction and literature in general. Alexander the Great (356 BC-323 BC) was a Greek king of Macedon and the creator of one of the largest empires in ancient history. He was tutored by the philosopher Aristotle and, as ruler, broke the power of Persia, overthrew the Persian king Darius III and conquered the Persian Empire. His Macedonian Empire stretched from the Adriatic Sea to the Indus River. He died in Babylon in 323 BC and his empire did not long survive his death. Nevertheless, the settlement of Greek colonists around the region had long lasting consequences and Alexander features prominently in Western history and mythology. The city of Alexandria in Egypt, which bears his name and was founded in 330 BC, became the successor to Athens as the intellectual cradle of the Western World. The city hosted such leading lights as the mathematician Euclid and anatomist Herophilus; constructed the great Library of Alexandria; and translated the Hebrew Bible into Greek (called the Septuagint for it was the work of 70 translators). 
The ancient Greeks excelled in engineering, science, logic, politics and medicine. Classical Greek culture had a powerful influence on the Roman Empire, which carried a version of it to many parts of the Mediterranean region and Europe, for which reason Classical Greece is generally considered to be the seminal culture which provided the foundation of Western civilization. Ancient Rome was a civilization that grew out of a small agricultural community, founded on the River Tiber on the Italian Peninsula as early as the 10th century BC. Located along the Mediterranean Sea and centered on the city of Rome, the Roman Empire became one of the largest empires in the ancient world. In its centuries of existence, Roman civilization shifted from a monarchy to an oligarchic republic to an increasingly autocratic empire. It came to dominate South-Western Europe, South-Eastern Europe/the Balkans and the Mediterranean region through conquest and assimilation. Originally ruled by kings who governed the settlement and a small area of land nearby, the Romans established a republic in 509 BC that would last for five centuries. Initially a small number of families shared power; later, representative assemblies and elected leaders ruled. Rome remained a minor power on the Italian peninsula, but found a talent for producing soldiers and sailors and, after subduing the Sabines, Etruscans and Piceni, began to challenge the power of Carthage. By 240 BC, Rome controlled the formerly Greek-controlled island of Sicily. Following the defeat in 202 BC of the bold Carthaginian general Hannibal, who had led an army spearheaded by war elephants over the Alps into Italy, the Romans were able to expand their overseas empire into North Africa. Roman engineers built arterial roads throughout their empire, beginning with the Appian Way through Italy in 312 BC. Along such roads marched soldiers, merchants, slaves and citizens to all corners of a flourishing mercantile empire. Roman engineering was so formidable that roads, bridges and aqueducts survive in impressive scale and quantity to the present day. According to the historian Geoffrey Blainey, the population of the Imperial capital was probably the first in the world to approach one million people. It eventually boasted monumental public buildings, such as the Colosseum (dedicated to sport), the bathhouses (dedicated to leisure) and the Roman Forum (dedicated to civic affairs). Slavery helped power the economy, but also created occasional tension, as in the slave rebellion led by Spartacus, which was put down in 71 BC. Julius Caesar (100 BC-44 BC) was a Roman general and statesman who played a critical role in the gradual transformation of the Roman Republic into the Roman Empire. Conspirators who feared he was seeking to re-establish a monarchy assassinated him on the floor of the Roman Senate in 44 BC. His anointed successor Augustus outmaneuvered his opponents to reign as a de facto emperor from 27 BC. His successors became all-powerful and demanded veneration as gods. Rome entered its period of Imperial rule, and stability (albeit often marred by occasional bouts of apparent insanity among various god-emperors) returned to the Empire. Roman civilization and history contributed greatly to the development of government, law, war, art, literature, architecture, technology, religion, and language in the Western world.
Ecclesiastical Latin, the Roman Catholic Church's official language, remains a living legacy of the classical world to the contemporary world but the Classical languages of the Ancient Mediterranean influenced every European language, imparting to each a learned vocabulary of international application. It was, for many centuries, the international lingua franca and Latin itself evolved into the Romance languages, while Ancient Greek evolved into Modern Greek. Latin and Greek continue to influence English, not least in the specialised vocabularies of science, technology and the law. Judaism and the rise of Christendom The history of Judaism goes back 4000 years. The Hebrews were nomads who emerged from indigenous Canaanites and nearby deserts. The Hebrews (the name signified 'wanderer') formed one of the most enduring monotheistic religions, and the oldest to survive into the present day. Abraham is traditionally considered as the father of the Jewish people, and Moses the law giver, who led them out of slavery in Egypt and delivered them to the "Promised Land" of Israel. While the historicity of these accounts is not considered precise, the stories of the Hebrew Bible have been an inspiration for vast quantities of Western art, literature and scholarship. Around 1,000 BC, the Israelites had a period of power under King David who captured Jerusalem. His son King Solomon constructed the first magnificent Temple at Jerusalem for the worship of God. The Jews rejected the polytheism common to that age and would worship only Yahweh, whose Ten Commandments instructed them on morality. These Ten Commandments remain influential in the West and prohibited theft, lying and adultery; call for the worship of only one God; and for respect and honour for parents and neighbours. The Jews observed Sabbath as a "day of rest" (called "one of the first wide-ranging laws of social-welfare in the world" by the historian Geoffrey Blainey). In 587 BC, the Neo-Babylonian Empire of Nebuchadnezzar II destroyed the Temple and the Jewish leaders went into exile to return a century later to face a succession of foreign rulers: Persian and Greek. Judaism's texts, traditions and values play a major role in later Abrahamic religions, including Christianity, Islam and the Baháʼí Faith. Many aspects of Judaism have also influenced secular Western ethics and law. In 63 BC, Judea became part of the Roman Empire and around 6 BC Jesus was born to a Jewish family in the town of Nazareth, as a consequence of which, worship of the god of Israel would come to spread through, and later dominate, the Western World. Later the Western calendar would be divided into Before Christ (BC) (meaning before Jesus was born) and Anno Domini (AD). Christianity began as a sect within Judaism in the mid-1st century arising out of the life and teachings of Jesus of Nazareth. The life of Jesus is recounted in the New Testament of the Bible, one of the bedrock texts of Western Civilization. According to the New Testament, Jesus was raised as the son of the Nazarenes Mary (called the "Blessed Virgin" and "Mother of God") and her husband Joseph (a carpenter). Jesus' birth is commemorated during Christmas. Jesus learned the texts of the Hebrew Bible and like his contemporary John the Baptist, became an influential preacher. He gathered Twelve Disciples to assist in his work. He was a persuasive teller of parables and moral philosopher. 
In orations such as the Sermon on the Mount, in parables such as the Good Samaritan, and in his declaration against hypocrisy, "Let he who is without sin cast the first stone", Jesus called on followers to worship God, act without violence or prejudice and care for the sick, hungry and poor. He criticized the privilege and hypocrisy of the religious establishment, which drew the ire of religious and civil authorities, who persuaded the Roman Governor of the province of Judaea, Pontius Pilate, to have him executed for subversion. In Jerusalem around AD 30, Jesus was crucified (nailed alive to a wooden cross) and died. According to the Bible, his body disappeared from his tomb three days later, because he had been resurrected from the dead. Easter celebrates this event. The early followers of Jesus, including the apostles Paul and Peter, carried a new theology concerning him throughout the Roman Empire and beyond, sowing the seeds of such institutions as the Catholic Church, of which Peter is remembered as the first pope. Paul, in particular, emphasised the universality of the faith, and the religion moved beyond the Jewish population of the Empire and Asia Minor. Later Jesus was called "Christ" (meaning "anointed one" in Greek), and thus his followers became known as Christians. Christians often faced persecution from authorities or antagonistic populations during these early centuries, particularly for their refusal to join in worshiping the emperors. The Emperor Nero famously blamed them for the Great Fire of Rome in AD 64 and condemned them to Damnatio ad bestias, a form of capital punishment in which people were maimed to death by animals in the circus arena. Nevertheless, carried through the synagogues, merchants and missionaries across the known world, the new religion quickly grew in size and influence. Emperor Constantine's Edict of Milan in AD 313 ended the persecutions, and his own conversion to Christianity was a significant turning point in history. In AD 325, Constantine convened the First Council of Nicaea to gain consensus and unity within Christianity, with a view to establishing it as the religion of the Empire. The council composed the Nicene Creed, which outlined a profession of the Christian faith. Constantine instituted Sunday as a Sabbath and "day of rest" for Roman society (though initially this applied only to urban dwellers). The population and wealth of the Roman Empire had been shifting east, and the division of Europe into a Western (Latin) and an Eastern (Greek) part was prefigured in the division of the Empire by the Emperor Diocletian in AD 285. Around 330, Constantine established the city of Constantinople as a new imperial city which would be the capital of the Byzantine Empire. Possessed of mighty fortifications and architectural splendour, the city would stand for another thousand years as a "Roman Capital". The Hagia Sophia Cathedral (later converted to a mosque following the Fall of Constantinople in 1453) is one of the greatest surviving examples of Byzantine architecture; with its vast dome and interior of mosaics and marble pillars, it was so richly decorated that the Emperor Justinian, the last emperor to speak Latin as a first language, is said to have proclaimed upon its completion in 562: "Solomon, I have surpassed thee!". The city of Rome itself never regained supremacy and was sacked by the Visigoths in 410 and the Vandals in 455.
Although cultural continuity and interchange would continue between these Eastern and Western Roman Empires, the history of Christianity and of Western culture took divergent routes, with a final Great Schism separating Roman and Eastern Christianity in 1054.

As the Western Roman Empire was starting to disintegrate, Augustine was Bishop of Hippo Regius, a Latin-speaking philosopher and theologian who lived in the Roman province of Africa. His writings were very influential in the development of Western Christianity: he developed the concept of the Church as a spiritual City of God (in a book of the same name), distinct from the material Earthly City, and his Confessions, which recounts his sinful youth and his conversion to Christianity, is widely considered the first autobiography in the canon of Western literature. Augustine profoundly influenced the coming medieval worldview.

The fall of Rome

In 476 the Western Roman Empire, which had ruled modern-day Italy, France, Belgium, Spain, Portugal, Austria, Switzerland and Great Britain for centuries, collapsed through a combination of economic decline and drastically reduced military strength, which allowed invasion by barbarian tribes originating in southern Scandinavia and modern-day northern Germany. Historical opinion is divided as to the reasons for the fall of Rome, but the societal collapse encompassed both the gradual disintegration of the political, economic, military and other social institutions of Rome and the barbarian invasions of Western Europe. In Britain, several Germanic tribes invaded, including the Angles and the Saxons. The Franks settled in Gaul (modern-day France, Belgium and parts of Switzerland) and Germania Inferior (the Netherlands), the Visigoths invaded Iberia, and Italy was conquered by the Ostrogoths.

The slow decline of the Western Empire occurred over a period of roughly three centuries, culminating in 476, when Romulus Augustus, the last Emperor of the Western Roman Empire, was deposed by Odoacer, a Germanic chieftain. Some modern historians question the significance of this date, and not simply because Julius Nepos, the legitimate emperor recognized by the East Roman Empire, continued to live in Salona, Dalmatia, until he was assassinated in 480. More importantly, the Ostrogoths who succeeded considered themselves upholders of the direct line of Roman traditions, and, as the historian Edward Gibbon noted, the Eastern Roman Empire continued until the Fall of Constantinople on May 29, 1453.

See also
Western world
Western culture
History of citizenship
History of Europe
Modern history
Role of the Catholic Church in Western civilization

Further reading
Bavaj, Riccardo: "The West": A Conceptual Exploration, European History Online, Mainz: Institute of European History, 2011, retrieved November 28, 2011.
Atlas of World Military History, edited by Richard Brooks
Almanac of World History by Patricia S. Daniels and Stephen G. Hyslop
The Millennium Time Tapestry by Matthew Hurff
The Earth and its Peoples, edited by Jean L. Woy
Greek Ways: How the Greeks Created Western Civilization by Bruce Thornton, Encounter Books, 2002
How the Irish Saved Civilization: The Untold Story of Ireland's Heroic Role from the Fall of Rome to the Rise of Medieval Europe by Thomas Cahill, 1995
Further viewing
Civilisation: A Personal View by Kenneth Clark, a 1969 BBC television series
The Ascent of Man, a 1973 BBC television series presented by Jacob Bronowski
Biodata
Biodata is the shortened form of biographical data. The term has two usages. In South Asia, it carries the same meaning as a résumé or curriculum vitae (CV), used for jobs, grants, and marriage. In industrial and organizational psychology it is used as a predictor of future behaviour; in this sense, biodata consists of "factual kinds of questions about life and work experiences, as well as items involving opinions, values, beliefs, and attitudes that reflect a historical perspective."

In South Asia

In South Asia (India, Pakistan, Afghanistan, Bangladesh and Nepal), a biodata is essentially a résumé or curriculum vitae (CV) used for jobs, grants, and marriage. Its purpose is the same as that of a résumé: to choose certain individuals from the pool of prospective candidates. A biodata generally contains the same type of information as a résumé (objective, work history, salary information and educational background, as well as personal details with respect to religion and nationality), but it may also include physical attributes such as height, weight, hair and eye colour, and a photograph.

Industrial and organizational psychology

In industrial and organizational psychology, because the respondent replies to questions about themselves, biodata contains elements of both biography and autobiography. The basis of biodata's predictive ability is the axiom that past behaviour is the best predictor of future behaviour. Biographical information is not expected to predict all future behaviours, but it is useful in personnel selection in that it can give an indication of probable future behaviours based on an individual's prior learning history. Biodata instruments (also called Biographical Information Blanks) have an advantage over personality and interest inventories in that they capture a person's past behaviour directly, probably the best predictor of his or her future actions. These measures deal with facts about the person's life, not with introspection or subjective judgement.

Over the years, personnel selection has relied on standardized psychological tests. The five major categories of these tests are intellectual abilities, spatial and mechanical abilities, perceptual accuracy, motor abilities and personality. The mean correlation coefficient between a standardized test of g (intellectual ability) and job performance is 0.51. A review of 58 studies on biodata found validity coefficients ranging from 0.32 to 0.46, with a mean validity of 0.35. The mean validity of interviews was found to be 0.19; research has indicated a validity coefficient of 0.29 for unstructured interviews and 0.31 for structured interviews, but interview results can be affected by interviewer biases and have been challenged in a number of court cases.

Biodata has been shown to be a valid and reliable means of predicting future performance from an applicant's past performance. A well-constructed biodata instrument is legally defensible and, unlike the interview, is not susceptible to error from rater biases or the halo effect. It has proven its worth in personnel selection as a cost-effective tool.
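The validity figures cited above are criterion validity coefficients: Pearson correlations between scores on the selection instrument and later job performance. The following minimal sketch shows that calculation; the data and variable names are invented for illustration and are not taken from the studies cited here.

from statistics import mean
from math import sqrt

def validity_coefficient(predictor, criterion):
    """Pearson correlation between selection scores and later performance ratings."""
    mx, my = mean(predictor), mean(criterion)
    cov = sum((x - mx) * (y - my) for x, y in zip(predictor, criterion))
    sx = sqrt(sum((x - mx) ** 2 for x in predictor))
    sy = sqrt(sum((y - my) ** 2 for y in criterion))
    return cov / (sx * sy)

# Hypothetical sample: biodata scores at hiring and supervisor ratings a year later.
biodata_scores = [62, 71, 55, 80, 67, 74, 59, 88]
performance = [3.1, 3.8, 2.9, 4.2, 3.3, 3.9, 3.0, 4.5]

print(round(validity_coefficient(biodata_scores, performance), 2))

The printed value is the correlation for this toy sample only; published biodata validities, averaged over many such studies, cluster around the 0.35 figure quoted above.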
Chronological dating
Chronological dating, or simply dating, is the process of attributing to an object or event a date in the past, allowing that object or event to be located in a previously established chronology. This usually requires what is commonly known as a "dating method". Several dating methods exist, based on different criteria and techniques; disciplines that use them include history, archaeology, geology, paleontology, astronomy and even forensic science, where it is sometimes necessary to establish the moment in the past at which a death occurred. Dating methods are typically identified as absolute, which yields a specified date or date range, or relative, which places artifacts or events on a timeline relative to other events or artifacts. Other markers, such as nearby writings and stratigraphic markers, can also help place an artifact or event in a chronology.

Absolute and relative dating

Dating methods are most commonly classified following two criteria: relative dating and absolute dating.

Relative dating

Relative dating methods cannot determine the absolute age of an object or event, but they can establish that a particular event could not have happened before or after another event whose absolute date is well known. The Latin terms ante quem and post quem are used to indicate, respectively, the most recent and the oldest possible moments at which an event occurred or an artifact was left in a stratum. Relative dating is also useful in many other disciplines. Historians, for example, know that Shakespeare's play Henry V was not written before 1587, because Shakespeare's primary source for the play was the second edition of Raphael Holinshed's Chronicles, not published until 1587. Thus, 1587 is the post quem date of Henry V: the play was necessarily written after (in Latin, post) 1587. The same inductive mechanism is applied in archaeology, geology and paleontology in many ways. For example, in a stratum that is difficult or ambiguous to date absolutely, paleopalynology can serve as a relative referent through the study of the pollens found in the stratum, because some botanical species, whether extinct or not, are well known to belong to a particular position on the scale of time.

A non-exhaustive list of relative dating methods and applications used in geology, paleontology or archaeology includes:
Cross-cutting relationships
Fluorine absorption dating
Harris matrix
Law of included fragments
Law of superposition
Lichenometry
Marine isotope stages, based on the oxygen isotope ratio cycle
Melt inclusions
Morphology (archaeology)
Nitrogen dating
Palynology, the study of modern pollens for the relative dating of archaeological strata, also used in forensic palynology
Paleomagnetism
Paleopalynology, also spelt "palaeopalynology", the study of fossilized pollens for the relative dating of geological strata
Principle of original horizontality
Principle of lateral continuity
Principle of faunal succession
Seriation (archaeology)
Sequence dating (a type of seriation)
Tephrochronology
Typology (archaeology)
Lead corrosion dating (exclusively used in archaeology)
Varnish microlamination
Vole clock

Absolute dating

Absolute dating methods seek to establish a specific time during which an object originated or an event took place. While the results of these techniques are largely accepted within the scientific community, several factors can hinder the determination of an accurate absolute date, including sampling errors and geological disruptions. This type of chronological dating uses absolute referent criteria, mainly the radiometric dating methods. Material remains can be absolutely dated by studying the materials of which they are made. For example, remains that contain pieces of brick can undergo thermoluminescence (TL) dating in order to determine approximately how many years ago the material was fired. This technique was used to establish the date of St. James Church in Toruń by testing the thermoluminescence of removed bricks; in this example, an absolute date was determined which filled a gap in the historical knowledge of the church. These techniques are used in many other fields as well. Geologists, for example, apply absolute dating methods to rock sediments in order to discover their period of origin.

Some examples of both radiometric and non-radiometric absolute dating methods are the following:
Amino acid dating
Archaeomagnetic dating
Argon–argon dating
Astronomical chronology
Carbon dating: also known as radiocarbon dating, it can reveal the age of organic material in artifacts as well as human and animal remains, and can reliably measure dates up to approximately 50,000 years ago (a worked sketch of the underlying decay arithmetic follows this list)
Cementochronology: this method does not determine a precise moment on the scale of time but the age at death of an individual
Datestone (exclusively used in archaeology)
Dendrochronology
Electron spin resonance dating
Fission track dating
Geochronology
Herbchronology
Iodine–xenon dating
Potassium–argon dating
Lead–lead dating
Luminescence dating
Thermoluminescence dating
Optically stimulated luminescence
Optically stimulated luminescence thermochronometry
Molecular clock (used mostly in phylogenetics and evolutionary biology)
Obsidian hydration dating (exclusively used in archaeology)
Oxidizable carbon ratio dating
Rehydroxylation dating
Rubidium–strontium dating
Samarium–neodymium dating
Tephrochronology
Uranium–lead dating
Uranium–thorium dating
Uranium–uranium dating, useful in dating samples between about 10,000 and 2 million years Before Present (BP), or up to about eight times the half-life of 234U
Wiggle matching
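As an illustration of the arithmetic behind one of the radiometric methods listed above, the sketch below converts a measured fraction of remaining carbon-14 into a raw, uncalibrated age using the exponential decay law. The half-life is the commonly cited value for carbon-14; the function name and the sample fraction are illustrative assumptions, and real laboratories additionally apply calibration curves to correct for historical variation in atmospheric carbon-14.

from math import log

HALF_LIFE_C14 = 5730.0  # years, commonly cited half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    """Raw (uncalibrated) age in years from the fraction of C-14 still present."""
    # N(t) = N0 * (1/2) ** (t / half_life)  =>  t = half_life * ln(N0 / N) / ln(2)
    return HALF_LIFE_C14 * log(1.0 / fraction_remaining) / log(2.0)

# Hypothetical sample retaining 25% of its original carbon-14:
print(round(radiocarbon_age(0.25)))  # about 11460 years, i.e. two half-lives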
Dating methods in archaeology

Like geologists or paleontologists, archaeologists need to determine the age of the ancient and more recent human remains and artifacts that they study. To be considered archaeological, the remains, objects or artifacts to be dated must be related to human activity; if the remains or elements to be dated are older than the human species, the disciplines that study them are sciences such as geology or paleontology, among others. Nevertheless, the range of time covered by archaeological dating can be enormous compared to the lifespan of a single human being. For example, the Pinnacle Point caves, on the southern coast of South Africa, provided evidence that marine resources (shellfish) were being regularly exploited by humans as early as 170,000 years ago. On the other hand, remains as recent as a hundred years old can also be the target of archaeological dating methods. This was the case for an 18th-century sloop excavated in South Carolina (United States) in 1992. Thus, from the oldest to the youngest, all archaeological sites can be dated by an appropriate method.

Dating of material drawn from the archaeological record can be carried out by direct study of an artifact, deduced from its association with materials found in the same context, or inferred from its point of discovery in the sequence relative to datable contexts. Dating is carried out mainly post-excavation, but to support good practice some preliminary dating work, called "spot dating", is usually run in tandem with excavation. Dating is very important in archaeology for constructing models of the past, as it relies on the integrity of dateable objects and samples. Many disciplines of archaeological science are concerned with dating evidence, but in practice several different dating techniques must often be applied together; dating the evidence for much of an archaeological sequence recorded during excavation therefore requires matching information from contexts of known absolute date, or from associated datable material, with a careful study of stratigraphic relationships. In addition, because of its particular relation to past human presence and activity, archaeology uses almost all the dating methods that it shares with the other sciences, but with some particular variations, such as the following.

Written markers
Epigraphy – analysis of inscriptions, via identifying graphemes, clarifying their meanings, classifying their uses according to dates and cultural contexts, and drawing conclusions about the writing and the writers.
Numismatics – many coins have the date of their production written on them, or their use is specified in the historical record.
Palaeography – the study of ancient writing, including the practice of deciphering, reading, and dating historical manuscripts.

Seriation
Seriation is a relative dating method (see the list of relative dating methods above). A practical application of seriation is the comparison of the known styles of artifacts such as stone tools or pottery.

Age-equivalent stratigraphic markers
Paleomagnetism (a relative dating method; see the corresponding list above)
Marine isotope stages based on the oxygen isotope ratio cycle (a relative dating method; see the corresponding list above)
Tephrochronology (an absolute dating method; see the corresponding list above)

Stratigraphic relationships
The stratigraphy of an archaeological site can be used to date, or to refine the date of, particular activities ("contexts") on that site. For example, if a context is sealed between two other contexts of known date, it can be inferred that the middle context must date to between those dates.

See also
Astronomical chronology
Age of Earth
Age of the universe
Geochronology
Geologic time scale
Geological history of Earth
Archaeological science
Degrowth
Degrowth is an academic and social movement critical of the concept of growth in gross domestic product as a measure of human and economic development. The idea of degrowth is based on ideas and research from economic anthropology, ecological economics, environmental sciences, and development studies. It argues that modern capitalism's unitary focus on growth causes widespread ecological damage and is unnecessary for the further increase of human living standards. Degrowth theory has been met with both academic acclaim and considerable criticism.

Degrowth's main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being. Degrowth theorists posit that this would increase human living standards and ecological preservation even as GDP growth slows.

Degrowth theory is highly critical of free market capitalism, and it highlights the importance of extensive public services, care work, self-organization, commons, relational goods, community, and work sharing. Degrowth theory partly orients itself as a critique of green capitalism or as a radical alternative to the market-based, sustainable development goal (SDG) model of addressing ecological overshoot and environmental collapse.

A 2024 review of degrowth studies over the past 10 years showed that most were of poor quality: almost 90% were opinions rather than analysis, few used quantitative or qualitative data, and even fewer used formal modelling; the latter relied on small samples or on non-representative cases. Most studies also offered subjective policy advice but lacked policy evaluation and integration with insights from the literature on environmental and climate policies.

Background

The "degrowth" movement arose from concerns over the consequences of the productivism and consumerism associated with industrial societies (whether capitalist or socialist), including:
The reduced availability of energy sources (see peak oil);
The destabilization of Earth's ecosystems upon which all life on Earth depends (see Holocene extinction, Anthropocene, global warming, pollution, current biodiversity loss);
The rise of negative societal side-effects (unsustainable development, poorer health, poverty); and
The ever-expanding use of resources by Global North countries to satisfy lifestyles that consume more food and energy, and produce greater waste, at the expense of the Global South (see neocolonialism).

A 2017 review of the research literature on degrowth found that it focused on three main goals: (1) reduction of environmental degradation; (2) redistribution of income and wealth locally and globally; and (3) promotion of a social transition from economic materialism to participatory culture.

Decoupling

The concept of decoupling denotes separating economic growth, usually measured as growth in GDP, GDP per capita or GNI per capita, from the use of natural resources and greenhouse gas (GHG) emissions. Absolute decoupling refers to GDP growth coinciding with a reduction in natural resource use and GHG emissions, while relative decoupling describes an increase in resource use and GHG emissions that is lower than the increase in GDP.
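To make the distinction concrete, the sketch below classifies a period as absolute decoupling, relative decoupling, or no decoupling from annual growth rates of GDP and emissions. The function and the growth-rate figures are invented for illustration and are not drawn from the studies discussed in this article.

def classify_decoupling(gdp_growth_pct, emissions_growth_pct):
    """Classify a period from annual GDP and emissions growth rates, in percent."""
    if gdp_growth_pct > 0 and emissions_growth_pct < 0:
        return "absolute decoupling"   # the economy grows while emissions fall
    if gdp_growth_pct > 0 and emissions_growth_pct < gdp_growth_pct:
        return "relative decoupling"   # emissions grow, but more slowly than GDP
    return "no decoupling"

# Hypothetical country-years:
print(classify_decoupling(2.5, -1.0))  # absolute decoupling
print(classify_decoupling(3.0, 1.5))   # relative decoupling
print(classify_decoupling(2.0, 2.5))   # no decoupling

The degrowth claim, elaborated below, is that the first outcome is rarely observed at anything like the scale and speed required.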
The degrowth movement heavily critiques the idea of decoupling and argues that absolute decoupling is only possible for short periods, in specific locations, or at small mitigation rates. In 2021 the NGO European Environmental Bureau stated that "not only is there no empirical evidence supporting the existence of a decoupling of economic growth from environmental pressures on anywhere near the scale needed to deal with environmental breakdown", and that reported cases of eco-economic decoupling either depict relative decoupling, are observed only temporarily, or hold only at a local scale, arguing that alternatives to eco-economic decoupling are needed. This is supported by several other studies which state that absolute decoupling is highly unlikely to be achieved fast enough to prevent global warming beyond 1.5 °C or 2 °C, even under optimistic policy conditions.

A major criticism of this view holds that degrowth is politically unpalatable, and defaults towards the more free-market green growth orthodoxy as a set of solutions that is more politically tenable. The problems with the SDG process are political rather than technical, Ezra Klein of the New York Times argues in summarizing these criticisms, and degrowth has less plausibility than green growth as a democratic political platform. However, a 2023 review of progress toward the Sustainable Development Goals by the Council on Foreign Relations found that progress on half of the SDG targets had stalled and that roughly 30% had reversed, that is, were getting worse rather than better. Thus, while it may be true that degrowth will be "a difficult sell" (per Ezra Klein) to introduce via democratic voluntarism, the critique of the SDGs and of decoupling under green capitalism levelled by degrowth theorists appears to have predictive power.

Resource depletion

Degrowth proponents argue that economic expansion is necessarily accompanied by a corresponding increase in resource consumption. Non-renewable resources, like petroleum, have a limited supply and can eventually be exhausted. Similarly, renewable resources can also be depleted if they are harvested at unsustainable rates for prolonged periods; an example of such depletion is caviar production in the Caspian Sea. Supporters of degrowth contend that reducing demand is the sole permanent solution to bridging the demand gap. To sustain renewable resources, both demand and production must be regulated to levels that avert depletion and ensure environmental sustainability. Transitioning to a society less reliant on oil is seen as crucial for averting societal collapse as non-renewable resources dwindle. Degrowth can also be interpreted as a plea for resource reallocation, aiming to halt unsustainable practices of transforming certain entities, such as non-renewable natural resources, into resources. Instead, the focus shifts towards identifying and utilizing alternative resources, such as renewable human capabilities.

Ecological footprint

The ecological footprint measures human demand on the Earth's ecosystems by comparing human demand with the Earth's ecological capacity to regenerate. It represents the amount of biologically productive land and sea area required to regenerate the resources a human population consumes and to absorb and render harmless the corresponding waste. According to a 2005 Global Footprint Network report, inhabitants of high-income countries live off 6.4 global hectares (gHa), while those from low-income countries live off a single gHa.
For example, while each inhabitant of Bangladesh lives off what they produce from 0.56 gHa, a North American requires 12.5 gHa; each inhabitant of North America therefore uses 22.3 times as much land as a Bangladeshi. According to the same report, the average number of global hectares available per person was 2.1, while current consumption levels have reached 2.7 hectares per person. For the world's population to attain the living standards typical of European countries, the resources of between three and eight planet Earths would be required at current levels of efficiency and means of production. For world economic equality to be achieved with the currently available resources, proponents say rich countries would have to reduce their standard of living through degrowth. The constraints on resources would eventually lead to a forced reduction in consumption; a controlled reduction of consumption would reduce the trauma of this change, assuming no technological changes increase the planet's carrying capacity. Multiple studies now demonstrate that in many affluent countries per-capita energy consumption could be decreased substantially while good living standards are maintained.

Sustainable development

Degrowth ideology opposes all manifestations of productivism, which holds that economic productivity and growth should be the primary objectives of human organization. Consequently, it stands in opposition to the prevailing model of sustainable development. While the concept of sustainability aligns with some aspects of degrowth philosophy, sustainable development as conventionally understood is based on mainstream development principles focused on augmenting economic growth and consumption. Degrowth views sustainable development as contradictory, because any development reliant on growth in a finite, ecologically strained world is deemed inherently unsustainable.

Critics of degrowth argue that a slowing of economic growth would result in increased unemployment, increased poverty, and decreased income per capita. Many who believe in the negative environmental consequences of growth still advocate for economic growth in the South, even if not in the North. Slowing economic growth, they argue, would fail to deliver the benefits of degrowth (self-sufficiency and material responsibility) and would indeed lead to decreased employment. Rather, degrowth proponents advocate the complete abandonment of the current (growth) economic model, suggesting that relocalizing and abandoning the global economy in the Global South would allow people of the South to become more self-sufficient and would end the overconsumption and exploitation of Southern resources by the North.

Supporters of degrowth view it as a potential means of shielding ecosystems from human exploitation. Within this concept there is an emphasis on communal stewardship of the environment, fostering a symbiotic relationship between humans and nature, and degrowth recognizes ecosystems as valuable entities beyond their utility as mere sources of resources. During the Second International Conference on degrowth, discussions encompassed concepts such as implementing a maximum wage and promoting open borders. Degrowth advocates an ethical shift that challenges the notion that high-resource-consumption lifestyles are desirable.
Additionally, alternative perspectives on degrowth include addressing perceived historical injustices perpetrated by the global North through centuries of colonization and exploitation, and advocating for wealth redistribution. Determining the appropriate scale of action remains a focal point of debate within degrowth movements. Some researchers believe that the world is poised to experience a Great Transformation, either through disastrous events or by intentional design. They maintain that ecological economics must incorporate post-development theories, Buen vivir, and degrowth to effect the change necessary to avoid these potentially catastrophic events. A 2022 paper by Mark Diesendorf found that limiting global warming to 1.5 °C with no overshoot would require a reduction in energy consumption. It describes (chapters 4–5) degrowth toward a steady-state economy as possible and probably positive. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation."

Rebound effect

Technologies designed to reduce resource use and improve efficiency are often touted as sustainable or green solutions. Degrowth literature, however, warns about these technological advances because of the "rebound effect", also known as the Jevons paradox. This concept is based on the observation that when a less resource-intensive technology is introduced, behaviour surrounding the use of that technology may change, and consumption of that technology could increase enough to offset any potential resource savings. In light of the rebound effect, proponents of degrowth hold that the only effective "sustainable" solutions must involve a complete rejection of the growth paradigm and a move to a degrowth paradigm. There are also fundamental limits to technological solutions in the pursuit of degrowth, as all engagements with technology increase the cumulative matter-energy throughput. However, the convergence of digital commons of knowledge and design with distributed manufacturing technologies may arguably hold potential for building degrowth future scenarios.

Mitigation of climate change and determinants of 'growth'

Scientists report that degrowth scenarios, in which economic output either declines or declines in terms of contemporary economic metrics such as current GDP, have been neglected in the 1.5 °C scenarios considered by the Intergovernmental Panel on Climate Change (IPCC), finding that the investigated degrowth scenarios "minimize many key risks for feasibility and sustainability compared to technology-driven pathways", with the core problem being feasibility in the context of contemporary political decision-making and of globalized rebound and relocation effects. However, structurally realigning 'economic growth' and the structures that determine socioeconomic activity may not be widely debated either in the degrowth community or in degrowth research, which largely focuses on reducing economic growth, either in general or without a structural alternative, relying instead on, for example, nonsystemic political interventions. Similarly, many green growth advocates suggest that contemporary socioeconomic mechanisms and metrics, including those for economic growth, can be continued with forms of nonstructural "energy-GDP decoupling".
A study concluded that public services are associated with higher human need satisfaction and lower energy requirements while contemporary forms of economic growth are linked with the opposite, with the contemporary economic system being fundamentally misaligned with the twin goals of meeting human needs and ensuring ecological sustainability, suggesting that prioritizing human well-being and ecological sustainability would be preferable to overgrowth in current metrics of economic growth. The word 'degrowth' was mentioned 28 times in the United Nations IPCC Sixth Assessment Report by Working Group III published in April 2022. Open Localism Open localism is a concept that has been promoted by the degrowth community when envisioning an alternative set of social relations and economic organization. It builds upon the political philosophies of localism and is based on values such as diversity, ecologies of knowledge, and openness. Open localism does not look to create an enclosed community but rather to circulate production locally in an open and integrative manner. Open localism is a direct challenge to the acts of closure regarding identitarian politics. By producing and consuming as much as possible locally, community members enhance their relationships with one another and the surrounding environment. Degrowth's ideas around open localism share similarities with ideas around the commons while also having clear differences. On the one hand, open localism promotes localized, common production in cooperative-like styles similar to some versions of how commons are organized. On the other hand, open localism does not impose any set of rules or regulations creating a defined boundary, rather it favours a cosmopolitan approach. Feminism The degrowth movement builds on feminist economics that has criticized measures of economic growth like the GDP as it excludes work mainly done by women such as unpaid care work (the work performed to fulfill people's needs) and reproductive work (the work sustaining life), first argued by Marilyn Waring. Further, degrowth draws on the critique of socialist feminists like Silvia Federici and Nancy Fraser claiming that capitalist growth builds on the exploitation of women's work. Instead of devaluing it, degrowth centers the economy around care, proposing that care work should be organized as a commons. Centering care goes hand in hand with changing society's time regimes. Degrowth scholars propose a working time reduction. As this does not necessarily lead to gender justice, the redistribution of care work has to be equally pushed. A concrete proposal by Frigga Haug is the 4-in-1 perspective that proposes 4 hours of wage work per day, freeing time for 4 hours of care work, 4 hours of political activities in a direct democracy, and 4 hours of personal development through learning. Furthermore, degrowth draws on materialist ecofeminisms that state the parallel of the exploitation of women and nature in growth-based societies and proposes a subsistence perspective conceptualized by Maria Mies and Ariel Salleh. Synergies and opportunities for cross-fertilization between degrowth and feminism were proposed in 2022, through networks including the Feminisms and Degrowth Alliance (FaDA). FaDA argued that the 2023 launch of Degrowth Journal created "a convivial space for generating and exploring knowledge and practice from diverse perspectives". 
Decolonialism A relevant concept within the theory of degrowth is decolonialism, which refers to putting an end to the perpetuation of political, social, economic, religious, racial, gender, and epistemological relations of power, domination, and hierarchy of the global north over the global south. The foundation of this relationship lies in the claim that the imminent socio-ecological collapse is caused by capitalism, which is sustained by economic growth. This economic growth in turn can only be maintained under the eaves of colonialism and extractivism, perpetuating asymmetric power relationships between territories. Colonialism is understood as the appropriation of common goods, resources, and labor, which is antagonistic to degrowth principles. Through colonial domination, capital depresses the prices of inputs and colonial cheapening occurs to the detriment of the oppressed countries. Degrowth criticizes these appropriation mechanisms and enclosure of one territory over another and proposes a provision of human needs through disaccumulation, de-enclosure, and decommodification. It also reconciles with social movements and seeks to recognize the ecological debt to achieve the catch-up, which is postulated as impossible without decolonization. In practice, decolonial practices close to degrowth are observed, such as the movement of Buen vivir or sumak kawsay by various indigenous peoples. Policies There is a wide range of policy proposals associated with degrowth. In 2022, Nick Fitzpatrick, Timothée Parrique and Inês Cosme conducted a comprehensive survey of degrowth literature from 2005 to 2020 and found 530 specific policy proposals with "50 goals, 100 objectives, 380 instruments". The survey found that the ten most frequently cited proposals were: universal basic incomes, work-time reductions, job guarantees with a living wage, maximum income caps, declining caps on resource use and emissions, not-for-profit cooperatives, holding deliberative forums, reclaiming the commons, establishing ecovillages, and housing cooperatives. To address the common criticism that such policies are not realistically financeable, economic anthropologist Jason Hickel sees an opportunity to learn from modern monetary theory, which argues that monetary sovereign states can issue the money needed to pay for anything available in the national economy without the need to first tax their citizens for the requisite funds. Taxation, credit regulations and price controls could be used to mitigate the inflation this may generate, while also reducing consumption. Origins of the movement The contemporary degrowth movement can trace its roots back to the anti-industrialist trends of the 19th century, developed in Great Britain by John Ruskin, William Morris and the Arts and Crafts movement (1819–1900), in the United States by Henry David Thoreau (1817–1862), and in Russia by Leo Tolstoy (1828–1910). Degrowth movements draw on the values of humanism, enlightenment, anthropology and human rights. Club of Rome reports In 1968, the Club of Rome, a think tank headquartered in Winterthur, Switzerland, asked researchers at the Massachusetts Institute of Technology for a report on the limits of our world system and the constraints it puts on human numbers and activity. The report, called The Limits to Growth, published in 1972, became the first significant study to model the consequences of economic growth. 
The reports (also known as the Meadows Reports) are not strictly the founding texts of the degrowth movement, as these reports only advise zero growth, and have also been used to support the sustainable development movement. Still, they are considered the first studies explicitly presenting economic growth as a key reason for the increase in global environmental problems such as pollution, shortage of raw materials, and the destruction of ecosystems. The Limits to Growth: The 30-Year Update was published in 2004, and in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years. In 2021, Club of Rome committee member Gaya Herrington published an article comparing the proposed models' predictions against empirical data trends. The BAU2 ("Business as Usual 2") scenario, predicting "collapse through pollution", as well as the CT ("Comprehensive Technology") scenario, predicting exceptional technological development and gradual decline, were found to align most closely with data observed as of 2019. In September 2022, the Club of Rome released updated predictive models and policy recommendations in a general-audiences book titled Earth for all – A survival guide to humanity. Lasting influence of Georgescu-Roegen The degrowth movement recognises Romanian American mathematician, statistician and economist Nicholas Georgescu-Roegen as the main intellectual figure inspiring the movement. In his 1971 work, The Entropy Law and the Economic Process, Georgescu-Roegen argues that economic scarcity is rooted in physical reality; that all natural resources are irreversibly degraded when put to use in economic activity; that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse. Georgescu-Roegen's intellectual inspiration to degrowth dates back to the 1970s. When Georgescu-Roegen delivered a lecture at the University of Geneva in 1974, he made a lasting impression on the young, newly graduated French historian and philosopher, Jacques Grinevald, who had earlier been introduced to Georgescu-Roegen's works by an academic advisor. Georgescu-Roegen and Grinevald became friends, and Grinevald devoted his research to a closer study of Georgescu-Roegen's work. As a result, in 1979, Grinevald published a French translation of a selection of Georgescu-Roegen's articles entitled Demain la décroissance: Entropie – Écologie – Économie ('Tomorrow, the Decline: Entropy – Ecology – Economy'). Georgescu-Roegen, who spoke French fluently, approved the use of the term décroissance in the title of the French translation. The book gained influence in French intellectual and academic circles from the outset. Later, the book was expanded and republished in 1995 and once again in 2006; however, the word Demain ('tomorrow') was removed from the book's title in the second and third editions. By the time Grinevald suggested the term décroissance to form part of the title of the French translation of Georgescu-Roegen's work, the term had already permeated French intellectual circles since the early 1970s to signify a deliberate political action to downscale the economy on a permanent and voluntary basis. 
Simultaneously, but independently, Georgescu-Roegen criticised the ideas of The Limits to Growth and Herman Daly's steady-state economy in his article, "Energy and Economic Myths", delivered as a series of lectures from 1972, but not published before 1975. In the article, Georgescu-Roegen stated the following: When reading this particular passage of the text, Grinevald realised that no professional economist of any orientation had ever reasoned like this before. Grinevald also realised the congruence of Georgescu-Roegen's viewpoint and the French debates occurring at the time; this resemblance was captured in the title of the French edition. The translation of Georgescu-Roegen's work into French both fed on and gave further impetus to the concept of décroissance in France—and everywhere else in the francophone world—thereby creating something of an intellectual feedback loop. By the 2000s, when décroissance was to be translated from French back into English as the catchy banner for the new social movement, the original term "decline" was deemed inappropriate and misdirected for the purpose: "Decline" usually refers to an unexpected, unwelcome, and temporary economic recession, something to be avoided or quickly overcome. Instead, the neologism "degrowth" was coined to signify a deliberate political action to downscale the economy on a permanent, conscious basis—as in the prevailing French usage of the term—something good to be welcomed and maintained, or so followers believe. When the first international degrowth conference was held in Paris in 2008, the participants honoured Georgescu-Roegen and his work. In his manifesto on Petit traité de la décroissance sereine ("Farewell to Growth"), the leading French champion of the degrowth movement, Serge Latouche, credited Georgescu-Roegen as the "main theoretical source of degrowth". Likewise, Italian degrowth theorist Mauro Bonaiuti considered Georgescu-Roegen's work to be "one of the analytical cornerstones of the degrowth perspective". Schumacher and Buddhist economics E. F. Schumacher's 1973 book Small Is Beautiful predates a unified degrowth movement but nonetheless serves as an important basis for degrowth ideas. In this book he critiques the neo-liberal model of economic development, arguing that an increasing "standard of living", based on consumption is absurd as a goal of economic activity and development. Instead, under what he refers to as Buddhist economics, we should aim to maximize well-being while minimizing consumption. Ecological and social issues In January 1972, Edward Goldsmith and Robert Prescott-Allen—editors of The Ecologist—published A Blueprint for Survival, which called for a radical programme of decentralisation and deindustrialization to prevent what the authors referred to as "the breakdown of society and the irreversible disruption of the life-support systems on this planet". In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. The main conclusions: Over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate. The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers, in turn, are caused by societal behaviors, from consumption to governance. 
Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals for poverty, hunger, health, water, cities' climate, oceans and land. It can cause problems with food, water and humanity's air supply. To fix the problem, humanity needs transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. Page 8 of the report proposes "enabling visions of a good quality of life that do not entail ever-increasing material consumption" as one of the main measures. The report states that "Some pathways chosen to achieve the goals related to energy, economic growth, industry and infrastructure and sustainable consumption and production (Sustainable Development Goals 7, 8, 9 and 12), as well as targets related to poverty, food security and cities (Sustainable Development Goals 1, 2 and 11), could have substantial positive or negative impacts on nature and therefore on the achievement of other Sustainable Development Goals". In a June 2020 paper published in Nature Communications, a group of scientists argue that "green growth" or "sustainable growth" is a myth: "we have to get away from our obsession with economic growth—we really need to start managing our economies in a way that protects our climate and natural resources, even if this means less, no or even negative growth." They conclude that a change in economic paradigms is imperative to prevent environmental destruction, and suggest a range of ideas from the reformist to the radical, with the latter consisting of degrowth, eco-socialism and eco-anarchism. In June 2020, the official site of one of the organizations promoting degrowth published an article by Vijay Kolinjivadi, an expert in political ecology, arguing that the emergence of COVID-19 is linked to the ecological crisis. The 2019 World Scientists' Warning of a Climate Emergency and its 2021 update have asserted that economic growth is a primary driver of the overexploitation of ecosystems, and to preserve the biosphere and mitigate climate change civilization must, in addition to other fundamental changes including stabilizing population growth and adopting largely plant-based diets, "shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality." In an opinion piece published in Al Jazeera, Jason Hickel states that this paper, which has more than 11,000 scientist cosigners, demonstrates that there is a "strong scientific consensus" towards abandoning "GDP as a measure of progress." In a 2022 comment published in Nature, Hickel, Giorgos Kallis, Juliet Schor, Julia Steinberger and others say that both the IPCC and the IPBES "suggest that degrowth policies should be considered in the fight against climate breakdown and biodiversity loss, respectively". Movement Conferences The movement has included international conferences promoted by the network Research & Degrowth (R&D). The First International Conference on Economic Degrowth for Ecological Sustainability and Social Equity in Paris (2008) was a discussion about the financial, social, cultural, demographic, and environmental crisis caused by the deficiencies of capitalism and an explanation of the main principles of degrowth. Further conferences were in Barcelona (2010), Montreal (2012), Venice (2012), Leipzig (2014), Budapest (2016), Malmö (2018), and Zagreb (2023). 
The 10th International Degrowth Conference will be held in Pontevedra in June 2024. Separately, two conferences have been organised as cross-party initiatives of Members of the European Parliament: the Post-Growth 2018 Conference and the Beyond Growth 2023 Conference, both held in the European Parliament in Brussels. International Degrowth Network The conferences have also been accompanied by informal degrowth assemblies since 2018, to build community between degrowth groups across countries. The 4th Assembly in Zagreb in 2023 discussed a proposal to create a more intentional organisational structure and led to the creation of the International Degrowth Network, which organised the 5th assembly in June 2024. Relation to other social movements The degrowth movement has a variety of relations to other social movements and alternative economic visions, which range from collaboration to partial overlap. The Konzeptwerk Neue Ökonomie (Laboratory for New Economic Ideas), which hosted the 2014 international Degrowth conference in Leipzig, has published a project entitled "Degrowth in movement(s)" in 2017, which maps relationships with 32 other social movements and initiatives. The relation to the environmental justice movement is especially visible. Although not explicitly called degrowth, movements inspired by similar concepts and terminologies can be found around the world, including Buen Vivir in Latin America, the Zapatistas in Mexico, the Kurdish Rojava or Eco-Swaraj in India, and the sufficiency economy in Thailand. The Cuban economic situation has also been of interest to degrowth advocates because its limits on growth were socially imposed (although as a result of geopolitics), and has resulted in positive health changes. Another set of movements the degrowth movement finds synergy with is the wave of initiatives and networks inspired by the commons, where resources are sustainably shared in a decentralised and self-managed manner, instead of through capitalist organization. For example, initiatives inspired by commons could be food cooperatives, open-source platforms, and group management of resources such as energy or water. Commons-based peer production also guides the role of technology in degrowth, where conviviality and socially useful production are prioritised over capital gain. This could happen in the form of cosmolocalism, which offers a framework for localising collaborative forms of production while sharing resources globally as digital commons, to reduce dependence on global value chains. Criticisms, challenges and dilemmas Critiques of degrowth concern the poor study quality of degrowth studies, negative connotation that the term "degrowth" imparts, the misapprehension that growth is seen as unambiguously bad, the challenges and feasibility of a degrowth transition, as well as the entanglement of desirable aspects of modernity with the growth paradigm. Criticisms According to a highly cited scientific paper of environmental economist Jeroen C. J. M. van den Bergh, degrowth is often seen as an ambiguous concept due to its various interpretations, which can lead to confusion rather than a clear and constructive debate on environmental policy. Many interpretations of degrowth do not offer effective strategies for reducing environmental impact or transitioning to a sustainable economy. Additionally, degrowth is unlikely to gain significant social or political support, making it an ineffective strategy for achieving environmental sustainability. 
Ineffectiveness and better alternatives In his scientific paper, Jeroen C. J. M. van den Bergh concludes that a degrowth strategy, which focuses on reducing the overall scale of the economy or consumption, tends to overlook the significance of changes in production composition and technological innovation. Van den Bergh also highlights that a focus solely on reducing consumption (or consumption degrowth) may lead to rebound effects. For instance, reducing consumption of certain goods and services might result in an increase in spending on other items, as disposable income remains unchanged. Alternatively, it could lead to savings, which would provide additional funds for others to borrow and spend. He emphasizes the importance of (global) environmental policies, such as pricing externalities through taxes or permits, which incentivize behavior changes that reduce environmental impact and which provide essential information for consumers and help manage rebound effects. Effective environmental regulation through pricing is crucial for transitioning from polluting to cleaner consumption patterns. Study quality A 2024 review of degrowth studies over the past 10 years showed that most were of poor quality: almost 90% were opinions rather than analysis, few used quantitative or qualitative data, and even fewer ones used formal modelling; the latter used small samples or a focus on non-representative cases. Also most studies offered subjective policy advice, but lacked policy evaluation and integration with insights from the literature on environmental/climate policies. Negative connotation The use of the term "degrowth" is criticized for being detrimental to the degrowth movement because it could carry a negative connotation, in opposition to the positively perceived "growth". "Growth" is associated with the "up" direction and positive experiences, while "down" generates the opposite associations. Research in political psychology has shown that the initial negative association of a concept, such as of "degrowth" with the negatively perceived "down", can bias how the subsequent information on that concept is integrated at the unconscious level. At the conscious level, degrowth can be interpreted negatively as the contraction of the economy, although this is not the goal of a degrowth transition, but rather one of its expected consequences. In the current economic system, a contraction of the economy is associated with a recession and its ensuing austerity measures, job cuts, or lower salaries. Noam Chomsky commented on the use of the term: "When you say 'degrowth' it frightens people. It's like saying you're going to have to be poorer tomorrow than you are today, and it doesn't mean that." Since "degrowth" contains the term "growth", there is also a risk of the term having a backfire effect, which would reinforce the initial positive attitude toward growth. "Degrowth" is also criticized for being a confusing term, since its aim is not to halt economic growth as the word implies. Instead, "a-growth" is proposed as an alternative concept that emphasizes that growth ceases to be an important policy objective, but that it can still be achieved as a side-effect of environmental and social policies. Systems theoretical critique In stressing the negative rather than the positive side(s) of growth, the majority of degrowth proponents remain focused on (de-)growth, thus giving continued attention to the issue of growth, leading to continued attention to the arguments that sustainable growth is possible. 
One way to avoid giving attention to growth might be to extend from the economic concept of growth, which proponents of both growth and degrowth commonly adopt, to a broader concept of growth that allows for the observation of growth in other sociological characteristics of society. A corresponding "recoding" of "growth-obsessed" capitalist organizations was proposed by Steffen Roth.

Marxist critique

Traditional Marxists distinguish between two types of value creation: that which is useful to mankind, and that which only serves the purpose of accumulating capital. Traditional Marxists consider that it is the exploitative nature and control of the capitalist relations of production that is the determinant, not the quantity of production. According to Jean Zin, while the justification for degrowth is valid, it is not a solution to the problem. Other Marxist writers have adopted positions close to the degrowth perspective. For example, John Bellamy Foster and Fred Magdoff, in common with David Harvey, Immanuel Wallerstein, Paul Sweezy and others, focus on endless capital accumulation as the basic principle and goal of capitalism. This is the source of economic growth and, in the view of these writers, results in an unsustainable growth imperative. Foster and Magdoff develop Marx's own concept of the metabolic rift, something he noted in the exhaustion of soils by capitalist systems of food production, though this is not unique to capitalist systems of food production, as seen in the Aral Sea. Many degrowth theories and ideas are based on neo-Marxist theory. Foster emphasizes that degrowth "is not aimed at austerity, but at finding a 'prosperous way down' from our current extractivist, wasteful, ecologically unsustainable, maldeveloped, exploitative, and unequal, class-hierarchical world."

Challenges

Lack of macroeconomics for sustainability

It is reasonable for society to worry about recession, as economic growth has been the unanimous goal around the globe in recent decades. However, in some advanced countries there are attempts to develop a model for a regrowth economy. For instance, the Cool Japan strategy has proven instructive for Japan, whose economy has been static for decades.

Political and social spheres

According to some scholars in sociology, the growth imperative is so deeply entrenched in market capitalist societies that it is necessary for their stability. Moreover, the institutions of modern societies, such as the nation state, welfare, the labor market, education, academia, law and finance, have co-evolved with growth in order to sustain them. A degrowth transition thus requires a change not only of the economic system but of all the systems on which it relies. As most people in modern societies are dependent on those growth-oriented institutions, the challenge of a degrowth transition also lies in individual resistance to moving away from growth.

Land privatisation

Baumann, Alexander and Burdon suggest that "the Degrowth movement needs to give more attention to land and housing costs, which are significant barriers hindering true political and economic agency and any grassroots driven degrowth transition." They claim that the privatisation of land, a basic necessity like air, creates an absolute determinant of economic growth. They point out that even someone fully committed to degrowth nevertheless has no option but decades of participation in market growth in order to pay rent or a mortgage.
Because of this, land privatisation is a structural impediment to moving forward that makes degrowth economically and politically unviable. They conclude that without addressing land privatisation (the market's inaugural privatisation – primitive accumulation) the degrowth movement's strategies cannot succeed. Just as land enclosure (privatisation) initiated capitalism (economic growth), degrowth must start with reclaiming land commons. Agriculture When it comes to agriculture, a degrowth society would require a shift from industrial agriculture to less intensive and more sustainable agricultural practices such as permaculture or organic agriculture. Still, it is not clear if any of those alternatives could feed the current and projected global population. In the case of organic agriculture, Germany, for example, would not be able to feed its population under ideal organic yields over all of its arable land without meaningful changes to patterns of consumption, such as reducing meat consumption and food waste. Moreover, labour productivity of non-industrial agriculture is significantly lower due to the reduced use or absence of fossil fuels, which leaves much less labour for other sectors. Potential solutions to this challenge include scaling up approaches such as community-supported agriculture (CSA). Dilemmas Given that modernity has emerged with high levels of energy and material throughput, there is an apparent compromise between desirable aspects of modernity (e.g., social justice, gender equality, long life expectancy, low infant mortality) and unsustainable levels of energy and material use. Some researchers, however, argue that the decline in income inequality and rise in social mobility occurring under capitalism from the late 1940s to the 1960s was a product of the heavy bargaining power of labor unions and increased wealth and income redistribution during that time; while also pointing to the rise in income inequality in the 1970s following the collapse of labor unions and weakening of state welfare measures. Others also argue that modern capitalism maintains gender inequalities by means of advertising, messaging in consumer goods, and social media. Another way of looking at the argument that the development of desirable aspects of modernity require unsustainable energy and material use is through the lens of the Marxist tradition, which relates the superstructure (culture, ideology, institutions) and the base (material conditions of life, division of labor). A degrowth society, with its drastically different material conditions, could produce equally drastic changes in society's cultural and ideological spheres. The political economy of global capitalism has generated a lot of social and environmental bads, such as socioeconomic inequality and ecological devastation, which in turn have also generated a lot of goods through individualization and increased spatial and social mobility. At the same time, some argue the widespread individualization promulgated by a capitalist political economy is a bad due to its undermining of solidarity, aligned with democracy as well as collective, secondary, and primary forms of caring, and simultaneous encouragement of mistrust of others, highly competitive interpersonal relationships, blame of failure on individual shortcomings, prioritization of one's self-interest, and peripheralization of the conceptualization of human work required to create and sustain people. 
In this view, the widespread individuation resulting from capitalism may impede degrowth measures, requiring a change in actions to benefit society rather than the individual self. Some argue the political economy of capitalism has allowed social emancipation at the level of gender equality, disability, sexuality and anti-racism that has no historical precedent. However, others dispute social emancipation as being a direct product of capitalism or question the emancipation that has resulted. The feminist writer Nancy Holmstrom, for example, argues that capitalism's negative impacts on women outweigh the positive impacts, and women tend to be hurt by the system. In her examination of China following the Chinese Communist Revolution, Holmstrom notes that women were granted state-assisted freedoms to equal education, childcare, healthcare, abortion, marriage, and other social supports. Thus, whether the social emancipation achieved in Western society under capitalism may coexist with degrowth is ambiguous. Doyal and Gough allege that the modern capitalist system is built on the exploitation of female reproductive labor as well as that of the Global South, and sexism and racism are embedded in its structure. Therefore, some theories (such as Eco-Feminism or political ecology) argue that there cannot be equality regarding gender and the hierarchy between the Global North and South within capitalism. The structural properties of growth present another barrier to degrowth as growth shapes and is enforced by institutions, norms, culture, technology, identities, etc. The social ingraining of growth manifests in peoples' aspirations, thinking, bodies, mindsets, and relationships. Together, growth's role in social practices and in socio-economic institutions present unique challenges to the success of the degrowth movement. Another potential barrier to degrowth is the need for a rapid transition to a degrowth society due to climate change and the potential negative impacts of a rapid social transition including disorientation, conflict, and decreased well-being. In the United States, a large barrier to the support of the degrowth movement is the modern education system, including both primary and higher learning institutions. Beginning in the second term of the Reagan administration, the education system in the US was restructured to enforce neoliberal ideology by means of privatization schemes such as commercialization and performance contracting, implementation of standards and accountability measures incentivizing schools to adopt a uniform curriculum, and higher education accreditation and curricula designed to affirm market values and current power structures and avoid critical thought concerning the relations between those in power, ethics, authority, history, and knowledge. The degrowth movement, based on the empirical assumption that resources are finite and growth is limited, clashes with the limitless growth ideology associated with neoliberalism and the market values affirmed in schools, and therefore faces a major social barrier in gaining widespread support in the US. Nevertheless, co-evolving aspects of global capitalism, liberal modernity, and the market society, are closely tied and will be difficult to separate to maintain liberal and cosmopolitan values in a degrowth society. 
At the same time, the goal of the degrowth movement is progression rather than regression, and researchers point out that neoclassical economic models indicate neither negative nor zero growth would harm economic stability or full employment. Several assert the main barriers to the movement are social and structural factors clashing with implementing degrowth measures. Healthcare It has been pointed out that there is an apparent trade-off between the ability of modern healthcare systems to treat individual bodies to their last breath and the broader global ecological risk of such an energy and resource intensive care. If this trade-off exists, a degrowth society must choose between prioritizing the ecological integrity and the ensuing collective health or maximizing the healthcare provided to individuals. However, many degrowth scholars argue that the current system produces both psychological and physical damage to people. They insist that societal prosperity should be measured by well-being, not GDP. See also A Blueprint for Survival Agrowth Anti-consumerism Critique of political economy Degrowth advocates (category) Political ecology Postdevelopment theory Power Down: Options and Actions for a Post-Carbon World Paradox of thrift The Path to Degrowth in Overdeveloped Countries Post-capitalism Productivism Prosperity Without Growth Slow movement Steady-state economy Transition town Uneconomic growth References Reference details Further reading External links List of International Degrowth conferences on degrowth.info Research and Degrowth International Degrowth Network Degrowth Journal Planned Degrowth: Ecosocialism and Sustainable Human Development. Monthly Reviewissue on "Planned Degrowth". July 1, 2023. Simple living Sustainability Green politics Ecological economics Environmental movements Environmental ethics Environmental economics Environmental social science concepts
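As a back-of-the-envelope illustration of the rebound effect summarized in the van den Bergh discussion near the start of this article, the following sketch shows how re-spending freed income can erode the environmental saving from a consumption cut. It is a minimal sketch only; the function name, parameters, and numbers are illustrative assumptions, not figures from the literature.

# Illustrative only: the numbers below are assumptions, not data from the article.
def net_impact_change(cut_spend, cut_intensity, respend_share, other_intensity):
    """Change in environmental impact after cutting consumption of one good.

    cut_spend:       money no longer spent on the targeted good
    cut_intensity:   impact per unit of money for the targeted good
    respend_share:   fraction of the freed income re-spent on other goods
    other_intensity: impact per unit of money for those other goods
    """
    direct_saving = cut_spend * cut_intensity               # naive saving from the cut
    rebound = cut_spend * respend_share * other_intensity   # impact of the re-spending
    return -direct_saving + rebound

# Cutting 100 units of spending (intensity 1.0) while re-spending 80% of the
# freed income on goods with intensity 0.5 leaves only 60% of the naive saving.
print(net_impact_change(100, 1.0, 0.8, 0.5))  # -60.0

In van den Bergh's argument, pricing externalities makes high-impact goods relatively more expensive, steering any re-spending toward cleaner alternatives and thereby limiting this kind of rebound.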
Postnationalism
Postnationalism or non-nationalism is the process or trend by which nation states and national identities lose their importance relative to cross-national, self-organized, supranational and global entities, as well as local entities. Although postnationalism is not strictly considered the antonym of nationalism, the two terms and their associated assumptions are antithetic, as postnationalism is an internationalistic process. Several factors contribute to aspects of postnationalism, including economic, political, and cultural elements. The increasing globalization of economic factors (such as the expansion of international trade in raw materials, manufactured goods, and services, and the importance of multinational corporations and the internationalization of financial markets) has shifted emphasis from national economies to global ones. At the same time, socio-political power is partially transferred from national authorities to supranational entities, such as multinational corporations, the United Nations, the European Union, the North American Free Trade Agreement (NAFTA), and NATO. In addition, media and entertainment industries are becoming increasingly global and facilitate the formation of trends and opinions on a supranational scale. Migration of individuals or groups between countries contributes to the formation of postnational identities and beliefs, even though attachment to citizenship and national identities often remains important. Postnationalism and human rights In the scholarly literature, postnationalism is linked to the expansion of international human rights law and norms. International human rights norms are reflected in a growing stress on the rights of individuals in terms of their "personhood," not just their citizenship. International human rights law does not recognize a right of entry to any state by non-citizens, but it demands that individuals be judged increasingly on universal rather than particularistic criteria (such as blood descent in ethnicity, or favoring a particular sex). This has affected citizenship and immigration law, especially in western countries. The German parliament, for example, has felt pressure to dilute (and has diluted, if not eradicated) citizenship based on ethnic descent, a rule that had excluded German-born Turks, for example, from German citizenship. Scholars identified with this argument include Yasemin Soysal, David Jacobson, and Saskia Sassen. In the European Union European integration has created a system of supranational entities and is often discussed in relation to the concept of postnationalism. In Canada During the 2011 election, John Ibbitson argued that the fading of the "Laurentian Consensus" was responsible for turning Canada into the first post-national state. In 2015, Canadian Prime Minister Justin Trudeau, while defining Canadian values, also declared his country to be the world's first post-national state. Writing in Le Devoir in 2019, Robert Dutrisac described multiculturalism as an ideology associated with English Canada. In opposition to the perceived shift toward post-nationalism in Canada, John Weissenberger has argued that it is the Laurentian elite themselves who have "diluted the 'Laurentian' nature of the class and boosted their disdain for national character." 
In the media Catherine Frost, professor of political science at McMaster University, argues that while the Internet and online social relations forge social and political bonds across national borders, they do not have "the commitment or cohesiveness needed to underpin a demanding new mode of social and political relations". Nonetheless, it has been argued the increasing options of obtaining virtual citizenship from established nations (e.g., E-Residency of Estonia) and micronations can be seen as examples of what citizenship might look like in a post-national world. In sports Postnational trends have been evident in professional sports. Simon Kuper called the 2008 European soccer championship (UEFA Euro 2008) "the first postnational" European Championship. He argues that during the tournament both for players and fans sportsmanship and enjoyment of the event were more important than national rivalries or even winning. See also Anti-globalization movement Digital currency Global citizenship Identity politics Transnationalism Tribe (Internet) Types of nationalism World Wide Web Constitutional patriotism Civic nationalism References Bibliography Globalization Political science terminology
Multiregional origin of modern humans
The multiregional hypothesis, multiregional evolution (MRE), or polycentric hypothesis, is a scientific model that provides an alternative explanation to the more widely accepted "Out of Africa" model of monogenesis for the pattern of human evolution. Multiregional evolution holds that the human species first arose around two million years ago and that subsequent human evolution has been within a single, continuous human species. This species encompasses all archaic human forms such as Homo erectus, Denisovans, and Neanderthals as well as modern forms, and evolved worldwide to the diverse populations of anatomically modern humans (Homo sapiens). The hypothesis contends that the mechanism of clinal variation through a model of "centre and edge" allowed for the necessary balance between genetic drift, gene flow, and selection throughout the Pleistocene, as well as overall evolution as a global species, while retaining regional differences in certain morphological features. Proponents of multiregionalism point to fossil and genomic data and continuity of archaeological cultures as support for their hypothesis. The multiregional hypothesis was first proposed in 1984, and then revised in 2003. In its revised form, it is similar to the assimilation model, which holds that modern humans originated in Africa and today share a predominant recent African origin, but have also absorbed small, geographically variable degrees of admixture from other regional (archaic) hominin species. The multiregional hypothesis is not currently the most accepted theory of modern human origin among scientists: "The African replacement model has gained the widest acceptance owing mainly to genetic data (particularly mitochondrial DNA) from existing populations. This model is consistent with the realization that modern humans cannot be classified into subspecies or races, and it recognizes that all populations of present-day humans share the same potential." The African replacement model, also known as the "out of Africa" theory, proposes that Homo sapiens evolved in Africa before migrating across the world. And: "The primary competing scientific hypothesis is currently recent African origin of modern humans, which proposes that modern humans arose as a new species in Africa around 100-200,000 years ago, moving out of Africa around 50-60,000 years ago to replace existing human species such as Homo erectus and the Neanderthals without interbreeding. This differs from the multiregional hypothesis in that the multiregional model predicts interbreeding with preexisting local human populations in any such migration." History Overview The Multiregional hypothesis was proposed in 1984 by Milford H. Wolpoff, Alan Thorne and Xinzhi Wu. Wolpoff credits Franz Weidenreich's "Polycentric" hypothesis of human origins as a major influence, but cautions that this should not be confused with polygenism, or Carleton Coon's model that minimized gene flow. According to Wolpoff, multiregionalism was misinterpreted by William W. Howells, who confused Weidenreich's hypothesis with a polygenic "candelabra model" in his publications spanning five decades. Through the influence of Howells, many other anthropologists and biologists have confused multiregionalism with polygenism, i.e. separate or multiple origins for different populations. 
Alan Templeton, for example, notes that this confusion has led to the error that gene flow between different populations was added to the Multiregional hypothesis as a "special pleading in response to recent difficulties", despite the fact that "parallel evolution was never part of the multiregional model, much less its core, whereas gene flow was not a recent addition, but rather was present in the model from the very beginning" (emphasis in original). Despite this, multiregionalism is still confused with polygenism, or Coon's model of racial origins, from which Wolpoff and his colleagues have distanced themselves. Wolpoff has also defended Weidenreich's Polycentric hypothesis from being labeled polyphyletic. Weidenreich himself in 1949 wrote: "I may run the risk of being misunderstood, namely that I believe in polyphyletic evolution of man". In 1998, Wu put forward a China-specific Multiregional model called "Continuity with [Incidental] Hybridization". Wu's variant applies the Multiregional hypothesis only to the East Asian fossil record, and is popular among Chinese scientists. However, James Leibold, a political historian of modern China, has argued that support for Wu's model is largely rooted in Chinese nationalism. Outside of China, the Multiregional hypothesis has limited support, held only by a small number of paleoanthropologists. "Classic" vs "weak" multiregionalism Chris Stringer, a leading proponent of the more mainstream recent African origin theory, debated multiregionalists such as Wolpoff and Thorne in a series of publications throughout the late 1980s and 1990s. Stringer describes how he considers the original Multiregional hypothesis to have been modified over time into a weaker variant that now allows a much greater role for Africa in human evolution, including anatomical modernity (and subsequently less regional continuity than was first proposed). Stringer distinguishes the original or "classic" Multiregional model, which existed from 1984 (its formulation) until 2003, from a "weak" post-2003 variant that has "shifted close to that of the Assimilation Model". Genetic studies The finding that "Mitochondrial Eve" was relatively recent and African seemed to give the upper hand to the proponents of the Out of Africa hypothesis. But in 2002, Alan Templeton published a genetic analysis involving other loci in the genome as well, and this showed that some variants present in modern populations already existed in Asia hundreds of thousands of years ago. This meant that even if our male line (Y chromosome) and our female line (mitochondrial DNA) came out of Africa in the last 100,000 years or so, we have inherited other genes from populations that were already outside of Africa. Since this study, other studies have been done using much more data (see Phylogeography). Fossil evidence Morphological clades Proponents of the multiregional hypothesis see regional continuity of certain morphological traits spanning the Pleistocene in different regions across the globe as evidence against a single replacement model from Africa. In general, three major regions are recognized: Europe, China, and Indonesia (often including Australia). Wolpoff cautions that the continuity in certain skeletal features in these regions should not be seen in a racial context, instead calling them morphological clades, defined as sets of traits that "uniquely characterise a geographic region". 
According to Wolpoff and Thorne (1981): "We do not regard a morphological clade as a unique lineage, nor do we believe it necessary to imply a particular taxonomic status for it". Critics of multiregionalism have pointed out that no single human trait is unique to a geographical region (i.e. confined to one population and not found in any other) but Wolpoff et al. (2000) note that regional continuity only recognizes combinations of features, not traits if individually accessed, a point they elsewhere compare to the forensic identification of a human skeleton: Combinations of features are "unique" in the sense of being found in only one region, or more weakly limited to one region at high frequency (very rarely in another). Wolpoff stresses that regional continuity works in conjunction with genetic exchanges between populations. Long-term regional continuity in certain morphological traits is explained by Alan Thorne's "centre and edge" population genetics model which resolves Weidenreich's paradox of "how did populations retain geographical distinctions and yet evolve together?". For example, in 2001 Wolpoff and colleagues published an analysis of character traits of the skulls of early modern human fossils in Australia and central Europe. They concluded that the diversity of these recent humans could not "result exclusively from a single late Pleistocene dispersal", and implied dual ancestry for each region, involving interbreeding with Africans. Indonesia, Australia Thorne held that there was regional continuity in Indonesia and Australia for a morphological clade. This sequence is said to consist of the earliest fossils from Sangiran, Java, that can be traced through Ngandong and found in prehistoric and recent Aboriginal Australians. In 1991, Andrew Kramer tested 17 proposed morphological clade features. He found that: "a plurality (eight) of the seventeen non-metric features link Sangiran to modern Australians" and that these "are suggestive of morphological continuity, which implies the presence of a genetic continuum in Australasia dating back at least one million years" but Colin Groves has criticized Kramer's methodology, pointing out that the polarity of characters was not tested and that the study is actually inconclusive. Phillip Habgood discovered that the characters said to be unique to the Australasian region by Thorne are plesiomorphic: Yet, regardless of these criticisms Habgood (2003) allows for limited regional continuity in Indonesia and Australia, recognizing four plesiomorphic features which do not appear in such a unique combination on fossils in any other region: a sagittally flat frontal bone, with a posterior position of minimum frontal breadth, great facial prognathism, and zygomaxillary tuberosities. This combination, Habgood says, has a "certain Australianness about it". Wolpoff, initially skeptical of Thorne's claims, became convinced when reconstructing the Sangiran 17 Homo erectus skull from Indonesia, when he was surprised that the skull's face to vault angle matched that of the Australian modern human Kow Swamp 1 skull in excessive prognathism. Durband (2007) in contrast states that "features cited as showing continuity between Sangiran 17 and the Kow Swamp sample disappeared in the new, more orthognathic reconstruction of that fossil that was recently completed". Baba et al. who newly restored the face of Sangiran 17 concluded: "regional continuity in Australasia is far less evident than Thorne and Wolpoff argued". 
China Xinzhi Wu has argued for a morphological clade in China spanning the Pleistocene, characterized by a combination of 10 features. The sequence is said to start with Lantian and Peking Man, traced to Dali, to Late Pleistocene specimens (e.g. Liujiang) and recent Chinese. Habgood in 1992 criticized Wu's list, pointing out that most of the 10 features in combination appear regularly on fossils outside China. He did, though, note that three features in combination (a non-depressed nasal root; non-projecting, perpendicularly oriented nasal bones; and facial flatness) are unique to the Chinese region in the fossil record and may be evidence for limited regional continuity. However, according to Chris Stringer, Habgood's study suffered from not including enough fossil samples from North Africa, many of which exhibit the small combination he considered to be region-specific to China. Facial flatness as a morphological clade feature has been rejected by many anthropologists, since it is found on many early African Homo erectus fossils and is therefore considered plesiomorphic, but Wu has responded that the form of facial flatness in the Chinese fossil record appears distinct from other (i.e. primitive) forms. Toetik Koesbardiati in her PhD thesis "On the Relevance of the Regional Continuity Features of the Face in East Asia" also found that a form of facial flatness is unique to China (i.e. it only appears there at high frequency, very rarely elsewhere) but cautions that this is the only available evidence for regional continuity: "Only two features appear to show a tendency as suggested by the Multiregional model: flatness at the upper face expressed by an obtuse nasio-frontal angle and flatness at the middle part of the face expressed by an obtuse zygomaxillary angle". Shovel-shaped incisors are commonly cited as evidence for regional continuity in China. Stringer (1992), however, found that shovel-shaped incisors are present on >70% of the early Holocene Wadi Halfa fossil sample from North Africa, and are common elsewhere. Frayer et al. (1993) have criticized Stringer's method of scoring shovel-shaped incisor teeth. They discuss the fact that there are different degrees of shoveling, e.g. trace (+), semi (++), and marked (+++), but that Stringer misleadingly lumped all these together: "...combining shoveling categories in this manner is biologically meaningless and misleading, as the statistic cannot be validly compared with the very high frequencies for the marked shoveling category reported for East Asians." Palaeoanthropologist Fred H. Smith (2009) likewise emphasizes that it is the marked degree of shoveling, not just the occurrence of shoveling of any sort, that is identified as an East Asian regional feature. Multiregionalists argue that marked (+++) shovel-shaped incisors only appear in China at a high frequency, and have <10% occurrence elsewhere. Europe Since the early 1990s, David W. Frayer has described what he regards as a morphological clade in Europe. The sequence starts with the earliest dated Neanderthal specimens (the Krapina and Saccopastore skulls), traced through the mid-Late Pleistocene (e.g. La Ferrassie 1) to Vindija Cave, and on to late Upper Palaeolithic Cro-Magnons and recent Europeans. Although many anthropologists consider Neanderthals and Cro-Magnons morphologically distinct, Frayer maintains quite the opposite and points to their similarities, which he argues are evidence for regional continuity. Frayer et al. 
(1993) consider there to be at least four features in combination that are unique to the European fossil record: a horizontal-oval shaped mandibular foramen, anterior mastoid tubercle, suprainiac fossa, and narrowing of the nasal breadth associated with tooth-size reduction. Regarding the latter, Frayer observes a sequence of nasal narrowing in Neanderthals, following through to late Upper Palaeolithic and Holocene (Mesolithic) crania. His claims are disputed by others, but have received support from Wolpoff, who regards late Neanderthal specimens to be "transitional" in nasal form between earlier Neanderthals and later Cro Magnons. Based on other cranial similarities, Wolpoff et al. (2004) argue for a sizable Neanderthal contribution to modern Europeans. More recent claims regarding continuity in skeletal morphology in Europe focus on fossils with both Neanderthal and modern anatomical traits, to provide evidence of interbreeding rather than replacement. Examples include the Lapedo child found in Portugal and the Oase 1 mandible from Peștera cu Oase, Romania, though the "Lapedo child" is disputed by some. Genetic evidence Mitochondrial Eve A 1987 analysis of mitochondrial DNA from 147 people by Cann et al. from around the world indicated that their mitochondrial lineages all coalesced in a common ancestor from Africa between 140,000 and 290,000 years ago. The analysis suggested that this reflected the worldwide expansion of modern humans as a new species, replacing, rather than mixing with, local archaic humans outside of Africa. Such a recent replacement scenario is not compatible with the Multiregional hypothesis and the mtDNA results led to increased popularity for the alternative single replacement theory. According to Wolpoff and colleagues: Multiregionalists have responded to what they see as flaws in the Eve theory, and have offered contrary genetic evidences. Wu and Thorne have questioned the reliability of the molecular clock used to date Eve. Multiregionalists point out that Mitochondrial DNA alone can not rule out interbreeding between early modern and archaic humans, since archaic human mitochondrial strains from such interbreeding could have been lost due to genetic drift or a selective sweep. Wolpoff for example states that Eve is "not the most recent common ancestor of all living people" since "Mitochondrial history is not population history". Neanderthal mtDNA Neanderthal mitochondrial DNA (mtDNA) sequences from Feldhofer and Vindija Cave are substantially different from modern human mtDNA. Multiregionalists however have discussed the fact that the average difference between the Feldhofer sequence and living humans is less than that found between chimpanzee subspecies, and therefore that while Neanderthals were different subspecies, they were still human and part of the same lineage. Nuclear DNA Initial analysis of Y chromosome DNA, which like mitochondrial DNA, is inherited from only one parent, was consistent with a recent African replacement model. However, the mitochondrial and Y chromosome data could not be explained by the same modern human expansion out of Africa; the Y chromosome expansion would have involved genetic mixing that retained regionally local mitochondrial lines. In addition, the Y chromosome data indicated a later expansion back into Africa from Asia, demonstrating that gene flow between regions was not unidirectional. 
An early analysis of 15 noncoding sites on the X chromosome found additional inconsistencies with the recent African replacement hypothesis. The analysis found a multimodal distribution of coalescence times to the most recent common ancestor for those sites, contrary to the predictions for recent African replacement; in particular, there were more coalescence times near 2 million years ago (mya) than expected, suggesting an ancient population split around the time humans first emerged from Africa as Homo erectus, rather than more recently as suggested by the mitochondrial data. While most of these X chromosome sites showed greater diversity in Africa, consistent with African origins, a few of the sites showed greater diversity in Asia rather than Africa. For four of the 15 gene sites that did show greater diversity in Africa, the sites' varying diversity by region could not be explained by simple expansion from Africa, as would be required by the recent African replacement hypothesis. Later analyses of X chromosome and autosomal DNA continued to find sites with deep coalescence times inconsistent with a single origin of modern humans, diversity patterns inconsistent with a recent expansion from Africa, or both. For example, analyses of a region of RRM2P4 (ribonucleotide reductase M2 subunit pseudogene 4) showed a coalescence time of about 2 Mya, with a clear root in Asia, while the MAPT locus at 17q21.31 is split into two deep genetic lineages, one of which is common in and largely confined to the present European population, suggesting inheritance from Neanderthals. In the case of the Microcephalin D allele, evidence for rapid recent expansion indicated introgression from an archaic population. However, later analysis, including of the genomes of Neanderthals, did not find the Microcephalin D allele (in the proposed archaic species), nor evidence that it had introgressed from an archaic lineage as previously suggested. In 2001, a DNA study of more than 12,000 men from 163 East Asian regions showed that all of them carry a mutation that originated in Africa about 35,000 to 89,000 years ago and these "data do not support even a minimal in situ hominid contribution in the origin of anatomically modern humans in East Asia". In a 2005 review and analysis of the genetic lineages of 25 chromosomal regions, Alan Templeton found evidence of more than 34 occurrences of gene flow between Africa and Eurasia. Of these occurrences, 19 were associated with continuous restricted gene exchange through at least 1.46 million years ago; only 5 were associated with a recent expansion from Africa to Eurasia. Three were associated with the original expansion of Homo erectus out of Africa around 2 million years ago, 7 with an intermediate expansion out of Africa at a date consistent with the expansion of Acheulean tool technology, and a few others with other gene flows such as an expansion out of Eurasia and back into Africa subsequent to the most recent expansion out of Africa. Templeton rejected a hypothesis of complete recent African replacement with greater than 99% certainty (p < 10−17). Ancient DNA Recent analyses of DNA taken directly from Neanderthal specimens indicates that they or their ancestors contributed to the genome of all humans outside of Africa, indicating there was some degree of interbreeding with Neanderthals before their replacement. It has also been shown that Denisova hominins contributed to the DNA of Melanesians and Australians through interbreeding. 
By 2006, extraction of DNA directly from some archaic human samples was becoming possible. The earliest analyses were of Neanderthal DNA, and indicated that the Neanderthal contribution to modern human genetic diversity was no more than 20%, with a most likely value of 0%. By 2010, however, detailed DNA sequencing of the Neanderthal specimens from Europe indicated that the contribution was nonzero, with Neanderthals sharing 1-4% more genetic variants with living non-Africans than with living humans in sub-Saharan Africa. In late 2010, a recently discovered non-Neanderthal archaic human, the Denisova hominin from south-western Siberia, was found to share 4–6% more of its genome with living Melanesian humans than with any other living group, supporting admixture between two regions outside of Africa. In August 2011, human leukocyte antigen (HLA) alleles from the archaic Denisovan and Neanderthal genomes were found to show patterns in the modern human population demonstrating origins from these non-African populations; the ancestry from these archaic alleles at the HLA-A site was more than 50% for modern Europeans, 70% for Asians, and 95% for Papua New Guineans. Proponents of the multiregional hypothesis believe the combination of regional continuity inside and outside of Africa and lateral gene transfer between various regions around the world supports the multiregional hypothesis. However, "Out of Africa" Theory proponents also explain this with the fact that genetic changes occur on a regional basis rather than a continental basis, and populations close to each other are likely to share certain specific regional SNPs while sharing most other genes in common. Migration Matrix theory (A=Mt) indicates that dependent upon the potential contribution of Neanderthal ancestry, we would be able to calculate the percentage of Neanderthal mtDNA contribution to the human species. As we do not know the specific migration matrix, we are unable to input the exact data, which would answer these questions irrefutably. See also Human evolution Human origins Interbreeding between archaic and modern humans Mitochondrial Eve Phyletic gradualism Recent African origin of modern humans Y-chromosomal Adam References Further reading External links Templeton's lattice diagram showing major gene flows graphically. Via Conrante.com. Notes on drift and migration with equations for calculating the effects on allele frequencies of different populations. Via Darwin.EEB.CUonn.edu . "Human Evolution" (2011). Britannica.com. Plural Lineages in the Human mtDNA Genome. Via Rafonda.com. Human Timeline (Interactive) (August 2016). Smithsonian Institution, National Museum of Natural History. Anatomically modern humans Biological hypotheses Human evolution Race (human categorization) 1984 introductions es:Poligenismo id:Asal-usul multiregional manusia modern
Timeline
A timeline is a list of events displayed in chronological order. It is typically a graphic design showing a long bar labelled with dates paralleling it, and usually contemporaneous events. Timelines can use any suitable scale representing time, suiting the subject and data; many use a linear scale, in which a unit of distance is equal to a set amount of time. This timescale is dependent on the events in the timeline. A timeline of evolution can be over millions of years, whereas a timeline for the day of the September 11 attacks can take place over minutes, and that of an explosion over milliseconds. While many timelines use a linear timescale—especially where very large or small timespans are relevant -- logarithmic timelines entail a logarithmic scale of time; some "hurry up and wait" chronologies are depicted with zoom lens metaphors. More usually, "timeline" refers merely to a data set which could be displayed as described above. For example, this meaning is used in the titles of many Wikipedia articles starting "Timeline of ..." History Time and space (particularly the line) are intertwined concepts in human thought. The line is ubiquitous in clocks in the form of a circle, time is spoken of in terms of length, intervals, a before and an after. The idea of orderly, segmented time is also represented in almanacs, calendars, charts, graphs, genealogical and evolutionary trees, where the line is central. Originally, chronological events were arranged in a mostly textual form. This took form in annals, like king lists. Alongside them, the table was used like in the Greek tables of Olympiads and Roman lists of consuls and triumphs. Annals had little narrative and noted what happened to people, making no distinction between natural and human actions. In Europe, from the 4th century, the dominant chronological notation was the table. This can be partially credited to Eusebius, who laid out the relations between Jewish, pagan, and Christian histories in parallel columns, culminating in the Roman Empire, according to the Christian view when Christ was born to spread salvation as far as possible. His work was widely copied and was among the first printed books. This served the idea of Christian world history and providential time. The table is easy to produce, append, and read with indices, so it also fit the Renaissance scholars' absorption of a wide variety of sources with its focus on commonalities. These uses made the table with years in one column and places of events (kingdoms) on the top the dominant visual structure of time. By the 17th century, historians had started to claim that chronology and geography were the two sources of precise information which bring order to the chaos of history. In geography, Renaissance mapmakers updated Ptolemy's maps and the map became a symbol of the power of monarchs, and knowledge. Likewise, the idea that a singular chronology of world history from contemporary sources is possible affected historians. The want for precision in chronology gave rise to adding historical eclipses to tables, like in the case of Gerardus Mercator. Various graphical experiments emerged, from fitting the whole of history on a calendar year to series of historical drawings, in the hopes of making a metaphorical map of time. Developments in printing and engraving that made practical larger and more detailed book illustrations allowed these changes, but in the 17th century, the table with some modifications continued to dominate. 
The modern timeline emerged in Joseph Priestley's A Chart of Biography, published in 1765. It presented dates simply and provided an analogue for the concept of historical progress that was becoming popular in the 18th century. However, as Priestley recognized, history is not totally linear. The table has the advantage in that it can present many of these intersections and branching paths. For Priestley, its main use was a "mechanical help to the knowledge of history", not as an image of history. Regardless, the timeline had become very popular during the 18th and 19th centuries. Positivism emerged in the 19th century and the development of chronophotography and tree ring analysis made visible time taking place at various speeds. This encouraged people to think that events might be truly objectively recorded. However, in some cases, filling in a timeline with more data only pushed it towards impracticality. Jacques Barbeu-Duborg's 1753 Chronologie Universelle was mounted on a 54-feet-long (16½ m) scroll. Charles Joseph Minard's 1869 thematic map of casualties of the French army in its Russian campaign put much less focus on the one-directional line. Charles Renouvier's 1876 Uchronie, a branching map of the history of Europe, depicted both the actual course of history and counterfactual paths. At the end of the 19th century, Henri Bergson declared the metaphor of the timeline to be deceiving in Time and Free Will. The question of big history and deep time engendered estranging forms of the timeline, like in Olaf Stapledon's 1930 work Last and First Men where timelines are drawn on scales from the historical to the cosmological. Similar techniques are used by the Long Now Foundation, and the difficulties of chronological representation have been presented by visual artists including Francis Picabia, On Kawara, J. J. Grandville, and Saul Steinberg. Types There are different types of timelines: Text timelines, labeled as text Number timelines, the labels are numbers, commonly line graphs Interactive, clickable, zoomable Video timelines There are many methods to visualize timelines. Historically, timelines were static images and were generally drawn or printed on paper. Timelines relied heavily on graphic design, and the ability of the artist to visualize the data. Uses Timelines are often used in education to help students and researchers with understanding the order or chronology of historical events and trends for a subject. To show time on a specific scale on an axis, a timeline can visualize time lapses between events, durations (such as lifetimes or wars), and the simultaneity or the overlap of spans and events. In historical studies Timelines are particularly useful for studying history, as they convey a sense of change over time. Wars and social movements are often shown as timelines. Timelines are also useful for biographies. Examples include: Timeline of the civil rights movement Timeline of European exploration Timeline of European imperialism Timeline of Solar System exploration Timeline of United States history Timeline of World War I List of timelines of World War II Timeline of religion In natural sciences Timelines are also used in the natural world and sciences, such as in astronomy, biology, chemistry, and geology: 2009 swine flu pandemic timeline Chronology of the universe Geologic time scale Timeline of the evolutionary history of life Timeline of crystallography In project management Another type of timeline is used for project management. 
Timelines help team members know what milestones need to be achieved and under what time schedule. An example is establishing a project timeline in the implementation phase of the life cycle of a computer system. Software Timelines, no longer constrained by earlier space and functional limitations, are now digital and interactive, generally created with computer software. The Microsoft Encarta encyclopedia provided one of the earliest multimedia timelines intended for students and the general public. ChronoZoom is another example of interactive timeline software (a minimal plotting sketch of a simple linear-scale timeline follows this article). See also Chronology ChronoZoom – an open source project for visualizing the timeline of Big History Detailed logarithmic timeline List of timelines Living graph Logarithmic timeline Many-worlds interpretation Sequence of events Synchronoptic view Timecode Timestream Timelines of world history World line References External links Infographics Statistical charts and diagrams Chronology Visualization (graphics)
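The following is a minimal sketch, not taken from the article, of the linear-scale idea described in the opening section: events are placed along an axis so that equal distances correspond to equal amounts of time. It assumes Python with matplotlib; the example events are dates mentioned in the History section above and are used purely for illustration.

import matplotlib.pyplot as plt

# Example events (year, label), drawn from the History section, for illustration only.
events = [
    (1765, "Priestley's Chart of Biography"),
    (1869, "Minard's campaign map"),
    (1876, "Renouvier's Uchronie"),
    (1930, "Stapledon's Last and First Men"),
]

fig, ax = plt.subplots(figsize=(8, 2))
years = [year for year, _ in events]
ax.hlines(0, min(years) - 10, max(years) + 10, colors="gray")  # the time axis
ax.plot(years, [0] * len(years), "o")                          # one marker per event
for year, label in events:
    ax.annotate(label, (year, 0), xytext=(0, 10),
                textcoords="offset points", rotation=45, ha="left")
ax.set_yticks([])        # only the horizontal (time) dimension carries meaning
ax.set_xlabel("Year")
plt.tight_layout()
plt.show()

A logarithmic timeline, as mentioned in the opening section, would simply replace the linear horizontal axis with a logarithmic one (for example, elapsed time before the present), compressing distant spans and expanding recent ones.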
Philology
Philology is the study of language in oral and written historical sources. It is the intersection of textual criticism, literary criticism, history, and linguistics, with strong ties to etymology. Philology is also defined as the study of literary texts and oral and written records, the establishment of their authenticity and their original form, and the determination of their meaning. A person who pursues this kind of study is known as a philologist. In older usage, especially British, philology is more general, covering comparative and historical linguistics. Classical philology studies classical languages. Classical philology principally originated from the Library of Pergamum and the Library of Alexandria around the fourth century BC and was continued by Greeks and Romans throughout the Roman and Byzantine Empires. It was eventually resumed by European scholars of the Renaissance, and was soon joined by philologies of other European (Romance, Germanic, Celtic), Eurasian (Slavic, etc.), Asian (Arabic, Persian, Sanskrit, Chinese, etc.), and African (Egyptian, Nubian, etc.) languages. Indo-European studies involve the comparative philology of all Indo-European languages. Philology, with its focus on historical development (diachronic analysis), is contrasted with linguistics due to Ferdinand de Saussure's insistence on the importance of synchronic analysis. While the contrast continued with the emergence of structuralism and Noam Chomsky's emphasis on syntax, research in historical linguistics often relies on philological materials and findings. Etymology The term philology is derived from the Greek φιλολογία (philología), from the terms φίλος (phílos) 'love, affection, loved, beloved, dear, friend' and λόγος (lógos) 'word, articulation, reason', describing a love of learning, of literature, as well as of argument and reasoning, reflecting the range of activities included under the notion of lógos. The term changed little with the Latin philologia, and later entered the English language in the 16th century, from the Middle French philologie, in the sense of 'love of literature'. The adjective φιλόλογος (philólogos) meant 'fond of discussion or argument, talkative' in Hellenistic Greek, also implying an excessive ("sophistic") preference for argument over the love of true wisdom associated with the φιλόσοφος (philósophos). As an allegory of literary erudition, philologia appears in fifth-century postclassical literature (Martianus Capella, De nuptiis Philologiae et Mercurii), an idea revived in Late Medieval literature (Chaucer, Lydgate). The meaning of "love of learning and literature" was narrowed to "the study of the historical development of languages" (historical linguistics) in 19th-century usage of the term. Due to the rapid progress made in understanding sound laws and language change, the "golden age of philology" lasted throughout the 19th century, or "from Giacomo Leopardi and Friedrich Schlegel to Nietzsche". Branches Comparative The comparative linguistics branch of philology studies the relationship between languages. Similarities between Sanskrit and European languages were first noted in the early 16th century and led to speculation about a common ancestor language from which all these descended. It is now named Proto-Indo-European. Philology's interest in ancient languages led to the study of what were, in the 18th century, considered "exotic" languages, for the light they could cast on problems in understanding and deciphering the origins of older texts. Textual Philology also includes the study of texts and their history. 
It includes elements of textual criticism, trying to reconstruct an author's original text based on variant copies of manuscripts. This branch of research arose among ancient scholars in the Greek-speaking world of the 4th century BC, who desired to establish a standard text of popular authors for both sound interpretation and secure transmission. Since that time, the original principles of textual criticism have been improved and applied to other widely distributed texts such as the Bible. Scholars have tried to reconstruct the original readings of the Bible from the manuscript variants. This method was applied to classical studies and medieval texts as a way to reconstruct the author's original work. The method produced so-called "critical editions", which provided a reconstructed text accompanied by a "critical apparatus", i.e., footnotes that listed the various manuscript variants available, enabling scholars to gain insight into the entire manuscript tradition and argue about the variants. A related study method known as higher criticism studies the authorship, date, and provenance of text to place such text in a historical context. As these philological issues are often inseparable from issues of interpretation, there is no clear-cut boundary between philology and hermeneutics. When text has a significant political or religious influence (such as the reconstruction of Biblical texts), scholars have difficulty reaching objective conclusions. Some scholars avoid all critical methods of textual philology, especially in historical linguistics, where it is important to study the actual recorded materials. The movement known as new philology has rejected textual criticism because it injects editorial interpretations into the text and destroys the integrity of the individual manuscript, hence damaging the reliability of the data. Supporters of new philology insist on a strict "diplomatic" approach: a faithful rendering of the text exactly as found in the manuscript, without emendations. Cognitive Another branch of philology, cognitive philology, studies written and oral texts. Cognitive philology considers these oral texts as the results of human mental processes. This science compares the results of textual science with the results of experimental research of both psychology and artificial intelligence production systems. Decipherment In the case of Bronze Age literature, philology includes the prior decipherment of the language under study. This has notably been the case with the Egyptian, Sumerian, Assyrian, Hittite, Ugaritic, and Luwian languages. Beginning with the famous decipherment and translation of the Rosetta Stone by Jean-François Champollion in 1822, some individuals attempted to decipher the writing systems of the Ancient Near East and Aegean. In the case of Old Persian and Mycenaean Greek, decipherment yielded older records of languages already known from slightly more recent traditions (Middle Persian and Alphabetic Greek). Work on the ancient languages of the Near East progressed rapidly. In the mid-19th century, Henry Rawlinson and others deciphered the Behistun Inscription, which records the same text in Old Persian, Elamite, and Akkadian, using a variation of cuneiform for each language. The elucidation of cuneiform led to the decipherment of Sumerian. Hittite was deciphered in 1915 by Bedřich Hrozný. 
Linear B, a script used in the ancient Aegean, was deciphered in 1952 by Michael Ventris and John Chadwick, who demonstrated that it recorded an early form of Greek, now known as Mycenaean Greek. Linear A, the writing system that records the still-unknown language of the Minoans, resists deciphering, despite many attempts. Work continues on scripts such as the Maya, with great progress since the initial breakthroughs of the phonetic approach championed by Yuri Knorozov and others in the 1950s. Since the late 20th century, the Maya script has been almost completely deciphered, and the Mayan languages are among the most documented and studied in Mesoamerica. The script is described as a logosyllabic style of writing. Contention In English-speaking countries, usage of the term "philology" to describe work on languages and works of literature, which had become synonymous with the practices of German scholars, was abandoned as a consequence of anti-German feeling following World War I. Most continental European countries still maintain the term to designate departments, colleges, position titles, and journals. J. R. R. Tolkien opposed the nationalist reaction against philological practices, claiming that "the philological instinct" was "universal as is the use of language". In British English usage, and in British academia, philology remains largely synonymous with "historical linguistics", while in US English, and US academia, the wider meaning of "study of a language's grammar, history and literary tradition" remains more widespread. Based on the harsh critique of Friedrich Nietzsche, some US scholars since the 1980s have viewed philology as responsible for a narrowly scientistic study of language and literature. Modern disagreements about the discipline concern how its methods are regarded by other scholars: the philologists R. D. Fulk and Leonard Neidorf, for example, note that philology's commitment to falsification places it "at odds with what many literary scholars believe", because the purpose of philology is to narrow the range of possible interpretations rather than to treat all reasonable ones as equal. This use of falsification can be seen in the debate surrounding the etymology of the Old English character Unferth from the heroic epic poem Beowulf. James Turner further disagrees with how the use of the term is dismissed in the academic world, stating that due to its branding as a "simpleminded approach to their subject" the term has become unknown to college-educated students, furthering the stereotypes of "scrutiny of ancient Greek or Roman texts of a nit-picking classicist" and only the "technical research into languages and families". In popular culture In The Space Trilogy by C. S. Lewis, the main character, Elwin Ransom, is a philologist – as was Lewis' close friend J. R. R. Tolkien. Dr. Edward Morbius, one of the main characters in the science fiction film Forbidden Planet, is a philologist. Philip, the main character of Christopher Hampton's 'bourgeois comedy' The Philanthropist, is a professor of philology in an English university town. Moritz-Maria von Igelfeld, the main character in Alexander McCall Smith's 1997 comic novel Portuguese Irregular Verbs, is a philologist, educated at Cambridge. The main character in the Academy Award nominee for Best Foreign Language Film in 2012, Footnote, is a Hebrew philologist, and a significant part of the film deals with his work. 
The main character of the science fiction TV show Stargate SG-1, Dr. Daniel Jackson, is mentioned as having a PhD in philology. See also American Journal of Philology References External links Philology in Runet—(A special web search through the philological sites of Runet) v: Topic:German philology CogLit: Literature and Cognitive Linguistics A Bibliography of Literary Theory, Criticism, and Philology (ed. José Ángel García Landa, University of Zaragoza, Spain) Academic disciplines Historical linguistics Writing Textual scholarship
Henriad
In Shakespearean scholarship, the Henriad refers to a group of William Shakespeare's history plays depicting the rise of the English kings. It is sometimes used to refer to a group of four plays (a tetralogy), but some sources and scholars use the term to refer to eight plays. In the 19th century, Algernon Charles Swinburne used the term to refer to three plays, but that use is not current. In one sense, the Henriad refers to Richard II; Henry IV, Part 1; Henry IV, Part 2; and Henry V, with the implication that these four plays are Shakespeare's epic, and that Prince Hal, who later becomes Henry V, is the epic hero. (This group may also be referred to as the "second tetralogy" or "second Henriad".) In a more inclusive meaning, the Henriad refers to eight plays: the tetralogy mentioned above (Richard II; Henry IV, Part 1; Henry IV, Part 2; and Henry V), plus four plays that were written earlier and are based on the civil wars now known as The Wars of the Roses: Henry VI, Part 1; Henry VI, Part 2; Henry VI, Part 3; and Richard III. The second tetralogy The term Henriad was popularized by Alvin Kernan in his 1969 article "The Henriad: Shakespeare's Major History Plays", to suggest that the four plays of the second tetralogy (Richard II; Henry IV, Part 1; Henry IV, Part 2; and Henry V), when considered together as a group, or a dramatic tetralogy, have coherence and characteristics that are the primary qualities associated with literary epic: "large-scale heroic action involving many men and many activities tracing the movement of a nation or people through violent change from one condition to another." In this context Kernan sees the four plays as analogous to Homer's Iliad, Virgil's Aeneid, Voltaire's Henriade, and Milton's Paradise Lost. The action of the Henriad follows the dynastic, cultural and psychological journey that England traveled as it left the medieval world with Richard II and moved on to Henry V and the Renaissance. Politically and socially, the Henriad represents a "movement from feudalism and hierarchy to the national state and individualism". Kernan similarly discusses the Henriad in psychological, spatial, temporal, and mythical terms. "In mythical terms," he says, "the passage is from a garden world to a fallen world." This group of plays has recurring characters and settings. However, there is no evidence that these plays were written with the intention that they be considered as a group. The character Falstaff is introduced in Henry IV, pt. 1; he returns in Henry IV, pt. 2; and he dies early in Henry V. Falstaff represents the tavern world, a world which Prince Hal will leave behind. (This group of three plays is occasionally dubbed the "Falstaffiad" by Harold Bloom and others.) Eight-play Henriad Following Kernan, the term Henriad acquired an expanded second meaning, which refers to two groups of Shakespearean plays: the tetralogy mentioned above (Richard II; Henry IV, Part 1; Henry IV, Part 2; and Henry V), and also four plays that were written earlier and are based on the historic events and civil wars now known as The Wars of the Roses: Henry VI, Part 1; Henry VI, Part 2; Henry VI, Part 3; and Richard III. In this sense, the eight Henry plays are known as the Henriad, and, when divided in two, the group written earlier may be known as the "first Henriad", with the group written later known as the "second Henriad". 
The two Shakespearean tetralogies share the name Henriad, but only the "second Henriad" has the epic qualities that Kernan had in mind in his use of the term. In this way the two definitions are somewhat contradictory and overlapping. Which meaning is intended can usually be derived by the context. The eight plays, when considered together, are said to tell a unified story of a significant arc of British history from Richard II to Richard III. These plays cover this history, while going beyond the English chronicle play; they include some of Shakespeare's greatest writing. They are not tragedies, but as history plays they are comparable in terms of dramatic or literary quality and meaning. When considered as a group they contain a narrative pattern: disaster, followed by chaos and a battle of contending forces, followed by the happy ending—the restitution of order. This pattern is repeated in every play, as Britain leaves the medieval world and moves towards the British Renaissance. These plays further express the "Elizabethan world order", or mankind's striving in a world of unity battling chaos, based on the Elizabethan era's philosophies, sense of history, and religion. The eight-play Henriad is also known as The First Tetralogy and The Second Tetralogy; a terminology that had been in use, but was made popular by the influential Shakespearean scholar E.M.W. Tillyard in his 1944 book, Shakespeare’s History Plays. The word "tetralogy" is derived from the performance tradition of the Dionysian Festival of ancient Athens, in which a poet was to compose a tetralogy (τετραλογία): three tragedies and one comedic satyr play. Tillyard studied these Shakespearean history plays as combined in a dramatic serial form, and analyzed how, when combined, the stories, characters, historic chronology, and themes are linked and portrayed. After Tillyard's book, these plays have often been combined in performance, and it would be a very rare occurrence for Henry VI, part 2 or 3, for example, to be performed individually. Tillyard considered each tetralogy linked, and that the characters themselves link the stories together when they tell their own history or explain their titles. The theories that consider the eight plays as a group dominated scholarship in the mid 20th century, when the idea was introduced, and have since engendered a great deal of discussion. King John is not included in the Henriad because it is said to have a style that is of a different order than the other history plays. King John has great qualities of poetry, freedom and imagination, and is appreciated as a new direction taken by the author. Henry VIII is not included due to unresolved questions regarding how much of it is coauthored, and what of it is written by Shakespeare. Three-play Henriad In Algernon Charles Swinburne's book A Study of Shakespeare (1880), he refers to three plays, Henry IV pt. 1, Henry IV pt. 2, and Henry V, as "our English Henriade", and says the "ripest fruit of historic or national drama, the consummation and the crown of Shakespeare’s labours in that line, must of course be recognised and saluted by all students in the supreme and sovereign trilogy of King Henry IV and King Henry V." They are, according to Swinburne, England's "great national trilogy", and Shakespeare's "perfect triumph in the field of patriotic drama." H. A. Kennedy writing in 1896 refers to Henry IV pt. 1, Henry IV pt. 
2, and Henry V, saying "taken together the three plays form a Henriade, a trilogy, whose central figure is the hero of Agincourt, whose subject is his development from the madcap prince to the conqueror of France". Authorship Shakespeare is well established as the sole author of the plays of the second Henriad, but there has been speculation regarding possible co-authors of the Henry VI plays of the first Henriad. In particular, the 16th-century playwright Christopher Marlowe has been suggested as a possible contributor. Then in 2016 the editors of the New Oxford Shakespeare, led by Gary Taylor, announced that Marlowe and "anonymous" would be listed on their title pages of Henry VI, Parts 2 and 3 as co-authors side-by-side with Shakespeare, and that Marlowe, Thomas Nashe and "anonymous" would be listed as the authors of Henry VI, Part 1, with Shakespeare listed only as the adaptor. This is not universally accepted, but it is the first time a major critical edition of Shakespeare's works has listed Marlowe as a co-author. Literary background The plays that may have influenced, inspired, or provided a tradition for Shakespeare's Henriad plays would include popular morality plays, which contributed to the evolution of British drama. Notable morality plays that focus on British history include John Skelton's Magnificence (1533), David Lyndsay's A Satire of the Three Estates (1552), and John Bale's play King John (c. 1538). Gorboduc (1561) is considered the first Senecan tragedy in the English language, though it is a chronicle play written in blank verse; it has numerous serious speeches, a unified dramatic action, and its violence is kept off-stage (Ward, A. W., ed., "Phyllyp Sparowe", The Cambridge History of English and American Literature, Cambridge University, 1907–21, Volume III: Renascence and Reformation). Out of this tradition the English chronicle play developed to carry on the tradition of the medieval moralities, to provide historic stories and memorials of historic figures, and to teach morality. When King Lear was published as a quarto in 1608 it was called a "true English Chronicle". Some notable examples of the English chronicle include George Peele's Edward I, John Lyly’s Midas (1591), Robert Greene's Orlando Furioso, Thomas Heywood’s Edward IV, and Robert Wilson's Three Lords and Three Ladies of London (1590). Holinshed's Chronicles (1587) contributed greatly to the plays of Shakespeare's Henriad, and also advanced the development of the English chronicle play (Tillyard, E. M. W., Shakespeare’s History Plays, Chatto & Windus, 1944). Criticism In his book Shakespeare’s History Plays, E. M. W. Tillyard advanced mid-20th-century theories regarding the eight-play Henriad that have been extremely influential. Tillyard supports the idea of the Tudor myth, which considers England's 15th century to be a dark time of lawlessness and warfare that after many battles eventually led to a golden age of the Tudor Period. This theory suggests that Shakespeare believed this orthodoxy and promoted it with his Henriad. The Tudor myth is a theory that suggests that Shakespeare, with his history plays, contributes to the idea that the civil wars of the Henriad were all part of a divine plan that would ultimately lead to the Tudors — which in turn would support Shakespeare's monarch, Elizabeth.
The argument against Tillyard's theory is that when these plays were written Elizabeth was approaching the end of her life and reign, and how her successor would be determined was causing the idea of a civil war to be a source of concern, not glorification. Furthermore, the lack of an heir to Elizabeth tended to outmode the idea that the Tudors were a divine solution. Critics, including Paul Murray Kendall and Jan Kott, challenged the idea of the Tudor myth, and these newer ideas caused the image of Shakespeare to change so much that he came to seem instead a prophetic voice in the wilderness who saw the existential meaninglessness of this history of warfare (Kott, Jan, Shakespeare Our Contemporary, Doubleday, 1966). If presented as one very long dramatic event, the plays of the Henriad do not cohere well together. In performance the plays can seem jumbled and tonally mismatched, and narratives are at times oddly dropped and resumed. Numerous inconsistencies exist between the individual plays of the first tetralogy, which is typical of serialized drama in the early modern playhouses. James Marino suggests, "It is more remarkable that any coherency appears at all in a 'series' cobbled together from elements of three different repertories". The four plays (of the first tetralogy) variously originated from three different theatre companies: The Queen's Men, Pembroke's Men and Chamberlain's Men. An earlier use An earlier use of the word "Henriad" to refer to a group of Shakespeare's plays occurs in a book published in 1876 titled Shakespeare’s Diversions; A Medley of Motley Wear. The author does not define the word, but indicates that the plays in which the character Mistress Quickly, hostess of the Boar's Head Tavern, appears include "The English Henriad" as well as The Merry Wives of Windsor. The source also indicates that the number of plays she appears in is four — "one more than is granted to Falstaff". The four plays that Mistress Quickly appears in are The Merry Wives of Windsor, the two parts of Henry IV, and Henry V. Voltaire’s Henriade The French critic and playwright Voltaire is known for making extreme criticisms of Shakespeare that he would then balance with more positive comments. For example, Voltaire called Shakespeare a "barbarian" and his works a "huge dunghill" that contains some pearls. Voltaire wrote an epic poem titled La Henriade (1723), which is sometimes translated as Henriade. Voltaire's poem is based on Henry IV of France (1553 – 1610). Algernon Charles Swinburne points out how the two similarly titled works, Shakespeare's and Voltaire's, are dissimilar, in that Shakespeare's "differs from Voltaire’s as Zaïre [a tragedy written by Voltaire] differs from Othello." Broadcast productions 1960: An Age of Kings 1979: BBC Television Shakespeare 2012 & 2016: The Hollow Crown, BBC2 References
Taphonomy
Taphonomy is the study of how organisms decay and become fossilized or preserved in the paleontological record. The term taphonomy (from Greek τάφος taphos, 'burial', and νόμος nomos, 'law') was introduced to paleontology in 1940 by Soviet scientist Ivan Efremov to describe the study of the transition of remains, parts, or products of organisms from the biosphere to the lithosphere. The term taphomorph is used to describe fossil structures that represent poorly-preserved, deteriorated remains of a mixture of taxonomic groups, rather than of a single one. Description Taphonomic phenomena are grouped into two phases: biostratinomy, events that occur between death of the organism and the burial; and diagenesis, events that occur after the burial. Since Efremov's definition, taphonomy has expanded to include the fossilization of organic and inorganic materials through both cultural and environmental influences. Taphonomy is now most widely defined as the study of what happens to objects after they leave the biosphere (living contexts), enter the lithosphere (buried contexts), and are subsequently recovered and studied. This is a multidisciplinary concept and is used in slightly different contexts throughout different fields of study. Fields that employ the concept of taphonomy include: Archaeobotany Archaeology Biology Forensic science Geoarchaeology Geology Paleoecology Paleontology Zooarchaeology There are five main stages of taphonomy: disarticulation, dispersal, accumulation, fossilization, and mechanical alteration. The first stage, disarticulation, occurs as the organism decays and the bones are no longer held together by the flesh and tendons of the organism. Dispersal is the separation of pieces of an organism caused by natural events (i.e. floods, scavengers etc.). Accumulation occurs when there is a buildup of organic and/or inorganic materials in one location (scavengers or human behavior). When mineral-rich groundwater permeates organic materials and fills the empty spaces, a fossil is formed. The final stage of taphonomy is mechanical alteration; these are the processes that physically alter the remains (i.e. freeze-thaw, compaction, transport, burial). These stages are not only successive; they also interplay. For example, chemical changes occur at every stage of the process, because of bacteria. Changes begin as soon as the organism dies: enzymes are released that destroy the organic contents of the tissues, and mineralised tissues such as bone, enamel and dentin are a mixture of organic and mineral components. Moreover, organisms (plant or animal) have most often died because they were killed by a predator; digestion modifies the composition of the flesh, but also that of the bones. Research areas Taphonomy has undergone an explosion of interest since the 1980s, with research focusing on certain areas. Microbial, biogeochemical, and larger-scale controls on the preservation of different tissue types; in particular, exceptional preservation in Konservat-Lagerstätten. Covered within this field is the dominance of biological versus physical agents in the destruction of remains from all major taxonomic groups (plants, invertebrates, vertebrates). Processes that concentrate biological remains; especially the degree to which different types of assemblages reflect the species composition and abundance of source faunas and floras. Actualistic taphonomy uses the present to understand past taphonomic events.
This is often done through controlled experiments, such as the role microbes play in fossilization, the effects of mammalian carnivores on bone, or the burial of bone in a water flume. Computer modeling is also used to explain taphonomic events. Studies on actualistic taphonomy gave rise to the discipline conservation paleobiology. The spatio-temporal resolution and ecological fidelity of species assemblages, particularly the relatively minor role of out-of-habitat transport contrasted with the major effects of time-averaging. The outlines of megabiases in the fossil record, including the evolution of new bauplans and behavioral capabilities, and by broad-scale changes in climate, tectonics, and geochemistry of Earth surface systems. The Mars Science Laboratory mission objectives evolved from assessment of ancient Mars habitability to developing predictive models on taphonomy. Paleontology One motivation behind taphonomy is to understand biases present in the fossil record better. Fossils are ubiquitous in sedimentary rocks, yet paleontologists cannot draw the most accurate conclusions about the lives and ecology of the fossilized organisms without knowing about the processes involved in their fossilization. For example, if a fossil assemblage contains more of one type of fossil than another, one can infer either that the organism was present in greater numbers, or that its remains were more resistant to decomposition. During the late twentieth century, taphonomic data began to be applied to other paleontological subfields such as paleobiology, paleoceanography, ichnology (the study of trace fossils) and biostratigraphy. By coming to understand the oceanographic and ethological implications of observed taphonomic patterns, paleontologists have been able to provide new and meaningful interpretations and correlations that would have otherwise remained obscure in the fossil record. In the marine environment, taphonomy, specifically aragonite loss, poses a major challenge in reconstructing past environments from the modern, notably in settings such as carbonate platforms. Forensic science Forensic taphonomy is a relatively new field that has increased in popularity in the past 15 years. It is a subfield of forensic anthropology focusing specifically on how taphonomic forces have altered criminal evidence. There are two different branches of forensic taphonomy: biotaphonomy and geotaphonomy. Biotaphonomy looks at how the decomposition and/or destruction of the organism has happened. The main factors that affect this branch are categorized into three groups: environmental factors; external variables, individual factors; factors from the organism itself (i.e. body size, age, etc.), and cultural factors; factors specific to any cultural behaviors that would affect the decomposition (burial practices). Geotaphonomy studies how the burial practices and the burial itself affects the surrounding environment. This includes soil disturbances and tool marks from digging the grave, disruption of plant growth and soil pH from the decomposing body, and the alteration of the land and water drainage from introducing an unnatural mass to the area. This field is extremely important because it helps scientists use the taphonomic profile to help determine what happened to the remains at the time of death (perimortem) and after death (postmortem). This can make a huge difference when considering what can be used as evidence in a criminal investigation. 
Archaeology Taphonomy is an important study for archaeologists to better interpret archaeological sites. Since the archaeological record is often incomplete, taphonomy helps explain how it became incomplete. The methodology of taphonomy involves observing transformation processes in order to understand their impact on archaeological material and interpret patterns on real sites. This is mostly in the form of assessing how the deposition of the preserved remains of an organism (usually animal bones) has occurred to better understand a deposit. Whether the deposition was a result of humans, animals, and/or the environment is often the goal of taphonomic study. Archaeologists typically separate natural from cultural processes when identifying evidence of human interaction with faunal remains. This is done by looking at human processes preceding artifact discard in addition to processes after artifact discard. Changes preceding discard include butchering, skinning, and cooking. Understanding these processes can inform archaeologists on tool use or how an animal was processed. When the artifact is deposited, abiotic and biotic modifications occur. These can include thermal alteration, rodent disturbances, gnaw marks, and the effects of soil pH, to name a few. While taphonomic methodology can be applied and used to study a variety of materials such as buried ceramics and lithics, its primary application in archaeology involves the examination of organic residues. Interpretation of the post-mortem, pre-, and post-burial histories of faunal assemblages is critical in determining their association with hominid activity and behaviour. For instance, to distinguish the bone assemblages that are produced by humans from those of non-humans, much ethnoarchaeological observation has been done on different human groups and carnivores, to ascertain if there is anything different in the accumulation and fragmentation of bones. This study has also come in the form of excavation of animal dens and burrows to study the discarded bones, and experimental breakage of bones with and without stone tools. Studies of this kind by C.K. Brain in South Africa have shown that bone fractures previously attributed to "killer man-apes" were in fact caused by the pressure of overlying rocks and earth in limestone caves. His research on cave sites such as Swartkrans in South Africa has also demonstrated that early hominins, for example australopithecines, were more likely preyed upon by carnivores than hunters themselves. Outside of Africa, Lewis Binford observed the effects of wolves and dogs on bones in Alaska and the American Southwest, differentiating the interference of humans and carnivores on bone remains by the number of bone splinters and the number of intact articular ends. He observed that animals gnaw and attack the articular ends first, leaving mostly bone cylinders behind; it can therefore be assumed that a deposit with a high number of bone cylinders and a low number of bones with intact articular ends is probably the result of carnivore activity. In practice John Speth applied these criteria to the bones from the Garnsey site in New Mexico. The rarity of bone cylinders indicated that there had been minimal destruction by scavengers, and that the bone assemblage could be assumed to be wholly the result of human activity, butchering the animals for meat and marrow extraction. One of the most important elements in this methodology is replication, to confirm the validity of results.
There are limitations to this kind of taphonomic study in archaeological deposits, as any analysis has to presume that processes in the past were the same as today, e.g. that living carnivores behaved in a similar way to those in prehistoric times. There are wide variations among existing species, so determining the behavioural patterns of extinct species is sometimes hard to justify. Moreover, the differences between faunal assemblages produced by animals and by humans are not always so distinct: hyenas and humans display similar patterning in breakage and form similarly shaped fragments, as the ways in which a bone can break are limited. Since large bones survive better than plants, this has also created a bias and inclination towards big-game hunting rather than gathering when considering prehistoric economies. While all of archaeology studies taphonomy to some extent, certain subfields deal with it more than others. These include zooarchaeology, geoarchaeology, and paleoethnobotany. Microbial mats Modern experiments have been conducted on post-mortem invertebrates and vertebrates to understand how microbial mats and microbial activity influence the formation of fossils and the preservation of soft tissues. In these studies, microbial mats entomb animal carcasses in a sarcophagus of microbes; this entombment delays decay. Entombed carcasses were observed to be more intact than non-entombed counterparts by years at a time. Microbial mats maintained and stabilized the articulation of the joints and the skeleton of post-mortem organisms, as seen in frog carcasses for up to 1080 days after coverage by the mats. The environment within the entombed carcasses is typically described as anoxic and acidic during the initial stage of decomposition. These conditions are perpetuated by the exhaustion of oxygen by aerobic bacteria within the carcass, creating an environment ideal for the preservation of soft tissues, such as muscle tissue and brain tissue. The anoxic and acidic conditions created by the mats also inhibit the process of autolysis within the carcasses, delaying decay even further. Endogenous gut bacteria have also been described as aiding the preservation of invertebrate soft tissue by delaying decay and stabilizing soft tissue structures. Gut bacteria form pseudomorphs replicating the form of soft tissues within the animal. These pseudomorphs are a possible explanation for the increased occurrence of preserved gut impressions among invertebrates. In the later stages of the prolonged decomposition of the carcasses, the environment within the sarcophagus alters to more oxic and basic conditions, promoting biomineralization and the precipitation of calcium carbonate. Microbial mats additionally play a role in the formation of molds and impressions of carcasses. These molds and impressions replicate and preserve the integument of animal carcasses. The degree to which this can occur has been demonstrated in frog skin preservation: the original morphology of the frog skin, including structures such as warts, was preserved for more than 1.5 years. The microbial mats also aided in the formation of the mineral gypsum embedded within the frog skin. The microbes that constitute the microbial mats, in addition to forming a sarcophagus, secrete exopolymeric substances (EPS) that drive biomineralization. The EPS provides a nucleation center for biomineralization.
During later stages of decomposition heterotrophic microbes degrade the EPS, facilitating the release of calcium ions into the environment and creating a Ca-enriched film. The degradation of the EPS and formation of the Ca-rich film is suggested to aid in the precipitation of calcium carbonate and further the process of biomineralization. Taphonomic biases in the fossil record Because of the very select processes that cause preservation, not all organisms have the same chance of being preserved. Any factor that affects the likelihood that an organism is preserved as a fossil is a potential source of bias. It is thus arguably the most important goal of taphonomy to identify the scope of such biases such that they can be quantified to allow correct interpretations of the relative abundances of organisms that make up a fossil biota. Some of the most common sources of bias are listed below. Physical attributes of the organism itself This perhaps represents the biggest source of bias in the fossil record. First and foremost, organisms that contain hard parts have a far greater chance of being represented in the fossil record than organisms consisting of soft tissue only. As a result, animals with bones or shells are overrepresented in the fossil record, and many plants are only represented by pollen or spores that have hard walls. Soft-bodied organisms may form 30% to 100% of the biota, but most fossil assemblages preserve none of this unseen diversity, which may exclude groups such as fungi and entire animal phyla from the fossil record. Many animals that moult, on the other hand, are overrepresented, as one animal may leave multiple fossils due to its discarded body parts. Among plants, wind-pollinated species produce so much more pollen than animal-pollinated species, the former being overrepresented relative to the latter. Characteristics of the habitat Most fossils form in conditions where material is deposited on the bottom of water bodies. Coastal areas are often prone to high rates of erosion, and rivers flowing into the sea may carry a high particulate load from inland. These sediments will eventually settle out, so organisms living in such environments have a much higher chance of being preserved as fossils after death than do those organisms living in non-depositing conditions. In continental environments, fossilization is likely in lakes and riverbeds that gradually fill in with organic and inorganic material. The organisms of such habitats are also liable to be overrepresented in the fossil record than those living far from these aquatic environments where burial by sediments is unlikely to occur. Mixing of fossils from different places A sedimentary deposit may have experienced a mixing of noncontemporaneous remains within single sedimentary units via physical or biological processes; i.e. a deposit could be ripped up and redeposited elsewhere, meaning that a deposit may contain a large number of fossils from another place (an allochthonous deposit, as opposed to the usual autochthonous). Thus, a question that is often asked of fossil deposits is to what extent does the fossil deposit record the true biota that originally lived there? Many fossils are obviously autochthonous, such as rooted fossils like crinoids, and many fossils are intrinsically obviously allochthonous, such as the presence of photoautotrophic plankton in a benthic deposit that must have sunk to be deposited. A fossil deposit may thus become biased towards exotic species (i.e. 
species not endemic to that area) when the sedimentology is dominated by gravity-driven surges, such as mudslides, or may become biased if there are very few endemic organisms to be preserved. This is a particular problem in palynology. Temporal resolution Because population turnover rates of individual taxa are much less than net rates of sediment accumulation, the biological remains of successive, noncontemporaneous populations of organisms may be admixed within a single bed, known as time-averaging. Because of the slow and episodic nature of the geologic record, two apparently contemporaneous fossils may have actually lived centuries, or even millennia, apart. Moreover, the degree of time-averaging in an assemblage may vary. The degree depends on many factors, such as tissue type, the habitat, the frequency of burial events and exhumation events, and the depth of bioturbation within the sedimentary column relative to net sediment accumulation rates. Like biases in spatial fidelity, there is a bias towards organisms that can survive reworking events, such as shells. An example of a more ideal deposit with respect to time-averaging bias would be a volcanic ash deposit, which captures an entire biota caught in the wrong place at the wrong time (e.g. the Silurian Herefordshire lagerstätte). Gaps in time series The geological record is very discontinuous, and deposition is episodic at all scales. At the largest scale, a sedimentological high-stand period may mean that no deposition occurs for millions of years and, in fact, erosion of the deposit may occur. Such a hiatus is called an unconformity. Conversely, a catastrophic event such as a mudslide may overrepresent a time period. At a shorter scale, scouring processes such as the formation of ripples and dunes and the passing of turbidity currents may cause layers to be removed. Thus the fossil record is biased towards periods of greatest sedimentation; periods of time that have less sedimentation are consequently less well represented in the fossil record. A related problem is the slow changes that occur in the depositional environment of an area; a deposit may experience periods of poor preservation due to, for example, a lack of biomineralizing elements. This causes the taphonomic or diagenetic obliteration of fossils, producing gaps and condensation of the record. Consistency in preservation over geologic time Major shifts in intrinsic and extrinsic properties of organisms, including morphology and behaviour in relation to other organisms or shifts in the global environment, can cause secular or long-term cyclic changes in preservation (megabias). Human biases Much of the incompleteness of the fossil record is due to the fact that only a small amount of rock is ever exposed at the surface of the Earth, and not even most of that has been explored. Our fossil record relies on the small amount of exploration that has been done on this. Unfortunately, paleontologists as humans can be very biased in their methods of collection, a bias that must be identified. Potential sources of bias include: Search images: field experiments have shown that paleontologists working on, say, fossil clams are better at collecting clams than anything else because their search image has been shaped to bias them in favour of clams. Relative ease of extraction: fossils that are easy to obtain (such as many phosphatic fossils that are easily extracted en masse by dissolution in acid) are overabundant in the fossil record.
Taxonomic bias: fossils with easily discernible morphologies will be easy to distinguish as separate species, and will thus have an inflated abundance. Preservation of biopolymers The taphonomic pathways involved in relatively inert substances such as calcite (and to a lesser extent bone) are relatively obvious, as such body parts are stable and change little through time. However, the preservation of "soft tissue" is more interesting, as it requires more peculiar conditions. While usually only biomineralised material survives fossilisation, the preservation of soft tissue is not as rare as sometimes thought. Both DNA and proteins are unstable, and rarely survive more than hundreds of thousands of years before degrading. Polysaccharides also have low preservation potential, unless they are highly cross-linked; this interconnection is most common in structural tissues, and renders them resistant to chemical decay. Such tissues include wood (lignin), spores and pollen (sporopollenin), the cuticles of plants (cutan) and animals, the cell walls of algae (algaenan), and potentially the polysaccharide layer of some lichens. This interconnectedness makes the chemicals less prone to chemical decay, and also means they are a poorer source of energy so less likely to be digested by scavenging organisms. After being subjected to heat and pressure, these cross-linked organic molecules typically "cook" and become kerogen or short (<17 C atoms) aliphatic/aromatic carbon molecules. Other factors affect the likelihood of preservation; for instance sclerotization renders the jaws of polychaetes more readily preserved than the chemically equivalent but non-sclerotized body cuticle. A peer-reviewed study in 2023 was the first to present an in-depth chemical description of how biological tissues and cells potentially preserve into the fossil record. This study generalized the chemistry underlying cell and tissue preservation to explain the phenomenon for potentially any cellular organism. It was thought that only tough, cuticle type soft tissue could be preserved by Burgess Shale type preservation, but an increasing number of organisms are being discovered that lack such cuticle, such as the probable chordate Pikaia and the shellless Odontogriphus. It is a common misconception that anaerobic conditions are necessary for the preservation of soft tissue; indeed much decay is mediated by sulfate reducing bacteria which can only survive in anaerobic conditions. Anoxia does, however, reduce the probability that scavengers will disturb the dead organism, and the activity of other organisms is undoubtedly one of the leading causes of soft-tissue destruction. Plant cuticle is more prone to preservation if it contains cutan, rather than cutin. Plants and algae produce the most preservable compounds, which are listed according to their preservation potential by Tegellaar (see reference). Disintegration How complete fossils are was once thought to be a proxy for the energy of the environment, with stormier waters leaving less articulated carcasses. However, the dominant force actually seems to be predation, with scavengers more likely than rough waters to break up a fresh carcass before it is buried. Sediments cover smaller fossils faster so they are likely to be found fully articulated. However, erosion also tends to destroy smaller fossils more easily. 
Distortion Often fossils, particularly those of vertebrates, are distorted by the subsequent movements of the surrounding sediment; this can include compression of the fossil in a particular axis, as well as shearing. Significance Taphonomic processes allow researchers of multiple fields to identify the past of natural and cultural objects. From the time of death or burial until excavation, taphonomy can aid in the understanding of past environments. When studying the past it is important to gain contextual information in order to have a solid understanding of the data. Often these findings can be used to better understand cultural or environmental shifts within the present day. The term taphomorph is used to collectively describe fossil structures that represent poorly-preserved and deteriorated remains of various taxonomic groups, rather than of a single species. For example, the 579–560 million-year-old fossil Ediacaran assemblages from Avalonian locations in Newfoundland contain taphomorphs of a mixture of taxa which have collectively been named Ivesheadiomorphs. Originally interpreted as fossils of a single genus, Ivesheadia, they are now thought to be the deteriorated remains of various types of frondose organism. Similarly, Ediacaran fossils from England, once assigned to Blackbrookia, Pseudovendia and Shepshedia, are now all regarded as taphomorphs related to Charnia or Charniodiscus. Fluvial taphonomy Fluvial taphonomy is concerned with the decomposition of organisms in rivers. An organism may sink or float within a river; it may also be carried by the current near the surface of the river or near its bottom. Organisms in terrestrial and fluvial environments will not undergo the same processes. A fluvial environment may be colder than a terrestrial environment. The ecosystem of live organisms that scavenge on the organism in question, and the abiotic items in rivers, will differ from those on land. Organisms within a river may also be physically transported by the flow of the river. The flow of the river can additionally erode the surface of the organisms found within it. The processes an organism may undergo in a fluvial environment will result in a slower rate of decomposition within a river compared to on land. See also Beecher's Trilobite type preservation Bitter Springs type preservation Burgess Shale type preservation Doushantuo type preservation Ediacaran type preservation Fossil record Karen Chin Lagerstätte Permineralization Petrifaction Pseudofossil Trace fossil References Further reading External links The Shelf and Slope Experimental Taphonomy Initiative is the first long-term large-scale deployment and re-collection of organism remains on the sea floor. Journal of Taphonomy Bioerosion Website at the College of Wooster Comprehensive bioerosion bibliography compiled by Mark A. Wilson Taphonomy Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014). 7th International Meeting on Taphonomy and Fossilization (Taphos 2014), at the Università degli studi di Ferrara, Italy, 10–13 September 2014
Schismogenesis
Schismogenesis is a term in anthropology that describes the formation of social divisions and differentiation. Literally meaning "creation of division", the term derives from the Greek words σχίσμα skhisma "cleft" (borrowed into English as schism, "division into opposing factions"), and γένεσις genesis "generation, creation" (deriving in turn from gignesthai "be born or produced, creation, a coming into being"). The term was introduced by anthropologist Gregory Bateson and has been applied to various fields. Concepts In anthropology Gregory Bateson developed the concept of schismogenesis in the 1930s in reference to certain forms of social behavior between groups of the Iatmul people of the Sepik River in New Guinea. Bateson first used the term in a publication in 1935, but elaborated on the concept in his classic 1936 ethnography Naven: A Survey of the Problems suggested by a Composite Picture of the Culture of a New Guinea Tribe drawn from Three Points of View (reissued with a new epilogue in 1958). The word "naven" refers to an honorific ceremony among the Iatmul (still practiced) whereby certain categories of kin celebrate first-time cultural achievements. In a schematic summary, Bateson focused on how groups of women and groups of men (especially the honorees' mothers' brothers) seemingly inverted their everyday gendered norms for dress, behavior, and emotional expression. For the most part, these groups of people belonged to different patrilineages who not only did not regularly renew their marriage alliances, but also interacted through the mode he called schismogenesis. Men and women, too, interacted in this mode. And thus the naven ritual served to correct schismogenesis, enabling the society to endure. In his 1936 book Naven, Bateson defined schismogenesis as "a process of differentiation in the norms of individual behaviour resulting from cumulative interaction between individuals" (p. 175). Bateson understood the symmetrical form of schismogenic behavior among Iatmul men – somewhat analogously to Émile Durkheim's concepts of mechanical and organic solidarity (see functionalism) – as a competitive relationship between categorical equals (e.g., rivalry). Thus one man, or a group of men, boasts, and another man/group must offer an equal or better boast, prompting the first group to respond accordingly, and so forth. Complementary schismogenesis among the Iatmul was observed by Bateson mainly between men and women, or between categorical unequals (e.g., dominance and submission). Men would act dominant, leading women to act submissive, to which men responded with more dominance, and so forth. In both types of schismogenesis, the everyday emotional norms or ethos of Iatmul men and women prevented a halt to schismogenesis. The crux of the matter for Bateson was that, left unchecked, either form of schismogenesis would cause Iatmul society simply to break apart. Thus some social or cultural mechanism was needed by society to maintain social integration. That mechanism among the Iatmul was the naven rite. Bateson's specific contribution was to suggest that certain concrete ritual behaviors either inhibited or stimulated the schismogenic relationship in its various forms. In The Dawn of Everything (2021), anthropologist David Graeber and archaeologist David Wengrow suggest that schismogenesis can describe differences between societies, as groups define themselves against their neighbors.
Some examples of this would be Ancient Athens and Sparta, and the indigenous peoples of the Pacific Northwest Coast and the indigenous peoples of California. In natural resource management Bateson's treatment of conflict escalation has been used to explain how conflicts arise over natural resources, including human-predator conflicts in Norway and conflicts among stakeholder groups in shared fisheries. In the latter case, Harrison and Loring compare conflict schismogenesis to the Tragedy of the Commons, arguing that it is a similar kind of escalation of behavior also caused by the failure of social institutions to ensure equity in fisheries-management outcomes. In music Steven Feld (1994, pp. 265–271), apparently in response to R. Murray Schafer's schizophonia and borrowing the term from Bateson, employs schismogenesis to name the recombination and recontextualization of sounds split from their sources. In modern warfare and politics There is documented usage of schismogenesis techniques by the U.S. Office of Strategic Services (OSS, an institutional precursor to the Central Intelligence Agency (CIA)) against Japanese-held territories in the Pacific during World War II. U.S. military academics have identified how China and Russia have pursued social-media strategies of schismogenesis against the U.S. and other Western liberal democracies in an attempt to polarize civil society across the political spectrum to damage policy-making processes and to weaken state/military power. Similarly, scholars in Ukraine have documented how Russia has relied on a strategy of schismogenesis to undermine Ukrainian identity and values as a way of promoting pro-Russian territories that can be used against Kyiv, to include forming their own militias which operate alongside Russian special operation forces. In religion The concept of schismogenesis has relevance to the numerous schisms which have occurred within religious thought and practice. Types Bateson, in Steps to an Ecology of Mind, describes the two forms of schismogenesis and proposes that both forms are self-destructive to the parties involved. He goes on to suggest that researchers look into methods that one or both parties may employ to stop schismogenesis before it reaches its destructive stage. Complementary schismogenesis The first type of schismogenesis is best characterized by a class struggle, but is defined more broadly to include a range of other possible social phenomena. Given two groups of people, the interaction between them is such that a behavior X from one side elicits a behavior Y from the other side. The two behaviors complement one another, exemplified in the dominant-submissive behaviors of a class struggle. Furthermore, the behaviors may exaggerate one another, leading to a severe rift and possible conflict. Conflict can be reduced by narrowing information asymmetries between the two groups. Symmetrical schismogenesis The second type of schismogenesis is best shown by an arms race. The behaviors of the parties involved elicit similar or symmetrical behaviors from the other parties. In the case of the United States and the Soviet Union, each party continually sought to amass more nuclear weapons than the other party, a clearly fruitless but seemingly necessary endeavor on both sides. A form of symmetrical schismogenesis exists in common sporting events, where the rules are the same for both teams.
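Both patterns can be pictured, very loosely, as a runaway positive-feedback loop between two parties. The short Python sketch below is a hypothetical illustration only, not Bateson's own formalism: the function, its parameters (gain, damping), and the starting values are invented for the example. The same update rule is used for both forms because what distinguishes them in Bateson's account is the content of the behaviors (like-for-like rivalry versus dominance and submission), not the shape of the escalation; the damping term stands in for a corrective mechanism, such as the naven rite, that restrains the runaway.

# Minimal, hypothetical sketch of schismogenesis as mutual escalation.
# x and y are the intensities of each party's characteristic behavior:
# in the symmetrical case both are the same kind of act (boast vs. boast);
# in the complementary case they differ (dominance vs. submission).

def schismogenesis(x, y, gain=0.2, damping=0.0, steps=10):
    """Iterate the mutual-escalation loop; damping > 0 stands in for a
    corrective mechanism that checks the escalation."""
    trajectory = [(round(x, 2), round(y, 2))]
    for _ in range(steps):
        # Each side's next intensity grows with the other's last intensity.
        x, y = x + gain * y - damping * x, y + gain * x - damping * y
        trajectory.append((round(x, 2), round(y, 2)))
    return trajectory

if __name__ == "__main__":
    print("unchecked:", schismogenesis(1.0, 1.0))               # intensities keep rising
    print("checked:  ", schismogenesis(1.0, 1.0, damping=0.3))  # escalation dies away

Run as written, the unchecked trajectory grows without limit while the damped one settles back toward its starting level, which is the sense in which Bateson argued that, left unchecked, either form of schismogenesis would pull a society apart unless some corrective mechanism intervened.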
Interpersonal communication In the field of communication, complementary schismogenesis is a force that can take effect in a conversation where people have different conversational styles, "creating a split in a mutually aggravating way". The effect causes two well-meaning individuals having a conversation to ramp up different styles, resulting in a disagreement that does not stem from an actual difference of opinion. For example, if one person's conversational style favoured louder voices while the other favoured softer speech, the first person might speak louder and louder while the other spoke softer and softer, each trying to lead the conversation towards their style's conception of normal talking. Systems of holding back Systems of holding back are also a form of schismogenesis. They are defined as "mutually aggregating spirals which lead people to hold back contributions they could make because others hold back contributions they could make." In systems intelligence literature, it is held that human interaction has a tendency to fall into such systems unless conscious effort is made to counter this tendency. For example, although most managers would want to give support to their team and most team members would like to receive such support, support often does not result. This is because both parties might feel that the other party is not giving enough, and thus they themselves hold back what they could, at best, give. It has been suggested that systems of holding back are "the single most important key to life-decreasing, reciprocity-trivializing and vitality-downgrading mechanisms in human life." References
National myth
A national myth is an inspiring narrative or anecdote about a nation's past. Such myths often serve as important national symbols and affirm a set of national values. A myth is a mixture of reality and fiction, and operates in a specific social and historical setting. Social myths structure national imaginaries. A national myth may take the form of a national epic, or it may be incorporated into a civil religion. A group of related myths about a nation may be referred to as the national mythos, from μῦθος, Greek for "myth". A national myth is a narrative which has been elevated to a serious symbolic and esteemed level so as to be true to the nation. The national folklore of many nations includes a founding myth, which may involve a struggle against colonialism or a war of independence or unification. In many cases, the meaning of the national myth is disputed among different parts of the population. In some places, the national myth may be spiritual and refer to stories of the nation's founding by a God, several gods, leaders favored by gods, or other supernatural beings. National myths often exist only for the purpose of state-sponsored propaganda. In totalitarian dictatorships, the leader might be given, for example, a mythical supernatural life history in order to make them seem god-like and supra-powerful (see also cult of personality). In liberal regimes they can inspire civic virtue and self-sacrifice or consolidate the power of dominant groups and legitimate their rule. National identity The concept of national identity is inescapably connected with myths. A complex of myths is at the core of nationalistic ethnic identity. Some scholars believe that national identities, supported by invented histories, were constructed only after national movements and national ideologies emerged. All modern national identities were preceded by nationalist movements. Although the term "nation" was used in the Middle Ages, it usually had an ethnic meaning and seldom referred to a state. In the age of nationalism, it was linked to efforts aimed at creating nation-states. National myths foster national identities. They are important tools of nation-building, which can be done by emphasizing differences between people of different nations. They can cause conflict as they exaggerate threats posed by other nations and minimize the costs of war. The nationalist myth of a stable homeland community is explained psychoanalytically as the result of the complexity of relations within the modern external world and the incoherence of one's inner psychological world. Nationalist identity facilitates imagined stability. Dissemination National myths are created and propagated by national intellectuals, and they can be used as instruments of political mobilization on demographic bases such as ethnicity. They might over-dramatize true incidents, omit important historical details, or add details for which there is no evidence; or a national myth might simply be a fictional story that no one takes to be true literally. Mythopoeic methods Traditional myth-making often depended on literary story-tellers — especially epic poets. Ancient Hellenic culture adopted Homer's Ionian Iliad as a justification of its theoretical unity, and Virgil (70–19 BCE) composed the Aeneid in support of the political renewal and reunification of the Roman world after lengthy civil wars.
Generations of medieval writers (in poetry and prose) contributed to the Arthurian Matter of Britain, developing what became a focus for English nationalism by adopting British Celtic material. Camões (–1580) composed in Macao the Lusiads as a national poetic epic for Portugal. Voltaire attempted a similar work for French mythologised history in the Henriade (1723). Wagnerian opera came to foster German national enthusiasm. Other methods Modern purveyors of national mythologies have tended to appeal to the people more directly through the media. French pamphleteers spread the ideas of Liberty, Equality and Fraternity in the 1790s, and American journalists, politicians, and scholars popularized mythic tropes like "Manifest Destiny", "the Frontier", or the "Arsenal of Democracy". Socialists advocating ideas like the dictatorship of the proletariat have promoted catchy nation-promoting slogans such as "Socialism with Chinese characteristics" and "Kim Il Sung thought". National myths The ideology of nationalism is related to two myths: the myth of the eternal nation, referring to the permanence of a community, and the myth of common ancestry. These are represented in the particular national myths of various countries and groups. Finland The Kalevala is a 19th-century work of epic poetry compiled by Elias Lönnrot from Karelian and Finnish oral folklore and mythology. The Kalevala is regarded as the national epic of Karelia and Finland. It narrates an epic story about the Creation of the Earth, describing the controversies and retaliatory voyages between the peoples of the land of Kalevala called Väinölä and the land of Pohjola and their various protagonists and antagonists, as well as the construction and robbery of the epic mythical wealth-making machine Sampo. The Kalevala was instrumental in the development of the Finnish national identity and the intensification of Finland's language strife that ultimately led to Finland's independence from Russia in 1917. Greece According to Greek mythology, the Hellenes descend from Hellen. He is the child of Deucalion (or Zeus) and Pyrrha, and the father of three sons, Dorus, Xuthus, and Aeolus, by whom he is the ancestor of the Greek peoples. Iceland The sagas of Icelanders, also known as family sagas, are one sub-genre, or text group, of Icelandic sagas. They are prose narratives based mostly on historical events that took place in Iceland in the ninth, tenth, and early eleventh centuries, during the so-called Saga Age. They were written in Old Icelandic, a western dialect of Old Norse. They are the best-known specimens of Icelandic literature. They are focused on history, especially genealogical and family history. They reflect the struggle and conflict that arose within the societies of the early generations of Icelandic settlers. The Icelandic sagas are valuable and unique historical sources about medieval Scandinavian societies and kingdoms, in particular regarding pre-Christian religion and culture and the heroic age. Japan In Japanese mythology, Emperor Jimmu is the legendary first emperor of Japan. He is described in the Nihon Shoki and Kojiki. His ascension is traditionally dated as 660 BC. He is said to be a descendant of the sun goddess Amaterasu, through her grandson Ninigi, as well as a descendant of the storm god Susanoo. He launched a military expedition from Hyūga near the Seto Inland Sea, captured Yamato, and established this as his center of power.
In modern Japan, Emperor Jimmu's legendary accession is marked as National Foundation Day on February 11. There is no evidence to suggest that Jimmu existed. However, there is a high probability that there was a powerful dynasty in the vicinity of Miyazaki Prefecture during the Kofun period. United States of America The American frontier (also known as the Old West or Wild West) is a theme in American mythology that defines the American national identity as brave pioneers who discovered, conquered, and settled the vast wilderness. It affirms individualism, informality, and pragmatism as American values. Richard Slotkin describes this myth as depicting "America as a wide-open land of unlimited opportunity for the strong, ambitious, self-reliant individual to thrust his way to the top." Cowboys, gunfighters, and farmers are archetypes that commonly appear in this myth. The American frontier produced various mythologized figures such as Wild Bill Hickok, Johnny Appleseed, Paul Bunyan, Wyatt Earp, Billy the Kid, Annie Oakley, Doc Holliday, Butch Cassidy, and Davy Crockett. The mythology surrounding the American frontier is immortalized in the Western genre of fiction, particularly Western films and literature. Korea The first Korean kingdom is said to have been founded by Dangun, the legendary founder and god-king of Gojoseon, in 2333 BCE. Dangun is said to be the "grandson of heaven" and "son of a bear". The earliest recorded version of the Dangun legend appears in the 13th-century Samguk Yusa, which cites China's Book of Wei and Korea's lost historical record Gogi; it has been confirmed that there is no relevant record in China's Book of Wei. There are around seventeen religious groups involved in the worship of Dangun. Italy The Kingdom of Fanes is the national epic of the Ladin people in the Dolomites and the most important part of Ladin literature. Originally an orally transmitted epic cycle, today it is known through the work of Karl Felix Wolff in 1932, gathered in Dolomitensagen. This legend is part of the larger corpus of the South Tyrolean sagas, whose protagonists are the Fanes themselves. Brazil The national myth of Brazil as a racial democracy was first advanced by Brazilian sociologist Gilberto Freyre in his 1933 work Casa-Grande & Senzala, which argues that Brazilians do not view each other through the lens of race, and that Brazilian society eliminated racism and racial discrimination. Freyre's theory became a source of national pride for Brazil, which contrasted itself favorably vis-a-vis the contemporaneous racial divisions and violence in the United States. Serbia The Kosovo Myth is a Serbian national myth based on legends about events related to the Battle of Kosovo (1389). It has been a subject in Serbian folklore and literary tradition and has been cultivated in oral epic poetry and guslar poems. The final form of the legend was not created immediately after the battle but evolved from different originators into various versions. In its modern form it emerged in 19th-century Serbia and served as an important constitutive element of the national identity of modern Serbia and its politics. Great Britain King Arthur was a legendary noble king who united Britain, laid the foundation for medieval notions of chivalry in western Europe, and was later important for building a common British identity.
Nazi Germany The Master race is a concept in Nazi ideology and propaganda, based on pseudoscientific racial theories purporting that ethnic Germans belonged to a superior Aryan or Nordic race. Combined with other antisemitic myths (including the stab-in-the-back legend), it provided Nazi Germany with its justification for conquering Europe (for "living space") and for The Holocaust, its genocide of those it mythologized as threats and lesser races, primarily Jews. New Zealand The Treaty of Waitangi is a document of central importance to the history of New Zealand, its constitution, and its national mythos. It has played a major role in the treatment of the Māori people in New Zealand by successive governments and the wider population, something that has been especially prominent since the late 20th century. The treaty document is an agreement, not a treaty as recognised in international law, and has no independent legal status, being legally effective only to the extent it is recognised in various statutes. It was first signed on 6 February 1840 by Captain William Hobson as consul for the British Crown and by Māori chiefs from the North Island of New Zealand. Kupe was a legendary Polynesian explorer who was the first person to discover New Zealand, according to Māori oral history. It is likely that Kupe existed historically, but this is difficult to confirm. His voyage to New Zealand ensured that the land was known to the Polynesians, and he would therefore be responsible for the genesis of the Māori people. Iran The Shahnameh is a long epic poem written by the Persian poet Ferdowsi between c. 977 and 1010 CE and is the national epic of Greater Iran. Consisting of some 50,000 distichs or couplets (two-line verses), the Shahnameh is one of the world's longest epic poems, and the longest epic poem created by a single author. It tells mainly the mythical and to some extent the historical past of the Persian Empire from the creation of the world until the Muslim conquest in the seventh century. Israel The Promised Land is Middle Eastern land that Abrahamic religions (which include Judaism, Christianity, Islam, and others) claim their God promised and subsequently gave to Abraham (the legendary patriarch in Abrahamic religions) and several more times to his descendants. The concept of the Promised Land originates from a religious narrative written in the Hebrew religious text, the Torah. See also Anzac spirit Civil religion Euromyth Folk epics Founding myth Imagined community Nationalism and archaeology Nationalist historiography Nationalization of history Mythomoteur Nation branding National epic National monument National mysticism Noble lie Political myth Primordialism Ernest Renan What is a Nation? Notes References Further reading
New Age
New Age is a range of spiritual or religious practices and beliefs which rapidly grew in Western society during the early 1970s. Its highly eclectic and unsystematic structure makes a precise definition difficult. Although many scholars consider it a religious movement, its adherents typically see it as spiritual or as unifying Mind-Body-Spirit, and rarely use the term New Age themselves. Scholars often call it the New Age movement, although others contest this term and suggest it is better seen as a milieu or zeitgeist. As a form of Western esotericism, the New Age drew heavily upon esoteric traditions such as the occultism of the eighteenth and nineteenth centuries, including the work of Emanuel Swedenborg and Franz Mesmer, as well as Spiritualism, New Thought, and Theosophy. More immediately, it arose from mid-twentieth century influences such as the UFO religions of the 1950s, the counterculture of the 1960s, and the Human Potential Movement. Its exact origins remain contested, but it became a major movement in the 1970s, at which time it was centered largely in the United Kingdom. It expanded widely in the 1980s and 1990s, in particular in the United States. By the start of the 21st century, the term New Age was increasingly rejected within this milieu, with some scholars arguing that the New Age phenomenon had ended. Despite its eclectic nature, the New Age has several main currents. Theologically, the New Age typically accepts a holistic form of divinity that pervades the universe, including human beings themselves, leading to a strong emphasis on the spiritual authority of the self. This is accompanied by a common belief in a variety of semi-divine non-human entities such as angels, with whom humans can communicate, particularly by channeling through a human intermediary. Typically viewing history as divided into spiritual ages, a common New Age belief is in a forgotten age of great technological advancement and spiritual wisdom, declining into periods of increasing violence and spiritual degeneracy, which will now be remedied by the emergence of an Age of Aquarius, from which the milieu gets its name. There is also a strong focus on healing, particularly using forms of alternative medicine, and an emphasis on unifying science with spirituality. The dedication of New Agers varied considerably, from those who adopted a number of New Age ideas and practices to those who fully embraced and dedicated their lives to it. The New Age has generated criticism from Christians as well as modern Pagan and Indigenous communities. From the 1990s onward, the New Age became the subject of research by academic scholars of religious studies. Definitions The New Age phenomenon has proved difficult to define, with much scholarly disagreement as to its scope. The scholars Steven J. Sutcliffe and Ingvild Sælid Gilhus have even suggested that it remains "among the most disputed of categories in the study of religion". The scholar of religion Paul Heelas characterised the New Age as "an eclectic hotch-potch of beliefs, practices, and ways of life" that can be identified as a singular phenomenon through their use of "the same (or very similar) lingua franca to do with the human (and planetary) condition and how it can be transformed." 
Similarly, the historian of religion Olav Hammer termed it "a common denominator for a variety of quite divergent contemporary popular practices and beliefs" that have emerged since the late 1970s and are "largely united by historical links, a shared discourse and an air de famille". According to Hammer, this New Age was a "fluid and fuzzy cultic milieu". The sociologist of religion Michael York described the New Age as "an umbrella term that includes a great variety of groups and identities" that are united by their "expectation of a major and universal change being primarily founded on the individual and collective development of human potential." The scholar of religion Wouter Hanegraaff adopted a different approach by asserting that "New Age" was "a label attached indiscriminately to whatever seems to fit it" and that as a result it "means very different things to different people". He thus argued against the idea that the New Age could be considered "a unified ideology or Weltanschauung", although he believed that it could be considered a "more or less unified 'movement'." Other scholars have suggested that the New Age is too diverse to be a singular movement. The scholar of religion George D. Chryssides called it "a counter-cultural Zeitgeist", while the sociologist of religion Steven Bruce suggested that New Age was a milieu; Heelas and scholar of religion Linda Woodhead called it the "holistic milieu". There is no central authority within the New Age phenomenon that can determine what counts as New Age and what does not. Many of those groups and individuals who could analytically be categorised as part of the New Age reject the term New Age in reference to themselves. Some even express active hostility to the term. Rather than terming themselves New Agers, those involved in this milieu commonly describe themselves as spiritual "seekers", and some self-identify as a member of a different religious group, such as Christianity, Judaism, or Buddhism. In 2003 Sutcliffe observed that the use of the term New Age was "optional, episodic and declining overall", adding that among the very few individuals who did use it, they usually did so with qualification, for instance by placing it in quotation marks. Other academics, such as Sara MacKian, have argued that the sheer diversity of the New Age renders the term too problematic for scholars to use. MacKian proposed "everyday spirituality" as an alternate term. While acknowledging that New Age was a problematic term, the scholar of religion James R. Lewis stated that it remained a useful etic category for scholars to use because "There exists no comparable term which covers all aspects of the movement." Similarly, Chryssides argued that the fact that "New Age" is a "theoretical concept" does not "undermine its usefulness or employability"; he drew comparisons with "Hinduism", a similar "Western etic piece of vocabulary" that scholars of religion used despite its problems. Religion, spirituality, and esotericism In discussing the New Age, academics have varyingly referred to "New Age spirituality" and "New Age religion". Those involved in the New Age rarely consider it to be "religion"—negatively associating that term solely with organized religion—and instead describe their practices as "spirituality". Religious studies scholars, however, have repeatedly referred to the New Age milieu as a "religion". York described the New Age as a new religious movement (NRM). 
Conversely, both Heelas and Sutcliffe rejected this categorisation; Heelas believed that while elements of the New Age represented NRMs, this did not apply to every New Age group. Similarly, Chryssides stated that the New Age could not be seen as "a religion" in itself. The New Age is also a form of Western esotericism. Hanegraaff regarded the New Age as a form of "popular culture criticism", in that it represented a reaction against the dominant Western values of Judeo-Christian religion and rationalism, adding that "New Age religion formulates such criticism not at random, but falls back on" the ideas of earlier Western esoteric groups. The New Age has also been identified by various scholars of religion as part of the cultic milieu. This concept, developed by the sociologist Colin Campbell, refers to a social network of marginalized ideas. Through their shared marginalization within a given society, these disparate ideas interact and create new syntheses. Hammer identified much of the New Age as corresponding to the concept of "folk religions" in that it seeks to deal with existential questions regarding subjects like death and disease in "an unsystematic fashion, often through a process of bricolage from already available narratives and rituals". York also heuristically divides the New Age into three broad trends. The first, the social camp, represents groups that primarily seek to bring about social change, while the second, the occult camp, instead focus on contact with spirit entities and channeling. York's third group, the spiritual camp, represents a middle ground between these two camps that focuses largely on individual development. Terminology The term new age, along with related terms like new era and new world, long predate the emergence of the New Age movement, and have widely been used to assert that a better way of life for humanity is dawning. It occurs commonly, for instance, in political contexts; the Great Seal of the United States, designed in 1782, proclaims a "new order of ages", while in the 1980s the Soviet General Secretary Mikhail Gorbachev proclaimed that "all mankind is entering a new age". The term has also appeared within Western esoteric schools of thought, having a scattered use from the mid-nineteenth century onward. In 1864 the American Swedenborgian Warren Felt Evans published The New Age and its Message, while in 1907 Alfred Orage and Holbrook Jackson began editing a weekly journal of Christian liberalism and socialism titled The New Age. The concept of a coming "new age" that would be inaugurated by the return to Earth of Jesus Christ was a theme in the poetry of Wellesley Tudor Pole (1884–1968) and of Johanna Brandt (1876–1964), and then also appeared in the work of the British-born American Theosophist Alice Bailey (1880–1949), featuring in titles such as Discipleship in the New Age (1944) and Education in the New Age (1954). Between the 1930s and 1960s a small number of groups and individuals became preoccupied with the concept of a coming "New Age" and used the term accordingly. The term had thus become a recurring motif in the esoteric spirituality milieu. Sutcliffe, therefore, expressed the view that while the term New Age had originally been an "apocalyptic emblem", it would only be later that it became "a tag or codeword for a 'spiritual' idiom". 
History Antecedents in occult and theosophy According to scholar Nevill Drury, the New Age has a "tangible history", although Hanegraaff expressed the view that most New Agers were "surprisingly ignorant about the actual historical roots of their beliefs". Similarly, Hammer thought that "source amnesia" was a "building block of a New Age worldview", with New Agers typically adopting ideas with no awareness of where those ideas originated. As a form of Western esotericism, the New Age has antecedents that stretch back to southern Europe in Late Antiquity. Following the Age of Enlightenment in 18th-century Europe, new esoteric ideas developed in response to the development of scientific rationality. Scholars call this new esoteric trend occultism, and this occultism was a key factor in the development of the worldview from which the New Age emerged. One of the earliest influences on the New Age was the Swedish 18th-century Christian mystic Emanuel Swedenborg, who professed the ability to communicate with angels, demons, and spirits. Swedenborg's attempt to unite science and religion and his prediction of a coming era in particular have been cited as ways that he prefigured the New Age. Another early influence was the late 18th and early 19th century German physician and hypnotist Franz Mesmer, who wrote about the existence of a force known as "animal magnetism" running through the human body. The establishment of Spiritualism, an occult religion influenced by both Swedenborgianism and Mesmerism, in the U.S. during the 1840s has also been identified as a precursor to the New Age, in particular through its rejection of established Christianity, representing itself as a scientific approach to religion, and its emphasis on channeling spirit entities. A further major influence on the New Age was the Theosophical Society, an occult group co-founded by the Russian Helena Blavatsky in the late 19th century. In her books Isis Unveiled (1877) and The Secret Doctrine (1888), Blavatsky wrote that her Society was conveying the essence of all world religions, and it thus emphasized a focus on comparative religion. Serving as a partial bridge between Theosophical ideas and those of the New Age was the American esotericist Edgar Cayce, who founded the Association for Research and Enlightenment. Another partial bridge was the Danish mystic Martinus who is popular in Scandinavia. Another influence was New Thought, which developed in late nineteenth-century New England as a Christian-oriented healing movement before spreading throughout the United States. Another influence was the psychologist Carl Jung. Drury also identified as an important influence upon the New Age the Indian Swami Vivekananda, an adherent of the philosophy of Vedanta who first brought Hinduism to the West in the late 19th century. Hanegraaff believed that the New Age's direct antecedents could be found in the UFO religions of the 1950s, which he termed a "proto-New Age movement". Many of these new religious movements had strong apocalyptic beliefs regarding a coming new age, which they typically asserted would be brought about by contact with extraterrestrials. Examples of such groups included the Aetherius Society, founded in the UK in 1955, and the Heralds of the New Age, established in New Zealand in 1956. 1960s From a historical perspective, the New Age phenomenon is most associated with the counterculture of the 1960s. 
According to author Andrew Grant Jackson, George Harrison's adoption of Hindu philosophy and Indian instrumentation in his songs with the Beatles in the mid-1960s, together with the band's highly publicised study of Transcendental Meditation, "truly kick-started" the Human Potential Movement that subsequently became New Age. Although not common throughout the counterculture, usage of the terms New Age and Age of Aquarius—used in reference to a coming era—was found within it, for instance appearing on adverts for the Woodstock festival of 1969, and in the lyrics of "Aquarius", the opening song of the 1967 musical Hair: The American Tribal Love-Rock Musical. This decade also witnessed the emergence of a variety of new religious movements and newly established religions in the United States, creating a spiritual milieu that the New Age drew upon; these included the San Francisco Zen Center, Transcendental Meditation, Soka Gakkai, the Inner Peace Movement, the Church of All Worlds, and the Church of Satan. Although there had been an established interest in Asian religious ideas in the U.S. from at least the eighteenth century, many of these new developments were variants of Hinduism, Buddhism, and Sufism, which had been imported to the West from Asia following the U.S. government's decision to rescind the Asian Exclusion Act in 1965. In 1962 the Esalen Institute was established in Big Sur, California. Esalen and similar personal growth centers had developed links to humanistic psychology, and from this, the human potential movement emerged and strongly influenced the New Age. In Britain, a number of small religious groups that came to be identified as the "light" movement had begun declaring the existence of a coming new age, influenced strongly by the Theosophical ideas of Blavatsky and Bailey. The most prominent of these groups was the Findhorn Foundation, which founded the Findhorn Ecovillage in the Scottish area of Findhorn, Moray in 1962. Although its founders were from an older generation, Findhorn attracted increasing numbers of countercultural baby boomers during the 1960s, to the extent that its population had grown sixfold to c. 120 residents by 1972. In October 1965, the co-founder of the Findhorn Foundation, Peter Caddy, a former member of the occult Rosicrucian Order Crotona Fellowship, attended a meeting of various figures within Britain's esoteric milieu; advertised as "The Significance of the Group in the New Age", it was held at Attingham Park over the course of a weekend. All of these groups created the backdrop from which the New Age movement emerged. As James R. Lewis and J. Gordon Melton point out, the New Age phenomenon represents "a synthesis of many different preexisting movements and strands of thought". Nevertheless, York asserted that while the New Age bore many similarities with both earlier forms of Western esotericism and Asian religion, it remained "distinct from its predecessors in its own self-consciousness as a new way of thinking".
Emergence and development: c. 1970–2000
By the early 1970s, use of the term New Age was increasingly common within the cultic milieu. This was because—according to Sutcliffe—the "emblem" of the "New Age" had been passed from the "subcultural pioneers" in groups like Findhorn to the wider array of "countercultural baby boomers" by 1974.
He noted that as this happened, the meaning of the term New Age changed; whereas it had once referred specifically to a coming era, at this point it came to be used in a wider sense to refer to a variety of spiritual activities and practices. In the latter part of the 1970s, the New Age expanded to cover a wide variety of alternative spiritual and religious beliefs and practices, not all of which explicitly held to the belief in the Age of Aquarius but which were nevertheless widely recognized as broadly similar in their search for "alternatives" to mainstream society. In doing so, the "New Age" became a banner under which to bring together the wider "cultic milieu" of American society. The counterculture of the 1960s had rapidly declined by the start of the 1970s, in large part due to the collapse of the commune movement, but it would be many former members of the counterculture and hippie subculture who subsequently became early adherents of the New Age movement. The exact origins of the New Age movement remain an issue of debate; Melton asserted that it emerged in the early 1970s, whereas Hanegraaff instead traced its emergence to the latter 1970s, adding that it then entered its full development in the 1980s. This early form of the movement was based largely in Britain and exhibited a strong influence from Theosophy and Anthroposophy. Hanegraaff termed this early core of the movement the New Age sensu stricto, or "New Age in the strict sense". He termed the broader development the New Age sensu lato, or "New Age in the wider sense". Stores that came to be known as "New Age shops" opened up, selling related books, magazines, jewelry, and crystals, and they were typified by the playing of New Age music and the smell of incense. This probably influenced several thousand small metaphysical book- and gift-stores that increasingly defined themselves as "New Age bookstores", while New Age titles came to be increasingly available from mainstream bookstores and then websites like Amazon.com. Not everyone who came to be associated with the New Age phenomenon openly embraced the term New Age, although it was popularised in books like David Spangler's 1977 work Revelation: The Birth of a New Age and Mark Satin's 1979 book New Age Politics: Healing Self and Society. Marilyn Ferguson's 1980 book The Aquarian Conspiracy has also been regarded as a landmark work in the development of the New Age, promoting the idea that a new era was emerging. Other terms that were employed synonymously with New Age in this milieu included "Green", "Holistic", "Alternative", and "Spiritual". 1971 witnessed the foundation of est by Werner H. Erhard, a transformational training course that became a part of the early movement. Melton suggested that the 1970s witnessed the growth of a relationship between the New Age movement and the older New Thought movement, as evidenced by the widespread use of Helen Schucman's A Course in Miracles (1975), New Age music, and crystal healing in New Thought churches. Some figures in the New Thought movement were skeptical, challenging the compatibility of New Age and New Thought perspectives. During these decades, Findhorn had become a site of pilgrimage for many New Agers, and greatly expanded in size as people joined the community, with workshops and conferences being held there that brought together New Age thinkers from across the world.
Several key events raised public awareness of the New Age subculture: publication of Linda Goodman's best-selling astrology books Sun Signs (1968) and Love Signs (1978); the release of Shirley MacLaine's book Out on a Limb (1983), later adapted into a television mini-series with the same name (1987); and the "Harmonic Convergence" planetary alignment on August 16 and 17, 1987, organized by José Argüelles in Sedona, Arizona. The Convergence attracted more people to the movement than any other single event. Heelas suggested that the movement was influenced by the "enterprise culture" encouraged by the U.S. and U.K. governments from the 1980s onward, with its emphasis on initiative and self-reliance resonating with many New Age ideas. Channelers Jane Roberts (Seth Material), Helen Schucman (A Course in Miracles), J. Z. Knight (Ramtha), and Neale Donald Walsch (Conversations with God) contributed to the movement's growth. Ram Dass has been cited as the first significant exponent of the New Age movement in the U.S. Core works in the propagation of New Age ideas included Jane Roberts's Seth series, published from 1972 onward, Helen Schucman's 1975 publication A Course in Miracles, and James Redfield's 1993 work The Celestine Prophecy. A number of these books became best sellers, such as the Seth book series, which quickly sold over a million copies. Supplementing these books were videos, audiotapes, compact discs and websites. The development of the internet in particular further popularized New Age ideas and made them more widely accessible. New Age ideas influenced the development of rave culture in the late 1980s and 1990s. In Britain during the 1980s, the term New Age Travellers came into use, although York characterised this term as "a misnomer created by the media". These New Age Travellers had little to do with the New Age as the term was used more widely, with scholar of religion Daren Kemp observing that "New Age spirituality is not an essential part of New Age Traveller culture, although there are similarities between the two worldviews". The term New Age came to be used increasingly widely by the popular media in the 1990s.
Decline or transformation: 1990–present
By the late 1980s, some publishers dropped the term New Age as a marketing device. In 1994, the scholar of religion J. Gordon Melton presented a conference paper in which he argued that, given that he knew of nobody describing their practices as "New Age" anymore, the New Age had died. In 2001, Hammer observed that the term New Age had increasingly been rejected as either pejorative or meaningless by individuals within the Western cultic milieu. He also noted that within this milieu it was not being replaced by any alternative and that as such a sense of collective identity was being lost. Other scholars disagreed with Melton's idea; in 2004 Daren Kemp stated that "New Age is still very much alive". Hammer himself stated that "the New Age movement may be on the wane, but the wider New Age religiosity... shows no sign of disappearing". MacKian suggested that the New Age "movement" had been replaced by a wider "New Age sentiment" which had come to pervade "the socio-cultural landscape" of Western countries. Its diffusion into the mainstream may have been influenced by the adoption of New Age concepts by high-profile figures: U.S.
First Lady Nancy Reagan consulted an astrologer, British Princess Diana visited spirit mediums, and Norwegian Princess Märtha Louise established a school devoted to communicating with angels. New Age shops continued to operate, although many have been remarketed as "Mind, Body, Spirit". In 2015, the scholar of religion Hugh Urban argued that New Age spirituality is growing in the United States and can be expected to become more visible: "According to many recent surveys of religious affiliation, the 'spiritual but not religious' category is one of the fastest-growing trends in American culture, so the New Age attitude of spiritual individualism and eclecticism may well be an increasingly visible one in the decades to come". Australian scholar Paul J. Farrelly, in his 2017 doctoral dissertation at the Australian National University, argued that, while the term New Age may become less popular in the West, it is actually booming in Taiwan, where it is regarded as something comparatively new and is being exported from Taiwan to mainland China, where it is more or less tolerated by the authorities.
Beliefs and practices
Eclecticism and self-spirituality
The New Age places strong emphasis on the idea that the individual and their own experiences are the primary source of authority on spiritual matters. It exhibits what Heelas termed "unmediated individualism", and reflects a world-view that is "radically democratic". It places an emphasis on the freedom and autonomy of the individual. This emphasis has led to ethical disagreements; some New Agers believe helping others is beneficial, although another view is that doing so encourages dependency and conflicts with a reliance on the self. Nevertheless, within the New Age, there are differences in the role accorded to voices of authority outside of the self. Hammer stated that "a belief in the existence of a core or true Self" is a "recurring theme" in New Age texts. The concept of "personal growth" is also greatly emphasised among New Agers, while Heelas noted that "for participants spirituality is life-itself". New Age religiosity is typified by its eclecticism. Generally believing that there is no one true way to pursue spirituality, New Agers develop their own worldview "by combining bits and pieces to form their own individual mix", seeking what Drury called "a spirituality without borders or confining dogmas". The anthropologist David J. Hess noted that in his experience, a common attitude among New Agers was that "any alternative spiritual path is good because it is spiritual and alternative". This approach has generated the common jibe that New Age represents "supermarket spirituality". York suggested that this eclecticism stemmed from the New Age's origins within late modern capitalism, with New Agers subscribing to a belief in a free market of spiritual ideas as a parallel to a free market in economics. As part of its eclecticism, the New Age draws ideas from many different cultural and spiritual traditions from across the world, often legitimising this approach by reference to "a very vague claim" about underlying global unity. Certain societies are more usually chosen over others; examples include the ancient Celts, ancient Egyptians, the Essenes, Atlanteans, and ancient extraterrestrials. As noted by Hammer: "to put it bluntly, no significant spokespersons within the New Age community claim to represent ancient Albanian wisdom, simply because beliefs regarding ancient Albanians are not part of our cultural stereotypes".
According to Hess, these ancient or foreign societies represent an exotic "Other" for New Agers, who are predominantly white Westerners. Theology, cosmogony, and cosmology A belief in divinity is integral to New Age ideas, although understandings of this divinity vary. New Age theology exhibits an inclusive and universalistic approach that accepts all personal perspectives on the divine as equally valid. This intentional vagueness as to the nature of divinity also reflects the New Age idea that divinity cannot be comprehended by the human mind or language. New Age literature nevertheless displays recurring traits in its depiction of the divine: the first is the idea that it is holistic, thus frequently being described with such terms as an "Ocean of Oneness", "Infinite Spirit", "Primal Stream", "One Essence", and "Universal Principle". A second trait is the characterisation of divinity as "Mind", "Consciousness", and "Intelligence", while a third is the description of divinity as a form of "energy". A fourth trait is the characterisation of divinity as a "life force", the essence of which is creativity, while a fifth is the concept that divinity consists of love. Most New Age groups believe in an Ultimate Source from which all things originate, which is usually conflated with the divine. Various creation myths have been articulated in New Age publications outlining how this Ultimate Source created the universe and everything in it. In contrast, some New Agers emphasize the idea of a universal inter-relatedness that is not always emanating from a single source. The New Age worldview emphasises holism and the idea that everything in existence is intricately connected as part of a single whole, in doing so rejecting both the dualism of the Christian division of matter and spirit and the reductionism of Cartesian science. A number of New Agers have linked this holistic interpretation of the universe to the Gaia hypothesis of James Lovelock. The idea of holistic divinity results in a common New Age belief that humans themselves are divine in essence, a concept described using such terms as "droplet of divinity", "inner Godhead", and "divine self". Influenced by Theosophical and Anthroposophical ideas regarding 'subtle bodies', a common New Age idea holds to the existence of a Higher Self that is a part of the human but connects with the divine essence of the universe, and which can advise the human mind through intuition. Cosmogonical creation stories are common in New Age sources, with these accounts reflecting the movement's holistic framework by describing an original, primal oneness from which all things in the universe emanated. An additional common theme is that human souls—once living in a spiritual world—then descended into a world of matter. The New Age movement typically views the material universe as a meaningful illusion, which humans should try to use constructively rather than focus on escaping into other spiritual realms. This physical world is hence seen as "a domain for learning and growth" after which the human soul might pass on to higher levels of existence. There is thus a widespread belief that reality is engaged in an ongoing process of evolution; rather than Darwinian evolution, this is typically seen as either a teleological evolution which assumes a process headed to a specific goal or an open-ended, creative evolution. 
Spirit and channeling
A conduit, in esotericism and spiritual discourse, is a specific object, person, location, or process (such as engaging in a séance or entering a trance, or using psychedelic medicines) which allows a person to connect or communicate with a spiritual realm, metaphysical energy, or spiritual entity, or vice versa. The use of such a conduit may be entirely metaphoric or symbolic, or it may be earnestly believed to be functional. MacKian argued that a central, but often overlooked, element of the phenomenon was an emphasis on "spirit", and in particular participants' desire for a relationship with spirit. Many practitioners in her UK-focused study described themselves as "workers for spirit", expressing the desire to help people learn about spirit. They understood various material signs as marking the presence of spirit, for instance, the unexpected appearance of a feather. New Agers often call upon this spirit to assist them in everyday situations, for instance, to ease the traffic flow on their way to work. New Age literature often refers to benevolent non-human spirit-beings who are interested in humanity's spiritual development; these are variously referred to as angels, guardian angels, personal guides, masters, teachers, and contacts. New Age angelology is nevertheless unsystematic, reflecting the idiosyncrasies of individual authors. The figure of Jesus Christ is often mentioned within New Age literature as a mediating principle between divinity and humanity, as well as an exemplar of a spiritually advanced human being. Although not present in every New Age group, a core belief within the milieu is in channeling. This is the idea that human beings, sometimes (although not always) in a state of trance, can act "as a channel of information from sources other than their normal selves". These sources are varyingly described as being God, gods and goddesses, ascended masters, spirit guides, extraterrestrials, angels, devas, historical figures, the collective unconscious, elementals, or nature spirits. Hanegraaff described channeling as a form of "articulated revelation", and identified four forms: trance channeling, automatisms, clairaudient channeling, and open channeling. A notable channeler in the early 1900s was Rose Edith Kelly, wife of the English occultist and ceremonial magician Aleister Crowley (1875–1947). She allegedly channeled the voice of a non-physical entity named Aiwass during their honeymoon in Cairo, Egypt (1904). Others purport to channel spirits from "future dimensions", ascended masters, or, in the case of the trance mediums of the Brahma Kumaris, God. Another channeler in the early 1900s was Edgar Cayce, who said that he was able to channel his higher self while in a trance-like state. In the latter half of the 20th century, Western mediumship developed in two different ways. One type involves clairaudience, in which the medium is said to hear spirits and relay what they hear to their clients. The other is a form of channeling in which the channeler seemingly goes into a trance, and purports to leave their body, allowing a spirit entity to borrow it and then speak through them. When in a trance the medium appears to enter into a cataleptic state, although modern channelers may not. Some channelers open their eyes when channeling, and remain able to walk and behave normally. The rhythm and the intonation of the voice may also change completely.
Examples of New Age channeling include Jane Roberts' belief that she was contacted by an entity called Seth, and Helen Schucman's belief that she had channeled Jesus Christ. The academic Suzanne Riordan examined a variety of these New Age channeled messages, noting that they typically "echoed each other in tone and content", offering an analysis of the human condition and giving instructions or advice for how humanity can discover its true destiny. For many New Agers, these channeled messages rival the scriptures of the main world religions as sources of spiritual authority, although often New Agers describe historical religious revelations as forms of "channeling" as well, thus attempting to legitimate and authenticate their own contemporary practices. Although the concept of channeling from discarnate spirit entities has links to Spiritualism and psychical research, the New Age does not feature Spiritualism's emphasis on proving the existence of life after death, nor psychical research's focus on testing mediums for consistency. Other New Age channelers include J. Z. Knight (b. 1946), who channels the spirit "Ramtha", a 30-thousand-year-old man from Lemuria; Esther Hicks (b. 1948), who channels a purported collective consciousness she calls "Abraham"; and Gary Douglas, who purportedly channels Grigori Rasputin, aliens called Novian, a 14th-century monk named Brother George, and an ancient Chinese man called Tchia Tsin in his organization, Access Consciousness.
Astrological cycles and the Age of Aquarius
New Age thought typically envisions the world as developing through cosmological cycles that can be identified astrologically. It adopts this concept from Theosophy, although often presents it in a looser and more eclectic way than is found in Theosophical teaching. New Age literature often proposes that humanity once lived in an age of spiritual wisdom. In the writings of New Agers like Edgar Cayce, the ancient period of spiritual wisdom is associated with concepts of supremely-advanced societies living on lost continents such as Atlantis, Lemuria, and Mu, as well as the idea that ancient societies like those of Ancient Egypt were far more technologically advanced than modern scholarship accepts. New Age literature often posits that the ancient period of spiritual wisdom gave way to an age of spiritual decline, sometimes termed the Age of Pisces. Although characterised as being a negative period for humanity, New Age literature views the Age of Pisces as an important learning experience for the species. Hanegraaff stated that New Age perceptions of history were "extremely sketchy" in their use of description, reflecting little interest in historiography and conflating history with myth. He also noted that they were highly ethnocentric in placing Western civilization at the centre of historical development. A common belief within the New Age is that humanity has entered, or is coming to enter, a new period known as the Age of Aquarius, which Melton has characterised as a "New Age of love, joy, peace, abundance, and harmony[...] the Golden Age heretofore only dreamed about." In accepting this belief in a coming new age, the milieu has been described as "highly positive, celebratory, [and] utopian", and has also been cited as an apocalyptic movement. Opinions about the nature of the coming Age of Aquarius differ among New Agers.
There are, for instance, differences in belief about its commencement; New Age author David Spangler wrote that it began in 1967; others placed its beginning with the Harmonic Convergence of 1987; author José Argüelles predicted its start in 2012; and some believe that it will not begin until several centuries into the third millennium. There are also differences in how this new age is envisioned. Those adhering to what Hanegraaff termed the "moderate" perspective believed that it would be marked by an improvement to current society affecting both New Age concerns—through the convergence of science and mysticism and the global embrace of alternative medicine—and more general concerns, including an end to violence, crime and war, a healthier environment, and international co-operation. Other New Agers adopt a fully utopian vision, believing that the world will be wholly transformed into an "Age of Light", with humans evolving into totally spiritual beings and experiencing unlimited love, bliss, and happiness. Rather than conceiving of the Age of Aquarius as an indefinite period, many believe that it would last for around two thousand years before being replaced by a further age. There are various beliefs within the milieu as to how this new age will come about, but most emphasise the idea that it will be established through human agency; others assert that it will be established with the aid of non-human forces such as spirits or extraterrestrials. Ferguson, for instance, said that there was a vanguard of humans known as the "Aquarian conspiracy" who were helping to bring the Age of Aquarius forth through their actions. Participants in the New Age typically express the view that their own spiritual actions are helping to bring about the Age of Aquarius, with writers like Ferguson and Argüelles presenting themselves as prophets ushering forth this future era.
Healing and alternative medicine
Another recurring element of New Age is an emphasis on healing and alternative medicine. The general New Age ethos is that health is the natural state for the human being and that illness is a disruption of that natural balance. Hence, New Age therapies seek to heal "illness" as a general concept that includes physical, mental, and spiritual aspects; in doing so, the movement critiques mainstream Western medicine for simply attempting to cure disease, and thus has an affinity with most forms of traditional medicine. Its focus on self-spirituality has led to an emphasis on self-healing, although also present are ideas on healing both others and the Earth itself. The healing elements of the movement are difficult to classify given that a variety of terms are used, with some New Age authors using different terms to refer to the same trends, while others use the same term to refer to different things. However, Hanegraaff developed a set of categories into which the forms of New Age healing could be roughly categorised. The first of these was the Human Potential Movement, which argues that contemporary Western society suppresses much human potential, and accordingly professes to offer a path through which individuals can access those parts of themselves that they have alienated and suppressed, thus enabling them to reach their full potential and live a meaningful life.
Hanegraaff described transpersonal psychology as the "theoretical wing" of this Human Potential Movement; in contrast to other schools of psychological thought, transpersonal psychology takes religious and mystical experiences seriously by exploring the uses of altered states of consciousness. Closely connected to this is the shamanic consciousness current, which argues that the shaman was a specialist in altered states of consciousness and seeks to adopt and imitate traditional shamanic techniques as a form of personal healing and growth. Hanegraaff identified the second main healing current in the New Age movement as being holistic health. This emerged in the 1970s out of the free clinic movement of the 1960s, and has various connections with the Human Potential Movement. It emphasises the idea that the human individual is a holistic, interdependent relationship between mind, body, and spirit, and that healing is a process in which an individual becomes whole by integrating with the powers of the universe. A very wide array of methods are utilised within the holistic health movement, with some of the most common including acupuncture, reiki, biofeedback, chiropractic, yoga, applied kinesiology, homeopathy, aromatherapy, iridology, massage and other forms of bodywork, meditation and visualisation, nutritional therapy, psychic healing, herbal medicine, healing using crystals, metals, music, chromotherapy, and reincarnation therapy. Although the use of crystal healing has become a visual trope within the New Age, this practice was not common in esotericism prior to their adoption in the New Age milieu. The mainstreaming of the Holistic Health movement in the UK is discussed by Maria Tighe. The inter-relation of holistic health with the New Age movement is illustrated in Jenny Butler's ethnographic description of "Angel therapy" in Ireland. New Age science According to Drury, the New Age attempts to create "a worldview that includes both science and spirituality", while Hess noted how New Agers have "a penchant for bringing together the technical and the spiritual, the scientific and the religious". Although New Agers typically reject rationalism, the scientific method, and the academic establishment, they employ terminology and concepts borrowed from science and particularly from new physics. Moreover, a number of influences on New Age, such as David Bohm and Ilya Prigogine, had backgrounds as professional scientists. Hanegraaff identified "New Age science" as a form of Naturphilosophie. In this, the milieu is interested in developing unified world views to discover the nature of the divine and establish a scientific basis for religious belief. Figures in the New Age movement—most notably Fritjof Capra in his The Tao of Physics (1975) and Gary Zukav in The Dancing Wu Li Masters (1979)—have drawn parallels between theories in the New Physics and traditional forms of mysticism, thus arguing that ancient religious ideas are now being proven by contemporary science. Many New Agers have adopted James Lovelock's Gaia hypothesis that the Earth acts akin to a single living organism, going further to propound that the Earth has a consciousness and intelligence. Despite New Agers' appeals to science, most of the academic and scientific establishments dismiss "New Age science" as pseudo-science, or at best existing in part on the fringes of genuine scientific research. This is an attitude also shared by many active in the field of parapsychology. 
In turn, New Agers often accuse the scientific establishment of pursuing a dogmatic and outmoded approach to scientific enquiry, believing that their own understandings of the universe will replace those of the academic establishment in a paradigm shift.
Ethics and afterlife
There is no ethical cohesion within the New Age phenomenon, although Hanegraaff argued that the central ethical tenet of the New Age is to cultivate one's own divine potential. Given that the movement's holistic interpretation of the universe prohibits a belief in a dualistic good and evil, negative events that happen are interpreted not as the result of evil but as lessons designed to teach an individual and enable them to advance spiritually. It rejects the Christian emphasis on sin and guilt, believing that these generate fear and thus negativity, which then hinder spiritual evolution. It also typically criticises the blaming and judging of others for their actions, believing that if an individual adopts these negative attitudes it harms their own spiritual evolution. Instead, the movement emphasizes positive thinking, although beliefs regarding the power behind such thoughts vary within New Age literature. Common New Age examples of how to generate such positive thinking include the repeated recitation of mantras and statements carrying positive messages, and the visualisation of a white light. According to Hanegraaff, the question of death and afterlife is not a "pressing problem requiring an answer" in the New Age. A belief in reincarnation is very common, and it is often viewed as being part of an individual's progressive spiritual evolution toward realisation of their own divinity. In New Age literature, the reality of reincarnation is usually treated as self-evident, with no explanation as to why practitioners embrace this afterlife belief over others, although New Agers endorse it in the belief that it ensures cosmic justice. Many New Agers believe in karma, treating it as a law of cause and effect that assures cosmic balance, although in some cases they stress that it is not a system that enforces punishment for past actions. Much New Age literature on reincarnation says that part of the human soul, that which carries the personality, perishes with the death of the body, while the Higher Self—that which connects with divinity—survives in order to be reborn into another body. It is believed that the Higher Self chooses the body and circumstances into which it will be born, in order to use it as a vessel through which to learn new lessons and thus advance its own spiritual evolution. New Age writers like Shakti Gawain and Louise Hay therefore express the view that humans are responsible for the events that happen to them during their life, an idea that many New Agers regard as empowering. At times, past life regression is employed within the New Age in order to reveal a Higher Self's previous incarnations, usually with an explicit healing purpose. Some practitioners espouse the idea of a "soul group" or "soul family", a group of connected souls who reincarnate together as family or friendship units. Rather than reincarnation, another afterlife belief found among New Agers holds that an individual's soul returns to a "universal energy" on bodily death.
Demographics
In the mid-1990s, the New Age was found primarily in the United States and Canada, Western Europe, and Australia and New Zealand.
The fact that most individuals engaging in New Age activity do not describe themselves as "New Agers" renders it difficult to determine the total number of practitioners. Heelas highlighted the range of attempts to establish the number of New Age participants in the U.S. during this period, noting that estimates ranged from 20,000 to 6 million; he believed that the higher ranges of these estimates were greatly inflated by, for instance, an erroneous assumption that all Americans who believed in reincarnation were part of the New Age. He nevertheless suggested that over 10 million people in the U.S. had had some contact with New Age practices or ideas. Between 2000 and 2002, Heelas and Woodhead conducted research into the New Age in the English town of Kendal, Cumbria; they found 600 people actively attended New Age activities on a weekly basis, representing 1.6% of the town's population. From this, they extrapolated that around 900,000 Britons regularly took part in New Age activities. In 2006, Heelas stated that New Age practices had grown to such an extent that they were "increasingly rivaling the sway of Christianity in Western settings". Sociological investigation indicates that certain sectors of society are more likely to engage in New Age practices than others. In the United States, the first people to embrace the New Age belonged to the baby boomer generation, those born between 1946 and 1964. Sutcliffe noted that although most influential New Age figureheads were male, approximately two-thirds of its participants were female. Heelas and Woodhead's Kendal Project found that of those regularly attending New Age activities in the town, 80% were female, while 78% of those running such activities were female. They attributed this female dominance to "deeply entrenched cultural values and divisions of labour" in Western society, according to which women were accorded greater responsibility for the well-being of others, thus making New Age practices more attractive to them. They suggested that men were less attracted to New Age activities because they were hampered by a "masculinist ideal of autonomy and self-sufficiency" which discouraged them from seeking the assistance of others for their inner development. The majority of New Agers are from the middle and upper-middle classes of Western society. Heelas and Woodhead found that of the active Kendal New Agers, 57% had a university or college degree. Their Kendal Project also determined that 73% of active New Agers were aged over 45, and 55% were aged between 40 and 59; it also determined that many got involved while middle-aged. Comparatively few were either young or elderly. Heelas and Woodhead suggested that the dominance of middle-aged people, particularly women, was because at this stage of life they had greater time to devote to their own inner development, with their time previously having been dominated by raising children. They also suggested that middle-aged people were experiencing more age-related ailments than the young, and thus more keen to pursue New Age activities to improve their health. Heelas added that within the baby boomers, the movement had nevertheless attracted a diverse clientele. He typified the typical New Ager as someone who was well-educated yet disenchanted with mainstream society, thus arguing that the movement catered to those who believe that modernity is in crisis. 
He suggested that the movement appealed to many former practitioners of the 1960s counter-culture because while they came to feel that they were unable to change society, they were nonetheless interested in changing the self. He believed that many individuals had been "culturally primed for what the New Age has to offer", with the New Age attracting "expressive" people who were already comfortable with the ideals and outlooks of the movement's self-spirituality focus. It could be particularly appealing because the New Age suited the needs of the individual, whereas traditional religious options that are available primarily catered for the needs of a community. He believed that although the adoption of New Age beliefs and practices by some fitted the model of religious conversion, others who adopted some of its practices could not easily be considered to have converted to the religion. Sutcliffe described the "typical" participant in the New Age milieu as being "a religious individualist, mixing and matching cultural resources in an animated spiritual quest". The degree to which individuals are involved in the New Age varies. Heelas argued that those involved could be divided into three broad groups; the first comprised those who were completely dedicated to it and its ideals, often working in professions that furthered those goals. The second consisted of "serious part-timers" who worked in unrelated fields but who nevertheless spent much of their free time involved in movement activities. The third was that of "casual part-timers" who occasionally involved themselves in New Age activities but for whom the movement was not a central aspect of their life. MacKian instead suggested that involvement could be seen as being layered like an onion; at the core are "consultative" practitioners who devote their life to New Age practices, around that are "serious" practitioners who still invest considerable effort into New Age activities, and on the periphery are "non-practitioner consumers", individuals affected by the general dissemination of New Age ideas but who do not devote themselves more fully to them. Many New Age practices have filtered into wider Western society, with a 2000 poll, for instance, revealing that 39% of the UK population had tried alternative therapies. In 1995, Kyle stated that on the whole, New Agers in the United States preferred the values of the Democratic Party over those of the Republican Party. He added that most New Agers "soundly rejected" the agenda of former Republican President Ronald Reagan. Social communities MacKian suggested that this phenomenon was "an inherently social mode of spirituality", one which cultivated a sense of belonging among its participants and encouraged relations both with other humans and with non-human, otherworldly spirit entities. MacKian suggested that these communities "may look very different" from those of traditional religious groups. Online connections were one of the ways that interested individuals met new contacts and established networks. Commercial aspects Some New Agers advocate living in a simple and sustainable manner to reduce humanity's impact on the natural resources of Earth; and they shun consumerism. The New Age movement has been centered around rebuilding a sense of community to counter social disintegration; this has been attempted through the formation of intentional communities, where individuals come together to live and work in a communal lifestyle. 
New Age centres have been set up in various parts of the world, representing an institutionalised form of the movement. Notable examples include the Naropa Institute in Boulder, Colorado, Hollyhock Farm near Vancouver, the Wrekin Trust in West Malvern, Worcestershire, and the Skyros Centre in Skyros. Criticising mainstream Western education as counterproductive to the ethos of the movement, many New Age groups have established their own schools for the education of children, although in other cases such groups have sought to introduce New Age spiritual techniques into pre-existing establishments. Bruce argued that, in "denying the validity of externally imposed controls and privileging the divine within", the New Age sought to dismantle the pre-existing social order, but that it failed to present anything adequate in its place. Heelas, however, cautioned that Bruce had arrived at this conclusion based on "flimsy evidence", and Aldred argued that only a minority of New Agers participate in community-focused activities; instead, she argued, the majority of New Agers participate mainly through the purchase of books and products targeted at the New Age market, positioning New Age as a primarily consumerist and commercial movement.
Fairs and festivals
New Age spirituality has led to a wide array of literature on the subject and an active niche market, with books, music, crafts, and services in alternative medicine available at New Age stores, fairs, and festivals. New Age fairs—sometimes known as "Mind, Body, Spirit fairs", "psychic fairs", or "alternative health fairs"—are spaces in which a variety of goods and services are displayed by different vendors, including forms of alternative medicine and esoteric practices such as palmistry or tarot card reading. An example is the Mind Body Spirit Festival, held annually in the United Kingdom, at which—the religious studies scholar Christopher Partridge noted—one could encounter "a wide range of beliefs and practices from crystal healing to ... Kirlian photography to psychic art, from angels to past-life therapy, from Theosophy to UFO religion, and from New Age music to the vegetarianism of Suma Ching Hai." Similar festivals are held across Europe and in Australia and the United States.
Approaches to financial prosperity and business
A number of New Age proponents have emphasised the use of spiritual techniques as a tool for attaining financial prosperity, thus moving the movement away from its counter-cultural origins. Commenting on this "New Age capitalism", Hess observed that it was largely small-scale and entrepreneurial, focused around small companies run by members of the petty bourgeoisie, rather than being dominated by large-scale multinational corporations. The links between New Age and commercial products have resulted in the accusation that New Age itself is little more than a manifestation of consumerism. This idea is generally rejected by New Age participants, who often reject any link between their practices and consumerist activities. Embracing this prosperity-oriented attitude, various books have been published espousing such an ethos; established New Age centres have held spiritual retreats and classes aimed specifically at business people; and New Age groups have developed specialised training for businesses. During the 1980s, many U.S.
corporations—among them IBM, AT&T, and General Motors—embraced New Age seminars, hoping that they could increase productivity and efficiency among their workforce, although in several cases this resulted in employees bringing legal action against their employers, saying that such seminars had infringed on their religious beliefs or damaged their psychological health. However, the use of spiritual techniques as a method for attaining profit has been an issue of major dispute within the wider New Age movement, with New Agers such as Spangler and Matthew Fox criticising what they see as trends within the community that are narcissistic and lack a social conscience. In particular, the movement's commercial elements have caused problems given that they often conflict with its generally egalitarian economic ethos; as York highlighted, "a tension exists in New Age between socialistic egalitarianism and capitalistic private enterprise". Given that it encourages individuals to choose spiritual practices on the grounds of personal preference, and thus to behave as consumers, the New Age has been considered well suited to modern society. Music The term "new-age music" is applied, sometimes negatively, to forms of ambient music, a genre that developed in the 1960s and was popularised in the 1970s, particularly with the work of Brian Eno. The genre's relaxing nature resulted in it becoming popular within New Age circles, with some forms of the genre having a specifically New Age orientation. Studies have determined that new-age music can be an effective component of stress management. The style began in the late 1960s and early 1970s with the works of free-form jazz groups recording on the ECM label, such as Oregon, the Paul Winter Consort, and other pre-ambient bands, as well as ambient music performer Brian Eno, classical avant-garde musician Daniel Kobialka, and the psychoacoustic environments recordings of Irv Teibel. In the early 1970s, it was mostly instrumental with both acoustic and electronic styles. New-age music evolved to include a wide range of styles, from electronic space music using synthesizers and acoustic instrumentals using Native American flutes and drums, singing bowls, Australian didgeridoos and world music sounds, to spiritual chanting from other cultures. Politics While many commentators have focused on the spiritual and cultural aspects of the New Age movement, it also has a political component. The New Age political movement became visible in the 1970s, peaked in the 1980s, and continued into the 1990s. The sociologist of religion Steve Bruce noted that the New Age provides ideas on how to deal with "our socio-psychological problems". Scholar of religion James R. Lewis observed that, despite the common caricature of New Agers as narcissistic, "significant numbers" of them were "trying to make the planet a better place on which to live," and scholar J. Gordon Melton's New Age Encyclopedia (1990) included an entry called "New Age politics". Some New Agers have entered the political system in an attempt to advocate for the societal transformation that the New Age promotes. Ideas Although New Age activists have been motivated by New Age concepts like holism, interconnectedness, monism, and environmentalism, their political ideas are diverse, ranging from far-right and conservative through to liberal, socialist, and libertarian. Accordingly, Kyle stated that "New Age politics is difficult to describe and categorize.
The standard political labels—left or right, liberal or conservative—miss the mark." MacKian suggested that the New Age operated as a form of "world-realigning infrapolitics" that undermines the disenchantment of modern Western society. The extent to which New Age spokespeople mix religion and politics varies. New Agers are often critical of the established political order, regarding it as "fragmented, unjust, hierarchical, patriarchal, and obsolete". The New Ager Mark Satin for instance spoke of "New Age politics" as a politically radical "third force" that was "neither left nor right". He believed that in contrast to the conventional political focus on the "institutional and economic symptoms" of society's problems, his "New Age politics" would focus on "psychocultural roots" of these issues. Ferguson regarded New Age politics as "a kind of Radical Centre", one that was "not neutral, not middle-of-the-road, but a view of the whole road." Fritjof Capra argued that Western societies have become sclerotic because of their adherence to an outdated and mechanistic view of reality, which he calls the Newtonian/Cartesian paradigm. In Capra's view, the West needs to develop an organic and ecological "systems view" of reality in order to successfully address its social and political issues. Corinne McLaughlin argued that politics need not connote endless power struggles, that a new "spiritual politics" could attempt to synthesize opposing views on issues into higher levels of understanding. Many New Agers advocate globalisation and localisation, but reject nationalism and the role of the nation-state. Some New Age spokespeople have called for greater decentralisation and global unity, but are vague about how this might be achieved; others call for a global, centralised government. Satin for example argued for a move away from the nation-state and towards self-governing regions that, through improved global communication networks, would help engender world unity. Benjamin Creme conversely argued that "the Christ", a great Avatar, Maitreya, the World Teacher, expected by all the major religions as their "Awaited One", would return to the world and establish a strong, centralised global government in the form of the United Nations; this would be politically re-organised along a spiritual hierarchy. Kyle observed that New Agers often speak favourably of democracy and citizens' involvement in policy making but are critical of representative democracy and majority rule, thus displaying elitist ideas to their thinking. Groups Scholars have noted several New Age political groups. Self-Determination: A Personal/Political Network, lauded by Ferguson and Satin, was described at length by sociology of religion scholar Steven Tipton. Founded in 1975 by California state legislator John Vasconcellos and others, it encouraged Californians to engage in personal growth work and political activities at the same time, especially at the grassroots level. Hanegraaff noted another California-based group, the Institute of Noetic Sciences, headed by the author Willis Harman. It advocated a change in consciousness—in "basic underlying assumptions"—in order to come to grips with global crises. Kyle said that the New York City-based Planetary Citizens organization, headed by United Nations consultant and Earth at Omega author Donald Keys, sought to implement New Age political ideas. Scholar J. 
Gordon Melton and colleagues focused on the New World Alliance, a Washington, DC-based organization founded in 1979 by Mark Satin and others. According to Melton et al., the Alliance tried to combine left- and right-wing ideas as well as personal growth work and political activities. Group decision-making was facilitated by short periods of silence. Sponsors of the Alliance's national political newsletter included Willis Harman and John Vasconcellos. Scholar James R. Lewis counted "Green politics" as one of the New Age's more visible activities. One academic book says that the U.S. Green Party movement began as an initiative of a handful of activists including Charlene Spretnak, co-author of a "'new age' interpretation" of the German Green movement (Capra and Spretnak's Green Politics), and Mark Satin, author of New Age Politics. Another academic publication says Spretnak and Satin largely co-drafted the U.S. Greens' founding document, the "Ten Key Values" statement. In the 21st century While the term New Age may have fallen out of favor, scholar George Chryssides notes that the New Age by whatever name is "still alive and active" in the 21st century. In the realm of politics, New Ager Mark Satin's book Radical Middle (2004) reached out to mainstream liberals. York (2005) identified "key New Age spokespeople", including William Bloom, Satish Kumar, and Starhawk, who were emphasizing a link between spirituality and environmental consciousness. Former Esalen Institute staffer Stephen Dinan's Sacred America, Sacred World (2016) prompted a long interview with Dinan in Psychology Today, which called the book a "manifesto for our country's evolution that is both political and deeply spiritual". In 2013 longtime New Age author Marianne Williamson launched a campaign for a seat in the United States House of Representatives, telling The New York Times that her type of spirituality was what American politics needed. "America has swerved from its ethical center", she said. Running as an independent in west Los Angeles, she finished fourth in her district's open primary election with 13% of the vote. In early 2019, Williamson announced her candidacy for the Democratic Party nomination for president of the United States in the 2020 United States presidential election. A 5,300-word article about her presidential campaign in The Washington Post said she had "plans to fix America with love. Tough love". In January 2020 she withdrew her bid for the nomination. Reception Popular media Mainstream periodicals tended to be less than sympathetic; sociologist Paul Ray and psychologist Sherry Anderson discussed, in their 2000 book The Cultural Creatives, what they called the media's "zest for attacking" New Age ideas, and offered the example of a 1996 Lance Morrow essay in Time magazine. Nearly a decade earlier, Time had run a long cover story critical of New Age culture; the cover featured a headshot of a famous actress beside the headline, "Om.... THE NEW AGE starring Shirley MacLaine, faith healers, channelers, space travelers, and crystals galore". The story itself, by former Saturday Evening Post editor Otto Friedrich, was sub-titled, "A Strange Mix of Spirituality and Superstition Is Sweeping Across the Country". In 1988, the magazine The New Republic ran a four-page critique of New Age culture and politics by the journalist Richard Blow entitled simply "Moronic Convergence". Some New Agers and New Age sympathizers responded to such criticisms.
For example, sympathizers Ray and Anderson said that much of it was an attempt to "stereotype" the movement for idealistic and spiritual change, and to cut back on its popularity. New Age theoretician David Spangler tried to distance himself from what he called the "New Age glamour" of crystals, talk-show channelers, and other easily commercialized phenomena, and sought to underscore his commitment to the New Age as a vision of genuine social transformation. Academia Initially, academic interest in the New Age was minimal. The earliest academic studies of the New Age phenomenon were performed by specialists in the study of new religious movements such as Robert Ellwood. This research was often scanty because many scholars regarded the New Age as an insignificant cultural fad. Having been influenced by the U.S. anti-cult movement, much of it was also largely negative and critical of New Age groups. The "first truly scholarly study" of the phenomenon was an edited volume put together by James R. Lewis and J. Gordon Melton in 1992. From that point on, the number of published academic studies steadily increased. In 1994, Christoph Bochinger published his study of the New Age in Germany, "New Age" und moderne Religion. This was followed by Michael York's sociological study in 1995 and Richard Kyle's U.S.-focused work in 1995. In 1996, Paul Heelas published a sociological study of the movement in Britain, being the first to discuss its relationship with business. That same year, Wouter Hanegraaff published New Age Religion and Western Culture, a historical analysis of New Age texts; Hammer later described it as having "a well-deserved reputation as the standard reference work on the New Age". Most of these early studies were based on a textual analysis of New Age publications, rather than on an ethnographic analysis of its practitioners. Sutcliffe and Gilhus argued that 'New Age studies' could be seen as having experienced two waves; in the first, scholars focused on "macro-level analyses of the content and boundaries" of the "movement", while the second wave featured "more variegated and contextualized studies of particular beliefs and practices". Sutcliffe and Gilhus have also expressed concern that, as of 2013, 'New Age studies' has yet to formulate a set of research questions scholars can pursue. The New Age has proved a challenge for scholars of religion operating under more formative models of what "religion" is. By 2006, Heelas noted that the New Age was so vast and diverse that no scholar of the subject could hope to keep up with all of it. Christian perspectives Mainstream Christianity has typically rejected the ideas of the New Age; Christian critiques often emphasise that the New Age places the human individual before God. Most published criticism of the New Age has been produced by Christians, particularly those on the religion's fundamentalist wing. In the United States, the New Age became a major concern of evangelical Christian groups in the 1980s, an attitude that influenced British evangelical groups. During that decade, evangelical writers such as Constance Cumbey, Dave Hunt, Gary North, and Douglas Groothuis published books criticising the New Age; a number propagated conspiracy theories regarding its origin and purpose. The most successful such publication was Frank E. Peretti's 1986 novel This Present Darkness, which sold over a million copies; it depicted the New Age as being in league with feminism and secular education as part of a conspiracy to overthrow Christianity. 
Modern Christian critics of the New Age include Doreen Virtue, a former New Age writer from California who converted to fundamentalist Christianity in 2017. Official responses to the New Age have been produced by major Christian organisations like the Roman Catholic Church, the Church of England, and the Methodist Church. The Roman Catholic Church published A Christian Reflection on the New Age in 2003, following a six-year study; the 90-page document criticizes New Age practices such as yoga, meditation, feng shui, and crystal healing. According to the Vatican, euphoric states attained through New Age practices should not be confused with prayer or viewed as signs of God's presence. Cardinal Paul Poupard, then-president of the Pontifical Council for Culture, said the New Age is "a misleading answer to the oldest hopes of man". Monsignor Michael Fitzgerald, then-president of the Pontifical Council for Interreligious Dialogue, stated at the Vatican conference on the document that the "Church avoids any concept that is close to those of the New Age". By contrast, some fringe Christian groups have adopted a more positive view of the New Age, among them the Christaquarians and Christians Awakening to a New Awareness, both of which believe that New Age ideas can enhance a person's Christian faith. Contemporary Pagan perspectives There is academic debate about the connection between the New Age and Modern Paganism, sometimes termed "Neo-paganism". The two phenomena have often been confused and conflated, particularly in Christian critiques. Religious studies scholar Sarah Pike asserted that there was a "significant overlap" between the two religious movements, while Aidan A. Kelly stated that Paganism "parallels the New Age movement in some ways, differs sharply from it in others, and overlaps it in some minor ways". Other scholars have identified them as distinct phenomena that nonetheless share overlap and commonalities. Hanegraaff suggested that whereas various forms of contemporary Paganism were not part of the New Age movement—particularly those that pre-dated the movement—other Pagan religions and practices could be identified as New Age. Partridge portrayed both Paganism and the New Age as different streams of occulture (occult culture) that merge at points. Various differences between the two movements have been highlighted; the New Age movement focuses on an improved future, whereas the focus of Paganism is on the pre-Christian past. Similarly, the New Age movement typically propounds a universalist message that sees all religions as fundamentally the same, whereas Paganism stresses the difference between monotheistic religions and those embracing a polytheistic or animistic theology. While the New Age emphasises a light-centred image, Paganism acknowledges both light and dark, life and death, and recognises the savage side of the natural world. Many Pagans have sought to distance themselves from the New Age movement, even using "New Age" as an insult within their community, while conversely many involved in the New Age have expressed criticism of Paganism for emphasizing the material world over the spiritual. Many Pagans have also criticised the high fees charged by New Age teachers, something not typically present in the Pagan movement. Non-Western and Indigenous criticism New Age often adopts spiritual ideas and practices from other, particularly non-Western cultures.
According to York, these may include "Hawaiian Kahuna magic, Australian Aboriginal dream-working, South American Amerindian ayahuasca and San Pedro ceremonies, Hindu Ayurveda and yoga, Chinese Feng Shui, Qi Gong, and Tai Chi." The New Age has been accused of cultural imperialism, misappropriating sacred ceremonies, and exploitation of the intellectual and cultural property of Indigenous peoples. Indigenous American spiritual leaders, such as Elders councils of the Lakota, Cheyenne, Navajo, Creek, Hopi, Chippewa, and Haudenosaunee have denounced New Age misappropriation of their sacred ceremonies and other intellectual property, stating that "[t]he value of these instructions and ceremonies [when led by unauthorized people] are questionable, maybe meaningless, and hurtful to the individual carrying false messages". Traditional leaders of the Lakota, Dakota, and Nakota peoples have reached consensus to reject "the expropriation of [their] ceremonial ways by non-Indians". They see the New Age movement as either not fully understanding, deliberately trivializing, or distorting their way of life, and strongly disapprove of all such "plastic medicine people" who are appropriating their spiritual ways. Indigenous leaders have spoken out against individuals from within their own communities who may go out into the world to become a "white man's shaman", and any "who are prostituting our spiritual ways for their own selfish gain, with no regard for the spiritual well-being of the people as a whole". The terms "plastic shaman" and "plastic medicine person" have been used to describe an outsider who identifies or promotes themselves as a shaman, holy person, or other traditional spiritual leader, yet has no genuine connection to the traditions or cultures represented. Political writers and activists Toward the end of the 20th century, some social and political analysts and activists were arguing that the New Age political perspective had something to offer mainstream society. In 1987, some political scientists launched the "Section on Ecological and Transformational Politics" of the American Political Science Association, and an academic book prepared by three of them stated that the "transformational politics" concept was meant to subsume such terms as new age and new paradigm. In 1991, scholar of cultural studies Andrew Ross suggested that New Age political ideas—however muddled and naïve—could help progressives construct an appealing alternative to both atomistic individualism and self-denying collectivism. In 2005, British researcher Stuart Rose urged scholars of alternative religions to pay more attention to the New Age's interest in such topics as "new socio-political thinking" and "New Economics", topics Rose discussed in his book Transforming the World: Bringing the New Age Into Focus, issued by a European academic publisher. Other political thinkers and activists saw New Age politics less positively. On the political right, author George Weigel argued that New Age politics was just a retooled and pastel-colored version of leftism. Conservative evangelical writer Douglas Groothuis, discussed by scholars Hexham and Kemp, warned that New Age politics could lead to an oppressive world government. On the left, scholars argued that New Age politics was an oxymoron: that personal growth has little or nothing to do with political change. One political scientist said New Age politics fails to recognize the reality of economic and political power. Another academic, Dana L. 
Cloud, wrote a lengthy critique of New Age politics as a political ideology; she faulted it for not being opposed to the capitalist system or to liberal individualism. A criticism of New Age often made by leftists is that its focus on individualism deflects participants from engaging in socio-political activism. This perspective regards New Age as a manifestation of consumerism that promotes elitism and indulgence by allowing wealthier people to affirm their socio-economic status through consuming New Age products and therapies. New Agers who do engage in socio-political activism have also been criticized. Journalist Harvey Wasserman suggested that New Age activists were too averse to social conflict to be effective politically. Melton et al. found that New Age activists' commitment to the often frustrating process of consensus decision-making led to "extended meetings and minimal results", and a pair of futurists concluded that one once-promising New Age activist group had been both "too visionary and too vague" to last. See also: Higher consciousness, Hypnosis, Mindfulness, New Age communities, New religious movement, Nonviolent resistance, Peace movement, Philosophy of happiness, Post-scarcity, Postchristianity, Roerichism, Social theory. External links: "Rainbow Gathering" – New Age annual event since 1972; "The New Age 40 Years Later", Huffington Post interview with Mark Satin, author of New Age Politics, cited above.
Feminization of poverty
Feminization of poverty refers to a trend of increasing inequality in living standards between men and women due to the widening gender gap in poverty. The phenomenon is largely linked to the disproportionate representation of women and children among those of lower socioeconomic status in comparison to men. Causes of the feminization of poverty include the structure of family and household, employment, sexual violence, education, climate change, "femonomics" and health. Traditional stereotypes of women remain embedded in many cultures, restricting income opportunities and community involvement for many women. Combined with a low baseline income, this can develop into a cycle of poverty and thus an inter-generational issue. Entrepreneurship is often presented as a cure-all for such deprivation: advocates assert that it leads to job creation, higher earnings, and lower poverty rates in the towns where it occurs, while others counter that many entrepreneurs simply create low-capacity businesses serving local markets. The term originated in the US towards the end of the twentieth century and maintains prominence as a contested international phenomenon. Some researchers describe these issues as prominent in some countries of Asia, Africa and areas of Europe. Women in these countries are typically deprived of income, employment opportunities and physical and emotional support, putting them at the highest risk of poverty. The phenomenon also differs between religious groups, depending on the emphasis placed on gender roles and how closely the respective religious texts are followed. The feminisation of poverty is primarily measured using three international indexes: the Gender-related Development Index, the Gender Empowerment Measure and the Human Poverty Index. Rather than concentrating solely on monetary or financial measures, these indexes capture gender inequalities and standards of living, and highlight the difference between human poverty and income poverty. History The concept of the 'feminization of poverty' dates back to the 1970s and became popular in the 1990s through some United Nations documents. It became prominent in popular discussion after the release of a study focusing on gender patterns in the evolution of poverty rates in the United States. The feminization of poverty is a relative concept based on a comparison between women and men: poverty is said to feminise if, for instance, it is distinctly reduced among men while being only slightly reduced among women. Definitions The feminization of poverty is a contested idea with a multitude of meanings and layers. Marcielo M. and Joana C. define the feminization of poverty in two parts: feminization and poverty. Feminization designates gendered change, something becoming more feminine and by extension more prevalent or severe among women or female-headed households. Poverty is a deficit of resources or abilities. Marcielo M. and Joana C. (2008) likewise depict the escalating role that gender discrimination plays in determining poverty. For instance, an increase in wage discrimination between males and females can exacerbate poverty among women and men in all types of families. Medeiros considers this possibility a feminization of poverty because it denotes the relation between biases against women and a rise in poverty.
In numerous cases, Medeiros claims, such changes in the causes of poverty will result in one of the types of the feminization of poverty, that is, relative changes in the poverty levels of women and female-headed households. The concept also serves to illustrate the many social and economic factors contributing to women's poverty, including the significant gender pay gap between women and men. The term originates in the US and its prominence as an international phenomenon is contested. The proportion of female-headed households whose incomes fall below the "poverty line" has been broadly adopted as a measure of women's poverty. In many countries, household consumption and expenditure surveys show a high incidence of female-headed households among the "poor," defined as those whose incomes fall below the poverty line. According to Bessell (2010), two assumptions underlie income-based measures of poverty. First, there is a tendency to equate income with the ability to control income. While women may control earned income, the limits on poor women's financial sovereignty have been well demonstrated. An income-based measure may therefore hide the extent and nature of poverty when women earn an income but have no control over those earnings, Bessell claims. While the question of who controls income is a delicate matter for women, it is also relevant to the position and well-being of men. Societies that place heavy communal, kinship or clan-based obligations upon individuals may leave both women and men with limited control over individual income. Second is the assumption that income creates equal access and generates equal benefits. Access to education illustrates the point: while a lack of financial resources may result in low enrolment or high drop-out rates among poor children, social values around the role of women and the importance of formal education for girls are likely to be more meaningful in explaining the difference between male and female enrolment rates, Bessell claims. Causes Factors that place women at high risk of poverty include changes in family structure, gender wage gaps, women's prevalence in low-paid occupations, a lack of work-family supports, and the challenges involved in accessing public benefits. The feminisation of poverty is a problem which may be most severe in parts of South Asia, and may also differ by social class. Although low income is the major cause, there are many interrelated facets of this problem. Lone mothers are usually at the highest risk of extreme poverty because their income is insufficient to rear children. The image of the "traditional" woman and her traditional role still influences many cultures, which have yet to fully recognise that women are an essential part of the economy. In addition, income poverty lowers their children's chances of a good education and nourishment. Low income is a consequence of the social bias women face in trying to obtain formal employment, which in turn deepens the cycle of poverty. Beyond income, poverty manifests in other dimensions such as time poverty and capability deprivations. Poverty is multidimensional, and economic, demographic, and socio-cultural factors all overlap and contribute to its establishment. It is a phenomenon with multiple root causes and manifestations.
Single mother households Single mother households are critical in addressing the feminization of poverty and can be broadly defined as households headed by a woman with no male head present. Single mother households are at the highest risk of poverty for women due to a lack of income and resources. There is a continuing increase of single mother households in the world, which results in higher percentages of women in poverty. Single mothers are the poorest women in society, and their children tend to be disadvantaged in comparison to their peers. Different factors account for the rise in the number of female-headed households. While never-married heads of household are also at economic risk, changes in family structure, particularly divorce, are the major cause of initial spells of poverty among female-headed households. When men become migrant workers, women are left to be the main caretakers of their homes. Those women who do have the opportunity to work usually do not get the better jobs that come with further education; they are left with jobs that do not offer financial sustainability or benefits. Other factors, such as illnesses and deaths of husbands, lead to an increase in single mother households in developing countries. Female-headed households are most susceptible to poverty because they have fewer income earners to provide financial support within the household. According to a case study in Zimbabwe, households headed by widows have an income of approximately half that of male-headed households, and de facto female-headed households have about three-quarters of the income of male-headed households. Additionally, single mother households lack critical resources in life, which worsens their state of poverty. They lack access to the opportunities needed to attain a decent standard of living along with basic needs such as health and education. Single mother households reflect wider gender inequality, as women are more susceptible to poverty and lack essential life needs in comparison to men. Parenting in poverty-ridden conditions can cause emotional instability for a child and strain their relationship with a single mother. Many factors contribute to becoming impoverished, and some of these factors are more prevalent in the lives of single mothers. When the demographic attributes of single mothers are surveyed, a few factors show up at higher rates: marital status (divorced or widowed), education, and race correlate strongly with levels of poverty for single mothers. Specifically, very few mothers at the poverty line had a college degree, and most were having to "work to make ends meet". According to Dr. Bloom, not only do these demographic attributes affect parenting in poverty; emotional attributes contribute to instability as well. Mothers have been cast as the "caregivers" or "nurturers" of families, and some of the stereotypical things expected of mothers are harder to provide in a low-income household when the mother is the main provider. Dr. Bloom's examples of stereotypical expectations of mothers in Western societies include bringing treats to school on birthdays and attending parent-teacher conferences. A researcher, Denise Zabkiewicz, surveyed single mothers in poverty and measured rates of depression over time. Since studies around 2010 had advanced the idea that work was beneficial for mental health, Zabkiewicz set out to research whether jobs were mentally beneficial to single mothers at the poverty line.
The results supported this idea: mothers' rates of depression were significantly lower when they held a stable, long-term job. The likelihood of getting a full-time job decreases with certain factors, and when these factors were surveyed among single mothers they occurred at higher rates: cohabiting, college degree, and use of welfare. All of these factors are ones that the researchers Brian Brown and Daniel Lichter identified as contributing to single mothers' poverty. Employment Employment opportunities are limited for women worldwide. The ability to materially control one's environment by gaining equal access to work that is humanizing and allows for meaningful relationships with other workers is an essential capability. The impacts of employment go beyond financial independence: employment provides greater security and real-world experience, which elevates a woman's standing within the family and increases her bargaining position. Though there has been major growth in women's employment, the quality of the jobs still remains deeply unequal. Teenage motherhood is a factor that corresponds to poverty. There are two kinds of employment: formal and informal. Formal employment is government regulated and workers are assured a wage and certain rights; informal employment takes place in small, unregistered enterprises and is generally a large source of employment for women. The burden of informal care work falls predominantly on women, who work longer and harder in this role than men. This affects their ability to hold other jobs and change positions, the hours they can work, and their decision to give up work. However, women who have university degrees or other forms of higher learning tend to stay in their jobs even with caring responsibilities, which suggests that the human capital from this experience causes women to feel opportunity costs when they lose their employment. Having children has also historically affected women's choice to stay employed. While this "child-effect" has significantly decreased since the 1970s, women's employment is currently decreasing. This has less to do with child-rearing and more with a poor job market for all women, mothers and non-mothers alike. Sexual violence A form of sexual violence on the rise in the United States is human trafficking. Poverty can lead to increased trafficking because more people are living on the streets. Women who are impoverished, foreign, socially deprived, or at other disadvantages are more susceptible to being recruited into trafficking. According to Kelsey Tumiel's dissertation, many laws have recently been made to try to combat the phenomenon, but it is predicted that human trafficking will surpass illegal drug trafficking in the US. Women who are victims of these acts of sexual violence have a difficult time escaping that life due to abuse of power, organised crime, and insufficient laws to protect them. There are more people currently enslaved in trafficking than there were during the African slave trade. The author Tam Mai claims that "branding" human trafficking brings awareness to the issue, allowing for public action and intervention; Mai's article also argues that reducing poverty may thus lead to a decrease in trafficking from the streets. Education Women and girls have limited access to basic education in developing countries. This is due to strong gender discrimination and social hierarchies in these countries. However, this trend is reversed in the Western world.
Approximately one quarter of girls in the developing world do not attend school. This impedes a woman's ability to make informed choices and achieve goals. Enabling female education leads to the reduction of household poverty, and higher education is a major key to reducing women's poverty. The girls who are enrolled in education in developing countries, though limited in number, have a higher drop-out rate than boys. This is attributed to high rates of rape and sexual assault, which can lead to unwanted pregnancy, and to the prioritisation of boys' education. Males receive an education while females learn domestic skills, including cleaning, cooking and looking after children. There are also extremely high levels of reported professional misconduct, usually involving demands for sexual favours from female students in exchange for grades. Because of such sexual harassment by students and lecturers, there is a large gender inequality in higher education. Climate change According to MacGregor, women are more likely to be poor, and to be responsible for the care of poor children, than men. According to MacGregor, approximately 70 percent of the world's poor are women; rural women in developing countries are among the most disadvantaged groups on the planet. They are therefore unlikely to have the necessary resources to cope with the changes brought by climate change, and very likely to suffer a worsening of their everyday conditions, says MacGregor. MacGregor also says that poor women are more likely to be hurt or killed by natural disasters and extreme weather events than men, and claims that there is evidence to suggest that when households experience food shortages, women tend to go without so that their children may eat, with all the health implications this brings for them. Since poverty and climate change are closely linked, the poorest and most disadvantaged groups often depend on climate-sensitive livelihoods like agriculture, which makes them disproportionately vulnerable to climate change. These groups lack the resources, such as better housing and drought-resistant crops, required to weather severe climatic effects. This diminished adaptive capacity makes them even more vulnerable, pushing them to take part in unsustainable environmental practices such as deforestation in order to maintain their well-being. The extent to which people are impacted by climate change is partially a function of their social status, power, poverty, and access to and control over resources. Women are more vulnerable to the influences of climate change since they make up the bulk of the world's poor and are more dependent for their livelihood on natural resources that are threatened by climate change. Limited mobility combined with unequal access to resources and to decision-making processes places women in rural areas in a position where they are disproportionately affected by climate change. Three main arguments are made in relation to women and climate change: first, that women need special attention because they are the poorest of the poor; second, that they have a higher mortality rate during natural disasters caused by climate change; and third, that women are more environmentally conscious. While the first two refer mainly to women in the South, the last is especially apparent in the literature on gender and climate change in the North.
The feminization of poverty has been used to illustrate differences between male and female poverty in a given context as well as changes in male and female poverty over time. Typically, this approach has fed the perception that female-headed households, however defined, tend to be poorer than other households. Women are clearly more disadvantaged than men by poor household infrastructure, such as the lack of piped water and efficient energy sources, according to Gammage. Femonomics In addition to earning less, women may encounter "femonomics", or the gender of money, a term created by Reeta Wolfsohn, CMSW, to reflect the many inequities women face that increase their likelihood of suffering from financial difficulties. Women have unique healthcare and access problems related to reproduction, increasing both their healthcare costs and risks. Research also suggests that females tend to live five years longer on average than men in the United States. The death of a spouse is an important determinant of female old-age poverty, as it leaves women in charge of the finances. However, women are more likely to be financially illiterate and thus have a harder time knowing how to manage their money. In 2009 Gornick et al. found that older women (over 60) were typically much wealthier than their national average in Germany, the US, the UK, Sweden and Italy (data from 1999 to 2001). In the US their wealth holdings were four times the national median. Health Women in poverty have reduced access to health care services and resources. Being able to have good health, including reproductive health, be adequately nourished, and have proper shelter can make an enormous difference to their lives. Gender inequality in society prevents women from utilizing care services and therefore puts women at risk of poor health, nutrition, and severe diseases. Women in poverty are also more vulnerable to sexual violence and the risk of HIV/AIDS, as they are less able to defend themselves from influential people who might sexually abuse them. HIV transmission adds to the stigma and social risk for women and girls. Other ailments such as malnutrition and parasite burden can weaken the mother and create a dangerous environment, making sex, birth, and maternal care riskier for poor women. In Korea, poor health is a key factor in household poverty. Women as a solution to poverty Because financial aid programs for impoverished families often assume that only women are responsible for the maintenance of a household and for caring for children, the burden may fall on women to ensure this financial aid is properly managed. Such programs also tend to assume that women all have the same social standing and needs, even though this is not the case. This effect is exacerbated by the increased number of NGOs targeting solely female development. Women are expected to maintain the household as well as lift the family out of poverty, responsibilities which can add to the burden of poverty that females face in developing nations. In many areas, Conditional Cash Transfer (CCT) programs provide direct financial assistance to women with the goal of lifting them out of poverty, but they often end up limiting women's income-earning potential.
The programs typically expect women to be responsible for the health and educational outcomes of their children, and also require them to complete other program activities that do not allow them the time to pursue vocational or educational opportunities that would result in higher income-earning potential. Forms of poverty Decision-making power Decision-making power is central to the bargaining position of women within the household. It shapes how women and men make decisions that affect the entire household unit. However, women and men often have very different priorities when it comes to determining what is most important for the family. Factors that determine which member of the household has the most power in decision-making vary across cultures, but in most countries there is extreme gender inequality in the household. Men of the household usually have the power to determine what choices are made regarding women's health, their ability to visit friends and family, and household expenditures. The ability of women to make choices about their own health affects both women's and children's health. How household expenditures are decided affects women's and children's education, health, and well-being. Women's freedom of mobility affects their ability to provide for their own needs as well as for the needs of their children. Gender discrimination within households is often rooted in patriarchal biases against the social status of women. Major determinants of household bargaining power include control of income and assets, age, and access to and level of education. As women's decision-making power increases, the welfare of their children and the family in general benefits. Women who achieve greater education are also more likely to be concerned with their children's survival, nutrition, and school attendance. Disparate income Lack of income is a principal reason for women's risk of poverty. Income deprivation prevents women from attaining resources and converting their monetary resources into socioeconomic status. Not only does higher income allow greater access to job skills; obtaining more job skills raises income as well. As women earn less income than men and struggle to access public benefits, they are deprived of basic education and health care, which eventually becomes a cycle that debilitates women's ability to earn higher income. Energy poverty Lack of assets According to Martha Nussbaum, one central human functional capability is being able to hold property in both land and movable goods. In various nations, women are not full equals under the law, which means they do not have the same property rights as men; the rights to make a contract; or the rights of association, mobility, and religious liberty. Assets are primarily owned by husbands or are used for household production or consumption, neither of which help women with loan repayments. In order to repay their loans, women are usually required to undergo the 'disempowering' process of having to work harder as wage laborers, while also encountering a growing gendered resource divide at the domestic level. Among the major factors pushing women into greater poverty are limited opportunities, capabilities, and empowerment in terms of access to and control over production resources of land, labor, human capital assets including education and health, and social capital assets such as participation at various levels, legal rights, and protection.
Time poverty Time is a component that is included in poverty because it is an essential resource that is oftentimes distributed inequitably across individuals, especially in the context of the inadequacy of other resources. It is extremely relevant to gender, with a marked difference in gender roles and responsibilities observed across the world. Women are consistently more time-poor than men across the income distribution. Women concentrate on reproductive or unremunerated activities, while men concentrate on productive or compensated activities. Women generally face more limited access to leisure and work more hours in the sum of productive and reproductive work than do men. Time poverty can be interpreted in terms of the lack of sufficient time to rest and sleep. The greater the time devoted to paid or unremunerated work, the less time there is available for other activities such as relaxation and pleasure. A person who lacks adequate time to sleep and rest lives and works in a state of 'time poverty'. The allocation of time between women and men in the household and in the economy is a major gender issue in the evolving discourse on time poverty. According to the capabilities approach, any inquiry into people's well-being must involve asking not only how much people make but also how they manage their time in order to obtain the goods and services to meet their livelihoods. Time poverty is a serious constraint on individual well-being as it prevents having sufficient rest and sleep, enjoying leisure, and taking part in community or social life. Capability deprivations Over the last twenty-five years, feminist research has consistently stressed the importance of more holistic conceptual frameworks to encapsulate gendered privation. These include 'capability' and 'human development' frameworks, which identify factors such as deprivations in education and health; 'livelihoods' frameworks, which take account of social as well as material assets; 'social exclusion' perspectives, which highlight the marginalization of the poor; and frameworks which stress the significance of subjective dimensions of poverty such as self-esteem, dignity, choice, and power. Common characterizations of the 'feminization of poverty' include the following: a higher share of women than of men are poor; women undergo greater depth or severity of poverty than men; women are likely to experience more persistent and longer-term poverty than men; women's disproportionate burden of poverty is increasing relative to men's; women face more difficulties in lifting themselves out of poverty; and women-headed households are the 'poorest of the poor'. Deprivation of health outcomes Poor women are more vulnerable to chronic diseases because of material deprivation and psychosocial stress, higher levels of risk behavior, unhealthy living conditions and limited access to good quality healthcare. Women in poverty are more susceptible to disease because they are less well-nourished and healthy than men and more vulnerable to physical violence and sexual abuse. Being able to have good health, including reproductive health, be adequately nourished, and have adequate shelter can make an enormous difference to their lives. Violence against women is a major contributing factor to HIV infection. Stillwaggon argues that in sub-Saharan Africa the high risk of HIV transmission associated with poverty adds to the stigma and social risk for women and girls in particular.
Poverty and its correlates like malnutrition and parasite burden can weaken the host and create a dangerous environment, making sex, birth, and medical care riskier for poor women. Social and cultural exclusions Other metrics can be used besides the poverty line to see whether or not people are impoverished in their respective countries. The concept of social and cultural exclusion helps to better convey poverty as a process that involves multiple agents. Many developing countries have social and cultural norms that prevent women from having access to formal employment. Especially in parts of Asia, North Africa, and Latin America, cultural and social norms allow women neither much productive labor outside the home nor an economic bargaining position within the household. This social inequality deprives women of capabilities, particularly employment, which leads to women having a higher risk of poverty. The resulting occupational gender segregation and widening of the gender wage gap increase women's susceptibility to poverty. Measures of poverty An important aspect of analyzing the feminization of poverty is the understanding of how it is measured. It is inaccurate to assume that income is the only deprivation that affects women's poverty. To examine the issue from a multidimensional perspective, there must first be accurate indices available for policy makers interested in gender empowerment. Aggregate indices are often criticized for their concentration on monetary issues, especially when data on women's income is sparse, and for grouping women into one large, undifferentiated mass. Three indexes often examined are the Gender-related Development Index (GDI), the Gender Empowerment Measure (GEM), and the Human Poverty Index (HPI). The first two are gendered indices, in that they specifically gather data on women to evaluate gender inequalities, and are useful in understanding disparities in gender opportunities and choices. The HPI, however, focuses on deprivation measures rather than income measures. The GDI adjusts the Human Development Index along three dimensions: longevity, or the life expectancy of females and males; education or knowledge; and a decent standard of living. The aim of this index is to rank countries according to both their absolute level of human development and their relative scores on gender equality. Although this index has increased government attention to gender inequality and development, its three measures have often been criticized for neglecting important aspects. Its relevance, however, continues to be integral to the understanding of the feminization of poverty, as countries with lower scores may then be stimulated to focus on policies to assess and reduce gender disparities. The GEM measures female political and income opportunities through the number of government seats occupied by women, the proportion of management positions occupied by women, the female share of jobs, and the estimated female-to-male income ratio. The HPI is a multidimensional, non-income-based approach that takes into consideration four dimensions: survival, knowledge, a decent standard of living, and social participation. This index is useful in understanding and illuminating the differences between human poverty (which focuses on the denial of basic rights, such as dignity and freedom) and income poverty. For example, despite the U.S.'s high income stability, it is also ranked among the highest developed nations in human poverty.
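To make the structure of such composite indexes concrete, the following Python sketch works through a hypothetical calculation. It is a simplified illustration, not the official UNDP methodology: it assumes the post-2014 convention in which a gender development index is computed as the ratio of a female HDI to a male HDI, with each HDI taken as the geometric mean of normalised health, education, and income indices. The input values are invented for demonstration and the function and variable names are not part of any official toolkit.

# Illustrative sketch of a GDI-style calculation using invented numbers.
# Assumption: post-2014 UNDP convention where GDI = female HDI / male HDI,
# and each HDI is the geometric mean of three normalised dimension indices.

def hdi(health: float, education: float, income: float) -> float:
    """Return the geometric mean of three dimension indices scaled to 0-1."""
    return (health * education * income) ** (1 / 3)

# Hypothetical dimension indices for one country (not real data).
female = {"health": 0.85, "education": 0.68, "income": 0.52}
male = {"health": 0.80, "education": 0.74, "income": 0.71}

hdi_female = hdi(**female)
hdi_male = hdi(**male)
gdi = hdi_female / hdi_male  # a value below 1 signals female disadvantage

print(f"Female HDI: {hdi_female:.3f}")  # about 0.670
print(f"Male HDI:   {hdi_male:.3f}")    # about 0.749
print(f"GDI:        {gdi:.3f}")         # about 0.894

Run on these invented figures, the sketch yields a GDI below 1, which under this convention would indicate a human-development gap to women's disadvantage; the older three-dimension GDI formulation described above instead penalised the combined index directly for gender disparity.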
In her article, "Towards a Gendered Human Poverty Measure", Elizabeth Durbin critiques HPI and expands on the possibility of a gender-sensitive index. She argues that HPI incorporates three dimensions of poverty: life span measured by the proportion of the population expected to die before age 40, lack of knowledge measured by the proportion who are illiterate, and a decent standard of living measured by a composite index of access to health services, access to safe water, and malnutrition among children less than 5, that could specifically account for gender disparities. Despite its uses, however, it is important to note that HPI cannot be a true measure of poverty because it fails to examine certain deprivations, such as lack of property ownership and credit, that are essential to a stronger bargaining position in the household for women. Religion Within many of the major religious groups in the world, focus is placed upon traditional gender roles and each individual's duty. Many devout followers of each religion have used their respective religious texts or rulings to further the poverty cycle of women around the world. Islam In a 2004 report by the Norwegian Institute for Urban and Regional Research, Muslim women were found more likely to work part-time jobs than Muslim men because of their religion's emphasis on the role of women as caregivers and housekeepers. The study found that these women are more likely to be financially dependent than men because they choose to participate less in the labor market. Muslim women who choose to wear traditional female Muslim accessories such as henna and hijabs may have a more difficult time finding employment than those who do not wear such clothing. On the local level, a woman was fired from a Jiffy Lube for refusing to remove her hijab at work because it violated the company's "no hat" rule. In the 2008 case Webb versus Philadelphia, the court ruled that an officer wearing her hijab with her uniform, was in violation of the states' standard of neutrality. Because of the violation of this standard, she was not allowed to legally wear the hijab while on duty. Traditional Judaism Under traditional Halachic law, Jewish women are also considered to be household caregivers rather than breadwinners. Within the Jewish text, the Mishnah, it states "she should fill for him his cup, make ready his bed and wash his face, hands and feet," when describing the role of women under Jewish law. Christianity Certain sects of Christianity also regard women as more family-oriented than men. Female poverty by region Many developing countries in the world have exceptionally high rates of women under the poverty line. Many countries in Asia, Africa, and parts of Europe deprive women of access to higher income and important capabilities. Women in these countries are disproportionately put at the highest risk of poverty and continue to face social and cultural barriers that prevent them from escaping poverty. East Asia Although China has grown tremendously in its economy over the past years, its economic growth has had minimal effect on decreasing the number of women below the poverty line. Economic growth did not reduce gender gaps in income or provide more formal employment opportunities for women. Instead, China's economic growth increased its use of informal employment, which has affected women disproportionately. In the Republic of Korea, low wages for women helped instigate an economic growth in Korea since low-cost exports were mostly produced by women. 
As in China, Korean women mostly had access only to informal employment, which deprived them of financial stability and safe working environments. Although women in East Asia had greater access to employment, they faced job segregation in export industries, which placed them at a high risk of poverty. China is a country with a long history of gender discrimination. In order to address gender inequality issues, Chinese leaders have created more access for women to obtain capabilities. As a result, Chinese women are granted greater access to health services, employment opportunities, and general recognition for their important contributions to the economy and society.

Africa
Women in Africa face considerable barriers to achieving economic equality with their male counterparts due to a general lack of property rights, access to credit, education and technical skills, health, protection against gender-based violence, and political power. Although women work 50% longer workdays than men, they receive two-thirds of the pay of their male counterparts and hold only 40% of formal salaried jobs. The longer workdays can be attributed to the cultural expectation that women perform unpaid labor such as gathering firewood, drawing water, childcare, eldercare, and housework. Women face greater challenges in finding employment because of their lack of education. According to Montenegro and Patrinos, one additional year of primary, secondary, and tertiary school can increase future wages by 17.5%, 12.7%, and 21.3% respectively. Due to factors such as child marriage, early pregnancy, and cultural norms, only 21% of girls complete tertiary school. Without formal property rights, women in Africa own only 15% of the land, which leaves them more vulnerable to economic dependence on male family members or partners and diminishes their ability to use property to access financial systems such as banks and loans. As a result of having less economic power, women are generally more vulnerable to gender-based violence and the risk of HIV/AIDS.

Morocco
Women, especially in rural areas, represent the dominant face of poverty in Morocco. Two major methods have been used to measure poverty in Morocco: the 'classic approach' and a second approach that relates more closely to the capabilities approach. The 'classic approach' uses the poverty line to statistically determine the impoverished population. This approach quantifies the number of poor individuals and households but does not take into account how the impoverished population lacks basic needs such as housing, food, health and education. The second approach focuses on satisfying this lack of basic needs and emphasizes the multidimensional nature of poverty. Moroccan women represent the most economically insecure social group in the country. One in six Moroccan households is headed by a lone mother, and these represent the most impoverished households in the country. Women face the highest levels of socio-economic and legal constraints, which exclude them from obtaining their basic needs. Although recent surveys show that women actively help in providing for their families economically, Moroccan legal texts discourage women's participation in economic productivity. Article 114 of the Moroccan Family Law states, "every human being is responsible for providing for his needs by his own powers except the wife whose needs will be taken care of by her husband."
The patriarchal social structure of Morocco positions women as inferior to men in all aspects. Women are denied equal opportunities in education and employment before the law, as well as access to resources. As a result, the female population in Morocco suffers from deprivation of capabilities. Young girls are often excluded from educational opportunities due to limited financial resources within the household and the burden of household chores expected of them. Over time, Moroccan women have gained more access to employment. However, this quantitative increase in labor participation for women has not been accompanied by higher qualitative standards of labor. The labor of rural women in Morocco remains unacknowledged and unpaid. Women are placed at a higher risk of poverty as this domestic workload is added to their unpaid labor. The balance of domestic labor and work outside the home imposes a burden on rural women. Since the socioeconomic exclusion of women deprives them of the capabilities to be educated and trained for certain employment skills, their susceptibility to poverty is heightened. Women's low educational skills relate directly to the limited employment options they have in society. Although both men and women are affected by unemployment, women are more likely to lose their jobs than men. Recent research in Morocco shows that economic recessions in the country affect women the most.

United Kingdom
An investigation of women below the poverty line in the United Kingdom between 1959 and 1984 found a substantial increase in the percentage of women in poverty in the 1960s. The percentage remained relatively constant in the 1970s, and then decreased between 1979 and 1984. The increase of women below the poverty line in the 1960s was attributed to an increase in the number of women living in one-sex households. This was more adverse for black women than for white women.

Dominican Republic
Dominican women generally make forty-four cents on the dollar compared to men. This wage gap often leads to a high level of food insecurity among women in the Dominican Republic. Those in poverty are more likely to participate in dangerous behaviors such as unprotected sex and drug use. These behaviors put them at a greater risk of contracting HIV and other diseases. There is a negative stigma around HIV-positive women in the Dominican Republic. For this reason, women are more likely to be subjected to health screenings when applying for a job. If the screening reveals a person is HIV-positive, they are less likely to be given employment.

United States
In 2016, 14.0% of women and 11.3% of men were below the poverty threshold. The 2016 poverty threshold was $12,228 for single people and $24,339 for a family of four with two children. In response, the United States government provides financial assistance to those with low earnings. In 2015, 23.2% of women were given financial assistance compared with 19.3% of men. More women than men are given financial assistance in all government programs (Medicaid, SNAP, housing assistance, SSI, TANF/GA). Women received 86% of child support in 2013.

India
The poverty that women experience in India is known as human poverty: inadequate food, housing, education, healthcare, and sanitation, poor developmental policies, and more.
Poverty has been prevalent in India for many years, but there was a noticeable increase after globalization in 1991, when the International Monetary Fund imposed a structural adjustment program (SAP) as a condition of giving India a loan. Large amounts of capital flowed into the country, but this also led to the exploitation of the Indian market, particularly of women for their cheap labor. This reduced their opportunities for education and for escape from the poverty trap. The Indian Constitution proclaims that all citizens have equal rights, but this is not always practiced by all Indians. Sex-selective abortion is a widespread phenomenon in India, in which boys are preferentially selected. In order to get married, it is normal for the woman's family to pay a dowry to the man's family. This leads to more sex-selective abortion, as girls are more costly for the family, and to less focus on female development.

Home life
Women in India are restricted by the heavy dependence of social status on a woman's appearance and activity around the home. Poor behavior on their part results in lower social status and shame for the male head of the family. Women are expected to maintain the household on a strict schedule. Husbands often move to the city to find work and leave their wives as the primary earners in their absence. Women in these situations may resort to using favors or borrowing money in order to survive, which they must later repay in cash with interest. Young girls are especially vulnerable to prostitution or bribing as a form of repayment. Competition amongst women over water, food, and employment is also prevalent, especially in urban slums.

Employment
The expectation for Indian women is to be the sole caretaker and maintainer of the home. If women leave their children to work, the children are often left in the hands of a poor caretaker (possibly the eldest daughter) and do not get enough resources for development. In many areas, working outside of the home is seen as symbolic of low status. Upper-class women have similar social restrictions, although lower-class women frequently have a greater need for the added income than upper-class women. Men tend to send money back to extended family, whereas money that a woman makes goes to her husband. This reduces the incentive for a family to urge their daughter to find work, as they would not receive money but would face shame in society. Conceptual barriers prevent women from being accepted as equally paid and equally able laborers. In many ways women are seen as excess reserve labor and get pushed into roles regarded as dirty, unorganized, arduous, and underdeveloped. They are hurt by the mechanization of industries, and while self-employment is a viable option, there is always a large risk of failure and exploitation.

Healthcare
Healthcare is difficult for women to access, particularly elderly women. Public clinics are overcrowded and understaffed and involve high transportation costs, while private clinics are too expensive without insurance. Females are more likely to fall ill than males, although males receive medical advice more frequently. Women frequently feel as if they are a burden to their husband or son when they get sick and require money to purchase the correct medicines. Some believe that their symptoms are not serious or important enough to spend money on. When women do receive some form of care, medical providers are often biased against them and partial to treating men over women.
Many mothers also die during childbirth or pregnancy because they suffer from malnutrition and anemia. Over 50% of women in the National Family Health Surveys were anemic.

Nutrition
Poverty is a large source of malnutrition in women. Women in poverty are often not allowed to eat the nutritious food that men eat, even when it is available. While it is the women's job to obtain the food, it is fed to the men of the household. The 2005-2006 National Family Health Survey found that more men drink milk and eat fruit than women, and that less than 5% of women and girls in the states of Punjab, Haryana, and Rajasthan eat meat or eggs. Poor nutrition begins at a young age and gets worse as women mature and become mothers.

Education
Effective policies to aid in expanding female education are not productively enforced by the Government of India. Data from the 2001 census showed that primary school completion rates were around 62% for boys and 40% for girls. Teenage girls are generally taught how to care for their siblings and cook food rather than math or science. Some families may believe men to be more qualified than women to get a higher-paying job. In many instances this inequality between male and female education leads to child marriage, teenage pregnancies, and a male-dominated household. Evidence suggests that educating girls results in reduced fertility, due to an urge to work and pursue higher social status, which lessens the financial burden on families.

Policies

Conditional cash transfer
Conditional cash transfer is a possible policy for addressing current and intergenerational poverty in which poor women play a central role. Women, in their role as mothers, bear the additional work burdens these programs impose. Conditional cash transfers are therefore not ideal for addressing single-mother poverty.

Microcredit
Microcredit can be a potential policy for assisting poor women in developing countries. Microcredit is a tool designed to alleviate poverty, given that women living in developing countries have very few resources and connections for survival because they lack a solid financial foundation.

Welfare reform in the U.S.
In light of welfare reforms as of 2001, federal legislation required recipients of welfare (mainly aid to families) to participate in educational or vocational schooling and to work part-time in order to receive benefits. Recipients attending college now have three years to complete their degree, a limit intended to move people into work as quickly as possible. To move toward a system of reward, Mojisola Tiamiyu and Shelley Mitchell suggest implementing child care services to promote employment. Women with children often work in low-paying or part-time jobs that are insufficient to raise a family. Single parenting in the United States has increased, with 1 in 4 families now headed by a single parent. It is estimated that children living in single-parent homes are as much as 4 times more likely to become impoverished (see juvenilization of poverty).

Further reading
Capturing Women's Multidimensional Experiences of Extreme Poverty
Why many of the hungry are women
Gentrification Is a Feminist Issue: The Intersection of Class, Race, Gender and Housing
0.782097
0.982239
0.768207
Paleontology
Paleontology, also spelled palaeontology or palæontology, is the scientific study of life that existed prior to the start of the Holocene epoch (roughly 11,700 years before present). It includes the study of fossils to classify organisms and study their interactions with each other and their environments (their paleoecology). Paleontological observations have been documented as far back as the 5th century BC. The science became established in the 18th century as a result of Georges Cuvier's work on comparative anatomy, and developed rapidly in the 19th century. The term, in use since 1822, is formed from Greek palaios ("old, ancient"), on (genitive ontos, "being, creature"), and logos ("speech, thought, study"). Paleontology lies on the border between biology and geology, but it differs from archaeology in that it excludes the study of anatomically modern humans. It now uses techniques drawn from a wide range of sciences, including biochemistry, mathematics, and engineering. Use of all these techniques has enabled paleontologists to discover much of the evolutionary history of life, almost back to when Earth became capable of supporting life, nearly 4 billion years ago. As knowledge has increased, paleontology has developed specialised sub-divisions, some of which focus on different types of fossil organisms while others study ecology and environmental history, such as ancient climates. Body fossils and trace fossils are the principal types of evidence about ancient life, and geochemical evidence has helped to decipher the evolution of life before there were organisms large enough to leave body fossils. Estimating the dates of these remains is essential but difficult: sometimes adjacent rock layers allow radiometric dating, which provides absolute dates that are accurate to within 0.5%, but more often paleontologists have to rely on relative dating by solving the "jigsaw puzzles" of biostratigraphy (arrangement of rock layers from youngest to oldest). Classifying ancient organisms is also difficult, as many do not fit well into the Linnaean taxonomy classifying living organisms, and paleontologists more often use cladistics to draw up evolutionary "family trees". The final quarter of the 20th century saw the development of molecular phylogenetics, which investigates how closely organisms are related by measuring the similarity of the DNA in their genomes. Molecular phylogenetics has also been used to estimate the dates when species diverged, but there is controversy about the reliability of the molecular clock on which such estimates depend.

Overview
The simplest definition of "paleontology" is "the study of ancient life". The field seeks information about several aspects of past organisms: "their identity and origin, their environment and evolution, and what they can tell us about the Earth's organic and inorganic past".

Historical science
William Whewell (1794–1866) classified paleontology as one of the historical sciences, along with archaeology, geology, astronomy, cosmology, philology and history itself: paleontology aims to describe phenomena of the past and to reconstruct their causes. Hence it has three main elements: description of past phenomena; developing a general theory about the causes of various types of change; and applying those theories to specific facts.
When trying to explain the past, paleontologists and other historical scientists often construct a set of one or more hypotheses about the causes and then look for a "smoking gun", a piece of evidence that strongly accords with one hypothesis over any others. Sometimes researchers discover a "smoking gun" by a fortunate accident during other research. For example, the 1980 discovery by Luis and Walter Alvarez of iridium, a mainly extraterrestrial metal, in the Cretaceous–Paleogene boundary layer made asteroid impact the most favored explanation for the Cretaceous–Paleogene extinction event – although debate continues about the contribution of volcanism. A complementary approach to developing scientific knowledge, experimental science, is often said to work by conducting experiments to disprove hypotheses about the workings and causes of natural phenomena. This approach cannot prove a hypothesis, since some later experiment may disprove it, but the accumulation of failures to disprove is often compelling evidence in favor. However, when confronted with totally unexpected phenomena, such as the first evidence for invisible radiation, experimental scientists often use the same approach as historical scientists: construct a set of hypotheses about the causes and then look for a "smoking gun".

Related sciences
Paleontology lies between biology and geology since it focuses on the record of past life, but its main source of evidence is fossils in rocks. For historical reasons, paleontology is part of the geology department at many universities: in the 19th and early 20th centuries, geology departments found fossil evidence important for dating rocks, while biology departments showed little interest. Paleontology also has some overlap with archaeology, which primarily works with objects made by humans and with human remains, while paleontologists are interested in the characteristics and evolution of humans as a species. When dealing with evidence about humans, archaeologists and paleontologists may work together – for example, paleontologists might identify animal or plant fossils around an archaeological site to learn about the people who lived there and what they ate, or they might analyze the climate at the time of habitation. In addition, paleontology often borrows techniques from other sciences, including biology, osteology, ecology, chemistry, physics and mathematics. For example, geochemical signatures from rocks may help to discover when life first arose on Earth, and analyses of carbon isotope ratios may help to identify climate changes and even to explain major transitions such as the Permian–Triassic extinction event. A relatively recent discipline, molecular phylogenetics, compares the DNA and RNA of modern organisms to re-construct the "family trees" of their evolutionary ancestors. It has also been used to estimate the dates of important evolutionary developments, although this approach is controversial because of doubts about the reliability of the "molecular clock". Techniques from engineering have been used to analyse how the bodies of ancient organisms might have worked, for example the running speed and bite strength of Tyrannosaurus, or the flight mechanics of Microraptor. It is relatively commonplace to study the internal details of fossils using X-ray microtomography. Paleontology, biology, archaeology, and paleoneurobiology combine to study endocranial casts (endocasts) of species related to humans to clarify the evolution of the human brain.
Paleontology even contributes to astrobiology, the investigation of possible life on other planets, by developing models of how life may have arisen and by providing techniques for detecting evidence of life.

Subdivisions
As knowledge has increased, paleontology has developed specialised subdivisions. Vertebrate paleontology concentrates on fossils from the earliest fish to the immediate ancestors of modern mammals. Invertebrate paleontology deals with fossils such as molluscs, arthropods, annelid worms and echinoderms. Paleobotany studies fossil plants, algae, and fungi. Palynology, the study of pollen and spores produced by land plants and protists, straddles paleontology and botany, as it deals with both living and fossil organisms. Micropaleontology deals with microscopic fossil organisms of all kinds. Instead of focusing on individual organisms, paleoecology examines the interactions between different ancient organisms, such as their food chains, and the two-way interactions with their environments. For example, the development of oxygenic photosynthesis by bacteria caused the oxygenation of the atmosphere and hugely increased the productivity and diversity of ecosystems. Together, these led to the evolution of complex eukaryotic cells, from which all multicellular organisms are built. Paleoclimatology, although sometimes treated as part of paleoecology, focuses more on the history of Earth's climate and the mechanisms that have changed it – which have sometimes included evolutionary developments, for example the rapid expansion of land plants in the Devonian period removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus helping to cause an ice age in the Carboniferous period. Biostratigraphy, the use of fossils to work out the chronological order in which rocks were formed, is useful to both paleontologists and geologists. Biogeography studies the spatial distribution of organisms, and is also linked to geology, which explains how Earth's geography has changed over time.

History
Although paleontology became established around 1800, earlier thinkers had noticed aspects of the fossil record. The ancient Greek philosopher Xenophanes (570–480 BCE) concluded from fossil sea shells that some areas of land were once under water. During the Middle Ages the Persian naturalist Ibn Sina, known as Avicenna in Europe, discussed fossils and proposed a theory of petrifying fluids on which Albert of Saxony elaborated in the 14th century. The Chinese naturalist Shen Kuo (1031–1095) proposed a theory of climate change based on the presence of petrified bamboo in regions that in his time were too dry for bamboo. In early modern Europe, the systematic study of fossils emerged as an integral part of the changes in natural philosophy that occurred during the Age of Reason. In the Italian Renaissance, Leonardo da Vinci made various significant contributions to the field as well as depicting numerous fossils. Leonardo's contributions are central to the history of paleontology because he established a line of continuity between the two main branches of paleontology: ichnology and body fossil paleontology. He identified the following:
The biogenic nature of ichnofossils, i.e.
ichnofossils were structures left by living organisms;
The utility of ichnofossils as paleoenvironmental tools: certain ichnofossils show the marine origin of rock strata;
The importance of the neoichnological approach: recent traces are a key to understanding ichnofossils;
The independence and complementary evidence of ichnofossils and body fossils: ichnofossils are distinct from body fossils, but can be integrated with body fossils to provide paleontological information.
At the end of the 18th century Georges Cuvier's work established comparative anatomy as a scientific discipline and, by proving that some fossil animals resembled no living ones, demonstrated that animals could become extinct, leading to the emergence of paleontology. The expanding knowledge of the fossil record also played an increasing role in the development of geology, particularly stratigraphy. In the early 19th century, Cuvier proved that the different levels of deposits represented different time periods. The surface-level deposits in the Americas contained later mammals like the megatheriid ground sloth Megatherium and the mammutid proboscidean Mammut (later known informally as a "mastodon"), which were some of the earliest-named fossil mammal genera with official taxonomic authorities; they are today known to date to the Neogene-Quaternary. The deeper-level deposits in western Europe contain earlier mammals such as the palaeothere perissodactyl Palaeotherium and the anoplotheriid artiodactyl Anoplotherium, both of which were described soon after the former two genera and which are today known to date to the Paleogene period. Cuvier determined that even older than the two levels of deposits with extinct large mammals was one that contained an extinct "crocodile-like" marine reptile, which eventually came to be known as the mosasaurid Mosasaurus of the Cretaceous period. The first half of the 19th century saw geological and paleontological activity become increasingly well organised with the growth of geologic societies and museums and an increasing number of professional geologists and fossil specialists. Interest increased for reasons that were not purely scientific, as geology and paleontology helped industrialists to find and exploit natural resources such as coal. This contributed to a rapid increase in knowledge about the history of life on Earth and to progress in the definition of the geologic time scale, largely based on fossil evidence. Although she was rarely recognised by the scientific community, Mary Anning was a significant contributor to the field of palaeontology during this period; she uncovered multiple novel Mesozoic reptile fossils and deduced that what were then known as bezoar stones are in fact fossilised faeces. In 1822 Henri Marie Ducrotay de Blainville, editor of Journal de Physique, coined the word "palaeontology" to refer to the study of ancient living organisms through fossils. As knowledge of life's history continued to improve, it became increasingly obvious that there had been some kind of successive order to the development of life. This encouraged early evolutionary theories on the transmutation of species. After Charles Darwin published On the Origin of Species in 1859, much of the focus of paleontology shifted to understanding evolutionary paths, including human evolution, and evolutionary theory. The last half of the 19th century saw a tremendous expansion in paleontological activity, especially in North America.
The trend continued in the 20th century with additional regions of the Earth being opened to systematic fossil collection. Fossils found in China near the end of the 20th century have been particularly important as they have provided new information about the earliest evolution of animals, early fish, dinosaurs and the evolution of birds. The last few decades of the 20th century saw a renewed interest in mass extinctions and their role in the evolution of life on Earth. There was also a renewed interest in the Cambrian explosion that apparently saw the development of the body plans of most animal phyla. The discovery of fossils of the Ediacaran biota and developments in paleobiology extended knowledge about the history of life back far before the Cambrian. Increasing awareness of Gregor Mendel's pioneering work in genetics led first to the development of population genetics and then in the mid-20th century to the modern evolutionary synthesis, which explains evolution as the outcome of events such as mutations and horizontal gene transfer, which provide genetic variation, with genetic drift and natural selection driving changes in this variation over time. Within the next few years the role and operation of DNA in genetic inheritance were discovered, leading to what is now known as the "Central Dogma" of molecular biology. In the 1960s molecular phylogenetics, the investigation of evolutionary "family trees" by techniques derived from biochemistry, began to make an impact, particularly when it was proposed that the human lineage had diverged from apes much more recently than was generally thought at the time. Although this early study compared proteins from apes and humans, most molecular phylogenetics research is now based on comparisons of RNA and DNA.

Sources of evidence

Body fossils
Fossils of organisms' bodies are usually the most informative type of evidence. The most common types are wood, bones, and shells. Fossilisation is a rare event, and most fossils are destroyed by erosion or metamorphism before they can be observed. Hence the fossil record is very incomplete, increasingly so further back in time. Despite this, it is often adequate to illustrate the broader patterns of life's history. There are also biases in the fossil record: different environments are more favorable to the preservation of different types of organism or parts of organisms. Further, only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. Since most animal species are soft-bodied, they decay before they can become fossilised. As a result, although there are 30-plus phyla of living animals, two-thirds have never been found as fossils. Occasionally, unusual environments may preserve soft tissues. These lagerstätten allow paleontologists to examine the internal anatomy of animals that in other sediments are represented only by shells, spines, claws, etc. – if they are preserved at all. However, even lagerstätten present an incomplete picture of life at the time. The majority of organisms living at the time are probably not represented because lagerstätten are restricted to a narrow range of environments, e.g. where soft-bodied organisms can be preserved very quickly by events such as mudslides; and the exceptional events that cause quick burial make it difficult to study the normal environments of the animals.
The sparseness of the fossil record means that organisms are expected to exist long before and after they are found in the fossil record – this is known as the Signor–Lipps effect.

Trace fossils
Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilised hard parts, and they reflect organisms' behaviours. Also many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).

Geochemical observations
Geochemical observations may help to deduce the global level of biological activity at a certain period, or the affinity of certain fossils. For example, geochemical features of rocks may reveal when life first arose on Earth, and may provide evidence of the presence of eukaryotic cells, the type from which all multicellular organisms are built. Analyses of carbon isotope ratios may help to explain major transitions such as the Permian–Triassic extinction event.

Classifying ancient organisms
(Simple example cladogram: warm-bloodedness evolved somewhere in the synapsid–mammal transition, and must also have evolved independently at another point on the tree, an example of convergent evolution.)
Naming groups of organisms in a way that is clear and widely agreed is important, as some disputes in paleontology have been based just on misunderstandings over names. Linnaean taxonomy is commonly used for classifying living organisms, but runs into difficulties when dealing with newly discovered organisms that are significantly different from known ones. For example: it is hard to decide at what level to place a new higher-level grouping, e.g. genus or family or order; this is important since the Linnaean rules for naming groups are tied to their levels, and hence if a group is moved to a different level it must be renamed. Paleontologists generally use approaches based on cladistics, a technique for working out the evolutionary "family tree" of a set of organisms. It works by the logic that, if groups B and C have more similarities to each other than either has to group A, then B and C are more closely related to each other than either is to A. Characters that are compared may be anatomical, such as the presence of a notochord, or molecular, by comparing sequences of DNA or proteins. The result of a successful analysis is a hierarchy of clades – groups that share a common ancestor. Ideally the "family tree" has only two branches leading from each node ("junction"), but sometimes there is too little information to achieve this, and paleontologists have to make do with junctions that have several branches. The cladistic technique is sometimes fallible, as some features, such as wings or camera eyes, evolved more than once, convergently – this must be taken into account in analyses. Evolutionary developmental biology, commonly abbreviated to "Evo Devo", also helps paleontologists to produce "family trees", and understand fossils. For example, the embryological development of some modern brachiopods suggests that brachiopods may be descendants of the halkieriids, which became extinct in the Cambrian period.
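To make the grouping logic of cladistics concrete, the toy sketch below scores pairwise similarity from shared character states and groups the most similar pair first. The taxa and character matrix are invented for illustration; real cladistic analyses use many characters and formal parsimony or likelihood methods, not this shortcut.

```python
# Toy illustration of the cladistic grouping logic: taxa that share more
# character states are grouped as closer relatives. Taxa and characters here
# are hypothetical; this is not a substitute for a real phylogenetic analysis.
from itertools import combinations

characters = {          # 1 = character state present, 0 = absent
    "A": [1, 0, 0, 0, 1],
    "B": [1, 1, 1, 0, 1],
    "C": [1, 1, 1, 1, 1],
}

def similarity(x, y):
    """Count characters on which two taxa agree."""
    return sum(1 for a, b in zip(characters[x], characters[y]) if a == b)

pairs = {pair: similarity(*pair) for pair in combinations(characters, 2)}
closest = max(pairs, key=pairs.get)
print(pairs)    # {('A', 'B'): 3, ('A', 'C'): 2, ('B', 'C'): 4}
print(closest)  # ('B', 'C'): B and C group together relative to A
```

In a real analysis the shared states would also have to be identified as derived (not ancestral) and checked against convergence, which is exactly the caveat noted above for features such as wings or camera eyes.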
Estimating the dates of organisms
Paleontology seeks to map out how living things have changed through time. A substantial hurdle to this aim is the difficulty of working out how old fossils are. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires very careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to the element into which it decays shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are a few volcanic ash layers. Consequently, paleontologists must usually rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record, and has been compared to a jigsaw puzzle. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age must lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly next to one another. However, fossils of species that survived for a relatively short time can be used to link up isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age are found to have traces of E. pseudoplanus, they must have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and have a short time range to be useful. However, misleading results are produced if the index fossils turn out to have longer fossil ranges than first thought. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching up rocks of the same age across different continents. Family-tree relationships may also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved more than X million years ago. It is also possible to estimate how long ago two living clades diverged – i.e. approximately how long ago their last common ancestor must have lived – by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only a very approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two.

History of life
Earth formed about and, after a collision that formed the Moon about 40 million years later, may have cooled quickly enough to have oceans and an atmosphere about . There is evidence on the Moon of a Late Heavy Bombardment by asteroids from .
If, as seems likely, such a bombardment struck Earth at the same time, the first atmosphere and oceans may have been stripped away. Paleontology traces the evolutionary history of life back to over , possibly as far as . The oldest clear evidence of life on Earth dates to , although there have been reports, often disputed, of fossil bacteria from and of geochemical evidence for the presence of life . Some scientists have proposed that life on Earth was "seeded" from elsewhere, but most research concentrates on various explanations of how life could have arisen independently on Earth. For about 2,000 million years microbial mats, multi-layered colonies of different bacteria, were the dominant life on Earth. The evolution of oxygenic photosynthesis enabled them to play the major role in the oxygenation of the atmosphere from about . This change in the atmosphere increased their effectiveness as nurseries of evolution. While eukaryotes, cells with complex internal structures, may have been present earlier, their evolution speeded up when they acquired the ability to transform oxygen from a poison to a powerful source of metabolic energy. This innovation may have come from primitive eukaryotes capturing oxygen-powered bacteria as endosymbionts and transforming them into organelles called mitochondria. The earliest evidence of complex eukaryotes with organelles (such as mitochondria) dates from . Multicellular life is composed only of eukaryotic cells, and the earliest evidence for it is the Francevillian Group Fossils from , although specialisation of cells for different functions first appears between (a possible fungus) and (a probable red alga). Sexual reproduction may be a prerequisite for specialisation of cells, as an asexual multicellular organism might be at risk of being taken over by rogue cells that retain the ability to reproduce. The earliest known animals are cnidarians from about , but these are so modern-looking that they must be descendants of earlier animals. Early fossils of animals are rare because they had not developed mineralised, easily fossilized hard parts until about . The earliest modern-looking bilaterian animals appear in the Early Cambrian, along with several "weird wonders" that bear little obvious resemblance to any modern animals. There is a long-running debate about whether this Cambrian explosion was truly a very rapid period of evolutionary experimentation; alternative views are that modern-looking animals began evolving earlier but fossils of their precursors have not yet been found, or that the "weird wonders" are evolutionary "aunts" and "cousins" of modern groups. Vertebrates remained a minor group until the first jawed fish appeared in the Late Ordovician. The spread of animals and plants from water to land required organisms to solve several problems, including protection against drying out and supporting themselves against gravity. The earliest evidence of land plants and land invertebrates date back to about and respectively. Those invertebrates, as indicated by their trace and body fossils, were shown to be arthropods known as euthycarcinoids. The lineage that produced land vertebrates evolved later but very rapidly between and ; recent discoveries have overturned earlier ideas about the history and driving forces behind their evolution. Land plants were so successful that their detritus caused an ecological crisis in the Late Devonian, until the evolution of fungi that could digest dead wood. 
During the Permian period, synapsids, including the ancestors of mammals, may have dominated land environments, but this ended with the Permian–Triassic extinction event , which came very close to wiping out all complex life. The extinctions were apparently fairly sudden, at least among vertebrates. During the slow recovery from this catastrophe a previously obscure group, archosaurs, became the most abundant and diverse terrestrial vertebrates. One archosaur group, the dinosaurs, were the dominant land vertebrates for the rest of the Mesozoic, and birds evolved from one group of dinosaurs. During this time mammals' ancestors survived only as small, mainly nocturnal insectivores, which may have accelerated the development of mammalian traits such as endothermy and hair. After the Cretaceous–Paleogene extinction event killed off all the dinosaurs except the birds, mammals increased rapidly in size and diversity, and some took to the air and the sea. Fossil evidence indicates that flowering plants appeared and rapidly diversified in the Early Cretaceous between and . Their rapid rise to dominance of terrestrial ecosystems is thought to have been propelled by coevolution with pollinating insects. Social insects appeared around the same time and, although they account for only small parts of the insect "family tree", now form over 50% of the total mass of all insects. Humans evolved from a lineage of upright-walking apes whose earliest fossils date from over . Although early members of this lineage had chimp-sized brains, about 25% as big as modern humans', there are signs of a steady increase in brain size after about . There is a long-running debate about whether modern humans are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species, or arose worldwide at the same time as a result of interbreeding.

Mass extinctions
Life on Earth has suffered occasional mass extinctions at least since . Despite their disastrous effects, mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of an ecological niche passes from one group of organisms to another, this is rarely because the new dominant group outcompetes the old, but usually because an extinction event allows a new group, which may possess an advantageous trait, to outlive the old and move into its niche. The fossil record appears to show that the rate of extinction is slowing down, with both the gaps between mass extinctions becoming longer and the average and background rates of extinction decreasing. However, it is not certain whether the actual rate of extinction has altered, since both of these observations could be explained in several ways:
The oceans may have become more hospitable to life over the last 500 million years and less vulnerable to mass extinctions: dissolved oxygen became more widespread and penetrated to greater depths; the development of life on land reduced the run-off of nutrients and hence the risk of eutrophication and anoxic events; marine ecosystems became more diversified so that food chains were less likely to be disrupted.
Reasonably complete fossils are very rare: most extinct organisms are represented only by partial fossils, and complete fossils are rarest in the oldest rocks. So paleontologists have mistakenly assigned parts of the same organism to different genera, which were often defined solely to accommodate these finds – the story of Anomalocaris is an example of this.
The risk of this mistake is higher for older fossils because these are often unlike parts of any living organism. Many "superfluous" genera are represented by fragments that are not found again, and these "superfluous" genera are interpreted as becoming extinct very quickly. Biodiversity in the fossil record, which is "the number of distinct genera alive at any given time; that is, those whose first occurrence predates and whose last occurrence postdates that time", shows a different trend: a fairly swift rise from , a slight decline from , in which the devastating Permian–Triassic extinction event is an important factor, and a swift rise from to the present.

Paleontology in the popular press
Books catering to the general public on paleontology include:
The Last Days of the Dinosaurs: An Asteroid, Extinction, and the Beginning of our World, written by Riley Black
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us, written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds, written by Thomas Halliday

See also
List of notable fossils
List of paleontologists
List of transitional fossils
Une Femme ou Deux - French screwball comedy romance film starring Gérard Depardieu as a paleontologist

External links
Smithsonian's Paleobiology website
University of California Museum of Paleontology
The Paleontological Society
The Palaeontological Association
The Society of Vertebrate Paleontology
The Paleontology Portal
"Geology, Paleontology & Theories of the Earth" - a collection of more than 100 digitised landmark and early books on Earth sciences at the Linda Hall Library
0.768906
0.999051
0.768176
11th century
The 11th century is the period from 1001 (represented by the Roman numerals MI) through 1100 (MC) in accordance with the Julian calendar, and the 1st century of the 2nd millennium. In the history of Europe, this period is considered the early part of the High Middle Ages. There was, after a brief ascendancy, a sudden decline of Byzantine power and a rise of Norman domination over much of Europe, along with the prominent role in Europe of notably influential popes. Christendom experienced a formal schism in this century which had been developing over previous centuries between the Latin West and Byzantine East, causing a split in its two largest denominations to this day: Roman Catholicism and Eastern Orthodoxy. In Song dynasty China and the classical Islamic world, this century marked the high point for both classical Chinese civilization, science and technology, and classical Islamic science, philosophy, technology and literature. Rival political factions at the Song dynasty court created strife amongst the leading statesmen and ministers of the empire. In Korea, the Goryeo Kingdom flourished and faced external threats from the Liao dynasty (Manchuria). In this century the Turkish Seljuk dynasty came to power in Western Asia over the now fragmented Abbasid realm, while the first of the Crusades was waged towards the close of the century. The Fatimid Caliphate in Egypt, the Ghaznavids, and the Chola dynasty in India had reached their zenith in military might and international influence. The Western Chalukya Empire (the Chola's rival) also rose to power by the end of the century. In Japan, the Fujiwara clan continued to dominate the affairs of state. In the Americas, the Toltec and Mixtec civilizations flourished in Central America, along with the Huari Culture of South America and the Mississippian culture of North America. The Tiwanaku Empire centered around Lake Titicaca collapsed in the first half of the century.

Overview
In European history, the 11th century is regarded as the beginning of the High Middle Ages, an age subsequent to the Early Middle Ages. The century began while the translatio imperii of 962 was still somewhat novel and ended in the midst of the Investiture Controversy. It saw the final Christianisation of Scandinavia and the emergence of the Peace and Truce of God movements, the Gregorian Reforms, and the Crusades, which revitalised a church and a papacy that had survived the tumultuous 10th century, though tarnished by it. In 1054, the Great Schism marked the political and religious culmination of this divide, with a formal split between the Western and Eastern churches. In Germany, the century was marked by the ascendancy of the Holy Roman Emperors, who hit their high-water mark under the Salians. In Britain, it saw the transformation of Scotland into a single, more unified and centralised kingdom and the Norman conquest of England in 1066. The social transformations wrought in these lands brought them into the fuller orbit of European feudal politics. In France, it saw the nadir of the monarchy and the zenith of the great magnates, especially the dukes of Aquitaine and Normandy, who could thus foster such distinctive contributions of their lands as the pious warrior who conquered Britain, Italy, and the East and the impious peacelover, the troubadour, who crafted out of the European vernacular its first great literary themes.
There were also the first figures of the intellectual movement known as Scholasticism, which emphasized dialectic arguments in disputes of Christian theology as well as classical philosophy. In Italy, the century began with the integration of the kingdom into the Holy Roman Empire, and the royal palace at Pavia was sacked in 1024. By the end of the century, Lombard and Byzantine rule in the Mezzogiorno had been usurped by the Normans and the power of the territorial magnates was being replaced by that of the citizens of the northern cities. In Northern Italy, a growth of population in urban centers gave rise to an early organized capitalism and more sophisticated, commercialized culture by the late 11th century, most notably in Venice. In Spain, the century opened with the successes of the last caliphs of Córdoba and ended in the successes of the Almoravids. In between was a period of Christian unification under Navarrese hegemony and success in the Reconquista against the taifa kingdoms that replaced the fallen caliphate. In Eastern Europe, there was a golden age for the principality of Kievan Rus. In China, there was a triangular affair of continued war and peace settlements between the Song dynasty, the Tangut-led Western Xia in the northwest, and the Khitans of the Liao dynasty in the northeast. Meanwhile, opposing political factions evolved at the Song imperial court of Kaifeng. The political reformers at court, called the New Policies Group (新法, Xin Fa), were led by Emperor Shenzong of Song and the Chancellors Fan Zhongyan and Wang Anshi, while the political conservatives were led by Chancellor Sima Guang and Empress Dowager Gao, regent of the young Emperor Zhezong of Song. Heated political debate and sectarian intrigue followed, while political enemies were often dismissed from the capital to govern frontier regions in the deep south, where malaria was known to be often fatal to northern Chinese people (see History of the Song dynasty). This period also represents a high point in classical Chinese science and technology, with figures such as Su Song and Shen Kuo, as well as the age in which the matured form of the Chinese pagoda was achieved in Chinese architecture. In Japan, the Fujiwara clan dominated central politics by acting as imperial regents, controlling the actions of the Emperor of Japan, who acted merely as a 'puppet monarch' during the Heian period. In Korea, the rulers of the Goryeo Kingdom were able to concentrate more central authority into their own hands than into those of the nobles, and were able to fend off two Khitan invasions with their armies. In the Middle East, the Fatimid Empire of Egypt reached its zenith only to face steep decline, much like the Byzantine Empire in the first half of the century. The Seljuks came to prominence while the Abbasid caliphs held traditional titles without real, tangible authority in state affairs. In India, the Chola dynasty reached its height of naval power under leaders such as Rajaraja Chola I and Rajendra Chola I, dominating southern India (Tamil Nadu), Sri Lanka, and regions of Southeast Asia. The Ghaznavid Empire would invade northwest India, an event that would pave the way for a series of later Muslim expansions into India. In Southeast Asia, the Pagan Kingdom reached its height of political and military power. The Khmer Empire would dominate Mainland Southeast Asia while Srivijaya would dominate Maritime Southeast Asia.
Further east, the Kingdom of Butuan, centered on the northern portion of Mindanao island, flourished as the dominant trading polity in the archipelago. In Vietnam, the Lý dynasty began, which would reach its golden era during the 11th century. In Nigeria, this century saw the formation of city states, kingdoms and empires, including the Hausa kingdoms and the Borno dynasty in the north, and the Oyo Empire and Kingdom of Benin in the south.

Events

1001–1009
1001: Mahmud of Ghazni, Muslim leader of Ghazni, begins a series of raids into Northern India; he finishes in 1027 with the destruction of Somnath.
c. 1001: Norsemen, led by Leif Eriksson, establish short-lived settlements in and around Vinland in North America.
1001–1008: Japanese Lady Murasaki Shikibu writes The Tale of Genji.
1001 ± 40 years: Baitoushan volcano, on what would be the Chinese-Korean border, erupts with a force of 6.5, the fourth largest Holocene blast.
1001: The ancient kingdom of Butuan, through its king, Rajah Kiling, made contact with the Chinese; the Song dynasty recorded the first appearance of a Butuan tributary mission, through Lijehan and Jiaminan, at the Chinese Imperial Court on March 17, 1001 AD.
1003: Robert II of France invades the Duchy of Burgundy, then ruled by Otto-William, Duke of Burgundy; the initial invasion is unsuccessful, but Robert II eventually gains the acceptance of the Roman Catholic Church in 1016 and annexes Burgundy into his realm.
1004: The Song dynasty court prohibits Butuan from exporting several items of their predilection due to issues over rules and regulations.
1004: The library and university Dar Al-Hekma is founded in Egypt under the Fatimids.
1005: The Treaty of Shanyuan is signed between the Chinese Song dynasty and the Khitan Liao dynasty.
1006: King Dharmawangsa's Mataram kingdom falls under the invasion of King Wurawari from Lwaram (very possibly a Srivijayan ally in Java).
1007: The Butuan king, Rajah Kiling, through the ambassador I-hsu-han, sent a formal memorial to the Song dynasty Imperial court requesting equal status with Champa, but the request was denied on the grounds that "Butuan is beneath Champa", Champa having been a tributary state since the 4th century.
1008: The Fatimid Egyptian sea captain Domiyat travels to the Buddhist pilgrimage site in Shandong, China, to seek out the Chinese Emperor Zhenzong of Song with gifts from his ruling Imam Al-Hakim bi-Amr Allah, successfully reopening diplomatic relations between Egypt and China that had been lost since the collapse of the Tang dynasty.
1009: Lý Thái Tổ overthrows the Anterior Lê dynasty of Vietnam, establishing the Lý dynasty.
1009–1010: The Lombard known as Melus of Bari leads an insurrection against the Byzantine Catepan of Italy, John Curcuas; the latter was killed in battle and replaced by Basil Mesardonites, who brought Byzantine reinforcements.

1010s
1010–1011: The Second Goryeo-Khitan War; the Korean king is forced to flee the capital temporarily, but the Khitan forces, unable to establish a foothold and fearing a counterattack, withdrew.
1011–1021: Ibn al-Haytham (Alhacen), a famous Iraqi scientist working in Egypt, feigns madness in fear of angering the Egyptian caliph Al-Hakim bi-Amr Allah, and is kept under house arrest from 1011 to 1021. During this time, he writes his influential Book of Optics.
1011: Under a new Rajah named Sri Bata Shaja, Butuan finally succeeded in attaining diplomatic equality with Champa, after the denial of its request four years earlier, by sending the flamboyant ambassador Likanhsieh to the Song dynasty court.
1013: Danish king Sweyn Forkbeard conquers England.
1014: The Byzantine armies of Basil II are victorious over Samuil of Bulgaria in the Battle of Kleidion.
1014: The Gaelic forces of Munster and most other Irish kingdoms under High King Brian Boru defeat a combined Leinster-Viking force in the Battle of Clontarf, but Brian Boru is killed at the end of the battle.
1014–1020: The Book of Healing, a vast philosophical and scientific encyclopaedia, is written by Avicenna, a Persian scholar.
1015: In the Battle of Nesjar in Oslofjord, Norway, the forces of Olav Haraldsson fought the forces of Sveinn Hákonarson, with a victory for Olav.
1018: The First Bulgarian Empire is conquered by the Byzantine Empire.
1018: The Byzantine armies of Basil Boioannes are victorious at the Battle of Cannae against the Lombards under Melus of Bari.
1018: The Third Goryeo-Khitan War; the Korean General Kang Kam-ch'an inflicted heavy losses on Khitan forces at the Battle of Kwiju. The Khitans withdrew and both sides signed a peace treaty.
1019: Airlangga establishes the Kingdom of Kahuripan.

1020s
1021: The ruling Fatimid Caliph Al-Hakim bi-Amr Allah disappears suddenly, possibly assassinated by his own sister Sitt al-Mulk, which leads to the open persecution of the Druze by Ismaili Shia; the Druze proclaim that Al-Hakim went into hiding (ghayba), whereupon he would return as the Mahdi savior.
1025: The Chola dynasty of India uses its naval powers to conquer the South East Asian kingdom of Srivijaya, turning it into a vassal.
1025: Ruler Rajendra Chola I moves the capital city of the empire from Thanjavur to Gangaikonda Cholapuram.
1025: Rajendra Chola, the Chola king from Cholamandala in South India, conquers Pannai and Kadaram from Srivijaya and occupies it for some time. The Cholas continue a series of raids and conquests of parts of the Srivijayan empire in Sumatra and the Malay Peninsula.
1028: The King of Srivijaya appeals to the Song dynasty Chinese, sending a diplomatic mission to their capital at Kaifeng.
1020s: The Canon of Medicine, a medical encyclopedia, is written by Avicenna, a Persian Muslim scholar.

1030s
1030: Stephen I of the Kingdom of Hungary defeats Conrad II of the Holy Roman Empire; after the war, Conrad ceded the lands between the rivers Leitha and Fischa to Hungary in the summer of 1031.
1030: The Battle of Stiklestad (Norway): Olav Haraldsson loses to his pagan vassals and is killed in the battle. He is later canonized and becomes the patron saint of Norway and Rex perpetuum Norvegiae ('the eternal king of Norway').
1030: The Sanghyang Tapak inscription, on the bank of the Cicatih River in Cibadak, Sukabumi, West Java, mentions the establishment of a sacred forest and of the Kingdom of Sunda (which lasted to 1579).
1033: An earthquake strikes the Jordan Valley, followed by a tsunami along the Mediterranean coast, killing tens of thousands.
1035: Raoul Glaber chronicles a devastating three-year famine induced by climatic changes in southern France.
1035: Canute the Great dies, and his kingdom of present-day Norway, England, and Denmark is split amongst three rivals to his throne.
1035: William Iron Arm ventures to the Mezzogiorno.
1037: Ferdinand I of León conquers the Kingdom of Galicia.

1040s
1040: Duncan I of Scotland is slain in battle. Macbeth succeeds him.
1041: Samuel Aba becomes King of Hungary. 1041: Airlangga divides Kahuripan into two kingdoms, Janggala and Kadiri, and abdicates in favour of his successors. 1042: The Normans establish Melfi as the capital of southern Italy. 1041–1048: Chinese artisan Bi Sheng invents ceramic movable type printing. 1043: The Byzantine Empire and Kievan Rus engage in a naval confrontation, although a later treaty is signed between the two parties that includes the marriage alliance of Vsevolod I of Kiev to a princess daughter of Constantine IX Monomachos. 1043: The Byzantine general George Maniaces, who had served in Sicily back in 1038, is proclaimed emperor by his troops while he is catepan of Italy; he leads an unsuccessful rebellion against Constantine IX Monomachos and is killed in battle in Macedonia during his march towards Constantinople. 1043: The Song dynasty Chancellor of China, Fan Zhongyan, and prominent official and historian Ouyang Xiu introduce the Qingli Reforms, which would be rescinded by the court in 1045 due to partisan resistance to reforms. 1043: The Kingdom of Nri of West Africa is said to have been founded in this year by Eze Nri Ìfikuánim. 1044: The Chinese Wujing Zongyao, written by Zeng Gongliang and Yang Weide, is the first book to describe gunpowder formulas; it also describes their use in warfare, such as blackpowder-impregnated fuses for flamethrowers, as well as an early form of the compass, a thermoremanence compass. 1044: Henry III of the Holy Roman Empire defeats the Kingdom of Hungary in the Battle of Ménfő; Peter Urseolo captures Samuel Aba after the battle and has him executed, restoring his own claim to the throne; the Kingdom of Hungary then briefly becomes a vassal of the Holy Roman Empire. 1045: The Zirids, a Berber dynasty of North Africa, break their allegiance with the Fatimid court of Egypt and recognize the Abbasids of Baghdad as the true caliphs. 1050s 1052: Fujiwara no Yorimichi converts the rural villa at Byōdō-in into a famous Japanese Buddhist temple. 1053: The Norman commander Humphrey of Hauteville is victorious in the Battle of Civitate against the Lombards and the papal coalition led by Rudolf of Benevento; Pope Leo IX himself is captured by the Normans. 1054: The Great Schism, in which the Western (Roman Catholic) and Eastern Orthodox churches separated from each other. Similar schisms in the past had been later repaired, but this one continues after nearly 1000 years. 1054: A large supernova is observed by astronomers, the remnants of which would form the Crab Nebula. 1054: The Battle of Atapuerca is fought between García V of Navarre and Ferdinand I of León. 1055: The Seljuk Turks capture Baghdad, taking the Buyid Emir Al-Malik al-Rahim prisoner. 1056: Ferdinand I of León, King of Castile and King of León, is crowned Imperator totius Hispaniae (Emperor of All Hispania). 1056: William II of England, son of William the Conqueror, is born. 1057: Anawrahta, ruler of the Pagan Kingdom, defeats the Mon city of Thaton, thus unifying all of Myanmar. 1057: Macbeth, king of Scotland, dies in battle against the future king Malcolm III. 1057: Invasion of the Banu Hilal: Kairouan is destroyed, the Zirids are reduced to a tiny coastal strip, and the remainder fragments into petty Bedouin emirates. 1060s 1061–1091: Norman conquest of Sicily in the Mediterranean Sea. 1064–1065: The Great German Pilgrimage, consisting of around 7,000 unarmed pilgrims, travels to Jerusalem under the leadership of Gunther of Bamberg.
1065: The Seljuks' first invasion of Georgia, under the leadership of Alp Arslan. 1065: Independence of the Kingdom of Galicia and Portugal under the rule of Garcia. 1066: In the Battle of Stamford Bridge, the last Anglo-Saxon king, Harold Godwinson, defeats his brother Tostig Godwinson and Harald III of Norway. 1066: Edward the Confessor dies; Harold Godwinson is killed in the Battle of Hastings, while the Norman William the Conqueror is crowned king of England. This is what most experts think of as the end of the Viking age. 1066: The Jewish vizier Joseph ibn Naghrela and many others are killed in the 1066 Granada massacre. 1068–1073: The reign of Japanese Emperor Go-Sanjō brings about a brief period in which central power is taken out of the hands of the Fujiwara clan. 1068: Virarajendra Chola begins sending military raids into Malaysia and Indonesia. 1068: The Seljuks devastate Georgia for the second time. 1069–1076: With the support of Emperor Shenzong of Song, Chancellor Wang Anshi of the Chinese Song dynasty introduces the 'New Policies', including the Baojia system of societal organization and militias, low-cost loans for farmers, taxes instead of corvée labor, government monopolies on tea, salt, and wine, reforming the land survey system, and eliminating the poetry requirement in the imperial examination system to gain bureaucrats of a more practical bent. 1070s 1070: The death of Athirajendra Chola and the accession of Kulothunga Chola I marks the transition between the Medieval Cholas and the Chalukya Cholas. 1071: Defeat of the Byzantine Empire at the Battle of Manzikert by the Seljuk army of Alp Arslan, ending three centuries of a Byzantine military and economic Golden Age. 1072: The Battle of Golpejera is fought between Sancho II of Castile and Alfonso VI of Castile. 1073: The Seljuk Turks capture Ankara from the Byzantines. 1074: The Seljuk Turks capture Jerusalem from the Fatimids and cut off pilgrim traffic. 1075: Henry IV suppresses the rebellion of Saxony in the First Battle of Langensalza. 1075: The Investiture Controversy is sparked when Pope Gregory VII asserts in the Dictatus papae extended rights granted to the pope (disturbing the balance of power) and a new interpretation of God's role in founding the Church itself. 1075: Chinese official and diplomat Shen Kuo asserts the Song dynasty's rightful border lines by using court archives against the bold bluff of Emperor Daozong of Liao, who had asserted that Liao dynasty territory exceeded its earlier-accepted bounds. 1075–1076: A civil war in the Western Chalukya Empire of India; the Western Chalukya monarch Someshvara II plans to defeat his own ambitious brother Vikramaditya VI by allying with a traditional enemy, Kulothunga Chola I of the Chola Empire; Someshvara's forces suffer a heavy defeat, and he is eventually captured and imprisoned by Vikramaditya, who proclaims himself king. 1075–1077: The Song dynasty of China and the Lý dynasty of Vietnam fight a border war, with Vietnamese forces striking first on land and with their navy, and Song armies afterwards advancing as far as modern-day Hanoi, the capital, but withdrawing after Lý makes peace overtures; in 1082, both sides exchange the territories that they had captured during the war, and later a border agreement is reached.
1076: The Ghana Empire is attacked by the Almoravids, who sack the capital of Koumbi Saleh, ending the rule of King Tunka Manin. 1076: The Chinese Song dynasty places strict government monopolies over the production and distribution of sulfur and saltpetre, in order to curb the possibility of merchants selling gunpowder formula components to enemies such as the Tanguts and Khitans. 1076: The Song Chinese ally with southern Vietnamese Champa and Cambodian Chenla in an unsuccessful campaign to conquer the Lý dynasty. 1077: The Walk to Canossa by Henry IV of the Holy Roman Empire. 1077: Chinese official Su Song is sent on a diplomatic mission to the Liao dynasty and discovers that the Khitan calendar is more mathematically accurate than the Song calendar; Emperor Zhezong later sponsors Su Song's astronomical clock tower in order to compete with Liao astronomers. 1078: Oleg I of Chernigov is defeated in battle by his brother Vsevolod I of Kiev; Oleg escapes to Tmutarakan, but is imprisoned by the Khazars, sent to Constantinople as a prisoner, and then exiled to Rhodes. 1078: The revolt of Nikephoros III against the Byzantine ruler Michael VII. 1079: Malik Shah I reforms the Iranian Calendar. 1079: Franks begin to settle along the Way of Saint James, in what is now northern Spain. 1080s 1080–1081: The Chinese statesman and scientist Shen Kuo is put in command of the campaign against the Western Xia, and although he successfully halts their invasion route to Yanzhou (modern Yan'an), another officer disobeys imperial orders and the campaign is ultimately a failure because of it. 1081: Birth of Urraca of León and Castile, future queen of León and Castile. 1084: The enormous Chinese historical work Zizhi Tongjian is compiled by scholars under Chancellor Sima Guang, completed in 294 volumes and comprising 3 million written Chinese characters. 1085: Alfonso VI of Castile captures the Moorish Muslim city of Toledo, Spain. 1085: The Katedralskolan school in Lund, Sweden, is established by Canute IV of Denmark. 1086: Compilation of the Domesday Book by order of William I of England; it was similar to a modern-day government census, as it was used by William to thoroughly document all the landholdings within the kingdom that could be properly taxed. 1086: The Battle of az-Zallaqah is fought between the Almoravids and Castilians. 1087: A new office at the Chinese international seaport of Quanzhou is established to handle and regulate taxes and tariffs on all mercantile transactions of foreign goods coming from Africa, Arabia, India, Sri Lanka, Persia, and South East Asia. 1087: The Italian cities of Genoa and Pisa engage in the African Mahdia campaign. 1087: William II of England, son of William the Conqueror, is crowned king of England. 1088: The renowned Chinese polymath, scientist, and official Shen Kuo makes the world's first reference to the magnetic compass in his book Dream Pool Essays, along with encyclopedic documentation and inquiry into scientific discoveries. 1088: The University of Bologna is established. 1088: Rebellion of 1088 against William II of England, led by Odo of Bayeux. 1090–1100 1091: Normans from the Duchy of Normandy take control of Malta and surrounding islands. 1091: The Byzantine Empire under Alexios I Komnenos and his Cuman allies defeats the Pechenegs at the Battle of Levounion. 1093: Vikramaditya VI, ruler of the Western Chalukya Empire, defeats the army of Kulothunga Chola I in the Battle of Vengi.
1093: When the Chinese Empress Dowager Gao dies, the conservative faction that had followed Sima Guang is ousted from court, the liberal reforms of Wang Anshi are reinstated, and Emperor Zhezong of Song halts all negotiations with the Tanguts of the Western Xia, resuming armed conflict with them. 1093: The Kypchaks defeat princes of Kievan Rus at the Battle of the Stugna River. 1093: Battle of Alnwick: Malcolm III of Scotland is killed by the forces of William II of England. 1094: The astronomical clock tower of Kaifeng, China—engineered by the official Su Song—is completed. 1094: El Cid, the great Spanish hero, conquers the Muslim city of Valencia. 1094: A succession crisis following the reign of the Fatimid Caliph Ma'ad al-Mustansir Billah sparks a rebellion which leads to the split of Ismaili Shia into the new Nizari religious branch. 1095: Pope Urban II calls upon Western Europeans to take up the cross and reclaim the Holy Lands, officially commencing the First Crusade. 1095–1099: Earliest extant manuscript of the Song of Roland. 1096: University of Oxford in England holds its first lectures. 1097: The Siege of Nicaea during the First Crusade. 1097: Diego Rodriguez, a son of El Cid, dies in the Battle of Consuegra, an Almoravid victory. 1098: The Siege of Antioch during the First Crusade. 1098: Pope Urban II makes an appearance at the Siege of Capua. 1098: The Dongpo Academy of Hainan, China, is built in honor of the Song dynasty Chinese official and poet Su Shi, who was exiled there for criticizing reforms of the New Policies Group. 1098: The birth of Hildegard of Bingen: Doctor of the Church, abbess, monastic leader, mystic, prophetess, medical writer, German composer, and polymath. 1099: The Siege of Jerusalem by European Crusaders. 1099: After the Kingdom of Jerusalem is established, the Al-Aqsa Mosque is made into the residential palace for the kings of Jerusalem. 1099: Death of the great Spanish hero Rodrigo Díaz "El Cid Campeador". 1099: After building considerable strength, David IV of Georgia discontinues tribute payments to the Seljuk Turks. 1100: On August 5, Henry I is crowned King of England. 1100: On December 25, Baldwin of Boulogne is crowned as the first King of Jerusalem in the Church of the Nativity in Bethlehem. Undated King Anawrahta of Myanmar made a pilgrimage to Ceylon, returning to convert his country to Theravada Buddhism. The Tuareg migrate to the Aïr region. Kanem-Bornu expands southward into modern Nigeria. The first of seven Hausa city-states are founded in Nigeria. The Hodh region of Mauritania becomes desert. Fortified Chinese trade bases were established in the Philippines, to gather forest products and distribute imports. Architecture Ani Cathedral, Kingdom of Armenia, is built in 1001 or 1010. Svetitskhoveli Cathedral, Georgia, is entirely rebuilt in 1029. The St Albans Cathedral of Norman-era England is completed in 1089. The Al-Hakim Mosque of Fatimid Egypt is completed in 1013. The Iron Pagoda of Kaifeng, China is built in 1049. The Phoenix Hall of Byōdō-in, Japan, is completed in 1053. The Brihadeeswarar Temple of India is completed in 1010 during the reign of Rajaraja Chola I. The Fruttuaria of San Benigno Canavese, Italy is completed in 1007. The Kedareshwara Temple of Balligavi, India, is built in 1060 by the Western Chalukyas. Construction work begins in 1059 on the Parma Cathedral of Italy. The Saint Sophia Cathedral in Novgorod is completed in 1052, the oldest extant church in Russia.
Construction begins on the Saint Sophia Cathedral in Kiev, Kievan Rus, in 1037. The Byzantine Greek Hosios Loukas monastery sees the completion of its Katholikon (main church), the earliest extant domed-octagon church from 1011 to 1012. The Lingxiao Pagoda of Zhengding, Hebei province, China, is built in 1045. The Pagoda of Fogong Temple of Shanxi province, China, is completed under the Liao dynasty in 1056. The Nikortsminda Cathedral of Georgia is completed in 1014. The Speyer Cathedral in Speyer, Germany is completed in 1061. The Chinese official Cai Xiang oversaw the construction of the Wanan Bridge in Fujian. The Imam Ali Mosque in Iraq is rebuilt by Malik Shah I in 1086 after it was destroyed by fire. The Pizhi Pagoda of Lingyan Temple, Shandong, China is completed in 1063. Reconstruction of the San Liberatore a Maiella in Italy begins in 1080. Westminster Abbey, London, England, is completed in 1065. The Ananda Temple of the Myanmar ruler King Kyanzittha is completed in 1091. The Văn Miếu, or Temple of Literature, in Vietnam is established in 1070. Construction of Richmond Castle in England begins in 1071. The tallest pagoda tower in China's pre-modern history, the Liaodi Pagoda, is completed in 1055, standing at a height of 84 m (275 ft). The Tower of Gonbad-e Qabus in Iran is built in 1006. Construction begins on the Sassovivo Abbey of Foligno, Italy, in 1070. The Palace of Aljafería is built in Zaragoza, Spain, during the Al-Andalus period. The Rotonda di San Lorenzo is built in Mantua, Lombardy, Italy, during the late 11th century. Construction of the Ponte della Maddalena bridge in the Province of Lucca, Italy begins in 1080. The domes of the Jamé Mosque of Isfahan, Iran are built in 1086 to 1087. 11th–18th century – The courtyard of Jamé Mosque of Isfahan, Isfahan, Persia (Iran), is built. The Chester Castle in England was built in 1069. Construction begins on the Bagrati Cathedral in Georgia in 1003. The St. Michael's Church, Hildesheim in Germany is completed in 1031. The Basilica of Sant'Abbondio of Lombardy, Italy is completed in 1095. Construction begins on the Great Zimbabwe National Monument, sometime in the century. Construction begins on the San Pietro in Vinculis in Pisa, Italy, in 1072. The Tower of London in England is founded in 1078. The St. Grigor's Church of Kecharis Monastery in Armenia is built in 1003. The Martin-du-Canigou monastery on Mount Canigou in southern France is built in 1009. The St. Mary's Cathedral, Hildesheim in Germany is completed in 1020. The One Pillar Pagoda in Hanoi, Vietnam, is constructed in 1049. The St Michael at the Northgate, Oxford's oldest building, is built in Saxon England in 1040. Oxford Castle in England is built in 1071. The Florence Baptistry in Florence, Italy is founded in 1059. The Kandariya Mahadeva temple in India is built in 1050. St Mark's Basilica in Venice, Italy is rebuilt in 1063. Canterbury Cathedral in Canterbury, England is completed by 1077. Construction begins on the Cathedral of Santiago de Compostela in Spain in 1075. Inventions, discoveries, introductions Science and technology Early 11th century – Fan Kuan paints Travelers among Mountains and Streams. Northern Song dynasty. It is now kept at National Palace Museum, Taipei, Taiwan (Republic of China). c. 1000 – Abu al-Qasim al-Zahrawi (Abulcasis) of al-Andalus publishes his influential 30-volume Arabic medical encyclopedia, the Al-Tasrif c. 1000 – Ibn Yunus of Egypt publishes his astronomical treatise Al-Zij al-Hakimi al-Kabir. c. 
1000 – Abu Sahl al-Quhi (Kuhi) c. 1000 – Abu-Mahmud al-Khujandi c. 1000 – Law of sines is discovered by Muslim mathematicians, but it is uncertain who discovers it first between Abu-Mahmud al-Khujandi, Abu Nasr Mansur, and Abu al-Wafa. c. 1000 – Ammar ibn Ali al-Mawsili 1000–1048 – Abū Rayhān al-Bīrūnī of Persia writes more than a hundred books on many different topics. 1001–1100 – the demands of the Chinese iron industry for charcoal led to a huge amount of deforestation, which was curbed when the Chinese discovered how to use bituminous coal in smelting cast iron and steel, thus sparing thousands of acres of prime timberland. 1003 – Pope Sylvester II, born Gerbert d'Aurillac, dies; however, his teaching continued to influence those of the 11th century; his works included a book on arithmetic, a study of the Hindu–Arabic numeral system, a hydraulic-powered organ, the reintroduction of the abacus to Europe, and a possible treatise on the astrolabe that was edited by Hermann of Reichenau five decades later. The contemporary monk Richer from Rheims described Gerbert's contributions in reintroducing the armillary sphere that was lost to European science after the Greco-Roman era; from Richer's description, Gerbert's placement of the tropics was nearly exact and his placement of the equator was exact. He reintroduced the liberal arts education system of trivium and quadrivium, which he had borrowed from the educational institution of Islamic Córdoba. Gerbert also studied and taught Islamic medicine. 1013 – One of the Four Great Books of Song, the Prime Tortoise of the Record Bureau compiled by 1013 was the largest of the Song Chinese encyclopedias. Divided into 1000 volumes, it consisted of 9.4 million written Chinese characters. 1020 – Ibn Samh of Al-Andalus builds a geared mechanical astrolabe. 1021 – Ibn al-Haytham (Alhacen) of Basra, Iraq writes his influential Book of Optics from 1011 to 1021 (while he was under house arrest in Egypt), 1024 – The world's first paper-printed money can be traced back to the year 1024, in Sichuan province of Song dynasty China. The Chinese government would step in and overtake this trend, issuing the central government's official banknote in the 1120s. 1025 – Avicenna of Persia publishes his influential treatise, The Canon of Medicine, which remains the most influential medical text in both Islamic and Christian lands for over six centuries, and The Book of Healing, a scientific encyclopedia. 1027 – The Chinese engineer Yan Su recreates the mechanical compass-vehicle of the south-pointing chariot, first invented by Ma Jun in the 3rd century. 1028–1087 – Abū Ishāq Ibrāhīm al-Zarqālī (Arzachel) builds the equatorium and universal latitude-independent astrolabe. 1031 – Abū Rayhān al-Bīrūnī writes Kitab al-qanun al-Mas'udi 1031–1095 – Chinese scientist Shen Kuo creates a theory for land formation, or geomorphology, theorized that climate change occurred over time, discovers the concept of true north, improves the design of the astronomical sighting tube to view the pole star indefinitely, hypothesizes the retrogradation theory of planetary motion, and by observing lunar eclipse and solar eclipse he hypothesized that the sun and moon were spherical. Shen Kuo also experimented with camera obscura just decades after Ibn al-Haitham, although Shen was the first to treat it with quantitative attributes. He also took an interdisciplinary approach to studies in archaeology. 
1041–1048 – Artisan Bi Sheng of Song dynasty China invents movable type printing using individual ceramic characters. Mid-11th century – Harbaville Triptych, is made. It is now kept at Musée du Louvre, Paris. Mid-11th century – Xu Daoning paints Fishing in a Mountain Stream. Northern Song dynasty. 1068 – First known use of the drydock in China. 1070 – With a team of scholars, the Chinese official Su Song also published the Ben Cao Tu Jing in 1070, a treatise on pharmacology, botany, zoology, metallurgy, and mineralogy. Some of the drug concoctions in Su's book included ephedrine, mica minerals, and linaceae. 1075 – the Song Chinese innovate a partial decarbonization method of repeated forging of cast iron under a cold blast that Hartwell and Needham consider to be a predecessor to the 18th century Bessemer process. 1077 – Constantine the African introduces ancient Greek medicine to the Schola Medica Salernitana in Salerno, Italy. c. 1080 – the Liber pantegni, a compendium of Hellenistic and Islamic medicine, is written in Italy by the Carthaginian Christian Constantine the African, paraphrasing translated passages from the Kitab al-malaki of Ali ibn Abbas al-Majusi as well as other Arabic texts. 1088 – As written by Shen Kuo in his Dream Pool Essays, the earlier 10th-century invention of the pound lock in China allows large ships to travel along canals without laborious hauling, thus allowing smooth travel of government ships holding cargo of up to 700 tan (49 tons) and large privately owned-ships holding cargo of up to 1600 tan (113 tons). 1094 – The Chinese mechanical engineer and astronomer Su Song incorporates an escapement mechanism and the world's first known chain drive to operate the armillary sphere, the astronomical clock, and the striking clock jacks of his clock tower in Kaifeng. Literature 1000 – The Remaining Signs of Past Centuries is written by Abū Rayhān al-Bīrūnī. c. 1000 – The Al-Tasrif is written by the Andalusian physician and scientist Abu al-Qasim al-Zahrawi (Abulcasis). c. 1000 – The Zij al-Kabir al-Hakimi is written by the Egyptian astronomer Ibn Yunus. 1002–1003 – Book of Lamentations is written by Gregory of Narek, one of the Doctors of the Church. 1000–1037 – Hayy ibn Yaqdhan is written by Ibn Tufail. 1008 – The Leningrad Codex, one of the oldest full manuscripts of the Hebrew Bible, is completed. c. 1010 – The oldest known copy of the epic poem Beowulf was written around this year. 1013 – The Prime Tortoise of the Record Bureau, a Chinese encyclopedia, is completed by a team of scholars including Wang Qinruo. 1020 – The Bamberg Apocalypse commissioned by Otto III is completed. 1021 – Lady Murasaki Shikibu writes her Japanese novel, The Tale of Genji. 1021 – The Book of Optics by Ibn al-Haytham (Alhazen or Alhacen) is completed. 1025 – The Canon of Medicine by Avicenna (Ibn Sina) is completed. 1027 – The Book of Healing is published by Avicenna. 1037 – The Jiyun, a Chinese rime dictionary, is published by Ding Du and expanded by later scholars. 1037 – Birth of the Chinese poet Su Shi, one of the renowned poets of the Song dynasty, who also penned works of travel literature. 1044 – The Wujing Zongyao military manuscript is completed by Chinese scholars Zeng Gongliang, Ding Du, and Yang Weide. 1048–1100 – The Rubaiyat of Omar Khayyam is written by Omar Khayyam sometime after 1048. 
1049 – The Record of Tea is written by Chinese official Cai Xiang 1052 – The Uji Dainagon Monogatari, a collection of stories allegedly penned by Minamoto-no-Takakuni, is written sometime between now and 1077. 1053 – The New History of the Five Dynasties by Chinese official Ouyang Xiu is completed. 1054 – Russian legal code of the Russkaya Pravda is created during the reign of Yaroslav I the Wise. 1057 – The Ostromir Gospels of Novgorod are written. 1060 – compilation of the New Book of Tang, edited by Chinese official Ouyang Xiu, is complete. 1060 – the Mugni Gospels of Armenia are written in illuminated manuscript form. 1068 – The Book of Roads and Kingdoms is written by Abū 'Ubayd 'Abd Allāh al-Bakrī. 1070 – William I of England commissioned the Norman monk William of Jumièges to extend the Gesta Normannorum Ducum chronicle. 1078 – The Proslogion is written by Anselm of Canterbury. 1080 – The Chinese poet Su Shi is exiled from court for writing poems criticizing the various reforms of the New Policies Group. c. 1080 – the Liber pantegni is written by Constantine the African. 1084 – The Zizhi Tongjian history is completed by Chinese official Sima Guang. 1086 – The Domesday Book is initiated by William I of England. 1088 – The Dream Pool Essays is completed by Shen Kuo of Song China. The roots of European Scholasticism are found in this period, as the renewed spark of interest in literature and Classicism in Europe would bring about the Renaissance. In the 11th century, there were early Scholastic figures such as Anselm of Canterbury, Peter Abelard, Solomon ibn Gabirol, Peter Lombard, and Gilbert de la Porrée. Notes References Abattouy, Mohammed. (2002), "The Arabic Science of weights: A Report on an Ongoing Research Project", The Bulletin of the Royal Institute for Inter-Faith Studies 4, pp. 109–130: Bowman, John S. (2000). Columbia Chronologies of Asian History and Culture. New York: Columbia University Press. Chan, Alan Kam-leung and Gregory K. Clancey, Hui-Chieh Loy (2002). Historical Perspectives on East Asian Science, Technology and Medicine. Singapore: Singapore University Press. Darlington, Oscar G. "Gerbert, the Teacher", The American Historical Review (Volume 52, Number 3, 1947): 456 – 476. Ebrey, Patricia Buckley, Anne Walthall, James B. Palais (2006). East Asia: A Cultural, Social, and Political History. Boston: Houghton Mifflin Company. . Fraser, Julius Thomas and Francis C. Haber. (1986). Time, Science, and Society in China and the West. Amherst: University of Massachusetts Press. . Hartwell, Robert. "Markets, Technology, and the Structure of Enterprise in the Development of the Eleventh-Century Chinese Iron and Steel Industry", The Journal of Economic History (Volume 26, Number 1, 1966): 29–58. Holmes, Jr., Urban T. "The Idea of a Twelfth-Century Renaissance", Speculum (Volume 26, Number 4, 1951): 643 – 651. Kennedy, E. S. (1970–80). "Bīrūnī, Abū Rayḥān al-". Dictionary of Scientific Biography II. New York: Charles Scribner's Sons. . Mohn, Peter (2003). Magnetism in the Solid State: An Introduction. New York: Springer-Verlag Inc. . Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 2, Mechanical Engineering. Taipei: Caves Books Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 3, Civil Engineering and Nautics. 
Taipei: Caves Books Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 5, Chemistry and Chemical Technology, Part 1, Paper and Printing. Taipei: Caves Books, Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 5, Chemistry and Chemical Technology, Part 7, Military Technology; the Gunpowder Epic. Taipei: Caves Books, Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 6, Biology and Biological Technology, Part 1, Botany. Taipei: Caves Books Ltd. Prioreschi, Plinio. (2003). A History of Medicine. Omaha: Horatius Press. Salhab, Walid Amine. (2006). The Knights Templar of the Middle East: The Hidden History of the Islamic Origins of Freemasonry. San Francisco: Red Wheel/Weiser LLC. Seife, Charles. (2000). Zero: The Biography of a Dangerous Idea. New York: Penguin Books. Sivin, Nathan (1995). Science in Ancient China: Researches and Reflections. Brookfield, Vermont: VARIORUM, Ashgate Publishing. Tester, S. Jim. (1987). A History of Western Astrology. Rochester: Boydell & Brewer Inc. Unschuld, Paul U. (2003). Nature, Knowledge, Imagery in an Ancient Chinese Medical Text. Berkeley: University of California Press. Wu, Jing-nuan (2005). An Illustrated Chinese Materia Medica. New York: Oxford University Press.
Historical romance
Historical romance is a broad category of mass-market fiction focusing on romantic relationships in historical periods, which Walter Scott helped popularize in the early 19th century. Varieties Viking Viking books feature warriors during the Dark Ages or Middle Ages. Heroes in Viking romances are stereotypically masculine men who are later "tamed" by their heroines. Most heroes are described as "tall, blonde, and strikingly handsome." Using the Viking culture allows novels set in these time periods to include some travel, as the Vikings were "adventurers, founding and conquering colonies all over the globe." In a 1997 poll of over 200 readers of Viking romances, Johanna Lindsey's Fires of Winter was considered the best of the subgenre. The subgenre has fallen out of style, and few novels in this vein have been published since the mid-1990s. Medieval Medieval romances are typically set between 938 and 1485. Women in the medieval period were often considered little more than property, forced to live at the mercy of their father, guardian, or the king. Always a lady, the heroine must use her wits and will and find a husband who will accept her need to be independent, yet still protect her from the dangers of the times. The hero is almost always a knight who first learns to respect her and her uncommon ideas and then falls in love. Heroes are always strong and dominant, and the heroine, despite the gains she has made, is usually still in a subordinate position. However, that position is her choice, made "for the sake of and with protection from an adoring lover, whose main purpose in life is to fulfill his beloved's wishes." Tudor Tudor romances are set in England between 1485 and 1558. Elizabethan Elizabethan romances are set in England between 1558 and 1603, during the time of Elizabeth I. Stuart Stuart romances are set between 1603 and 1714 in England. Georgian Georgian romances are set between 1714 and 1811 in England. Regency Regency romances are set between 1811 and 1820 in England. Victorian Victorian romances are set in England between 1832 and 1901, beginning with the Reform Act 1832 and including the reign of Queen Victoria. Novels set during this period but in a fictional country may be Ruritanian novels such as those by Beatrice Heron-Maxwell. M.M. Kaye focuses on the British Raj in this period rather than England itself. Pirate Pirate novels feature a male or female who is sailing, or thought to be sailing, as a pirate or privateer on the high seas. According to Ryan Kate, heroes are the "ultimate bad boys," who "dominate all for the sake of wealth and freedom." The heroine is usually captured by the hero in an early part of the novel and is then forced to succumb to his wishes; eventually she falls in love with her captor. On the rarer occasions where the heroine is the pirate, the book often focuses on her struggle to maintain her freedom of choice while living the life of a man. Regardless of the sex of the pirate, much of the action in the book takes place at sea. Colonial United States Colonial United States novels are set in the American colonies between 1630 and 1798. Civil War Civil War novels place their characters within the events of the American Civil War and the Reconstruction era. They may be set in the Confederacy or the Union. Western Western novels are set on the frontier of the United States, Canada, or Australia. Unlike Westerns, where women are often marginalized, the Western romance focuses on the experiences of women.
Heroes in these novels seek adventure and are forced to conquer the unknown. They are often loners, slightly uncivilized, and "earthy." Their heroines are often forced to travel to the frontier by events outside their control. These women must learn to survive in a man's world, and, by the end of the novel, have conquered their fears with love. In many cases the couple must face a level of personal danger, and, upon surmounting their troubles, are able to forge a strong relationship for the future. Native American Native American novels could also fall into the Western subgenre, but always feature Native American protagonists, historically described as "Red Indians", whose "heritage is integral to the story." These romances "[emphasize] instinct, creativity, freedom, and the longing to escape from the strictures of society to return to nature." Members of Native American tribes who appear in the books are usually depicted as "exotic figures" who "[possess] a freedom to be admired and envied." Often the Native protagonist is struggling against racial prejudice and incurs hardships trying to maintain a way of life that is different from the norm. By the end of the novel, however, the problems are surmounted. The heroes of these novels are often fighting to control their darker desires. In many cases, the hero or heroine is captured and then falls in love with a member of the tribe. The tribe is always depicted as civilized, not consisting of savages, and misunderstood. When surveyed about their reasons for reading Native American romances, many readers cite the desire to learn about the beliefs, customs and culture of the Native American tribes. The novels within this subgenre are generally not limited to a specific tribe, location, or time period. Readers appreciate that native tribes "have a whole different way of life, a different way of thinking and a different way of looking at things". In many cases, the tribe's love of nature is highlighted. Americana Americana novels are set in the United States between 1880 and 1920, usually in a small town or in the Midwest. History In England One of the first popular historical romances appeared in 1921, when Georgette Heyer published The Black Moth, which is set in 1751. It was not until 1935 that she wrote the first of her signature Regency novels, set around the English Regency period (1811–1820), when the Prince Regent ruled England in place of his ill father, George III. Heyer's Regency novels were inspired by Jane Austen's novels of the late 18th and early 19th century. Because Heyer's writing was set in the midst of events that had occurred over 100 years previously, she included authentic period detail in order for her readers to understand. Where Heyer referred to historical events, it was as background detail to set the period, and did not usually play a key role in the narrative. Heyer's characters often contained more modern-day sensibilities, and more conventional characters in the novels would point out the heroine's eccentricities, such as wanting to marry for love. In the United States The modern romance genre was born in America 1972 with Avon's publication of Kathleen Woodiwiss's The Flame and the Flower, the first romance novel "to [follow] the principals into the bedroom." 
Aside from its content, the book was revolutionary in that it was one of the first single-title romance novels to be published as an original paperback, rather than being first published in hardcover, and, like the category romances, was distributed in drug stores and other mass-market merchandising outlets. The novel went on to sell 2.35 million copies. Avon followed its release with the 1974 publication of Woodiwiss's second novel, The Wolf and the Dove, and two novels by newcomer Rosemary Rogers. One of Rogers's novels, Dark Fires, sold two million copies in its first three months of release, and, by 1975, Publishers Weekly had reported that the "Avon originals" had sold a combined 8 million copies. The following year over 150 historical romance novels, many of them paperback originals, were published, selling over 40 million copies. Unlike Woodiwiss's, Rogers's novels featured couples who travelled the world, usually were separated for a time, and had multiple partners within the book. The success of these novels prompted a new style of writing romance, concentrating primarily on historical fiction tracking the monogamous relationship between a helpless heroine and the hero who rescued her, even if he had been the one to place her in danger. The covers of these novels tended to feature scantily clad women being grabbed by the hero, and caused the novels to be referred to as "bodice-rippers." A Wall St. Journal article in 1980 referred to these bodice rippers as "publishing's answer to the Big Mac: They are juicy, cheap, predictable, and devoured in stupefying quantities by legions of loyal fans." The term bodice-ripper is now considered offensive to many in the romance industry. In this new style of historical romance, heroines were independent and strong-willed and were often paired with heroes who evolved into caring and compassionate men who truly admired the women they loved. This was in contrast to the contemporary romances published during this time, which were often characterized by weak females who fell in love with overbearing alpha males. Although these heroines had active roles in the plot, they were "passive in relationships with the heroes". Across the genre, heroines during this time were usually aged 16–21, with the heroes slightly older, usually around 30. The women were virgins, while the men were not, and both members of the couple were described as beautiful. In the late 1980s, historical romance dominated the romance genre. The most popular of the historical romances were those that featured warriors, knights, pirates, and cowboys. In the 1990s the genre began to focus more on humor, as Julie Garwood began introducing humorous elements and characters into her historical romances. Market Historical romance novels are rarely published in hardcover, with fewer than 15 receiving that status each year. The contemporary market usually sees 4 to 5 times that many hardcovers. Because historical romances are primarily published in mass-market format, their fortunes are tied to a certain extent to mass-market trends. Booksellers and large merchandisers are selling fewer mass market paperbacks, preferring trade paperbacks or hardcovers, which prevents historical romances from being sold in some price clubs and other mass merchandise outlets. In 2001, historical romance reached a 10-year high as 778 titles were published. By 2004, that number had dropped to 486, which was still 20% of all romance novels published.
Kensington Books claims that it is receiving fewer submissions of historical novels, and that its previously published authors are switching to contemporary romance.
Monarchism
Monarchism is the advocacy of the system of monarchy or monarchical rule. A monarchist is an individual who supports this form of government independently of any specific monarch, whereas one who supports a particular monarch is a royalist. Conversely, the opposition to monarchical rule is referred to as republicanism. Depending on the country, a royalist may advocate for the rule of the person who sits on the throne, a regent, a pretender, or someone who would otherwise occupy the throne but has been deposed. History Monarchical rule is among the oldest political institutions. The similar form of societal hierarchy known as chiefdom or tribal kingship is prehistoric. Chiefdoms provided the concept of state formation, which started with civilizations such as Mesopotamia, Ancient Egypt and the Indus Valley civilization. In some parts of the world, chiefdoms became monarchies. Monarchs have generally ceded power in the modern era, having substantially diminished since World War I and World War II. This process can be traced back to the 18th century, when Voltaire and others encouraged "enlightened absolutism", which was embraced by the Holy Roman Emperor Joseph II and by Catherine II of Russia. In the 17th and 18th centuries the Enlightenment began. This resulted in new anti-monarchist ideas which resulted in several revolutions such as the 18th century American Revolution and the French Revolution which were both additional steps in the weakening of power of European monarchies. Each in its different way exemplified the concept of popular sovereignty upheld by Jean-Jacques Rousseau. 1848 ushered in a wave of revolutions against the continental European monarchies. World War I and its aftermath saw the end of three major European monarchies: the Russian Romanov dynasty, the German Hohenzollern dynasty, including all other German monarchies, and the Austro-Hungarian Habsburg dynasty. With the arrival of communism in Eastern Europe by the end of 1947, the remaining Eastern European monarchies, namely the Kingdom of Romania, the Kingdom of Hungary, the Kingdom of Albania, the Kingdom of Bulgaria, and the Kingdom of Yugoslavia, were all abolished and replaced by socialist republics. Africa Central Africa In 1966, the Central African Republic was overthrown at the hands of Jean-Bédel Bokassa during the Saint-Sylvestre coup d'état. He established the Central African Empire in 1976 and ruled as Emperor Bokassa I until 1979, when he was subsequently deposed during Operation Caban and Central Africa returned to republican rule. Ethiopia In 1974, one of the world's oldest monarchies was abolished in Ethiopia with the fall of Emperor Haile Selassie. Asia China For most of its history, China was organized into various dynastic states under the rule of hereditary monarchs. Beginning with the establishment of dynastic rule by Yu the Great , and ending with the abdication of the Xuantong Emperor in AD 1912, Chinese historiography came to organize itself around the succession of monarchical dynasties. Besides those established by the dominant Han ethnic group or its spiritual Huaxia predecessors, dynasties throughout Chinese history were also founded by non-Han peoples. India In India, monarchies recorded history of thousands of years before the country was declared a republic in 1950. King George VI had previously been the last Emperor of India until August 1947, when the British Raj dissolved. Karan Singh served as the last prince regent of Jammu and Kashmir until November 1952. 
Japan The emperor of Japan or , literally "ruler from heaven" or "heavenly sovereign", is the hereditary monarch and head of state of Japan. The Imperial Household Law governs the line of imperial succession. The emperor is personally immune from prosecution and is also recognized as the head of the Shinto religion, which holds the emperor to be the direct descendant of the sun goddess Amaterasu. According to tradition, the office of emperor was created in the 7th century BC, but modern scholars believe that the first emperors did not appear until the 5th or 6th centuries AD. During the Kamakura period from 1185 to 1333, the shōguns were the de facto rulers of Japan, with the emperor and the imperial court acting as figureheads. In 1867, shogun Tokugawa Yoshinobu stepped down, restoring Emperor Meiji to power. The Meiji Constitution was adopted In 1889, after which the emperor became an active ruler with considerable political power that was shared with the Imperial Diet. After World War II, the 1947 Constitution of Japan was enacted, defining the emperor as the symbol of the Japanese state and the unity of the Japanese people. The emperor has exercised a purely ceremonial role ever since. Europe Albania The last separate monarchy to take root in Europe, Albania began its recognised modern existence as a principality (1914) and became a kingdom after a republican interlude in 1925–1928. Since 1945 the country has operated as an independent republic. The Albanian Democratic Monarchist Movement Party (founded in 2004) and the Legality Movement Party (founded in 1924) advocate restoration of the House of Zogu as monarchs—the concept has gained little electoral support. Austria-Hungary Following the collapse of Austria-Hungary, the Republic of German-Austria was proclaimed. The Constitutional Assembly of German Austria passed the Habsburg Law, which permanently exiled the Habsburg family from Austria. Despite this, significant support for the Habsburg family persisted in Austria. Following the Anschluss of 1938, the Nazi government suppressed monarchist activities. By the time Nazi rule ended in Austria, support for monarchism had largely evaporated. In Hungary, the rise of the Hungarian Soviet Republic in 1919 provoked an increase in support for monarchism; however, efforts by Hungarian monarchists failed to bring back a royal head of state, and the monarchists settled for a regent, Admiral Miklós Horthy, to represent the monarchy until the throne could be re-occupied. Horthy ruled as regent from 1920 to 1944. During his regency, attempts were made by Karl von Habsburg to return to the Hungarian throne, which ultimately failed. Following Karl's death in 1922, his claim to the Kingdom of Hungary was inherited by Otto von Habsburg (1912–2011), although no further attempts were made to take the Hungarian throne. France France was ruled by monarchs from the establishment of the Kingdom of West Francia in 843 until the end of the Second French Empire in 1870, with several interruptions. Classical French historiography usually regards Clovis I, king of the Franks, as the first king of France. However, historians today consider that such a kingdom did not begin until the establishment of West Francia, during the dissolution of the Carolingian Empire in the 800s. 
Germany In 1920s Germany, a number of monarchists gathered around the German National People's Party (founded in 1918), which demanded the return of the Hohenzollern monarchy and an end to the Weimar Republic; the party retained a large base of support until the rise of Nazism in the 1930s, as Adolf Hitler staunchly opposed monarchism. Italy The aftermath of World War II saw the return of monarchist/republican rivalry in Italy, where a referendum was held on whether the state should remain a monarchy or become a republic. The republican side won the vote by a narrow margin, and the modern Republic of Italy was created. Liechtenstein There have been 16 monarchs of the Principality of Liechtenstein since 1608. The current Prince of Liechtenstein, Hans-Adam II, has reigned since 1989. In 2003, during a referendum, 64.3% of the population voted to increase the power of the prince. Norway The position of King of Norway has existed continuously since the unification of Norway in 872. Following the dissolution of union with Sweden and the abdication of King Oscar II of Sweden as King of Norway, the 1905 Norwegian monarchy referendum saw 78.94% of Norway's voters approving the government's proposition to invite Prince Carl of Denmark to become their new king. Following the vote, the prince then accepted the offer, becoming King Haakon VII. In 2022, the Norwegian parliament held a vote on abolishing the monarchy and replacing it with a republic. The proposal failed, with a 134–35 result in favor of retaining the monarchy. The idea was highly controversial in Norway, as the vote was spearheaded by the sitting Minister of Culture and Equality, who had sworn an oath of loyalty to King Harald V of Norway the previous year. Additionally, when polls were conducted, it was found that 84% of the Norwegian public supported the monarchy, with only 16% unsure or against the monarchy. Russia Monarchy in the Russian Empire collapsed in March 1917, following the abdication of Tsar Nicholas II. Parts of the White movement, and in particular émigrés and their (founded in 1921 and now based in Canada) continued to advocate for monarchy as "the sole path to the rebirth of Russia". In the modern era, a minority of Russians, including Vladimir Zhirinovsky (1946–2022), have openly advocated for a restoration of the Russian monarchy. Grand Duchess Maria Vladimirovna is widely considered the valid heir to the throne, in the event that a restoration occurs. Other pretenders and their supporters dispute her claim. Spain In 1868, Queen Isabella II of Spain was deposed during the Spanish Glorious Revolution. The Duke of Aosta, an Italian prince, was invited to rule and replace Isabella. He did so for a three-year period, reigning as Amadeo I before abdicating in 1873, resulting in the establishment of the First Spanish Republic. The republic lasted less than two years, and was overthrown during a coup by General Arsenio Martínez Campos. Campos restored the Bourbon monarchy under Isabella II's more popular son, Alfonso XII. After the 1931 Spanish local elections, King Alfonso XIII voluntarily left Spain and republicans proclaimed a Second Spanish Republic. After the assassination of opposition leader José Calvo Sotelo in 1936, right-wing forces banded together to overthrow the Republic. During the Spanish Civil War of 1936 to 1939, General Francisco Franco established the basis for the Spanish State (1939–1975). 
In 1938, the autocratic government of Franco claimed to have reconstituted the Spanish monarchy in absentia (and in this case ultimately yielded to a restoration, in the person of King Juan Carlos). In 1975, Juan Carlos I became King of Spain and began the Spanish transition to democracy. He abdicated in 2014, and was succeeded by his son Felipe VI. United Kingdom In England, royalty ceded power to other groups in a gradual process. In 1215, a group of nobles forced King John to sign Magna Carta, which guaranteed the English barons certain liberties and established that the king's powers were not absolute. King Charles I was executed in 1649, and the Commonwealth of England was established as a republic. Highly unpopular, the republic was ended in 1660, and the monarchy was restored under King Charles II. In 1688–89, the Glorious Revolution and the overthrow of King James II established the principles of constitutional monarchy, which would later be worked out by Locke and other thinkers. However, absolute monarchy, justified by Hobbes in Leviathan (1651), remained a prominent principle elsewhere. Following the Glorious Revolution, William III and Mary II were established as constitutional monarchs, with less power than their predecessor James II. Since then, royal power has become more ceremonial, with powers such as the refusal of royal assent last exercised in 1708 by Queen Anne. Once part of the United Kingdom (1801–1922), southern Ireland rejected monarchy and became the Republic of Ireland in 1949. Support for a ceremonial monarchy remains high in Britain: Queen Elizabeth II possessed wide support from the U.K.'s population. Vatican City State The Vatican City State is considered to be Europe's last absolute monarchy. The microstate is headed by the Pope, who doubles as its monarch according to the Vatican constitution. The nation was formed under Pope Pius XI in 1929, following the signing of the Lateran Treaty. It was the successor state to the Papal States, which collapsed under Pope Pius IX in 1870. Pope Francis (in office from 2013) serves as the nation's absolute monarch. North America Canada Canada possesses one of the world's oldest continuous monarchies, having been established in the 16th century. Queen Elizabeth II served as its sovereign from her accession to the throne in 1952 until her death in 2022. Her son, King Charles III, now sits on the throne. Costa Rica The struggle between monarchists and republicans led to the Costa Rican civil war of 1823. Costa Rican monarchists include Joaquín de Oreamuno y Muñoz de la Trinidad, José Santos Lombardo y Alvarado, and José Rafael Gallegos Alvarado. Costa Rica stands out for being one of the few countries with foreign monarchism, that is, where the monarchists did not intend to establish an indigenous monarchy. Costa Rican monarchists were loyal to Emperor Agustín de Iturbide of the First Mexican Empire. Honduras After the Captaincy General of Guatemala gained independence from the Spanish Empire, it joined the First Mexican Empire for a brief period, which divided the Honduran elites. These were split between the annexationists, made up mostly of illustrious families of Spanish descent and members of the conservative party who supported the idea of being part of an empire, and the liberals, who wanted Central America to be a separate nation under a republican system.
The clearest example of this split was between the two most important cities of the province: Comayagua, which firmly supported the legitimacy of Iturbide I as emperor and remained a pro-monarchist bastion in Honduras, and Tegucigalpa, which supported the idea of forming a federation of Central American states under a republican system. Mexico After obtaining independence from Spain, the First Mexican Empire was established under Emperor Agustín I. His reign lasted less than one year, and he was forcibly deposed. In 1864, the Second Mexican Empire was formed under Emperor Maximilian I. Maximilian's government enjoyed French aid but faced opposition from the United States, and collapsed after three years. Much like Agustín I, Maximilian I was deposed and later executed by his republican enemies. Since 1867, Mexico has not possessed a monarchy. Today, some Mexican monarchist organizations advocate for Maximilian von Götzen-Iturbide or Carlos Felipe de Habsburgo to be installed as Emperor of Mexico. Nicaragua The Miskito ethnic group inhabits part of the Atlantic coast of Honduras and Nicaragua. By the beginning of the 17th century the Miskito had been reorganized under a single chief known as Ta Uplika, and by the reign of his grandson King Oldman I the group had a very close relationship with the English and managed to turn the Mosquitia coast into an English protectorate; the protectorate declined in the 19th century and disappeared completely in 1894 with the abdication of Robert II. Currently, the Miskito, who are split between the two countries, have denounced the neglect of their communities and abuses committed by the authorities. As a result, several Miskito people in Nicaragua have begun a movement for separation from present-day Nicaragua and the re-institution of the monarchy. United States English settlers first established the colony of Jamestown in 1607, naming it after King James VI and I. For 169 years, the Thirteen Colonies were ruled by the authority of the British crown. The Thirteen American Colonies were ruled by a total of 10 monarchs, ending with George III. During the American Revolutionary War, the colonies declared independence from Britain in 1776. Contrary to popular belief, the Revolutionary War was fought over independence rather than over anti-monarchism. In fact, many American colonists who fought in the war against George III were themselves monarchists, who opposed George but desired a different king. Additionally, the American colonists received the financial support of Louis XVI and Charles III of Spain during the war. After the U.S. declared its independence, the form of government by which it would operate still remained unsettled. At least two of America's Founding Fathers, Alexander Hamilton and Nathaniel Gorham, believed that America should be an independent monarchy. Various proposals to create an American monarchy were considered, including the Prussian scheme, which would have made Prince Henry of Prussia king of the United States. Hamilton proposed that the leader of America should be an elected monarch, while Gorham pushed for a hereditary monarchy. U.S. military officer Lewis Nicola also desired America to be a monarchy, suggesting that George Washington accept the crown, which he declined. All such attempts ultimately failed, and America was founded as a republic.
During the American Civil War, a return to monarchy was considered as a way to solve the crisis, though it never came to fruition. Since then, the idea has had little support, but it has been advocated by some public figures such as Ralph Adams Cram, Solange Hertz, Leland B. Yeager, Michael Auslin, Charles A. Coulombe, and Curtis Yarvin. South America Brazil From gaining its independence in 1822 until 1889, Brazil was governed as a constitutional monarchy with a branch of the Portuguese Royal Family serving as monarchs. Prior to this period, Brazil had been a royal colony which had also served briefly as the seat of government for the Portuguese Empire following the occupation of that country by Napoleon Bonaparte in 1808. The history of the Empire of Brazil was marked by brief periods of political instability, several wars that Brazil won, and a marked increase in immigration which saw the arrival of both Jews and Protestants who were attracted by Brazil's reputation for religious tolerance. The final decades of the Empire under the reign of Pedro II saw a remarkable period of relative peace both at home and internationally, coupled with dramatic economic expansion, the extension of basic civil rights to most people and the gradual restriction of slavery, culminating in its final abolition in 1888. It is also remembered for its thriving culture and arts. However, Pedro II had little interest in preserving the monarchy and passively accepted its overthrow by a military coup d'état in 1889, resulting in the establishment of a dictatorship known as the First Brazilian Republic. Current monarchies The majority of current monarchies are constitutional monarchies. In a constitutional monarchy the power of the monarch is restricted by either a written or an unwritten constitution; this should not be confused with a ceremonial monarchy, in which the monarch holds only symbolic power and plays little to no part in government or politics. In some constitutional monarchies the monarch does play a more active role in political affairs than in others. In Thailand, for instance, King Bhumibol Adulyadej, who reigned from 1946 to 2016, played a critical role in the nation's political agenda and in various military coups. Similarly, in Morocco, King Mohammed VI wields significant, but not absolute, power. Liechtenstein is a democratic principality whose citizens have voluntarily given more power to their monarch in recent years. There remain a handful of countries in which the monarchy is absolute. The majority of these countries are oil-producing Arab Islamic monarchies like Saudi Arabia, Bahrain, Qatar, Oman, and the United Arab Emirates. Other strong monarchies include Brunei and Eswatini. Political philosophy Absolute monarchy stands in opposition to anarchism and, additionally since the Age of Enlightenment, to liberalism, capitalism, communism, and socialism. Otto von Habsburg advocated a form of constitutional monarchy based on the primacy of the supreme judicial function, with hereditary succession and mediation by a tribunal if a successor's suitability is in question. Non-partisanship British political scientist Vernon Bogdanor justifies monarchy on the grounds that it provides for a nonpartisan head of state, separate from the head of government, and thus ensures that the highest representative of the country, at home and internationally, does not represent a particular political party, but all people.
Bogdanor also argues that monarchies can play a helpful unifying role in a multinational state, noting that "In Belgium, it is sometimes said that the king is the only Belgian, everyone else being either Fleming or Walloon" and that the British sovereign can belong to all of the United Kingdom's constituent countries (England, Scotland, Wales, and Northern Ireland) without belonging to any particular one of them.
Private interest
Thomas Hobbes wrote that the private interest of the monarch is the same as the public interest: the riches, power, and honour of a monarch arise only from the riches, strength, and reputation of his subjects. An elected head of state is incentivised to increase his own wealth because he leaves office after a few years, whereas a monarch has no reason to be corrupt, since he would only be cheating himself.
Wise counsel
Thomas Hobbes wrote that a monarch can receive wise counsel with secrecy while an assembly cannot. Advisors to an assembly tend to be versed more in the acquisition of their own wealth than in knowledge, and are likely to give their advice in long discourses which often excite men into action but do not govern them in it, being moved by the flame of passion rather than enlightenment. Their multitude is a weakness.
Long termism
Thomas Hobbes wrote that the resolutions of a monarch are subject to no inconsistency save for human nature; in assemblies, inconsistencies arise from the number. For in an assembly, as little as the absence of a few or the diligent appearance of a few of the contrary opinion, "undoes today all that was done yesterday".
Civil war reduction
Thomas Hobbes wrote that a monarch cannot disagree with himself, out of envy or interest, but an assembly may, and to such a height as may produce a civil war.
Liberty
The International Monarchist League, founded in 1943, has always sought to promote monarchy on the grounds that it strengthens popular liberty, both in a democracy and in a dictatorship, because by definition the monarch is not beholden to politicians. British-American libertarian writer Matthew Feeney argues that European constitutional monarchies "have managed for the most part to avoid extreme politics"—specifically fascism, communism, and military dictatorship—"in part because monarchies provide a check on the wills of populist politicians" by representing entrenched customs and traditions. Feeney notes that European monarchies—such as the Danish, Belgian, Swedish, Dutch, Norwegian, and British—have ruled over countries that are among the most stable, prosperous, and free in the world. Socialist writer George Orwell argued a similar point, that constitutional monarchy is effective at preventing the development of fascism: "The function of the King in promoting stability and acting as a sort of keystone in a non-democratic society is, of course, obvious. But he also has, or can have, the function of acting as an escape-valve for dangerous emotions. A French journalist said to me once that the monarchy was one of the things that have saved Britain from Fascism...It is at any rate possible that while this division of function exists a Hitler or a Stalin cannot come to power. On the whole the European countries which have most successfully avoided Fascism have been constitutional monarchies... I have often advocated that a Labour government, i.e. one that meant business, would abolish titles while retaining the Royal Family." Erik von Kuehnelt-Leddihn took a different approach, arguing that liberty and equality are contradictions.
As such, he argued that attempts to establish greater social equality through the abolishment of monarchy, ultimately results in a greater loss of liberty for citizens. He believed that equality can only be accomplished through the suppression of liberty, as humans are naturally unequal and hierarchical. Kuehnelt-Leddihn also believed that people are on average freer under monarchies than they are under democratic republics, as the latter tends to more easily become tyrannical through ochlocracy. In Liberty or Equality, he writes: There is little doubt that the American Congress or the French Chambers have a power over their nations which would rouse the envy of a Louis XIV or a George III, were they alive today. Not only prohibition, but also the income tax declaration, selective service, obligatory schooling, the fingerprinting of blameless citizens, premarital blood tests—none of these totalitarian measures would even the royal absolutism of the seventeenth century have dared to introduce. Hans-Hermann Hoppe also argues that monarchy helps to preserve individual liberty more effectively than democracy. Natural desire for hierarchy In a 1943 essay in The Spectator, "Equality", British author C.S. Lewis criticized egalitarianism, and its corresponding call for the abolition of monarchy, as contrary to human nature, writing, A man's reaction to Monarchy is a kind of test. Monarchy can easily be 'debunked'; but watch the faces, mark well the accents, of the debunkers. These are the men whose tap-root in Eden has been cut: whom no rumour of the polyphony, the dance, can reach—men to whom pebbles laid in a row are more beautiful than an arch...Where men are forbidden to honour a king they honour millionaires, athletes, or film-stars instead: even famous prostitutes or gangsters. For spiritual nature, like bodily nature, will be served; deny it food and it will gobble poison. Political accountability Oxford political scientists Petra Schleiter and Edward Morgan-Jones wrote that in monarchies, it is more common to hold elections than non-electoral replacements. Notable works Notable works arguing in favor of monarchy include Abbott, Tony (1995). The Minimal Monarchy: And Why It Still Makes Sense For Australia Alighieri, Dante (c. 1312). De Monarchia Aquinas, Thomas (1267). De Regno, to the King of Cyprus Auslin, Michael (2014). America Needs a King Balmes, Jaime (1850). European Civilization: Protestantism and Catholicity Compared in their Effects on the Civilization of Europe Bellarmine, Robert (1588). De Romano Pontifice, On the Roman Pontiff Bodin, Jean (1576). The Six Books of the Republic Bogdanor, Vernon (1997). The Monarchy and the Constitution Bossuet, Jacques-Bénigne (1709). Politics Drawn from the Very Words of Holy Scripture Charles I of England (1649). Eikon Basilike Coulombe, Charles A. (2016). Star-Spangled Crown: A Simple Guide to the American Monarchy Chateaubriand, François-René de (1814). Of Buonaparte, and the Bourbons, and of the Necessity of Rallying Round Our Legitimate Princes Cram, Ralph Adams (1936). Invitation to Monarchy Filmer, Robert (1680). Patriarcha Hobbes, Thomas (1651). Leviathan Hermann-Hoppe, Hans (2001). Democracy: The God That Failed — (2014). From Aristocracy to Monarchy to Democracy: A Tale of Moral and Economic Folly and Decay James VI and I (1598). The True Law of Free Monarchies — (1599). Basilikon Doron Jean, Count of Paris (2009). Un Prince Français Kuehnelt-Leddihn, Erik von (1952). Liberty or Equality: The Challenge of Our Times — (2000). 
Monarchy and War Maistre, Joseph de (1797). Considerations on France Pius VI (1793). Pourquoi Notre Voix Scruton, Roger (1991). A Focus of Loyalty Higher Than the State Ségur, Louis Gaston Adrien de (1871). Vive le Roi! Whittle, Peter (2011). Monarchy Matters Support for monarchy Current monarchies Former monarchies The following is a list of former monarchies and their percentage of public support for monarchism. Notable monarchists Several notable public figures who advocated for monarchy or are monarchists include: Arts and entertainment Honoré de Balzac, French novelist & playwright Fyodor Dostoevsky, Russian novelist & essayist Pedro Muñoz Seca, Spanish playwright T.S. Eliot, American-British poet & writer Salvador Dalí, Spanish artist Hergé, Belgian cartoonist Éric Rohmer, French filmmaker Yukio Mishima, Japanese author Joan Collins, English actress & author Stephen Fry, English actor & author Clergy Thomas Aquinas, Italian Catholic priest & theologian Robert Bellarmine, Italian Cardinal & theologian Jacques-Bénigne Bossuet, French Bishop & theologian Jules Mazarin, Italian Cardinal & minister André-Hercule de Fleury, French Cardinal & minister Pius VI, Italian Pope & ruler of the Papal States Fabrizio Ruffo, Italian Cardinal & treasurer Ercole Consalvi, Italian Cardinal Secretary of State Pelagio Antonio de Labastida y Dávalos, Mexican Archbishop & Regent of the Second Mexican Empire Louis Gaston Adrien de Ségur, French Bishop & writer Louis Billot, French priest & theologian Pius XII, Italian Pope & sovereign of Vatican City József Mindszenty, Hungarian Cardinal & Prince-primate Philosophy Dante Alighieri, Italian poet & philosopher Jean Bodin, French political philosopher Robert Filmer, English political theorist Thomas Hobbes, English philosopher Joseph de Maistre, Savoyard philosopher & writer Juan Donoso Cortés, Spanish politician & political theologian Søren Kierkegaard, Danish philosopher & theologian Charles Maurras, French author & philosopher Kang Youwei, Chinese political thinker & reformer Ralph Adams Cram, American architect & writer Erik von Kuehnelt-Leddihn, Austrian political scientist & philosopher Vernon Bogdanor, British political scientist & historian Roger Scruton, English philosopher & writer Hans Hermann-Hoppe, German-American political theorist Charles A. Coulombe, American historian & author Politics François-René de Chateaubriand, French historian & Ambassador Manuel Belgrano, Argentinian politician Klemens von Metternich, Austrian Chancellor Miguel Miramón, Mexican President & military general Otto von Bismarck, German Chancellor Juan Vázquez de Mella, Spanish politician & political theorist Panagis Tsaldaris, Greek Prime Minister Winston Churchill, British Prime Minister of the U.K. 
Călin Popescu-Tăriceanu, Romanian Prime Minister
Salome Zourabichvili, Georgian President
Tony Abbott, Australian Prime Minister
Carla Zambelli, Brazilian politician
Monarchist movements and parties
Action Française
Alfonsism
Alliance Royale
Australian Monarchist League
Australians for Constitutional Monarchy
Bonapartism
Black-Yellow Alliance
Carlism
Cavalier
Chouannerie
Conservative-Monarchist Club
Constantian Society
Constitutionalist Party of Iran
Druk Phuensum Tshogpa
Hawaiian sovereignty movement
Hovpartiet
International Monarchist League
Jacobitism
Koruna Česká (party)
Legality Movement
Legitimism
Liberal Democratic Party of Russia
Loyalism
Loyalist (American Revolution)
Miguelist
Monarchist League of Canada
Monarchist Party of Russia
Monarchy New Zealand
Movement for the Restoration of the Kingdom of Serbia
Nouvelle Action Royaliste
Orléanism
People's Alliance for Democracy
Rastriya Prajatantra Party
Royal Stuart Society
Royalist Party
Sanfedismo
Serbian Renewal Movement
Sonnō jōi
Tradition und Leben
Traditionalist Communion
Ultra-royalist
Anti-monarchism
Criticism of monarchy can be targeted against monarchy as a general form of government or, more specifically, against particular monarchical governments controlled by hereditary royal families. In some cases, this criticism can be curtailed by legal restrictions and be considered criminal speech, as in lèse-majesté. Monarchies in Europe and their underlying concepts, such as the Divine Right of Kings, were often criticized during the Age of Enlightenment, which notably paved the way for the French Revolution and the proclamation of the abolition of the monarchy in France. Earlier, the American Revolution had seen the Patriots suppress the Loyalists and expel all royal officials. In this century, monarchies are present in the world in many forms with different degrees of royal power and involvement in civil affairs: absolute monarchies in Brunei, Eswatini, Oman, Qatar, Saudi Arabia, the United Arab Emirates, and the Vatican City; constitutional monarchies in the United Kingdom and its sovereign's Commonwealth Realms, and in Belgium, Denmark, Japan, Liechtenstein, Luxembourg, Malaysia, Monaco, The Netherlands, Norway, Spain, Sweden, Thailand, and others. The twentieth century, beginning with the 1917 February Revolution in Russia and accelerated by two world wars, saw many European countries replace their monarchies with republics, while others replaced their absolute monarchies with constitutional monarchies. Reverse movements have also occurred, with brief returns of the monarchy in France under the Bourbon Restoration, the July Monarchy and the Second French Empire, the Stuarts after the English Civil War, and the Bourbons in Spain after the Franco dictatorship.
See also
Dark Enlightenment
List of dynasties
Reactionary modernism
Notes
References
External links
The Monarchist League
IMC, official site of the International Monarchist Conference.
SYLM, Support Your Local Monarch, the independent monarchist community.
Historical particularism
Historical particularism (a term coined by Marvin Harris in 1968) is widely considered the first American anthropological school of thought. Closely associated with Franz Boas and the Boasian approach to anthropology, historical particularism rejected the cultural evolutionary model that had dominated anthropology until Boas. It argued that each society is a collective representation of its unique historical past. Boas rejected parallel evolutionism, the idea that all societies are on the same path and have reached their specific level of development the same way all other societies have. Instead, historical particularism showed that societies could reach the same level of cultural development through different paths. Boas suggested that diffusion, trade, corresponding environment, and historical accident may create similar cultural traits. He proposed three factors to explain cultural customs: environmental conditions, psychological factors, and historical connections, with history being the most important (hence the school's name). Critics of historical particularism argue that it is anti-theoretical because it does not seek to make universal theories applicable to all the world's cultures. Boas believed that theories would arise spontaneously once enough data was collected. This school of anthropological thought was the first to be uniquely American, and Boas (his school of thought included) was arguably the most influential anthropological thinker in American history.
References
Further reading
Social mobility
Social mobility is the movement of individuals, families, households or other categories of people within or between social strata in a society. It is a change in social status relative to one's current social location within a given society. This movement occurs between layers or tiers in an open system of social stratification. Open stratification systems are those in which at least some value is given to achieved status characteristics in a society. The movement can be in a downward or upward direction. Markers for social mobility such as education and class, are used to predict, discuss and learn more about an individual or a group's mobility in society. Typology Mobility is most often quantitatively measured in terms of change in economic mobility such as changes in income or wealth. Occupation is another measure used in researching mobility which usually involves both quantitative and qualitative analysis of data, but other studies may concentrate on social class. Mobility may be intragenerational, within the same generation or intergenerational, between different generations. Intragenerational mobility is less frequent, representing "rags to riches" cases in terms of upward mobility. Intergenerational upward mobility is more common where children or grandchildren are in economic circumstances better than those of their parents or grandparents. In the US, this type of mobility is described as one of the fundamental features of the "American Dream" even though there is less such mobility than almost all other OECD countries. Mobility can also be defined in terms of relative or absolute mobility. Absolute mobility looks at a person's progress in the areas of education, health, housing, income, job opportunities and other factors and compares it to some starting point, usually the previous generation. As technological advancements and economic development increase so do income levels and the conditions in which most people live. In absolute terms, people around the world, on average, are living better today than yesterday and in that sense, have experienced absolute mobility. Relative mobility looks at the mobility of a person in comparison to the mobility of others in the same cohort. In more advanced economies and OECD countries there is more space for absolute mobility than for relative mobility because a person from an average status background may remain average (thus no relative mobility) but still have a gradual increase in living standards due to a total social average increasing over time. There is also an idea of stickiness concerning mobility. This is when an individual is no longer experiencing relative mobility and it occurs mostly at the ends. At the bottom end of the socioeconomic ladder, parents cannot provide their children with the necessary resources or opportunity to enhance their lives. As a result, they remain on the same ladder rung as their parents. On the opposite side of the ladder, the high socioeconomic status parents have the necessary resources and opportunities to ensure their children also remain in same ladder rung as them. In East Asian countries this is exemplified by the concept of familial karma. Social status and social class Social mobility is highly dependent on the overall structure of social statuses and occupations in a given society. The extent of differing social positions and the manner in which they fit together or overlap provides the overall social structure of such positions. 
Add to this the differing dimensions of status, such as Max Weber's delineation of economic stature, prestige, and power and we see the potential for complexity in a given social stratification system. Such dimensions within a given society can be seen as independent variables that can explain differences in social mobility at different times and places in different stratification systems. The same variables that contribute as intervening variables to the valuation of income or wealth and that also affect social status, social class, and social inequality do affect social mobility. These include sex or gender, race or ethnicity, and age. Education provides one of the most promising chances of upward social mobility and attaining a higher social status, regardless of current social standing. However, the stratification of social classes and high wealth inequality directly affects the educational opportunities and outcomes. In other words, social class and a family's socioeconomic status directly affect a child's chances for obtaining a quality education and succeeding in life. By age five, there are significant developmental differences between low, middle, and upper class children's cognitive and noncognitive skills. Among older children, evidence suggests that the gap between high- and low-income primary- and secondary-school students has increased by almost 40 percent over the past thirty years. These differences persist and widen into young adulthood and beyond. Just as the gap in K–12 test scores between high- and low-income students is growing, the difference in college graduation rates between the rich and the poor is also growing. Although the college graduation rate among the poorest households increased by about 4 percentage points between those born in the early 1960s and those born in the early 1980s, over this same period, the graduation rate increased by almost 20 percentage points for the wealthiest households. Average family income, and social status, have both seen a decrease for the bottom third of all children between 1975 and 2011. The 5th percentile of children and their families have seen up to a 60% decrease in average family income. The wealth gap between the rich and the poor, the upper and lower class, continues to increase as more middle-class people get poorer and the lower-class get even poorer. As the socioeconomic inequality continues to increase in the United States, being on either end of the spectrum makes a child more likely to remain there and never become socially mobile. A child born to parents with income in the lowest quintile is more than ten times more likely to end up in the lowest quintile than the highest as an adult (43 percent versus 4 percent). And, a child born to parents in the highest quintile is five times more likely to end up in the highest quintile than the lowest (40 percent versus 8 percent). This may be partly due to lower- and working-class parents, where neither is educated above high school diploma level, spending less time on average with their children in their earliest years of life and not being as involved in their children's education and time out of school. This parenting style, known as "accomplishment of natural growth" differs from the style of middle-class and upper-class parents, with at least one parent having higher education, known as "cultural cultivation". 
More affluent social classes are able to spend more time with their children at early ages, and children receive more exposure to interactions and activities that lead to cognitive and non-cognitive development: things like verbal communication, parent-child engagement and being read to daily. These children's parents are much more involved in their academics and their free time; placing them in extracurricular activities which develop not only additional non-cognitive skills but also academic values, habits, and abilities to better communicate and interact with authority figures. Enrollment in so many activities can often lead to frenetic family lives organized around transporting children to their various activities. Lower class children often attend lower quality schools, receive less attention from teachers and ask for help much less than their higher class peers. The chances for social mobility are primarily determined by the family a child is born into. Today, the gaps seen in both access to education and educational success (graduating from a higher institution) is even larger. Today, while college applicants from every socioeconomic class are equally qualified, 75% of all entering freshmen classes at top-tier American institutions belong to the uppermost socioeconomic quartile. A family's class determines the amount of investment and involvement parents have in their children's educational abilities and success from their earliest years of life, leaving low-income students with less chance for academic success and social mobility due to the effects that the common parenting style of the lower and working-class have on their outlook on and success in education. Class cultures and social networks These differing dimensions of social mobility can be classified in terms of differing types of capital that contribute to changes in mobility. Cultural capital, a term first coined by French sociologist Pierre Bourdieu distinguishes between the economic and cultural aspects of class. Bourdieu described three types of capital that place a person in a certain social category: economic capital; social capital; and cultural capital. Economic capital includes economic resources such as cash, credit, and other material assets. Social capital includes resources one achieves based on group membership, networks of influence, relationships and support from other people. Cultural capital is any advantage a person has that gives them a higher status in society, such as education, skills, or any other form of knowledge. Usually, people with all three types of capital have a high status in society. Bourdieu found that the culture of the upper social class is oriented more toward formal reasoning and abstract thought. The lower social class is geared more towards matters of facts and the necessities of life. He also found that the environment in which a person develops has a large effect on the cultural resources that a person will have. The cultural resources a person has obtained can heavily influence a child's educational success. It has been shown that students raised under the concerted cultivation approach have "an emerging sense of entitlement" which leads to asking teachers more questions and being a more active student, causing teachers to favor students raised in this manner. This childrearing approach which creates positive interactions in the classroom environment is in contrast with the natural growth approach to childrearing. 
In this approach, which is more common amongst working-class families, parents do not focus on developing the special talents of their individual children, and they speak to their children in directives. Due to this, it is rarer for a child raised in this manner to question or challenge adults and conflict arises between childrearing practices at home and school. Children raised in this manner are less inclined to participate in the classroom setting and are less likely to go out of their way to positively interact with teachers and form relationships. However, the greater freedom of working-class children gives them a broader range of local playmates, closer relationships with cousins and extended family, less sibling rivalry, fewer complaints to their parents of being bored, and fewer parent-child arguments. In the United States, links between minority underperformance in schools have been made with a lacking in the cultural resources of cultural capital, social capital and economic capital, yet inconsistencies persist even when these variables are accounted for. "Once admitted to institutions of higher education, African Americans and Latinos continued to underperform relative to their white and Asian counterparts, earning lower grades, progressing at a slower rate and dropping out at higher rates. More disturbing was the fact that these differentials persisted even after controlling for obvious factors such as SAT scores and family socioeconomic status". The theory of capital deficiency is among the most recognized explanations for minority underperformance academically—that for whatever reason they simply lack the resources to find academic success. One of the largest factors for this, aside from the social, economic, and cultural capital mentioned earlier, is human capital. This form of capital, identified by social scientists only in recent years, has to do with the education and life preparation of children. "Human capital refers to the skills, abilities and knowledge possessed by specific individuals". This allows college-educated parents who have large amounts of human capital to invest in their children in certain ways to maximize future success—from reading to them at night to possessing a better understanding of the school system which causes them to be less deferential to teachers and school authorities. Research also shows that well-educated black parents are less able to transmit human capital to their children when compared to their white counterparts, due to a legacy of racism and discrimination. Markers Health The term "social gradient" in health refers to the idea that the inequalities in health are connected to the social status a person has. Two ideas concerning the relationship between health and social mobility are the social causation hypothesis and the health selection hypothesis. These hypotheses explore whether health dictates social mobility or whether social mobility dictates quality of health. The social causation hypothesis states that social factors, such as individual behavior and the environmental circumstances, determine an individual's health. Conversely, the health selection hypothesis states that health determines what social stratum an individual will be in. There has been a lot of research investigating the relationship between socioeconomic status and health and which has the greater influence on the other. A recent study has found that the social causation hypothesis is more empirically supported than the health selection hypothesis. 
Empirical analysis shows no support for the health selection hypothesis. Another study found that support for either hypothesis depends on the lens through which the relationship between SES and health is viewed. The health selection hypothesis is supported when SES and health are examined through a labor market lens; one possible reason for this is that health dictates an individual's productivity and, to a certain extent, whether the individual is employed at all. The social causation hypothesis, by contrast, is supported when the relationship between health and socioeconomic status is examined through education and income lenses.
Education
The systems of stratification that govern societies hinder or allow social mobility. Education can be a tool used by individuals to move from one stratum to another in stratified societies. Higher education policies have worked to establish and reinforce stratification. Greater gaps in education quality and investment in students between elite and standard universities account for the lower upward social mobility of the middle and lower classes. Conversely, the upper class is known to be self-reproducing, since it has the necessary resources and money to afford, and get into, an elite university. This class is self-reproducing because these same students can then give the same opportunities to their children. Another example of this is that high and middle socioeconomic status parents are able to send their children to an early education program, enhancing their chances of academic success in later years.
Housing
Mixed housing is the idea that people of different socioeconomic statuses can live in one area. There is not a lot of research on the effects of mixed housing. However, the general consensus is that mixed housing will allow individuals of low socioeconomic status to acquire the necessary resources and social connections to move up the social ladder. Other possible effects mixed housing can bring are positive behavioral changes and improved sanitation and safer living conditions for the low socioeconomic status residents, because higher socioeconomic status individuals are more likely to demand higher quality residences, schools, and infrastructure. This type of housing is funded by for-profit, nonprofit and public organizations. The existing research on mixed housing, however, shows that it does not promote or facilitate upward social mobility. Instead of developing complex relationships with one another, mixed housing residents of different socioeconomic statuses tend to engage in casual conversations and keep to themselves. If noticed and unaddressed for a long period of time, this can lead to the gentrification of a community. Outside of mixed housing, individuals with a low socioeconomic status consider relationships more important to their prospects of moving up the social ladder than the type of neighborhood they live in. This is because their income is often not enough to cover their monthly expenses, including rent. The strong relationships they have with others offer the support system they need in order to meet their monthly expenses. At times, low income families might decide to double up in a single residence to lessen the financial burden on each family. However, this type of support system is still not enough to promote upward relative mobility.
Income
Economic and social mobility are two separate entities. Economic mobility is used primarily by economists to evaluate income mobility.
Conversely, social mobility is used by sociologists primarily to evaluate class mobility. How strongly economic and social mobility are related depends on the strength of the intergenerational relationship between the class and income of parents and their children, and "the covariance between parents' and children's class position". Economic and social mobility can also be thought of as following the Great Gatsby curve, which demonstrates that high levels of economic inequality foster low rates of relative social mobility. The mechanism behind this pattern is the idea of economic despair, which states that as the gap between the bottom and middle of the income distribution increases, those at the bottom are less likely to invest in their human capital, as they lose faith in their ability and fair chance to experience upward mobility. An example of this is seen in education, particularly in high school drop-outs: low-income students may no longer see value in investing in their education after continuously failing to improve their social status.
Race
The influence of race on social mobility stems from colonial times. There has been discussion as to whether race can still hinder an individual's chances at upward mobility or whether class has a greater influence. A study of the Brazilian population found that racial inequality was present only for those who did not already hold high-class status; that is, race affects an individual's chances at upward mobility if they do not start out in the upper class. Another theory concerning race and mobility is that, as time progresses, racial inequality will be replaced by class inequality. However, other research has found that minorities, particularly African Americans, are still policed and observed more at their jobs than their white counterparts, and that this constant policing has often led to frequent firings of African American workers. In this case, African Americans experience racial inequality that stunts their upward social mobility.
Gender
A 2019 Indian study found that Indian women, in comparison to men, experience less social mobility. One possible reason for this is the poor quality or lack of education that women receive. In countries like India it is common for educated women not to use their education to move up the social ladder because of cultural and traditional customs: they are expected to become homemakers and leave the breadwinning to the men. A 2017 study of Indian women found that women are denied an education because their families may find it more economically beneficial to invest in the education and wellbeing of their sons rather than their daughters. In the parents' eyes, the son will be the one who provides for them in their old age, while the daughter will move away with her husband; the son will bring an income while the daughter might require a dowry to get married. When women enter the workforce, they are highly unlikely to earn the same pay as their male counterparts, and pay can differ even among women due to race. To combat these gender disparities, the UN made reducing gender inequality one of its Millennium Development Goals, though this goal has been criticised as too broad and lacking an action plan.
Patterns of mobility
While it is generally accepted that some level of mobility in society is desirable, there is no consensus on how much social mobility is good or bad for a society.
There is no international benchmark of social mobility, though one can compare measures of mobility across regions or countries or within a given area over time. While cross-cultural studies comparing differing types of economies are possible, comparing economies of similar type usually yields more comparable data. Such comparisons typically look at intergenerational mobility, examining the extent to which children born into different families have different life chances and outcomes. In a 2009 study, The Spirit Level: Why More Equal Societies Almost Always Do Better, Wilkinson and Pickett conducted an exhaustive analysis of social mobility in developed countries. In addition to other correlations with negative social outcomes for societies having high inequality, they found a relationship between high social inequality and low social mobility. Of the eight countries studied (Canada, Denmark, Finland, Sweden, Norway, Germany, the UK and the US), the US had both the highest economic inequality and the lowest economic mobility. In this and other studies, the US had very low mobility at the lowest rungs of the socioeconomic ladder, with mobility increasing slightly as one goes up the ladder; at the top rung, mobility again decreases. A 2006 study comparing social mobility between developed countries found that the four countries with the lowest "intergenerational income elasticity" (a measure sketched after this paragraph), i.e. the highest social mobility, were Denmark, Norway, Finland and Canada, with less than 20% of the advantage of having a high-income parent passed on to children. A 2012 study found "a clear negative relationship" between income inequality and intergenerational mobility: countries with low levels of inequality such as Denmark, Norway and Finland had some of the greatest mobility, while the two countries with the highest levels of inequality, Chile and Brazil, had some of the lowest mobility. In Britain, much debate on social mobility has been generated by comparisons of the 1958 National Child Development Study (NCDS) and the 1970 Birth Cohort Study (BCS70), which compare intergenerational mobility in earnings between the 1958 and the 1970 UK cohorts and claim that intergenerational mobility decreased substantially in this 12-year period. These findings have been controversial, partly due to conflicting findings on social class mobility using the same datasets, and partly due to questions regarding the analytical sample and the treatment of missing data. UK Prime Minister Gordon Brown famously said that trends in social mobility "are not as we would have liked". Along with the aforementioned "Do Poor Children Become Poor Adults?" study, The Economist also stated that "evidence from social scientists suggests that American society is much 'stickier' than most Americans assume. Some researchers claim that social mobility is actually declining." A 2006 German study corroborates these results. In spite of this low social mobility, in 2008 Americans had the highest belief in meritocracy among middle- and high-income countries. A 2014 study of social mobility among the French corporate class found that social class influences who reaches the top in France, with those from the upper-middle classes tending to dominate, despite a longstanding emphasis on meritocracy.
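For readers unfamiliar with the measure, intergenerational income elasticity is conventionally estimated in the economics literature as the slope of a log-log regression of children's adult earnings on their parents' earnings. The formulation below is the standard textbook version, not the exact specification of the particular studies cited above.

$$\ln Y_i^{\text{child}} = \alpha + \beta \,\ln Y_i^{\text{parent}} + \varepsilon_i$$

Here $\beta$ is the intergenerational income elasticity: a value near 0 means that a parent's income advantage is barely transmitted to the next generation (high mobility), while a value near 1 means it is largely passed on (low mobility); $1-\beta$ is sometimes reported as a simple index of mobility. On this reading, the "less than 20%" figure cited above for Denmark, Norway, Finland and Canada corresponds to an estimated $\beta$ below 0.2.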
In 2014, Thomas Piketty found that wealth-income ratios seem to be returning to very high levels in low economic growth countries, similar to what he calls the "classic patrimonial" wealth-based societies of the 19th century, where a minority lives off its wealth while the rest of the population works for subsistence living. Social mobility can also be influenced by differences that exist within education. The contribution of education to social mobility often gets neglected in social mobility research, although it really has the potential to transform the relationship between people's social origins and destinations. Recognizing the disparities between strictly location and its educational opportunities highlights how patterns of educational mobility are influencing the capacity for individuals to experience social mobility. There is some debate regarding how important educational attainment is for social mobility. A substantial literature argues that there is a direct effect of social origins (DESO) which cannot be explained by educational attainment. Other evidence suggests that, using a sufficiently fine-grained measure of educational attainment, taking on board such factors as university status and field of study, education fully mediates the link between social origins and access to top class jobs. In the US, the patterns of educational mobility that exist between inner-city schools versus schools in the suburbs is transparent. Graduation rates supply a rich context to these patterns. In the 2013–14 school year, Detroit Public Schools had a graduation rate of 71%. Grosse Pointe High School, a whiter Detroit suburb, had an average graduation rate of 94%. In 2017, a similar phenomena was observed in Los Angeles, California as well as in New York City. Los Angeles Senior High School (inner city) observed a graduation rate of 58% and San Marino High School (suburb) observed a graduation rate of 96%. New York City Geographic District Number Two (inner city) observed a graduation rate of 69% and Westchester School District (suburb) observed a graduation rate of 85%. These patterns were observed across the country when assessing the differences between inner city graduation rates and suburban graduation rates. The economic grievance thesis argues that economic factors, such as deindustrialisation, economic liberalisation, and deregulation, are causing the formation of a 'left-behind' precariat with low job security, high inequality, and wage stagnation, who then support populism. Some theories only focus on the effect of economic crises, or inequality. Another objection for economic reasons is due to the globalization that is taking place in the world today. In addition to criticism of the widening inequality caused by the elite, the widening inequality among the general public caused by the influx of immigrants and other factors due to globalization is also a target of populist criticism. The evidence of increasing economic disparity and volatility of family incomes is clear, particularly in the United States, as shown by the work of Thomas Piketty and others. Commentators such as Martin Wolf emphasize the importance of economics. They warn that such trends increase resentment and make people susceptible to populist rhetoric. Evidence for this is mixed. At the macro level, political scientists report that xenophobia, anti-immigrant ideas, and resentment towards out-groups tend to be higher during difficult economic times. 
Economic crises have been associated with gains by far-right political parties. However, there is little evidence at the micro or individual level linking individual economic grievances to populist support. Populist politicians tend to put pressure on central bank independence.
Influence of intelligence and education
Social status attainment, and therefore social mobility in adulthood, is of interest to psychologists, sociologists, political scientists, economists, epidemiologists and many others. The reason for this interest is that it indicates access to material goods, educational opportunities, healthy environments, and economic growth. In Scotland, a long-range study examined individuals in childhood and mid-adulthood. Most Scottish children born in 1921 participated in the Scottish Mental Survey 1932, conducted by the Scottish Council for Research in Education (SCRE), which obtained data on the psychometric intelligence of Scottish pupils. The number of children who took the mental ability test (based on the Moray House tests) was 87,498; they were between 10 and 11 years old. The tests covered general, spatial and numerical reasoning. At midlife, a subset of the subjects participated in one of the large health studies of adults carried out in Scotland in the 1960s and 1970s. The particular study they took part in was the collaborative study of 6022 men and 1006 women, conducted between 1970 and 1973 in Scotland. Participants completed a questionnaire (the participant's address, father's occupation, the participant's own first regular occupation, the age of finishing full-time education, number of siblings, and whether the participant was a regular car driver) and attended a physical examination (measurement of height). Social class was coded according to the Registrar General's Classification for the participant's occupation at the time of screening, his first occupation and his father's occupation; six social classes were used. A correlation and structural equation model analysis was conducted. In the structural equation models, social status in the 1970s was the main outcome variable. The main contributors to education (and first social class) were father's social class and IQ at age 11, which was also found in a Scandinavian study. This effect was direct and also mediated via education and the participant's first job. Participants at midlife did not necessarily end up in the same social class as their fathers; there was social mobility in the sample: 45% of men were upwardly mobile, 14% were downwardly mobile and 41% were socially stable. IQ at age 11 had a graded relationship with the participant's social class, and the same effect was seen for father's occupation. Men in midlife social classes I and II (the highest, more professional) also had the highest IQ at age 11. Height at midlife, years of education and childhood IQ were significantly positively related to upward social mobility, while the number of siblings had no significant effect. For each standard deviation increase in IQ score at age 11, the chances of upward social mobility increased by 69% (with 95% confidence). After controlling for the effects of the other independent variables, only IQ at age 11 was significantly (and inversely) related to downward social mobility. More years of education increase the chance that a son will surpass his father's social class, whereas low IQ makes a son prone to falling behind his father's social class.
Higher IQ at age 11 was also significantly related to higher social class at midlife, higher likelihood car driving at midlife, higher first social class, higher father's social class, fewer siblings, higher age of education, being taller and living in a less deprived neighbourhood at midlife. IQ was significantly more strongly related to the social class in midlife than the social class of the first job. Height, education and IQ at age 11 were predictors of upward social mobility and only IQ at age 11 and height were significant predictors of downward social mobility. Number of siblings was not significant in either of the models. Another research looked into the pivotal role of education in association between ability and social class attainment through three generations (fathers, participants and offspring) using the SMS1932 (Lothian Birth Cohort 1921) educational data, childhood ability and late life intellectual function data. It was proposed that social class of origin acts as a ballast restraining otherwise meritocratic social class movement, and that education is the primary means through which social class movement is both restrained and facilitated—therefore acting in a pivotal role. It was found that social class of origin predicts educational attainment in both the participant's and offspring generations. Father's social class and participant's social class held the same importance in predicting offspring educational attainment—effect across two generations. Educational attainment mediated the association of social class attainments across generations (father's and participants social class, participant's and offspring's social class). There was no direct link between social classes across generations, but in each generation educational attainment was a predictor of social class, which is consistent with other studies. Participant's childhood ability moderately predicted their educational and social class attainment (.31 and .38). Participant's educational attainment was strongly linked with the odds of moving downward or upward on the social class ladder. For each SD increase in education, the odds of moving upward on the social class spectrum were 2.58 times greater. The downward ones were .26 times greater. Offspring's educational attainment was also strongly linked with the odds of moving upward or downward on the social class ladder. For each SD increase in education, the odds of moving upward were 3.54 times greater. The downward ones were .40 times greater. In conclusion, education is very important, because it is the fundamental mechanism functioning both to hold individuals in their social class of origin and to make it possible for their movement upward or downward on the social class ladder. In the Cohort 1936 it was found that regarding whole generations (not individuals) the social mobility between father's and participant's generation is: 50.7% of the participant generation have moved upward in relation to their fathers, 22.1% had moved downwards, and 27.2% had remained stable in their social class. There was a lack of social mobility in the offspring generation as a whole. However, there was definitely individual offspring movement on the social class ladder: 31.4% had higher social class attainment than their participant parents (grandparents), 33.7% moved downward, and 33.9% stayed stable. Participant's childhood mental ability was linked to social class in all three generations. 
A very important pattern has also been confirmed: average years of education increased with social class and IQ. There were some great contributors to social class attainment and social class mobility in the twentieth century: Both social class attainment and social mobility are influenced by pre-existing levels of mental ability, which was in consistence with other studies. So, the role of individual level mental ability in pursuit of educational attainment—professional positions require specific educational credentials. Educational attainment contributes to social class attainment through the contribution of mental ability to educational attainment. Mental ability can contribute to social class attainment independent of actual educational attainment, as in when the educational attainment is prevented, individuals with higher mental ability manage to make use of the mental ability to work their way up on the social ladder. This study made clear that intergenerational transmission of educational attainment is one of the key ways in which social class was maintained within family, and there was also evidence that education attainment was increasing over time. Finally, the results suggest that social mobility (moving upward and downward) has increased in recent years in Britain. Which according to one researcher is important because an overall mobility of about 22% is needed to keep the distribution of intelligence relatively constant from one generation to the other within each occupational category. In 2010, researchers looked into the effects elitist and non-elitist education systems have on social mobility. Education policies are often critiqued based on their impact on a single generation, but it is important to look at education policies and the effects they have on social mobility. In the research, elitist schools are defined as schools that focus on providing its best students with the tools to succeed, whereas an egalitarian school is one that predicates itself on giving equal opportunity to all its students to achieve academic success. When private education supplements were not considered, it was found that the greatest amount of social mobility was derived from a system with the least elitist public education system. It was also discovered that the system with the most elitist policies produced the greatest amount of utilitarian welfare. Logically, social mobility decreases with more elitist education systems and utilitarian welfare decreases with less elitist public education policies. When private education supplements are introduced, it becomes clear that some elitist policies promote some social mobility and that an egalitarian system is the most successful at creating the maximum amount of welfare. These discoveries were justified from the reasoning that elitist education systems discourage skilled workers from supplementing their children's educations with private expenditures. The authors of the report showed that they can challenge conventional beliefs that elitist and regressive educational policy is the ideal system. This is explained as the researchers found that education has multiple benefits. It brings more productivity and has a value, which was a new thought for education. This shows that the arguments for the regressive model should not be without qualifications. 
Furthermore, in the elitist system, the effect of earnings distribution on growth is negatively impacted due to the polarizing social class structure with individuals at the top with all the capital and individuals at the bottom with nothing. Education is very important in determining the outcome of one's future. It is almost impossible to achieve upward mobility without education. Education is frequently seen as a strong driver of social mobility. The quality of one's education varies depending on the social class that they are in. The higher the family income the better opportunities one is given to get a good education. The inequality in education makes it harder for low-income families to achieve social mobility. Research has indicated that inequality is connected to the deficiency of social mobility. In a period of growing inequality and low social mobility, fixing the quality of and access to education has the possibility to increase equality of opportunity for all Americans. "One significant consequence of growing income inequality is that, by historical standards, high-income households are spending much more on their children's education than low-income households." With the lack of total income, low-income families cannot afford to spend money on their children's education. Research has shown that over the past few years, families with high income has increased their spending on their children's education. High income families were paying $3,500 per year and now it has increased up to nearly $9,000, which is seven times more than what low income families pay for their kids' education. The increase in money spent on education has caused an increase in college graduation rates for the families with high income. The increase in graduation rates is causing an even bigger gap between high income children and low-income children. Given the significance of a college degree in today's labor market, rising differences in college completion signify rising differences in outcomes in the future. Family income is one of the most important factors in determining the mental ability (intelligence) of their children. With such bad education that urban schools are offering, parents of high income are moving out of these areas to give their children a better opportunity to succeed. As urban school systems worsen, high income families move to rich suburbs because that is where they feel better education is; if they do stay in the city, they put their children to private schools. Low income families do not have a choice but to settle for the bad education because they cannot afford to relocate to rich suburbs. The more money and time parents invest in their child plays a huge role in determining their success in school. Research has shown that higher mobility levels are perceived for locations where there are better schools. See also Ascriptive inequality Asset poverty Cycle of poverty Desert (philosophy) Distribution of wealth Essential facilities doctrine Global Social Mobility Index Higher education bubble in the United States Kitchen sink realism Horizontal mobility Occupational prestige One-upmanship Rational expectations Social and Cultural Mobility Social stigma Socio-economic mobility in the United States Spatial inequality Winner and loser culture References Further reading External links The New York Times offers a graphic about social mobility, overall trends, income elasticity and country by country. European nations such as Denmark and France, are ahead of the US. 
Politicisation
Politicisation (also politicization; see English spelling differences) is a concept in political science and theory used to explain how ideas, entities or collections of facts are given a political tone or character, and are consequently assigned to the ideas and strategies of a particular group or party, thus becoming the subject of contestation. Politicisation has been described as compromising objectivity, and is linked with political polarisation. Conversely, it can have a democratising effect and enhance political choice, and has been shown to improve the responsiveness of supranational institutions such as the European Union. The politicisation of a group is more likely to occur when justifications for political violence are considered acceptable within a society, or in the absence of norms condemning violence. Depoliticisation, the reverse process, is when issues are no longer the subject of political contestation. It is characterised by governance through consensus-building and pragmatic compromise. It occurs when subjects are left to experts, such as technocratic or bureaucratic institutions, or left to individuals and free markets, through liberalisation or deregulation. It is often connected with multi-level governance. The concept has been used to explain the "democratic gap" between politicians and citizens who lack choice, agency and opportunities for deliberation. In the 21st century, depoliticisation has been linked to disillusionment with neoliberalism. Depoliticisation has negative consequences for regime legitimacy, and produces anti-political sentiment associated with populism, which can result in "repoliticisation" (politicisation following depoliticisation). Current studies of politicisation are separated into various subfields. It is primarily examined on three separate levels: within national political systems, within the European Union and within international institutions. Academic approaches vary greatly and are frequently disconnected. It has been studied from subdisciplines such as comparative politics, political sociology, European studies and legal theory. The politicisation of science occurs when actors stress the inherent uncertainty of scientific method to challenge scientific consensus, undermining the positive impact of science on political debate by causing citizens to dismiss scientific evidence. Definitions The dominant academic framework for understanding politicisation is the systems model, which sees politics as an arena or sphere. In this perspective, politicisation is the process by which issues or phenomena enter the sphere of "the political", a space of controversy and conflict. Alternatively, in the behaviouralist approach to political science, which sees politics as action or conflict, politicisation is conceptualised as the process by which an issue or phenomenon becomes significantly more visible in the collective consciousness, causing political mobilisation. In the systems model, depoliticisation is seen as "arena-shifting": removing issues from the political sphere by placing them outside the direct control or influence of political institutions, such as legislatures and elected politicians, thereby denying or minimising their political nature. In the behaviouralist model, depoliticisation indicates the reduction of popular interest in an issue, a weakening of participation in the public sphere and the utilisation of power to prevent opposition. 
Theory

Comparative politics (national level)

Majoritarian institutions, such as parliaments (legislatures) and political parties, are associated with politicisation because they represent popular sovereignty and their agents are subject to short-term political considerations, particularly the need to compete for votes ("vote-seeking") by utilising populist rhetoric and policies. Non-majoritarian institutions, such as constitutional courts, central banks and international organisations, are neither directly elected nor directly managed by elected officials, and are connected with depoliticisation as they tend towards moderation and compromise.

Declines in voter turnout, political mobilisation and political party membership, trends present in most OECD countries from the 1960s onwards, reflect depoliticisation. A number of causes for this shift have been suggested. The growth of big tent political parties (parties which aim to appeal to a broad spectrum of voters) resulted in reduced polarisation and centralised decision-making, with increased compromise and bargaining. In postwar Europe, the development of neo-corporatism led to political bargaining between powerful employers' organizations, trade unions and the government in a system known as tripartism, within which cartel parties could successfully prevent competition from newer parties. Globally during the late 20th century, central banks and constitutional courts became increasingly important. Robert Dahl argued that these processes risked producing alienation because they created a professionalised form of politics that was "anti-ideological" and "too remote and bureaucratized". Other contemporary scholars saw depoliticisation as a positive indication of dealignment and democratic maturity, as political competition came to be dominated by issues rather than cleavages. In the early 21st century, theorists such as Colin Crouch and Chantal Mouffe argued that low participation was not the result of satisfaction with political systems, but the consequence of low confidence in institutions and political representatives; in 2007, Colin Hay explicitly linked these studies with the concept of politicisation.

Since the 1990s, a process of "repoliticisation" has occurred on the national level, marked by the growth of right-wing populist parties in Europe, increased polarisation in American politics and higher voter turnout. The divide between the winners and losers of globalisation and neoliberalism is hypothesised to have played a major role in this process, having replaced class conflict as the primary source of politicisation. Sources of conflict along this line include an "integration–demarcation" cleavage (between the losers of globalisation, who favour protectionism and nationalism, and the winners of globalisation, who prefer increased competition, open borders and internationalism); and a similar "cosmopolitan–communitarian" cleavage (which places additional emphasis on a cultural divide between supporters of universal norms and those who believe in cultural particularism). Disillusionment with neoliberal policies has also been cited as a factor behind the processes of depoliticisation and repoliticisation, particularly through the lens of public choice theory.
In 2001, Peter Burnham argued that in the UK the New Labour administration of Tony Blair used depoliticisation as a governing strategy, presenting contentious neoliberal reforms as non-negotiable "constraints" in order to lower political expectations, thus creating apathy and submission among the electorate and facilitating the emergence of "anti-politics". Neo-Marxist, radical democratic and anti-capitalist critiques aim to repoliticise what they describe as neoliberal society, arguing that Marx's theory of alienation can be used to explain depoliticisation.

European studies (European Union)

In post-functionalist theory, the politicisation of the EU is seen as a threat to integration because it constrains executive decision makers in member states due to domestic partisanship, fear of referendum defeat and the electoral repercussions of European policies, ultimately preventing political compromise on the European level. The EU has experienced politicisation over time, but at an increased rate since the early 2000s due to a series of crises. At a national level within its member states, a rise in populism has contributed to volatile party politics and the election of anti-EU representatives. As the EU strives for further integration and becomes increasingly involved in controversial policy issues, interactions between EU agents have become more contested. Following dissatisfaction with governance, rising populist challengers have widened electoral cleavages.

International relations (international level)

Government agencies

Politicisation of science

Climate science

COVID-19 pandemic

During the COVID-19 pandemic, the politicisation of investigations into the origin of COVID-19 led to geopolitical tension between the United States and China, the growth of anti-Asian rhetoric and the bullying of scientists. Some scientists said that politicisation could obstruct global efforts to suppress the virus and prepare for future pandemics. Political scientists Giuliano Bobba and Nicolas Hubé have argued that the pandemic strengthened populist politicians by providing an opportunity for them to promote policies such as tighter border controls, anti-elitism and restriction of public freedoms.

See also

Political polarisation
Mediatization (media)
Mediatization (or medialization) is a process whereby the mass media influence other sectors of society, including politics, business, culture, entertainment, sport, religion, and education. Mediatization is a process of change or a trend, similar to globalization and modernization, in which the mass media integrate into other sectors of society. Political actors, opinion makers, business organizations, civil society organizations, and others have to adapt their communication methods to a form that suits the needs and preferences of the mass media. Any person or organization wanting to spread messages to a larger audience has to adapt its messages and communication style to make them attractive for the mass media.

Introduction

The concept of mediatization still requires development, and there is no commonly agreed definition of the term. For example, the sociologist Ernst Manheim used mediatization as a way to describe social shifts that are controlled by the mass media, while the media researcher Kent Asp viewed mediatization as the relationship between politics, mass media, and the ever-growing divide between the media and government control. Some theorists reject precise definitions and operationalizations of mediatization, fearing that they would reduce the complexity of the concept and the phenomena it refers to, while others prefer a clear theory that can be tested, refined, or potentially refuted. The concept of mediatization is seen not as an isolated theory, but as a framework that holds the potential to integrate different theoretical strands, linking micro-level with meso- and macro-level processes and phenomena, and thus contributing to a broader understanding of the role of the media in the transformation of modern societies.

Technological developments from newspapers to radio, television, the Internet, and interactive social media helped shape mediatization. Other important influences include changes in the organization and economic conditions of the media, such as the growing importance of independent market-driven media and a decreasing influence of state-sponsored, public service, and partisan media.

Mass media influence public opinion and the structure and processes of political communication, political decision-making, and the democratic process. This political influence is not a one-way influence. While the mass media may influence government and political actors, politicians also influence the media through regulation, negotiation, or selective access to information. The increasing influence of economic market forces is typically seen in trends such as tabloidization and trivialization, while news reporting and political coverage diminish to slogans, sound bites, spin, horse race reporting, celebrity scandals, populism, and infotainment.

History

The Canadian philosopher Marshall McLuhan is sometimes associated with the founding of the field. He proposed that a communication medium, not the messages it carries, should be the primary focus of study. The Hungarian-born sociologist Ernest Manheim was the first to use the German word Mediatisierung to describe the social influence of the mass media, in a book published in 1933, though with little elaboration on the concept. Mediatisierung already existed in German but had a different meaning (see German mediatisation). In his Theory of Communicative Action, the German sociologist Jürgen Habermas used the word in 1981. Whether Habermas used the word in the old meaning or in the new meaning of media influence is debated.
The first appearance of the word mediatization in the English language may be in the English translation of Theory of Communicative Action. The Swedish professor of journalism Kent Asp was the first to develop the concept of mediatization into a coherent theory in his seminal dissertation, where he investigated the mediatization of politics. His dissertation was published as a book in Swedish in 1986. Kent Asp described the mediatization of political life, by which he meant a process whereby "a political system to a high degree is influenced by and adjusted to the demands of the mass media in their coverage of politics."

In the tradition of Kent Asp, the Danish media science professor Stig Hjarvard further developed the concept of mediatization and applied it not only to politics but also to other sectors of society, including religion. Hjarvard defined mediatization as a social process whereby the society is saturated and inundated by the media to the extent that the media can no longer be thought of as separated from other institutions within the society. Mediatization has since gained widespread usage in English, despite sounding awkward.

Mediatization theory is part of a paradigmatic shift in media and communication research. Following the concept of mediation, mediatization has become a significant concept for capturing how processes of communication transform society in large-scale relationships. While the early theory building around mediatization had a strong center in Europe, many American media sociologists and media economists made observations about the effects of commercial mass media competition on news quality, public opinion, and political processes. For example, David Altheide discussed how media logic distorts political news, and John McManus demonstrated how economic competition violates media ethics and makes it difficult for citizens to evaluate the quality of the news. The European theorists readily embraced Altheide's concept of media logic, and the two lines of research are now integrated into one standard paradigm.

Modern theorists believe a new form of mediatization is developing. This next phase has been dubbed "deep mediatization". Industry changes caused by mediatization only increase under deep mediatization and may quickly grow out of control.

Schools of Mediatization

Theorists have distinguished three theoretical schools of mediatization, described below.

Institutionalist

The leading scholars of this school of mediatization, David Altheide and Robert Snow, coined the term media logic in 1979. Media logic refers to the form of communication and the process through which media transmit and communicate information. The logic of media forms the fund of knowledge that is generated and circulated in society. Building on Marshall McLuhan, Altheide discusses the role of communication formats for the recognition, definition, selection, organization, and presentation of experience. A central thesis is that knowledge affects social activities more than wealth or force. A consequence of this is that communication technology influences power. For example, Gutenberg's printing press enabled the wide distribution of his Bible, which was a threat to the dominance of the Catholic Church.

Altheide has emphasized that social order is communicated. It has severe consequences if this communication is exaggerated and dramatized to fit the media logic. The media may create moral panics by exaggerating and misrepresenting social problems.
One example documented by Altheide is a media panic over missing children in the 1980s. The media gave the impression that many children were abducted by criminals, when in fact most of the children listed as missing were runaways or involved in custody disputes. The penchant of the media for emotional drama and horror may lead to gonzo journalism and perversion of justice. Altheide describes "gonzo justice" as a process where the media become active players in the persecution of perceived wrongdoers, where public humiliation replaces court trials without concern for due process and civil liberties. Gonzo journalism can have severe consequences for democracy and international relations when, for example, international conflicts are presented by dramatizing the evil of foreign heads of state, such as Muammar Gaddafi, Manuel Noriega, and Saddam Hussein.

Socio-constructivist

The social constructivist school of mediatization theory involves discussions at a high level of abstraction to embrace the complexity of the interaction between mass media and other fields of society. The theorists do not deny the relevance of empirical research into causal connections, but warn against a linear understanding of process and change. They want to avoid the extreme positions of either technological determinism or social determinism. Their approach is not media-centric in the sense of a one-sided approach to causality, but media-centered in the sense of a holistic understanding of the various intersecting social forces at work, allowing a particular perspective and emphasis on the role of the media in these processes. The concept of media logic is criticized with the argument that there is not one media logic but many media logics, depending on the context.

Andreas Hepp, a leading theorist of the constructivist school of mediatization theory, describes the role of the mass media not as a driving force but as a molding force. This force is not a direct effect of the material structure of the media. The molding force of the media only becomes concrete in different ways of mediation that are highly contextual. Hepp does not see mediatization as a theory of media effects, but as a sensitizing concept that draws our attention to fundamental transformations we experience in today's media environment. This concept provides a panorama for investigating the meta-process of interrelation between media communicative change and sociocultural change. These transformations are seen in three ways in particular: the historical depth of the process of media-related transformations, the diversity of media-related transformations in different domains of society, and the connection of media-related transformations to further processes of modernization.

Hepp deliberately avoids precise definitions of mediatization by using metaphors such as molding force and panorama. He argues that precise definitions may limit the complexity of the interrelations, where it is important to consider both the material and the symbolic domain. However, materialists argue that such a loosely defined concept may too easily become a matter of belief rather than a proper theory that can be tested.

The process of media change is coupled with technological change. The emergence of digital media has brought a new stage of mediatization, which can be called deep mediatization.
Deep mediatization is an advanced stage of the process in which all elements of our social world are intricately related to digital media and their underlying infrastructures, and where large IT corporations play a greater role.

Materialist

The materialist school of mediatization theory studies how society, to an increasing degree, becomes dependent on the media and their logic. The studies combine results from different areas of science to describe how changes in the media and society are interrelated. In particular, they focus on how the political processes in Western democracies are changing through mediatization. The mediatization of politics can be characterized by four different dimensions, according to the Swedish professor of political communication Jesper Strömbäck and the Swiss professor of media research Frank Esser:

- The first dimension refers to the degree to which the media constitute the most important source of information about politics and society.
- The second dimension refers to the degree to which the media have become independent from other political and social institutions.
- The third dimension refers to the degree to which media content and the coverage of politics and current affairs are guided by media logic rather than political logic. This dimension deals with the extent to which the media's needs and standards of newsworthiness, rather than those of political actors or institutions, are decisive for what the media cover and how they cover it.
- The fourth dimension refers to how far media logic or political logic guides political institutions, organizations, and actors.

This four-dimensional framework makes it possible to break down the highly complex process of the mediatization of politics into discrete dimensions that can be studied empirically. The relationship between these four dimensions can be described as follows: if the mass media provide the most important source of information and the media are relatively independent, then the media will be able to shape their contents to fit their demand for optimizing the number of readers and viewers, i.e. the media logic, while politicians have to adjust their communication to fit this media logic. The media are never completely independent, of course. They are subject to political regulation and dependent on economic factors and news sources. Scholars are debating where the balance of power between media and politicians lies.

The central concept of media logic contains three components: professionalism, commercialism, and technology. Media professionalism refers to the professional norms and values that guide journalists, such as independence and newsworthiness. Commercialism refers to the result of economic competition between commercial news media. The commercial criteria can be summarized as the least expensive mix of content that protects the interests of sponsors and investors while garnering the largest audience advertisers will pay to reach. Media technology refers to the specific requirements and possibilities that are characteristic of each of the different media technologies, including newspapers with their emphasis on print, radio with its emphasis on audio, television with its emphasis on visuals, and digital media with their emphasis on interactivity and instantaneousness.

Mediatization plays a key role in social change that can be defined by four tendencies: extension, substitution, amalgamation, and accommodation.
Extension refers to how communication technology extends the limits of human communication in terms of space, time, and expressiveness. Substitution refers to how media consumption replaces other activities by providing an attractive alternative or simply by consuming time that might otherwise have been spent on, for example, social activities. Amalgamation refers to how media use is woven into the fabric of everyday life so that the boundaries between mediated and non-media activities and between mediated and social definitions of reality are becoming blurred. Accommodation refers to how actors and organizations of all sectors of society, including business, politics, entertainment, sport, etc., adapt their activities and modes of operation to fit the media system.

There is a vigorous discussion about the role of mediatization in society. Some argue that we live in a mediatized society where mass media deeply penetrate all spheres of society and are complicit in the rising political populism, while others warn against inflating mediatization to a meta-process or a superordinate process of social change. The media should not be seen as powerful agents of change, because it is rare to observe the consequences of intentional actions by the media. The social consequences of mediatization are more often to be seen as unintended consequences of the media structure.

Influence of media technology

Newspapers

Newspapers have been available since the 18th century and became more widespread in the early 20th century due to improvements in printing technology (see history of journalism). Four typical types of newspapers can be distinguished: popular, quality, regional, and financial newspapers. The popular or tabloid newspapers typically contain a high proportion of soft news, personal focus, and negative news. They often use sensationalism and attention-catching headlines to increase single-copy sales from newsstands and supermarkets, while quality newspapers are generally considered to have a higher quality of journalism. Relying more on subscriptions than on single-copy sales, they have less need for sensationalism. Regional newspapers have more local news, while financial newspapers have more international news of interest to their readers. Early newspapers were often partisan, associated with a particular political party, while today they are mostly controlled by free market forces.

Telegraph

The introduction of the electric telegraph in the US in the mid-19th century significantly influenced the contents of newspapers, giving them easy access to national news. This increased voter turnout for presidential elections.

Radio

When radio became commonly available, it became an efficient medium for news, education of the public, and propaganda. Exposure to radio programs with educational content significantly increased children's school performance. Campaigns about the health effects of tobacco smoking and other health issues have been effective. The effects of radio programs may be unintended. For example, soap opera programs in Africa that portrayed attractive lifestyles affected people's norms and behaviors and their political preferences for redistribution of wealth. The radio can also facilitate political activism. Radio stations targeting a black audience had a strong effect on political activism and participation in the civil rights movement in the southern US states in the 1960s. The radio could also be a strong medium for propaganda in the years before television became available.
The Roman Catholic priest Charles Coughlin in Michigan embraced radio broadcasting when radio was a new and rapidly expanding technology during the 1920s. Coughlin initially used the new possibility of reaching a mass audience for religious sermons, but after the onset of the Great Depression, he switched to mainly voicing his controversial political opinions, which were often antisemitic and fascistic. The radio was also a powerful tool for propaganda in Nazi Germany in the 1930s and during the war. The Nazi government facilitated the distribution of cheap radio receivers (Volksempfänger), which enabled Adolf Hitler to reach a large audience through his frequent propaganda speeches, while it was illegal for Germans to listen to foreign radio stations. In Italy, Benito Mussolini used the radio for similar propaganda speeches.

Television

The social impact of radio was reduced after the war when television outcompeted the radio. Kent Asp, who studied the interaction of television with politics in Sweden, has identified a history of increasing mediatization. Politicians recognized in the 1960s that television had become a predominant channel for political communication. The gradual acclimatization, adjustment, and adoption of media logic in political communication took place over the following decades. By the 2000s, the political institutions had almost completely integrated the logic of television and other mass media into their procedures.

Television outcompeted newspapers and radio and crowded out other activities such as play, sports, study, and social activities. This has led to lower school performance for children who have access to entertainment TV programs. TV viewers tend to imitate the lifestyle of role models that they see on entertainment shows. This imitation has resulted in lower fertility and higher divorce rates in various countries. Television delivers strong messages of patriotism and national unity in China, where the media are state-controlled.

Toys/Play

The mediatization of toys in the United States can be traced back to the post-World War II era of the 1950s. Advertisers saw the rise of children's television programming as an opportunity to utilize a new medium to market toys. Toys became heavily promoted in the media through television. Commercialization of children's television programs increased in the 1980s after the deregulation of American television. Over time, this led to the creation of popular toy brands and characters, such as G.I. Joe and Barbie, who were given their own television shows and movies to sell more toys. With the rise of the Internet, tablets, smartphones, and other Internet-connected devices, the toy and media industries have become even more closely linked, giving companies even greater opportunities to market their toys to children with the help of mediatization.

Internet

The advent of the Internet has created new opportunities and conditions for traditional newspapers and online-only news providers. Many newspapers now publish their news both on paper and online. This shift has enabled a more diverse assembly of breaking news, longer reports, and traditional magazine journalism. The increased competition in a diversified media market has led to more human interest and lifestyle stories and less political news, especially in the online versions of the newspapers.
Social media

Social media, such as Facebook, Twitter, and YouTube, have enabled a more interactive form of mass communication. The new form of Internet media that allows user-generated content has been called Web 2.0. The possibilities for user involvement have increased opportunities for networking, collaboration, and civic engagement. Protest movements, in particular, have benefited from an independent communication infrastructure.

The circulation of messages on social media relies, to a great extent, on users who like, share, and re-distribute messages. This kind of circulation is controlled less by the logic of market economics and more by the principles of memetics. Messages are selected and recirculated based on a new set of criteria different from the selection criteria of newspapers, radio, and television. People tend to share psychologically appealing and attention-catching messages. Social media users are remarkably bad at evaluating the truth of the messages they share. Studies show that false messages are shared more often than true messages because false messages are more surprising and attention-catching. This spreading of false information has led to the proliferation of fake news and conspiracy theories on social media. Attempts to counter misinformation by fact-checking have had limited effect.

People prefer to follow the Internet forums, pages, and groups they agree with. At the same time, the media prefer topics that are already popular. This has led to the large-scale occurrence of echo chambers and filter bubbles. A consequence is that the political arena has become more polarized because different groups of citizens attend to different news sources, though the evidence of this effect is mixed.

Other forms of communication channels

Online political participation may affect the political standpoints of frequent media consumers due to mass mediatization, which is becoming increasingly prevalent. Blogs, videos, and websites are all examples of alternative communication channels, as opposed to traditional media such as newspapers and television. Through blog, video, and website communication, individuals can gain a further connection to political institutions by freely expressing their views and opinions. This communication is possible because the Internet brings elites and members of the public closer together. Any ordinary person can send e-mails to a politician or a political journalist and expect a response, or even generate millions of impressions upon regular viewers on YouTube or elsewhere on the Internet by publishing their opinions. Through these alternative means of communication, many people find that online participation with politics, and even with high-status politicians, is becoming increasingly common and accessible.

Expressive communication through the Internet proves to be more effective than communication through traditional sources, as prosumers (users who both produce and consume media) are becoming powerful through their reach. However, these alternative channels also make it easier for false information to spread online through unreliable sources that anybody can post on, such as TikTok, which can damage political participation or corrupt it through untrue ideas or concepts. Online participation has also led to in-person political activities and the contribution of political activists.
An example is Howard Dean's Blog for America, which served as a forum for people from various backgrounds to get involved and coordinate events in the 2004 election. Online communication breeds offline communication through activism organized online, which takes place in the real world.

Physical resources

Media materialism is a theory that addresses the media's impact on the physical environment. Media materialism covers three aspects:

- The consumption of natural resources for industrial production of modern communication technology
- The energy consumption of communication technology in residential and institutional sectors
- The waste that is created by discarded cell phones, televisions, computers, etc.

Influence of market forces

The economic mechanisms that influence the mass media are quite complex because commercial mass media are competing on many different markets at the same time:

- Competition for consumers, i.e. readers, listeners, and viewers
- Competition for advertisers and sponsors
- Competition for investors
- Competition for access to information sources, such as politicians, experts, etc.
- Competition for content providers and access rights, e.g. transmission rights for sports events

The economists Carl Shapiro and Hal Varian wrote that information commodity markets don't work. There are several reasons for this. An important characteristic that makes information markets different from most other markets is that the fixed costs are high while the variable costs are low or zero. The fixed costs are the costs of producing content. This includes journalistic work, research, production of educational content, entertainment, etc. The variable costs are the marginal costs of adding one more consumer. The costs of broadcasting a TV show are the same whether there is one viewer or a million viewers, hence the variable costs are zero. In general, the variable costs for digital media are virtually zero because information can be copied at very low cost. The variable costs for newspapers are the costs of printing and selling one more copy, which are low but not zero. (A simple numerical sketch of this cost structure is given at the end of this section.)

Commercial mass media are competing for a limited supply of advertising money. The more media companies that compete for advertising money, the lower the price of advertising, and the less money each company has for covering the fixed costs of producing content. Free competition in a media market with many competitors can lead to ruinous competition where the revenue for each company is hardly enough to produce content of the lowest possible quality. The news media are not only competing for advertisers with other news media; they are also competing for advertisers with other companies that mainly facilitate communication rather than produce information, such as search engines and social media. IT companies such as Google and Facebook are dominating the advertising market, leaving less than half of the revenue for news media. The strong dependence on advertising money forces commercial mass media to mainly target audiences that are profitable to the advertisers. They tend to avoid controversial content and issues that the advertisers dislike.

The competition for access to politicians, police, and other important news sources can enable these sources to manipulate the media by providing selective information and by favoring those media that give them positive coverage.
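The cost structure described above can be made concrete with a small numerical illustration. The following Python sketch is not from the source and uses purely hypothetical numbers; it only illustrates the stated argument that high fixed costs and near-zero variable costs make the average cost per audience member fall with audience size, and that a fixed pool of advertising money split among more competitors leaves less per outlet to cover content production.

# Minimal illustrative sketch (hypothetical numbers, not from the source).
# High fixed cost of producing content, near-zero marginal cost per viewer,
# and a fixed advertising budget split evenly among competing outlets.

FIXED_COST = 1_000_000          # assumed cost of producing content for one period
VARIABLE_COST_PER_VIEWER = 0.0  # assumed marginal cost of one additional broadcast viewer

def average_cost(audience: int) -> float:
    """Average cost per audience member: fixed cost spread over the audience."""
    return (FIXED_COST + VARIABLE_COST_PER_VIEWER * audience) / audience

def revenue_per_outlet(ad_budget: float, competitors: int) -> float:
    """Toy model: a fixed advertising budget divided evenly among competing outlets."""
    return ad_budget / competitors

if __name__ == "__main__":
    for audience in (10_000, 100_000, 1_000_000):
        print(f"audience {audience:>9,}: average cost per person = {average_cost(audience):8.2f}")
    for n in (2, 5, 20):
        print(f"{n:>2} competitors: ad revenue per outlet = {revenue_per_outlet(5_000_000, n):>12,.0f}")

In this toy model the average cost per person falls from 100 to 1 as the audience grows from ten thousand to a million, while the revenue available to each outlet drops well below the fixed cost once twenty competitors share the same hypothetical advertising pool, which is the mechanism behind the ruinous competition described above.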
Competition between TV stations for transmission rights to the most popular sports events, the most popular entertainment formats, and the most popular talk show hosts can drive up prices to extreme levels. This is often a winner-takes-all market where, for example, a pay TV channel is able to outbid the public broadcast channels. The result is that a popular sports event becomes available to fewer viewers at higher prices than it would be if competition were limited. Thus, competition on media markets is very different from competition on other markets with higher variable costs.

Many studies have shown that fierce competition between news media results in trivialization and poor quality. We see a large amount of cheap entertainment, gossip, and sensationalism, and very little civic affairs coverage and thorough journalistic research. Newspapers are particularly affected by the increasing competition, resulting in lower circulation and lower journalistic quality. Classical economic theory would predict that competition leads to diversity, but this is not always the case with media markets. Moderate competition may lead to niche diversification, but there are many examples where fierce competition instead leads to wasteful sameness. Many TV channels produce the same kind of cheap entertainment that appeals to the largest possible audience.

The high fixed costs favor large companies and large markets. Unregulated media markets often lead to concentration of ownership, which can be horizontal (the same company owning multiple channels) or vertical (content suppliers and network distributors under the same owner). Economic efficiency is improved by the concentration of ownership, but it may reduce diversity by excluding unaffiliated content suppliers. Unregulated markets tend to be dominated by a few large companies, while smaller firms may occupy niche positions. Large markets are characterized by monopolistic competition where each company offers a slightly different product. The cable TV companies are differentiated along political lines in the USA, where the fairness doctrine no longer applies. We might expect that a company running multiple broadcast channels would produce different content on the different channels to avoid competing with itself, but the evidence shows a mixed picture. Some studies show that market concentration increases diversity and innovation, while other studies show the opposite. A market where multiple companies own one TV channel each does not guarantee diversity either. On the contrary, we often see wasteful duplication where everybody is trying to reach the same mainstream audience with the same kind of programs.

The situation is different for publicly funded TV channels. The non-commercial Danish national TV, for example, has multiple broadcast channels sending different kinds of content in order to meet its public service obligation. European countries have a tradition of public service radio and television that is funded fully or partially by government subsidies or by a mandatory license fee for everybody who has a radio or TV. Historically, these public service broadcasters have delivered high-quality programmes including news based on thorough journalistic investigation, as well as educational programmes, public information, debate, special programs for minorities, and entertainment. However, broadcasters who depend on government funding or mandatory license payments are vulnerable to political pressure from the incumbent government.
Some media are protected from political pressure through strong charters and arms-length oversight organizations, while those with weaker protection are more influenced by pressure from politicians. The public service broadcasters in several European countries initially had a monopoly on broadcasting, but the strict regulation was relaxed in the late 1980s and early 1990s. Competition from commercial radio and TV stations had a strong impact on the public service broadcasters. In Greece, the new competition from commercial TV led to lower quality and less diversity, contrary to the expectation of the economists. The contents of the public channels became similar to the commercial channels, with less news and more entertainment. In the Netherlands, the diversity of TV programs increased in periods with moderate competition but decreased in periods with ruinous competition. In Denmark, the degree of dependence on advertising and private investors influenced the amount of trivialization, but even a publicly financed, advertisement-free TV channel became more trivialized as a result of competition with commercial channels. In Finland, the government has avoided ruinous competition by strict regulation of the TV market. The result is more diversity.

Sociocultural change

The concept of mediatization focuses not only on media effects but on the interrelation between changes in media communication on the one hand and sociocultural changes on the other. Some aspects of sociocultural change are reviewed in the following sections.

Crime, disaster, and fear

It is a common adage that fear sells. News media often use fearmongering to attract readers, listeners, and viewers. Stories about crime, disaster, dangerous diseases, etc. have a prominent place in many news media. Historically, the tabloid newspapers have relied heavily on crime news in order to make customers buy today's newspaper. This strategy has been copied by the electronic media, especially when competition is fierce. The news media have often created moral panics by exaggerating minor social problems or even completely imaginary dangers, as seen, for example, in the satanic cult scare. The scare stories may have political consequences, even if the media have only economic motives. Politicians often implement draconian laws and tough-on-crime policies because they feel compelled to react to the perceived dangers.

In a larger perspective, the high affinity of many news media for crime and disaster has led to a culture of fear where people take unnecessary precautions against minor or unlikely dangers while paying less attention to the much higher risks of, for example, lifestyle diseases or traffic accidents. Psychologists fear that the heavy exposure to crime and disaster in the media is fostering a mean world syndrome, causing depression, anxiety, and anger. The perception of the world as a dangerous place may lead to authoritarian submission, conformism, and aggression against minorities, according to the theory of right-wing authoritarianism. The culture of fear may have a strong influence on the whole culture and political climate. A widespread perception of collective danger can push the culture and politics in the direction of authoritarianism, intolerance, and bellicosity, according to regality theory. This is an unintended consequence of the economic competition between the news media. Law enforcement agencies have learned to cooperate with the mass media to dramatize crime in order to promote their own agenda.
It is often suspected that politicians actively take advantage of the media's proclivity for fearmongering in order to promote a particular agenda. Warnings about possible terror attacks have increased public support for the US president, and the fearful sentiments after the September 11 terror attacks were used to garner support for the wars in Afghanistan and Iraq.

Mediatization of Ignorance

Unlike other forms of mediatization that focus on spreading knowledge, the mediatization of ignorance involves the mediatization of unknowns (known unknowns). The mediatization of ignorance occurs when information that has not yet been vetted, fully understood, or confirmed by experts moves through various media channels and is presented to audiences as fact. Three phases are found in the mediatization of ignorance: the revelation, the acceleration, and the irredeemable phases. During the revelation phase, information that experts have not yet fully vetted is revealed to the media; however, communicative leaders such as scientists, health professionals, or researchers still have control of the narrative. During the acceleration phase, the information spreads rapidly and becomes, regardless of validity, what the audience begins to view as reality. Communicative leaders lose control of the narrative during this phase. Finally, during the irredeemable phase, experts lose all control of the narrative, even after gathering scientific evidence to prove that the non-vetted information was false.

An example of the three phases of the mediatization of ignorance can be found in the early months of the COVID-19 pandemic, surrounding the drug hydroxychloroquine (HCQ). During the revelation phase, the media heard that HCQ could be a potential treatment for COVID-19 based on limited initial evidence. This revelation sparked media and audience interest. The topic of HCQ was later boosted into the acceleration phase after Donald Trump endorsed the drug, even though evidence of its effectiveness was still lacking. Due to the reports of success and a celebrity endorsement, there was a temporary shortage of HCQ caused by high demand based on the drug's perceived effectiveness. Even though later research and trials revealed little to no effectiveness of HCQ against the COVID-19 virus, the irredeemable phase of the mediatization of ignorance had already been reached. Because of this, the link between the ineffective drug and COVID-19 had already been established and was believed to be true by a majority of audiences.

Democracy and news media

A democracy can only function properly if voters are well informed about candidates and political issues. It is generally assumed that the news media serve the function of informing voters. However, since the late 20th century there has been a growing concern that voters may be poorly informed because the news media focus more on entertainment and gossip and less on serious journalistic research on political issues.
The media professors Michael Gurevitch and Jay Blumler have proposed a number of functions that the mass media are expected to fulfill in a democracy:

- Surveillance of the sociopolitical environment
- Meaningful agenda setting
- Platforms for an intelligible and illuminating advocacy
- Dialogue across a diverse range of views
- Mechanisms for holding officials to account for how they have exercised power
- Incentives for citizens to learn, choose, and become involved
- A principled resistance to the efforts of forces outside the media to subvert their independence, integrity, and ability to serve the audience
- A sense of respect for the audience member, as potentially concerned and able to make sense of his or her political environment

This proposal has inspired much discussion of whether the news media actually fulfill the functions that a well-functioning democracy requires. Commercial mass media are generally not accountable to anybody but their owners, and they have no obligation to serve a democratic function. They are controlled mainly by economic market forces. Fierce economic competition may force the mass media to divert themselves from any democratic ideals and focus entirely on how to survive the competition.

Quality or elite newspapers still provide serious political news, while tabloid newspapers and commercial TV stations deliver more soft news and entertainment. The quality of the news media differs between countries, depending on regulation and market structure. However, even the quality newspapers dumb down their contents in order to target more readers when competition is fierce.

Public service media have an obligation to provide reliable information to voters. Many countries have publicly funded radio and television stations with public service obligations, especially in Europe and Japan, while such media are weak or non-existent in other countries, including the USA. Several studies have shown that the stronger the dominance of commercial broadcast media over public service media, the less policy-relevant information in the media and the more focus on horse race journalism, personalities, and the peccadillos of politicians. Public service broadcasters are characterized by more policy-relevant information and more respect for journalistic norms of impartiality than the commercial media. However, the trend of deregulation has put the public service model under increased pressure from competition with commercial media.

Many journalists would prefer to hold their professional standards high, but the competition for audience forces them to deliver more soft news and entertainment and less substantial public affairs coverage. Politics has become popularized to such a degree that the lines between politics and entertainment are becoming increasingly blurred. At the same time, commercialization has made the news media vulnerable to external influence and manipulation.

The tabloidization and popularization of the news media is seen in an increasing focus on human examples rather than statistics and principles. The ability to find effective political solutions to social problems is hampered when problems tend to be blamed on individuals rather than on structural causes. This person-centered focus may have far-reaching consequences not only for domestic problems but also for foreign policy, when international conflicts are blamed on foreign heads of state rather than on political and economic structures.
A strong focus on fear and terrorism has allowed military logic to penetrate public institutions, leading to increased surveillance and the erosion of civil rights. There is more focus on politicians as personalities and less focus on political issues in the popular media. Election campaigns are covered more as horse races and less as debates about ideologies and issues. The dominant focus on spin, conflict, and competitive strategies has made voters perceive politicians as egoists rather than idealists. This fosters mistrust and a cynical attitude towards politics, less civic engagement, and less interest in voting.

Bargaining between political parties becomes more difficult under media focus because necessary concessions will make individual negotiators lose credibility. Negotiations require an atmosphere of privacy which allows for compromises that can be communicated to the public as collective decisions without indicating any winner or loser. A considerable decline in the quantity and quality of negotiation outcomes seems likely due to this incompatibility between news media logic and political bargaining logic. The responsiveness and accountability of the democratic system are compromised when a lack of access to substantive, diverse, and undistorted information handicaps the citizens' ability to evaluate the political process.

Formal ties between newspapers and political parties were common in the first half of the 20th century, but are rare today. Instead, politicians must adapt to the media logic. Many politicians have found ways to manipulate the media to serve their own ends. They often stage events or leak information with the sole purpose of getting the media to cover their agenda.

The fast pace and trivialization in the competitive news media hamper the political debate. Thorough and balanced investigation of complex political issues does not fit into this format. Political communication is characterized by short time horizons, short slogans, simple explanations, and simple answers. This is conducive to political populism rather than serious deliberation.

The Italian businessman and populist politician Silvio Berlusconi took advantage of the fact that he owned many of the commercial TV stations. This secured him favorable coverage that enabled him to become prime minister for a total of nine years. Studies in Italy show that individuals exposed to entertainment TV as children were less cognitively sophisticated and less civic-minded as adults. Exposure to educational content, on the other hand, improved cognitive abilities and civic engagement.

People form habits around their media consumption and often stick to the same media. This is an easy way to minimize the cognitive effort of information processing. An experiment in China showed that consumers who were given access to uncensored news tended to stick to their old habits and watch the state-censored news media. However, after being given incentives to watch the uncensored news, they kept preferring the uncensored news, which led to persistent changes in their knowledge, beliefs, and attitudes. Some commentators have presented an optimistic view, arguing that democracy is still functioning despite the shortcomings of the media, while others deplore the rise of political populism, polarization, and extremism that the popular media seem to be contributing to.
Many media scholars have discussed non-commercial news media with public service obligations as a means to improve the democratic process by providing the kind of political content that a free market does not provide. The World Bank has recommended public service broadcasting in order to strengthen democracy in developing countries. These broadcasting services should be accountable to an independent regulatory body that is adequately protected from interference from political and economic interests.

Democracy and social media

The emergence of the internet and social media has profoundly altered the conditions for political communication. Social media have given ordinary citizens easy access to voice their opinions and share information while bypassing the filters of the large news media. This is often seen as an advantage for democracy. Social media make it possible for politicians to get immediate feedback from citizens on their policy proposals, but they also make it difficult for politicians and business leaders to hide information. The new possibilities for communication have fundamentally changed the way social movements and protest movements operate and organize. The internet and social media have provided powerful new tools for democracy movements in developing countries and emerging democracies, enabling them to organize protests and to produce visual events suitable for the media.

Social media and search engines are financed mainly by advertising. They are able to target advertisements specifically to the population segments that the advertisers select. The fact that these media act like marketing companies and consultants may compromise their neutrality. Another problem is that social media have no truth filters. The established news media have to guard their reputation as trustworthy, while ordinary citizens may post unreliable information.

Echo chambers may emerge when people share unchecked information with groups of like-minded people. Studies find evidence of clusters of people with the same opinions on social media like Facebook. People tend to trust information shared by their friends. This may lead to selective exposure to partisan opinions, but several studies show that people are exposed to a more diverse set of news and opinions on social media than on traditional news media.

False stories are shared more than true stories, as discussed above. Conspiracy theories, whether true or false, are shared on social media because people find them interesting, exciting, and entertaining. The proliferation of conspiracy beliefs may undermine public trust in the political system and public officials. A noteworthy example is the mistrust of health officials during the COVID-19 pandemic.

Some studies indicate that there are political asymmetries in responses to misinformation due to differences in personality characteristics and media structures. Psychological traits such as close-mindedness, uncertainty avoidance, and resistance to change are more common among conservatives than among liberals and moderates. These traits, combined with more selective media use and the more insular nature of the conservative media ecosystem, make conservatives more likely than liberals to share and believe misinformation. Liberal citizens are more likely to share fact-checking information than conservatives. Furthermore, liberal and moderate media are more likely than conservative media to fact-check their stories and to retract false stories.
State regulation of social media is a problem for free speech. Instead, major social media have implemented self-regulation in order to defend their reputation. Social media often sanction hate speech, while general misinformation is more difficult to combat. The media's own filters are often unreliable and vulnerable to manipulation. Some social media publish fact-checking information in order to counter misinformation. Studies of the effects of fact-checking have given mixed results. Some studies find that fact-checking reduces beliefs in misinformation. Other studies find that corrective information influences knowledge but not voting intentions. Fact-checking may even be counterproductive when people do not trust the fact-checking organizations or when they construct counter-arguments. Some observers have proposed media literacy education as a means to make people less susceptible to misinformation. Research suggests that media literacy education is most effective when it includes personal feedback.

Social media are very vulnerable to manipulation because it is possible to set up fake accounts. Various propaganda agencies secretly set up large numbers of fake social media accounts pretending to be ordinary people. The fake accounts are often operated by automated computers programmed to act like real people, the so-called bots. Such fake accounts and bots are used for spreading and sharing propaganda, disinformation, and fake news. Business operators may spread disinformation about competitors or stock markets; political organizations may try to influence public opinion in political matters; and military intelligence organizations may use the spreading of disinformation as a means of information warfare. For example, the Russian web brigades or troll farms disseminated large amounts of fake news in order to influence the election of US president Donald Trump in 2016, according to an intelligence report. See also Russian interference in the 2016 Brexit referendum. Bots have also been highly involved in spreading misinformation about COVID-19.

Mediatization of Politics

The mediatization of politics focuses on the transformative effect media exert on politics. It is argued that there are four dimensions of the transformation of politics. The first dimension focuses on media as the source of political information; if politics is highly mediatized, a public's only way of learning about new laws and policy is through the media. The second dimension is concerned with the media's independence from politics and whether or not the media are able to speak out against political figures. The third dimension focuses on which logic rules the media: media logic or political logic. If politics is low to moderately mediatized, political logic (media coverage of laws and policy) will be favored, whereas if politics is highly mediatized, media logic (coverage of entertaining and dramatized political stories) will be favored. Finally, the fourth dimension focuses on whether political figures themselves favor media or political logic.

Political populism

Populism refers to a political style characterized by anti-establishment and anti-elite rhetoric and a simplified, polarized definition of political issues. The establishment is often evoked in populist rhetoric as the source of crisis, breakdown, or corruption. This can take the form of the denial of expert knowledge and the championing of common sense against the bureaucrats.
Much of the appeal of populists comes from their disregard for “appropriate” ways of acting in the political realm. This includes a tabloid style with the use of slang, political incorrectness, and being overly demonstrative and colorful, as opposed to the elite behaviors of rigidness, rationality, and technocratic language. Citizens with populist attitudes have a preference for tabloid media content that simplifies issues in binary “us” versus “them” oppositions. It is often difficult for populist politicians to get their messages through the mainstream media, especially when these messages contain unverified claims or socially inappropriate speech. The internet has provided populists with new communication channels that match their needs for unfiltered communication. Populists sometimes rely on borderline truths, forged content, manipulative speech, and unverified claims that would not pass the gatekeepers at reputable news media. The availability of independent internet media and social media has thus opened a door to the spreading of biased information, selective perception, confirmation bias, motivated reasoning, and inclinations to reinforce in-group identities in echo chambers. This has paved the way to a rise in populism around the world. Another factor contributing to the rise of populism is the concentration of ownership of internet news media. This enables the dissemination of attention-catching content targeted at specific audience segments in a fragmented market. The content that is most profitable happens to also be the most emotional, incendiary, polarizing, and divisive messages. This contributes to inflating the loudest and most antagonistic voices and intensifying social conflicts by distorting facts and limiting exposure to competing ideas. Right-wing populism is characterized by short and emotional or scandalizing messages without sophisticated theorizing. The communication is controlled by strong charismatic leaders in an asymmetric top-down manner. The social media pages of populist politicians are often heavily moderated to suppress critical comments. The type of reasoning is based mostly on anecdotal evidence and emotional narratives, while abstract arguments based on statistics or theory are dismissed as elitist. Left-wing populism is less top-down controlled and more engaging than right-wing populism. For example, the Spanish party Podemos is relying on a media strategy of viral dissemination of emotional, controversial, and provocative messages. Populism has led to strong polarization in many countries. The lack of shared world view and agreed-upon facts is an obstacle to meaningful democratic dialogue. Extreme political polarization may undermine the trust in democratic institutions, leading to erosion of civil rights and free speech and in some cases even reversion to autocracy. Sport Sport is a prime example of mediatization. The organization of sports is highly influenced by the mass media, and the media in turn are influenced by sports. Sport has historically had a very close relationship with mass media through a parallel development of sports organizations and sports journalism. Big sports events, such as the Tour de France and the UEFA Champions League, were originally invented and initiated by newspapers. The mass media are important for sports organizations. The media help attract new participants, encourage spectators, and attract sponsors, advertisers, and investors. 
Broadcasting of sports events is important for sports organizations as well as for television stations. This has led to increasing commercialization of sports since the 1980s. We have seen the development of close partnerships between a relatively small number of highly professional sports organizations and big broadcast organizations. The rules of the games, as well as tournament structures etc., have been adjusted to fit the entertainment focus of television and other news media. The commercialization of elite sport has led to an increased focus on individual athletes and individual teams through press photos, interviews, merchandise, and fan culture leading to the rise of stardom and extremely high salaries. The most popular sports can attract huge amounts of money through sponsorship and transmission rights, while a majority of less popular sports are marginalized and find it hard to attract funding. The most popular athletes, in particular, are traded or transferred at extreme prices. Popular sports events are used not only for advertising products and companies, but also for promoting countries through the organization of large international sporting events, such as the olympic games, world championships, etc. The commercialization and professionalization of sports has led to an increasing integration of sport enterprises and entertainment media, and a growing industry involving professional coaches, consultants, biomechanical experts, etc. These developments have led to new ethical concerns about the erosion of the spirit of amateurism and the ideals of fair play. Athletes in elite sports are often forced to play to the extreme limits of the rules in order to maximize their chances of winning. This makes them poor role models for amateurs and fans. The large sums of money at stake increase the temptations to various forms of cheating, such as unfair play, doping, match fixing, bribery, etc. Among the concerns are also sponsorships with unhealthy products and the gambling industry. The competition for exclusive transmission rights to popular sports events has driven up prices to such levels that several countries have implemented anti-siphoning laws to make sure that consumers have free access to watch these events. Religion The application of mediatization theory to the study of religion was initiated by Stig Hjarvard with a main focus on Northern Europe. Hjarvard described how the media have gradually taken over many of the social functions that used to be performed by religious institutions, such as rituals, worship, mourning, celebration, and spiritual guidance. This can be considered part of a general process of modernization and secularization. Religious activities are less controlled and organized by the church and instead subsumed under the media logic and delivered through genres like news, documentaries, drama, comedy, and entertainment. The mass media and the entertainment industry are combining aspects of folk religion such as trolls, vampires, and magic with the iconography and liturgy of institutionalized religions into a mixture that Hjarvard calls banal religion. Television shows depicting astrology, séances, exorcism, chiromancy, etc. are legitimizing superstition and supporting an individualization of belief while the church's control over access to religious texts is weakened. Such TV shows, as well as novels and films like Harry Potter and The Lord of the Rings, and computer games such as the World of Warcraft are all sources of religious imagination. 
Hjarvard argues that these representations of banal religion are not irrelevant, but fundamental in the production of religious thoughts and feelings, where institutionalized religious texts and symbols arise as secondary features, in a sense as rationalization after the fact. David Morgan criticizes Hjarvard's concept of mediatization for being limited to a specific historical context. Morgan argues that the mediatization of religion is not necessarily connected with modernization and secularization. Historically, communication through music, art, and writing has had a degree of ubiquity similar to the modern mass media and has shaped human society in distinct ways. Religious life has always been mediated when people believe that séances communicate with spirits of the dead, prayers communicate with deities, icons establish connection to the heavenly saint, and sacred objects facilitate interaction between human actors and the divine. Morgan shows how British evangelical printed texts in the late eighteenth and early nineteenth centuries shaped religious life. These texts were not endorsed by the state or the church, but were still explicitly Christian. This is an example of mediatization that was not connected with secularization or modernization. Morgan agrees, however, that mediatization remains a useful concept for describing the effects of certain forms of media use. The intrigue or mystery that many find in fiction, exotic religions, occultism, astrology, dreams, etc. (what Hjarvard calls banal religion) suggests that images, music, and objects carry a potency that operates independently of explicit or institutional religion. Studies of religious media in other parts of the world confirm that mediatization is not necessarily connected with secularization. Televangelism has a large influence on religious life in North America. The American concept of televangelism has been copied in many parts of the world and adopted not only by Christian evangelists, but also by Islamic, Buddhist, and Hindu preachers. This has led to increased competition between established religious institutions and self-styled televangelists, between different sects, and between different religions. Televangelism is a powerful medium for fundraising, which has enabled televangelists to establish large business enterprises combining religious activity with entertainment and trade. The internet has opened many new possibilities for religious communication. Memorial sites on the internet have supplemented or replaced physical cemeteries. The Dalai Lama performs religious ceremonies online, which helps Tibetan refugees and the diaspora recreate religious practices outside of Tibet. Many religious communities around the world are using interactive internet media to communicate with believers, transmit services, give directions and advice, answer questions, and even engage in dialogues between different religions. The social media allow a more democratic and less centralized religious dialogue. Sharing of religious texts, images, and videos on social media is often encouraged by religious communities. Unlike the traditional commercial information economy based on copyright, some televangelists in Singapore deliberately share their media products without intellectual property rights in order to allow their followers to share these works on social media and make new combinations, compositions, and mash-ups, such that new ideas can develop and thrive.
Subcultures Hjarvard and Peterson summarize the media's role in cultural change: "(1) When various forms of subcultures try to make use of media for their own purposes, they often become (re-)embedded into mainstream culture; (2) National cultural policies often serve as levers for increased mediatization; (3) Mediatization involves a transformation of the ways in which authority and expertise are performed and reputation is acquired and defended; and (4) Technological developments shape the media's affordances and thus the particular path of mediatization." Mediatization research explores the ways in which media are embedded in cultural transformation. For example, "tactical" mediatization designates the response of community organizations and activists to wider technological changes. Kim Sawchuk, professor in Communication Studies, worked with a group of elderly who managed to retain their own agency in this context. For the elderly, the pressure to mediatize comes from various institutions that are transitioning to online services (government agencies, funding, banks, etc.), among other things. A tactical approach to media is one that comes from those who are subordinates within these systems. It means to implement work-arounds to make the technologies work for them. For example, in the case of the elderly group she studies, they borrowed equipment to produce video capsules explaining their mandate and the importance of this mandate for their communities, which allowed them to reach new audiences while keeping the tone and style of face-to-face communication they privilege in their day-to-day practice. Doing this, they also subverted expectations about the ability of the elderly to use new media effectively. Another example of study is one that is focused on the media-related practices of graffiti writers and skaters, showing how media integrate and modulate their everyday practices. The analysis also demonstrates how the mediatization of these subcultural groups brings them to become part of mainstream culture, changes their rebellious and oppositional image and engages them with the global commercialization culture. Another example is how media's omnipresence informs the ways Femen's protests may take place on public scenes, allow communication between individual bodies and a shared understanding of activist imaginary. It aims to analyse how their practices are moulded by the media and how these are staged in manners that facilitate spreadability. See also Attention economy Concentration of media ownership Digital citizen Echo chamber (media) Mass communication Media culture Media literacy Media psychology Mediacracy Media effects Media studies Mediated Stylistics Social aspects of television References Media studies Sociological terminology Political science theories
National power
National power is defined as the sum of all resources available to a nation in the pursuit of national objectives. Assessing the national power of political entities was already a matter of relevance in classical antiquity, the Middle Ages and the Renaissance, and it remains so today.
Classics
Shang Yang, Guan Zhong and Chanakya widely discussed the power of the state. Many other classical authors, such as Mozi, Appian and Pliny the Elder, also addressed the subject. Herodotus described the sources of Babylon's power. Hannibal's considerations on the matter are recorded by Livy.
Elements of national power
National power stems from various elements, also called instruments or attributes; these may be put into two groups based on their applicability and origin: "natural" and "social".
Natural:
Geography
Resources
Population
Social:
Economic
Political
Military
Psychological (national morale)
Informational
Geography
Important facets of geography such as location, climate, topography, size and resources play major roles in the ability of a nation to gain national power. The relation between foreign policy and geographic space gave rise to the discipline of geopolitics, including the concepts of lebensraum and "grossraum". The latter is a region with natural resources sufficient for autarky. Space has a strategic value. Russia's size permitted it to trade space for time during the Great Patriotic War. To a lesser extent, the same is true for China in the war against Japan. Location has an important bearing on the foreign policy of a nation. The presence of a water obstacle provided protection to states such as Ancient Rome, Great Britain, Japan, and the United States. This geographic protection allowed Rome, Japan, and the United States to follow isolationist policies, and Britain the policy of non-involvement in Europe. The presence of large accessible seaboards also permitted these nations to build strong navies and expand their territories peacefully or by conquest. In contrast, Poland, with no obstacle against its powerful neighbours, lost its independence from 1795 to 1918 and again from 1939 to 1989. The importance of climate has been stressed since antiquity, with the temperate zone being regarded as favoring great powers. Aristotle in Politics argued that the Greeks, placed in the temperate zone, qualified for world domination. Pliny the Elder observed that in the temperate zone there are governments, which the outer races never have possessed. The temperate zone as a power factor remained widely stressed in modern research. In fact, all modern great powers have been located in the temperate zone. A.F.K. Organski criticized this hypothesis as "an accident of history": the Industrial Revolution happened, by accident, in the temperate zone and so far, also by accident, there are no major industrial nations outside this zone. But the world will become industrial, "now that the industrial revolution is galloping triumphant throughout the world." Organski abandoned the theory of the temperate zone as untenable. By contrast, Max Ostrovsky developed it further. He doubted that these were historical accidents. Writing half a century after Organski, he noted that the Industrial Revolution is still not "galloping triumphant throughout the world" but remains bound to the temperate zone. Moreover, the vast temperate zones of Turkestan and Mongolia do not generate great powers. It appeared that besides mild temperature, the right amount of rain was necessary, as only humid temperate areas have been sources of great power.
This observation challenged a dominant element of the temperate theory. Most of its proponents believed that a temperate climate develops an industrious mind. None inquired what rain has to do with mind. Instead of climate developing mind, Ostrovsky replaced mind with cereal agriculture. Rain, he argued, favors cereals rather than the human mind, while productive cereal agriculture favors industry. The more productive the cereal agriculture, the more manpower is available to industry and other non-agricultural sectors. For this reason, and not "by accident," the Industrial Revolution followed the modern Agricultural Revolution in time and space and is not "galloping triumphant" anywhere in the world beyond the humid temperate areas. In size, Russia is larger than the United States, but its temperate zone with optimum rainfall is smaller, as most of its territory lies in latitudes well north. All things being equal, Ostrovsky concluded, whoever rules the largest rainy temperate zone rules the world. But all things are seldom equal. For this reason, he avoided geographic determinism and formulated an indicator of national power which combines climatic conditions and organizational level (see "National power indicator" below).
Measurements
Depending on the interaction of the individual elements of national power, attempts can be made to classify states and assign them a status in the international order of states. Globally important states with dominant positions in all or almost all elements of national power are called superpowers. This term was applied to the Soviet Union and the United States during the Cold War. In the 21st century, it is also increasingly applied to the People's Republic of China. Other status classification terms for states include, in descending hierarchy, world powers, great powers, regional powers, middle powers, and small powers. For states or alliances with almost absolute power, the term hyperpower is used. Despite the difficulty of the task and the multidimensional nature of power, several attempts have been made to express the power of states in objective rankings and indexes based on statistical indicators.
Composite Index of National Capability
The Composite Index of National Capability (CINC) was conceived by J. David Singer in 1963. It includes the six factors of total population, urban population, iron and steel production, primary energy consumption, military expenditure and number of soldiers, and calculates an index from them. The methodology is considered outdated, however, as it takes into account only "hard" power factors, and indicators such as steel production no longer have the same significance as in the early 20th century. Criticism: CINC suggests, "nonsensically," that Israel is, and has always been, one of the weakest countries in the Middle East; that Russia dominated Europe throughout the 1990s, with more power than Germany, France, and the United Kingdom combined; and that China has dominated the world since 1996 and by 2018 twice exceeded the power of the United States.
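As an illustration of how such a composite index works, CINC-style scores are commonly described as the average of a state's shares of the world totals for the six components listed above. The sketch below shows that calculation in Python; the state names and figures are placeholder assumptions, not real data, and the code is only an illustrative sketch rather than an official implementation.

```python
# Illustrative sketch of a CINC-style composite index (placeholder data, not real figures).
# Score = average of the state's shares of the world totals for the six components
# named in the text: total population, urban population, iron and steel production,
# primary energy consumption, military expenditure, and military personnel.

COMPONENTS = ["population", "urban_population", "iron_steel",
              "energy_consumption", "military_expenditure", "military_personnel"]

states = {
    "State A": {"population": 300, "urban_population": 200, "iron_steel": 90,
                "energy_consumption": 120, "military_expenditure": 700, "military_personnel": 1.4},
    "State B": {"population": 80, "urban_population": 60, "iron_steel": 40,
                "energy_consumption": 50, "military_expenditure": 60, "military_personnel": 0.2},
}

# World totals for each component, summed over all states in the (toy) system.
world_totals = {c: sum(s[c] for s in states.values()) for c in COMPONENTS}

def cinc(state_name):
    """Average share of the world total across the six components."""
    shares = [states[state_name][c] / world_totals[c] for c in COMPONENTS]
    return sum(shares) / len(shares)

for name in states:
    print(name, round(cinc(name), 3))
```

Because every component is expressed as a share of the world total, the resulting scores are relative: a state's score can rise or fall with changes elsewhere in the system even if its own capabilities are unchanged.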
National Power Ranking of Countries
The National Power Ranking of Countries was published in a paper by the University of Warsaw and the University of Wroclaw. It divides countries into the categories of economic, military and geopolitical power, which are derived from statistical indicators. The report also analyzes the evolution of the distribution of power in the world since 1992 and makes a forecast for 2050, noting an increasing shift of power from the Western world to the Asia-Pacific region.
State Power Index
The State Power Index was developed by Piotr Arak and Grzegorz Lewicki and takes into account the factors of economy, military, land area, population, cultural influence, natural resources and diplomacy, which are combined into an overall index.
World Power Index
The World Power Index (WPI) is a numerical expression that refers to the accumulation of national capacities that a State possesses for the exercise of its power in the international system. The WPI is the result of adding 18 indicators, which are organized through three composite indices: the Material Capacities Index (MCI), the Semimaterial Capacities Index (SCI), and the Immaterial Capabilities Index (ICI). The WPI is presented as an analysis technique that, being quantitative, seeks to help overcome the hermeneutics that underlie the subjective interpretation of national power. In this way, the WPI contributes to the accurate comparison of the national capacities of States and the study of their position in the international structure.
National power indicator
This indicator was developed by Max Ostrovsky. He reduced numerous indexes to one basic indicator: cereal tonnage produced by one percent of national manpower. He argues that this indicator is defined by environmental conditions and organizational level and in its turn defines the percentage of manpower available for non-agricultural activities.
See also
Geopolitics
Power projection
Power in international relations
Composite Index of National Capability
Comprehensive National Power
Most powerful countries
References
International relations terminology
Geologic time scale
The geologic time scale or geological time scale (GTS) is a representation of time based on the rock record of Earth. It is a system of chronological dating that uses chronostratigraphy (the process of relating strata to time) and geochronology (a scientific branch of geology that aims to determine the age of rocks). It is used primarily by Earth scientists (including geologists, paleontologists, geophysicists, geochemists, and paleoclimatologists) to describe the timing and relationships of events in geologic history. The time scale has been developed through the study of rock layers and the observation of their relationships, and by identifying features such as lithologies, paleomagnetic properties, and fossils. The definition of standardised international units of geologic time is the responsibility of the International Commission on Stratigraphy (ICS), a constituent body of the International Union of Geological Sciences (IUGS), whose primary objective is to precisely define global chronostratigraphic units of the International Chronostratigraphic Chart (ICC) that are used to define divisions of geologic time. The chronostratigraphic divisions are in turn used to define geochronologic units.
Principles
The geologic time scale is a way of representing deep time based on events that have occurred throughout Earth's history, a time span of about 4.54 ± 0.05 Ga (4.54 billion years). It chronologically organises strata, and subsequently time, by observing fundamental changes in stratigraphy that correspond to major geological or paleontological events. For example, the Cretaceous–Paleogene extinction event marks the lower boundary of the Paleogene System/Period and thus the boundary between the Cretaceous and Paleogene systems/periods. For divisions prior to the Cryogenian, arbitrary numeric boundary definitions (Global Standard Stratigraphic Ages, GSSAs) are used to divide geologic time. Proposals have been made to better reconcile these divisions with the rock record. Historically, regional geologic time scales were used due to the litho- and biostratigraphic differences around the world in time-equivalent rocks. The ICS has long worked to reconcile conflicting terminology by standardising globally significant and identifiable stratigraphic horizons that can be used to define the lower boundaries of chronostratigraphic units. Defining chronostratigraphic units in such a manner allows for the use of global, standardised nomenclature. The International Chronostratigraphic Chart represents this ongoing effort. Several key principles are used to determine the relative relationships of rocks and thus their chronostratigraphic position. The law of superposition states that in undeformed stratigraphic sequences the oldest strata will lie at the bottom of the sequence, while newer material stacks upon the surface. In practice, this means a younger rock will lie on top of an older rock unless there is evidence to suggest otherwise. The principle of original horizontality states that layers of sediments will originally be deposited horizontally under the action of gravity. However, it is now known that not all sedimentary layers are deposited purely horizontally, but this principle is still a useful concept. The principle of lateral continuity states that layers of sediments extend laterally in all directions until either thinning out or being cut off by a different rock layer, i.e. they are laterally continuous.
Layers do not extend indefinitely; their limits are controlled by the amount and type of sediment in a sedimentary basin, and the geometry of that basin. The principle of cross-cutting relationships states that a rock that cuts across another rock must be younger than the rock it cuts across. The law of included fragments states that small fragments of one type of rock that are embedded in a second type of rock must have formed first, and were included when the second rock was forming. The relationships of unconformities are also used: unconformities are geologic features representing a gap in the geologic record, formed during periods of erosion or non-deposition and indicating non-continuous sediment deposition. Observing the type and relationships of unconformities in strata allows geologists to understand the relative timing of the strata. The principle of faunal succession (where applicable) states that rock strata contain distinctive sets of fossils that succeed each other vertically in a specific and reliable order. This allows for a correlation of strata even when the horizon between them is not continuous.
Divisions of geologic time
The geologic time scale is divided into chronostratigraphic units and their corresponding geochronologic units. An eon is the largest geochronologic time unit and is equivalent to a chronostratigraphic eonothem. There are four formally defined eons: the Hadean, Archean, Proterozoic and Phanerozoic. An era is the second largest geochronologic time unit and is equivalent to a chronostratigraphic erathem. There are ten defined eras: the Eoarchean, Paleoarchean, Mesoarchean, Neoarchean, Paleoproterozoic, Mesoproterozoic, Neoproterozoic, Paleozoic, Mesozoic and Cenozoic, with none from the Hadean eon. A period is equivalent to a chronostratigraphic system. There are 22 defined periods, with the current being the Quaternary period. As an exception, two subperiods are used for the Carboniferous Period. An epoch is the second smallest geochronologic unit. It is equivalent to a chronostratigraphic series. There are 37 defined epochs and one informal one. The current epoch is the Holocene. There are also 11 subepochs, which are all within the Neogene and Quaternary. The use of subepochs as formal units in international chronostratigraphy was ratified in 2022. An age is the smallest hierarchical geochronologic unit. It is equivalent to a chronostratigraphic stage. There are 96 formal and five informal ages. The current age is the Meghalayan. A chron is a non-hierarchical formal geochronologic unit of unspecified rank and is equivalent to a chronostratigraphic chronozone. Chrons correlate with magnetostratigraphic, lithostratigraphic, or biostratigraphic units as they are based on previously defined stratigraphic units or geologic features. The subdivisions early, middle, and late are used as the geochronologic equivalents of the chronostratigraphic lower, middle, and upper, e.g., Early Triassic Period (geochronologic unit) is used in place of Lower Triassic System (chronostratigraphic unit). Rocks representing a given chronostratigraphic unit are that chronostratigraphic unit, and the time they were laid down in is the geochronologic unit, e.g., the rocks that represent the Silurian System are the Silurian System, and they were deposited during the Silurian Period. This definition means the numeric age of a geochronologic unit can be changed (and is more often subject to change) when refined by geochronometry, while the equivalent chronostratigraphic unit (the revision of which is less frequent) remains unchanged. For example, in early 2022, the boundary between the Ediacaran and Cambrian periods (geochronologic units) was revised from 541 Ma to 538.8 Ma, but the rock definition of the boundary (GSSP) at the base of the Cambrian, and thus the boundary between the Ediacaran and Cambrian systems (chronostratigraphic units), has not been changed; rather, the absolute age has merely been refined.
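The parallel naming of the two hierarchies can be summarised in a small data structure. The Python sketch below is only illustrative; the unit names and equivalences are those given in the text, while the dictionary layout itself is an assumption made for the example.

```python
# Minimal sketch of the geochronologic (time) ranks and their chronostratigraphic (rock) equivalents.
RANK_EQUIVALENTS = {
    "eon": "eonothem",
    "era": "erathem",
    "period": "system",
    "epoch": "series",
    "subepoch": "subseries",
    "age": "stage",
}

# The currently ongoing units named in the text, nested from largest to smallest rank.
current_units = [
    ("eon", "Phanerozoic"),
    ("era", "Cenozoic"),
    ("period", "Quaternary"),
    ("epoch", "Holocene"),
    ("age", "Meghalayan"),
]

for rank, name in current_units:
    # e.g. "Quaternary Period  ->  Quaternary System"
    print(f"{name} {rank.capitalize()}  ->  {name} {RANK_EQUIVALENTS[rank].capitalize()}")
```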
Terminology
Chronostratigraphy is the element of stratigraphy that deals with the relation between rock bodies and the relative measurement of geological time. It is the process where distinct strata between defined stratigraphic horizons are assigned to represent a relative interval of geologic time. A chronostratigraphic unit is a body of rock, layered or unlayered, that is defined between specified stratigraphic horizons which represent specified intervals of geologic time. Chronostratigraphic units include all rocks representative of a specific interval of geologic time, and only this time span. Eonothem, erathem, system, series, subseries, stage, and substage are the hierarchical chronostratigraphic units. A geochronologic unit is a subdivision of geologic time. It is a numeric representation of an intangible property (time). These units are arranged in a hierarchy: eon, era, period, epoch, subepoch, age, and subage. Geochronology is the scientific branch of geology that aims to determine the age of rocks, fossils, and sediments either through absolute means (e.g., radiometric dating) or relative means (e.g., stratigraphic position, paleomagnetism, stable isotope ratios). Geochronometry is the field of geochronology that numerically quantifies geologic time. A Global Boundary Stratotype Section and Point (GSSP) is an internationally agreed-upon reference point on a stratigraphic section that defines the lower boundaries of stages on the geologic time scale. (Recently this has also been used to define the base of a system.) A Global Standard Stratigraphic Age (GSSA) is a numeric-only, chronologic reference point used to define the base of geochronologic units prior to the Cryogenian. These points are arbitrarily defined. They are used where GSSPs have not yet been established. Research is ongoing to define GSSPs for the base of all units that are currently defined by GSSAs. The standard international units of the geologic time scale are published by the International Commission on Stratigraphy on the International Chronostratigraphic Chart; however, regional terms are still in use in some areas. The numeric values on the International Chronostratigraphic Chart are represented by the unit Ma (megaannum, for 'million years'). For example, 201.4 ± 0.2 Ma, the lower boundary of the Jurassic Period, is defined as 201,400,000 years old with an uncertainty of 200,000 years. Other SI prefix units commonly used by geologists are Ga (gigaannum, billion years) and ka (kiloannum, thousand years), with the latter often represented in calibrated units (before present).
Naming of geologic time
The names of geologic time units are defined for chronostratigraphic units, with the corresponding geochronologic unit sharing the same name with a change to the suffix (e.g. the Phanerozoic Eonothem becomes the Phanerozoic Eon). Names of erathems in the Phanerozoic were chosen to reflect major changes in the history of life on Earth: Paleozoic (old life), Mesozoic (middle life), and Cenozoic (new life). Names of systems are diverse in origin, with some indicating chronologic position (e.g., Paleogene), while others are named for lithology (e.g., Cretaceous), geography (e.g., Permian), or are tribal (e.g., Ordovician) in origin.
Most currently recognised series and subseries are named for their position within a system/series (early/middle/late); however, the International Commission on Stratigraphy advocates for all new series and subseries to be named for a geographic feature in the vicinity of its stratotype or type locality. The name of stages should also be derived from a geographic feature in the locality of its stratotype or type locality. Informally, the time before the Cambrian is often referred to as the Precambrian or pre-Cambrian (Supereon). History of the geologic time scale Early history While a modern geological time scale was not formulated until 1911 by Arthur Holmes, the broader concept that rocks and time are related can be traced back to (at least) the philosophers of Ancient Greece. Xenophanes of Colophon (c. 570–487 BCE) observed rock beds with fossils of shells located above the sea-level, viewed them as once living organisms, and used this to imply an unstable relationship in which the sea had at times transgressed over the land and at other times had regressed. This view was shared by a few of Xenophanes's contemporaries and those that followed, including Aristotle (384–322 BCE) who (with additional observations) reasoned that the positions of land and sea had changed over long periods of time. The concept of deep time was also recognised by Chinese naturalist Shen Kuo (1031–1095) and Islamic scientist-philosophers, notably the Brothers of Purity, who wrote on the processes of stratification over the passage of time in their treatises. Their work likely inspired that of the 11th-century Persian polymath Avicenna (Ibn Sînâ, 980–1037) who wrote in The Book of Healing (1027) on the concept of stratification and superposition, pre-dating Nicolas Steno by more than six centuries. Avicenna also recognised fossils as "petrifications of the bodies of plants and animals", with the 13th-century Dominican bishop Albertus Magnus (c. 1200–1280) extending this into a theory of a petrifying fluid. These works appeared to have little influence on scholars in Medieval Europe who looked to the Bible to explain the origins of fossils and sea-level changes, often attributing these to the 'Deluge', including Ristoro d'Arezzo in 1282. It was not until the Italian Renaissance when Leonardo da Vinci (1452–1519) would reinvigorate the relationships between stratification, relative sea-level change, and time, denouncing attribution of fossils to the 'Deluge': These views of da Vinci remained unpublished, and thus lacked influence at the time; however, questions of fossils and their significance were pursued and, while views against Genesis were not readily accepted and dissent from religious doctrine was in some places unwise, scholars such as Girolamo Fracastoro shared da Vinci's views, and found the attribution of fossils to the 'Deluge' absurd. Establishment of primary principles Niels Stensen, more commonly known as Nicolas Steno (1638–1686), is credited with establishing four of the guiding principles of stratigraphy. In De solido intra solidum naturaliter contento dissertationis prodromus Steno states: When any given stratum was being formed, all the matter resting on it was fluid and, therefore, when the lowest stratum was being formed, none of the upper strata existed. ... strata which are either perpendicular to the horizon or inclined to it were at one time parallel to the horizon. 
When any given stratum was being formed, it was either encompassed at its edges by another solid substance or it covered the whole globe of the earth. Hence, it follows that wherever bared edges of strata are seen, either a continuation of the same strata must be looked for or another solid substance must be found that kept the material of the strata from being dispersed. If a body or discontinuity cuts across a stratum, it must have formed after that stratum. Respectively, these are the principles of superposition, original horizontality, lateral continuity, and cross-cutting relationships. From this Steno reasoned that strata were laid down in succession and inferred relative time (in Steno's belief, time from Creation). While Steno's principles were simple and attracted much attention, applying them proved challenging. These basic principles, albeit with improved and more nuanced interpretations, still form the foundational principles of determining the correlation of strata relative to geologic time. Over the course of the 18th century, geologists realised that:
Sequences of strata often become eroded, distorted, tilted, or even inverted after deposition
Strata laid down at the same time in different areas could have entirely different appearances
The strata of any given area represented only part of Earth's long history
Formulation of a modern geologic time scale
The apparent earliest formal division of the geologic record with respect to time was introduced during the era of Biblical models by Thomas Burnet, who applied a two-fold terminology to mountains by identifying "montes primarii" for rock formed at the time of the 'Deluge', and younger "monticulos secundarios" formed later from the debris of the "primarii". Anton Moro (1687–1784) also used primary and secondary divisions for rock units, but his mechanism was volcanic. In this early version of the Plutonism theory, the interior of Earth was seen as hot, and this heat drove the creation of primary igneous and metamorphic rocks, with secondary rocks formed from contorted and fossiliferous sediments. These primary and secondary divisions were expanded on by Giovanni Targioni Tozzetti (1712–1783) and Giovanni Arduino (1713–1795) to include tertiary and quaternary divisions. These divisions were used to describe both the time during which the rocks were laid down and the collection of rocks themselves (i.e., it was correct to say Tertiary rocks and Tertiary Period). Only the Quaternary division is retained in the modern geologic time scale, while the Tertiary division was in use until the early 21st century. The Neptunism and Plutonism theories would compete into the early 19th century, with a key driver for resolution of this debate being the work of James Hutton (1726–1797), in particular his Theory of the Earth, first presented before the Royal Society of Edinburgh in 1785. Hutton's theory would later become known as uniformitarianism, popularised by John Playfair (1748–1819) and later Charles Lyell (1797–1875) in his Principles of Geology. Their theories strongly contested the 6,000-year age of the Earth determined by James Ussher via Biblical chronology, which was accepted at the time by western religion. Instead, using geological evidence, they argued that Earth was much older, cementing the concept of deep time. During the early 19th century William Smith, Georges Cuvier, Jean d'Omalius d'Halloy, and Alexandre Brongniart pioneered the systematic division of rocks by stratigraphy and fossil assemblages.
These geologists began to use the local names given to rock units in a wider sense, correlating strata across national and continental boundaries based on their similarity to each other. Many of the names below erathem/era rank in use on the modern ICC/GTS were determined during the early to mid-19th century. The advent of geochronometry During the 19th century, the debate regarding Earth's age was renewed, with geologists estimating ages based on denudation rates and sedimentary thicknesses or ocean chemistry, and physicists determining ages for the cooling of the Earth or the Sun using basic thermodynamics or orbital physics. These estimations varied from 15,000 million years to 0.075 million years depending on method and author, but the estimations of Lord Kelvin and Clarence King were held in high regard at the time due to their pre-eminence in physics and geology. All of these early geochronometric determinations would later prove to be incorrect. The discovery of radioactive decay by Henri Becquerel, Marie Curie, and Pierre Curie laid the ground work for radiometric dating, but the knowledge and tools required for accurate determination of radiometric ages would not be in place until the mid-1950s. Early attempts at determining ages of uranium minerals and rocks by Ernest Rutherford, Bertram Boltwood, Robert Strutt, and Arthur Holmes, would culminate in what are considered the first international geological time scales by Holmes in 1911 and 1913. The discovery of isotopes in 1913 by Frederick Soddy, and the developments in mass spectrometry pioneered by Francis William Aston, Arthur Jeffrey Dempster, and Alfred O. C. Nier during the early to mid-20th century would finally allow for the accurate determination of radiometric ages, with Holmes publishing several revisions to his geological time-scale with his final version in 1960. Modern international geologic time scale The establishment of the IUGS in 1961 and acceptance of the Commission on Stratigraphy (applied in 1965) to become a member commission of IUGS led to the founding of the ICS. One of the primary objectives of the ICS is "the establishment, publication and revision of the ICS International Chronostratigraphic Chart which is the standard, reference global Geological Time Scale to include the ratified Commission decisions". Following on from Holmes, several A Geological Time Scale books were published in 1982, 1989, 2004, 2008, 2012, 2016, and 2020. However, since 2013, the ICS has taken responsibility for producing and distributing the ICC citing the commercial nature, independent creation, and lack of oversight by the ICS on the prior published GTS versions (GTS books prior to 2013) although these versions were published in close association with the ICS. Subsequent Geologic Time Scale books (2016 and 2020) are commercial publications with no oversight from the ICS, and do not entirely conform to the chart produced by the ICS. The ICS produced GTS charts are versioned (year/month) beginning at v2013/01. At least one new version is published each year incorporating any changes ratified by the ICS since the prior version. Major proposed revisions to the ICC Proposed Anthropocene Series/Epoch First suggested in 2000, the Anthropocene is a proposed epoch/series for the most recent time in Earth's history. While still informal, it is a widely used term to denote the present geologic time interval, in which many conditions and processes on Earth are profoundly altered by human impact. 
To date, the Anthropocene has not been ratified by the ICS; however, in May 2019 the Anthropocene Working Group voted in favour of submitting a formal proposal to the ICS for the establishment of the Anthropocene Series/Epoch. Nevertheless, the definition of the Anthropocene as a geologic time period rather than a geologic event remains controversial and difficult.
Proposals for revisions to the pre-Cryogenian timeline
Shields et al. 2021
An international working group of the ICS on pre-Cryogenian chronostratigraphic subdivision has outlined a template to improve the pre-Cryogenian geologic time scale based on the rock record, to bring it in line with the post-Tonian geologic time scale. This work assessed the geologic history of the currently defined eons and eras of the pre-Cambrian, and the proposals in the "Geological Time Scale" books of 2004, 2012, and 2020. Their recommended revisions of the pre-Cryogenian geologic time scale were (changes from the current scale [v2023/09] are italicised):
Three divisions of the Archean instead of four by dropping the Eoarchean, and revisions to their geochronometric definitions, along with the repositioning of the Siderian into the latest Neoarchean, and a potential Kratian division in the Neoarchean.
Archean (4000–2450 Ma)
Paleoarchean (4000–3500 Ma)
Mesoarchean (3500–3000 Ma)
Neoarchean (3000–2450 Ma)
Kratian (no fixed time given, prior to the Siderian) – from Greek κράτος (krátos) 'strength'.
Siderian (?–2450 Ma) – moved from the Proterozoic to the end of the Archean, no start time given; the base of the Paleoproterozoic defines the end of the Siderian.
Refinement of the geochronometric divisions of the Proterozoic and Paleoproterozoic, repositioning of the Statherian into the Mesoproterozoic, a new Skourian period/system in the Paleoproterozoic, and a new Kleisian or Syndian period/system in the Neoproterozoic.
Paleoproterozoic (2450–1800 Ma)
Skourian (2450–2300 Ma) – from Greek σκουριά (skouriá) 'rust'.
Rhyacian (2300–2050 Ma)
Orosirian (2050–1800 Ma)
Mesoproterozoic (1800–1000 Ma)
Statherian (1800–1600 Ma)
Calymmian (1600–1400 Ma)
Ectasian (1400–1200 Ma)
Stenian (1200–1000 Ma)
Neoproterozoic (1000–538.8 Ma)
Kleisian or Syndian (1000–800 Ma) – respectively from Greek κλείσιμο (kleísimo) 'closure' and σύνδεση (sýndesi) 'connection'.
Tonian (800–720 Ma)
Cryogenian (720–635 Ma)
Ediacaran (635–538.8 Ma)
Proposed pre-Cambrian timeline (Shields et al. 2021, ICS working group on pre-Cryogenian chronostratigraphy), shown to scale:
Current ICC pre-Cambrian timeline (v2023/09), shown to scale:
Van Kranendonk et al. 2012 (GTS2012)
The book Geologic Time Scale 2012 was the last commercial publication of an international chronostratigraphic chart that was closely associated with the ICS. It included a proposal to substantially revise the pre-Cryogenian time scale to reflect important events such as the formation of the Solar System and the Great Oxidation Event, among others, while at the same time maintaining most of the previous chronostratigraphic nomenclature for the pertinent time span. To date, these proposed changes have not been accepted by the ICS. The proposed changes (changes from the current scale [v2023/09] are italicised):
Hadean Eon (4567–4030 Ma)
Chaotian Era/Erathem (4567–4404 Ma) – the name alluding both to the mythological Chaos and the chaotic phase of planet formation.
Jack Hillsian or Zirconian Era/Erathem (4404–4030 Ma) – both names allude to the Jack Hills Greenstone Belt, which provided the oldest mineral grains on Earth, zircons.
Archean Eon/Eonothem (4030–2420 Ma)
Paleoarchean Era/Erathem (4030–3490 Ma)
Acastan Period/System (4030–3810 Ma) – named after the Acasta Gneiss, one of the oldest preserved pieces of continental crust.
Isuan Period (3810–3490 Ma) – named after the Isua Greenstone Belt.
Mesoarchean Era/Erathem (3490–2780 Ma)
Vaalbaran Period/System (3490–3020 Ma) – based on the names of the Kaapvaal (Southern Africa) and Pilbara (Western Australia) cratons, to reflect the growth of stable continental nuclei or proto-cratonic kernels.
Pongolan Period/System (3020–2780 Ma) – named after the Pongola Supergroup, in reference to the well preserved evidence of terrestrial microbial communities in those rocks.
Neoarchean Era/Erathem (2780–2420 Ma)
Methanian Period/System (2780–2630 Ma) – named for the inferred predominance of methanotrophic prokaryotes.
Siderian Period/System (2630–2420 Ma) – named for the voluminous banded iron formations formed within its duration.
Proterozoic Eon/Eonothem (2420–538.8 Ma)
Paleoproterozoic Era/Erathem (2420–1780 Ma)
Oxygenian Period/System (2420–2250 Ma) – named for displaying the first evidence for a global oxidising atmosphere.
Jatulian or Eukaryian Period/System (2250–2060 Ma) – names are respectively for the Lomagundi–Jatuli δ13C isotopic excursion event spanning its duration, and for the (proposed) first fossil appearance of eukaryotes.
Columbian Period/System (2060–1780 Ma) – named after the supercontinent Columbia.
Mesoproterozoic Era/Erathem (1780–850 Ma)
Rodinian Period/System (1780–850 Ma) – named after the supercontinent Rodinia, stable environment.
Proposed pre-Cambrian timeline (GTS2012), shown to scale:
Current ICC pre-Cambrian timeline (v2023/09), shown to scale:
Table of geologic time
The following table summarises the major events and characteristics of the divisions making up the geologic time scale of Earth. This table is arranged with the most recent geologic periods at the top, and the oldest at the bottom. The height of each table entry does not correspond to the duration of each subdivision of time. As such, this table is not to scale and does not accurately represent the relative time-spans of each geochronologic unit. While the Phanerozoic Eon looks longer than the rest, it merely spans ~539 million years (~12% of Earth's history), whilst the previous three eons collectively span ~3,461 million years (~76% of Earth's history). This bias toward the most recent eon is in part due to the relative lack of information about events that occurred during the first three eons compared to the current eon (the Phanerozoic). The use of subseries/subepochs has been ratified by the ICS. While some regional terms are still in use, the table of geologic time conforms to the nomenclature, ages, and colour codes set forth by the International Commission on Stratigraphy in the official International Chronostratigraphic Chart. The International Commission on Stratigraphy also provides an online interactive version of this chart. The interactive version is based on a service delivering a machine-readable Resource Description Framework/Web Ontology Language representation of the time scale, which is available through the Commission for the Management and Application of Geoscience Information GeoSciML project as a service and at a SPARQL end-point.
Dominantly fluid planets, such as the giant planets, do not comparably preserve their history. Apart from the Late Heavy Bombardment, events on other planets probably had little direct influence on the Earth, and events on Earth had correspondingly little effect on those planets. Construction of a time scale that links the planets is, therefore, of only limited relevance to the Earth's time scale, except in a Solar System context. The existence, timing, and terrestrial effects of the Late Heavy Bombardment are still a matter of debate.
Lunar (selenological) time scale
The geologic history of Earth's Moon has been divided into a time scale based on geomorphological markers, namely impact cratering, volcanism, and erosion. Dividing the Moon's history in this manner means that the time scale boundaries do not imply fundamental changes in geological processes, unlike Earth's geologic time scale. Five geologic systems/periods (Pre-Nectarian, Nectarian, Imbrian, Eratosthenian, Copernican), with the Imbrian divided into two series/epochs (Early and Late), were defined in the latest Lunar geologic time scale. The Moon is unique in the Solar System in that it is the only other body from which humans have rock samples with a known geological context.
Martian geologic time scale
The geological history of Mars has been divided into two alternate time scales. The first time scale for Mars was developed by studying the impact crater densities on the Martian surface. Through this method, four periods have been defined: the Pre-Noachian (~4,500–4,100 Ma), Noachian (~4,100–3,700 Ma), Hesperian (~3,700–3,000 Ma), and Amazonian (~3,000 Ma to present). A second time scale is based on mineral alteration observed by the OMEGA spectrometer on board Mars Express. Using this method, three periods were defined: the Phyllocian (~4,500–4,000 Ma), Theiikian (~4,000–3,500 Ma), and Siderikian (~3,500 Ma to present).
See also
Age of the Earth
Cosmic calendar
Deep time
Evolutionary history of life
Formation and evolution of the Solar System
Geological history of Earth
Geology of Mars
Geon (geology)
Graphical timeline of the universe
History of Earth
History of geology
History of paleontology
List of fossil sites
List of geochronologic names
Logarithmic timeline
Lunar geologic timescale
Martian geologic timescale
Natural history
New Zealand geologic time scale
Prehistoric life
Timeline of the Big Bang
Timeline of evolution
Timeline of the geologic history of the United States
Timeline of human evolution
Timeline of natural history
Timeline of paleontology
Notes
References
Further reading
Montenari, Michael (2022). Integrated Quaternary Stratigraphy (1st ed.). Amsterdam: Academic Press (Elsevier). ISBN 978-0-323-98913-8.
Montenari, Michael (2023). Stratigraphy of Geo- and Biodynamic Processes (1st ed.). Amsterdam: Academic Press (Elsevier). ISBN 978-0-323-99242-8.
Nichols, Gary (2013). Sedimentology and Stratigraphy (2nd ed.). Hoboken: Wiley-Blackwell.
Williams, Aiden (2019). Sedimentology and Stratigraphy (1st ed.). Forest Hills, NY: Callisto Reference.
External links The current version of the International Chronostratigraphic Chart can be found at stratigraphy.org/chart Interactive version of the International Chronostratigraphic Chart is found at stratigraphy.org/timescale A list of current Global Boundary Stratotype and Section Points is found at stratigraphy.org/gssps NASA: Geologic Time (archived 18 April 2005) GSA: Geologic Time Scale (archived 20 January 2019) British Geological Survey: Geological Timechart GeoWhen Database (archived 23 June 2004) National Museum of Natural History – Geologic Time (archived 11 November 2005) SeeGrid: Geological Time Systems. . Information model for the geologic time scale. Exploring Time from Planck Time to the lifespan of the universe Episodes, Gradstein, Felix M. et al. (2004) A new Geologic Time Scale, with special reference to Precambrian and Neogene, Episodes, Vol. 27, no. 2 June 2004 (pdf) Lane, Alfred C, and Marble, John Putman 1937. Report of the Committee on the measurement of geologic time Lessons for Children on Geologic Time (archived 14 July 2011) Deep Time – A History of the Earth : Interactive Infographic Geology Buzz: Geologic Time Scale. . + Natural history Evolution-related timelines Geochronology Articles which contain graphical timelines International Commission on Stratigraphy geologic time scale of Earth
Historical determinism
Historical determinism is the belief that events in history are entirely determined or constrained by various prior forces and are, therefore, in a certain sense, inevitable. It is the philosophical view of determinism applied to the process or direction by which history unfolds. Historical determinism places the cause of an event behind it. The concept of determinism appeared in the 19th century. The main idea is that certain factors determine the existence of humans and therefore limit the scope of their free will. Applied to history, it is an approach that holds that history is intrinsically meaningful. Used as a pejorative, it normally designates a rigid finalist or mechanist conception of historical unfolding that makes the future appear as an inevitable and predetermined result of the past.
See also
Determinism
Dialectical materialism
Economic determinism
Environmental determinism
Free will
Geographic determinism
Hegelianism
Historical materialism
Marxism
Myth of progress
Technological determinism
Bibliography
External links
Determinism
Marxism
Political theories
Theories of history
Stratigraphy (archaeology)
Stratigraphy is a key concept to modern archaeological theory and practice. Modern excavation techniques are based on stratigraphic principles. The concept derives from the geological use of the idea that sedimentation takes place according to uniform principles. When archaeological finds are below the surface of the ground (as is most commonly the case), the identification of the context of each find is vital in enabling the archaeologist to draw conclusions about the site and about the nature and date of its occupation. It is the archaeologist's role to attempt to discover what contexts exist and how they came to be created. Archaeological stratification or sequence is the dynamic superimposition of single units of stratigraphy, or contexts. Contexts are single events or actions that leave discrete, detectable traces in the archaeological sequence or stratigraphy. They can be deposits (such as the back-fill of a ditch), structures (such as walls), or "zero thickness surfaces", better known as "cuts". Cuts represent actions that remove other solid contexts such as fills, deposits, and walls. An example would be a ditch "cut" through earlier deposits. Stratigraphic relationships are the relationships created between contexts in time, representing the chronological order in which they were created. One example would be a ditch and the back-fill of said ditch. The temporal relationship of "the fill" context to the ditch "cut" context is such that "the fill" occurred later in the sequence; you have to dig a ditch before you can back-fill it. A relationship that is later in the sequence is sometimes referred to as "higher" in the sequence, and a relationship that is earlier, "lower", though this does not refer necessarily to the physical location of the context. It is more useful to think of "higher" as it relates to the context's position in a Harris matrix, a two-dimensional representation of a site's formation in space and time. Principles or laws Archaeological stratigraphy is based on a series of axiomatic principles or "laws". They are derived from the principles of stratigraphy in geology but have been adapted to reflect the different nature of archaeological deposits. E.C. Harris notes two principles that were widely recognised by archaeologists by the 1970s: The principle of superposition establishes that within a series of layers and interfacial features, as originally created, the upper units of stratification are younger and the lower are older, for each must have been deposited on, or created by the removal of, a pre-existing mass of archaeological stratification. The principle that layers can be no older than the age of the most recent artefact discovered within them. This is the basis for the relative dating of layers using artefact typologies. It is analogous to the geological principle of faunal succession, although Harris argued that it was not strictly applicable to archaeology. He also proposed three additional principles: The principle of original horizontality states that any archaeological layer deposited in an unconsolidated form will tend towards a horizontal deposition. Strata which are found with tilted surfaces were so originally deposited, or lie in conformity with the contours of a pre-existing basin of deposition. The principle of lateral continuity states that any archaeological deposit, as originally laid down, will be bounded by the edge of the basin of deposition, or will thin down to a feather edge. 
Therefore, if any edge of the deposit is exposed in a vertical plane, a part of its original extent must have been removed by excavation or erosion: its continuity must be sought, or its absence explained. The principle of stratigraphic succession states that any given unit of archaeological stratification takes its place in the stratigraphic sequence from its position between the undermost of all the units that lie above it and the uppermost of all the units that lie below it and with which it has physical contact. Combining stratigraphic contexts for interpretation Understanding a site in modern archaeology is a process of grouping single contexts together in ever larger groups by virtue of their relationships. The terminology of these larger clusters varies depending on the practitioner, but the terms interface, sub-group, and group are common. An example of a sub-group could be the three contexts that make up a burial: the grave cut, the body, and the back-filled earth on top of the body. Sub-groups can then be clustered together with other sub-groups by virtue of their stratigraphic relationships to form groups, which in turn form "phases". A burial sub-group could cluster with other burial sub-groups to form a cemetery, which in turn could be grouped with a building, such as a church, to produce a "phase". A phase implies a nearly contemporaneous archaeological horizon, representing "what you would see if you went back to time X". The production of phase interpretations is the first goal of stratigraphic interpretation and excavation. Stratigraphic dating Archaeologists investigating a site may wish to date the activity on the site, rather than individual artifacts, by dating the individual contexts, which represent events. Objects can be dated to some degree by their position in the sequence, using known datable elements of the archaeological record or other contexts assumed to be datable, deduced by a regressive form of relative dating, which in turn can fix the events represented by contexts to some range in time. For example, the date of formation of a context which is totally sealed between two datable layers will fall between the dates of the two layers sealing it. However, the date of a context often falls within a range of possibilities, so using it to date others is not a straightforward process. Take the hypothetical section in figure A. Here we can see 12 contexts, each with a unique context number, whose sequence is represented in the Harris matrix in figure B:
1. A horizontal layer
2. Masonry wall remnant
3. Backfill of the wall construction trench (sometimes called the construction cut)
4. A horizontal layer, probably the same as 1
5. Construction cut for wall 2
6. A clay floor abutting wall 2
7. Fill of shallow cut 8
8. Shallow pit cut
9. A horizontal layer
10. A horizontal layer, probably the same as 9
11. Natural sterile ground formed before human occupation of the site
12. Trample in the base of cut 5, formed by the boots of the workmen constructing the structure that wall 2 and floor 6 are associated with
If we know the dates of context 1 and context 9, we can deduce that context 7, the backfilling of pit 8, occurred sometime after the date for 9 but before the date for 1, and if we recover an assemblage of artifacts from context 7 that occur nowhere else in the sequence, we have isolated them with a reasonable degree of certainty to a discrete range of time. In this instance we can now use the date we have for finds in context 7 to date other sites and sequences.
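This bracketing logic can be made concrete as a small computation over the stratigraphic sequence. The sketch below is illustrative only and is not drawn from the article: it encodes a hypothetical set of "earlier than" relationships, loosely modelled on the example above, as a directed graph (the structure a Harris matrix represents), and it brackets the date of an undated context between the dated contexts that lie below and above it in the sequence. The context numbers, relationships, and dates are all assumptions made for the example.

```python
# A minimal sketch (not from the article) of how stratigraphic relationships can be
# held as a directed graph and used to bracket the date of an undated context.
# Edges run from an earlier context to the context that is later in the sequence.
from collections import defaultdict, deque

# Hypothetical "is earlier than" relationships, loosely following the worked
# example above: 11 is the earliest context, 1 is among the latest, and 7
# (the pit fill) lies between the datable layers 9 and 1.
earlier_than = [
    (11, 10), (10, 9), (9, 8), (8, 7), (7, 4), (4, 1),    # pit 8 and its fill 7
    (9, 5), (5, 12), (12, 2), (2, 3), (3, 6), (6, 1),     # wall construction events
]

# Known absolute dates (e.g. from coins or radiocarbon), keyed by context number.
known_dates = {9: -50, 1: 150}   # hypothetical years (negative = BC)

later = defaultdict(set)
earlier = defaultdict(set)
for a, b in earlier_than:
    later[a].add(b)
    earlier[b].add(a)

def reachable(start, graph):
    """All contexts reachable from `start` by following `graph` edges."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def bracket(context):
    """Earliest/latest possible dates for a context, from dated contexts below/above it."""
    below = reachable(context, earlier)   # stratigraphically earlier contexts
    above = reachable(context, later)     # stratigraphically later contexts
    not_before = max((known_dates[c] for c in below if c in known_dates), default=None)
    not_after = min((known_dates[c] for c in above if c in known_dates), default=None)
    return not_before, not_after

print(bracket(7))   # -> (-50, 150): context 7 formed after context 9 but before context 1
```

Real recording systems capture far more than this (physical relationships, interfaces, phasing), but the same reachability logic underlies how a Harris matrix constrains the possible dates of a context.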
In practice, a great deal of cross-referencing with other recorded sequences is required to produce dating series from stratigraphic relationships, such as the work in seriation. Residual and intrusive finds One issue in using stratigraphic relationships is that the date of artifacts in a context does not represent the date of the context, but only the earliest date the context could have been formed. If one looks at the sequence in figure A, one may find that the cut for the construction of wall 2, context 5, has cut through layers 9 and 10, and in doing so has introduced the possibility that artifacts from layers 9 and 10 may be redeposited higher up the sequence in the context representing the backfill of the construction cut, context 3. These artifacts are referred to as "residual" or "residual finds". It is crucial that the dating of a context is based on the latest dating evidence drawn from the context. We can also see that if the fills of cut 5 (wall 2, backfill 3 and trample 12) are not removed entirely during excavation because of "undercutting", non-residual artifacts from these later "higher" contexts 2, 3 and 12 could contaminate the excavation of earlier contexts such as 9 and 10 and give false dating information. These artifacts may be termed intrusive finds. Archiving stratigraphic data Stratigraphic data are a required component of archaeological archives, but there is a growing problem for digital data archives, where stratigraphic data are often held only on paper or as scanned image copies (PDFs) of matrix diagrams. This means that they cannot easily be re-used in further analysis. Some recommendations are being made to address this problem. See also: (1811–1901) – Scottish antiquarian and archaeologist; Relative dating – determination of the relative order of archaeological layers and artifacts; Reverse stratigraphy or inverted stratigraphy; Sequence (archaeology) – stratigraphy of the archaeological record, used as part of the 'seriation' method of relative dating. Bibliography: Harris, E. C. (1989). Principles of Archaeological Stratigraphy, 2nd edition. London and San Diego: Academic Press. Carandini, A. (1991). Storie dalla terra. Manuale di scavo archeologico. Torino: Einaudi.
Machine Age
The Machine Age is an era that includes the early-to-mid 20th century, sometimes also including the late 19th century; an approximate dating is about 1880 to 1945. Considered to have been at its peak between the First and Second World Wars, the Machine Age overlaps with the late part of the Second Industrial Revolution (which ended around 1914 at the start of World War I) and continues beyond it until 1945 at the end of World War II. The 1940s saw the beginning of the Atomic Age, in which modern physics found new applications such as the atomic bomb, the first computers, and the transistor. The Digital Revolution ended the intellectual model of the Machine Age, which was founded on the mechanical, and heralded a new, more complex model of high technology. The digital era has been called the Second Machine Age, with its increased focus on machines that do mental tasks.
Developments
Artifacts of the Machine Age include:
Reciprocating steam engines replaced by gas turbines, internal combustion engines and electric motors
Electrification based on large hydroelectric and thermal electric power production plants and distribution systems
Mass production of high-volume goods on moving assembly lines, particularly of the automobile
Gigantic production machinery, especially for producing and working metal, such as steel rolling mills, bridge component fabrication, and car body presses
Powerful earthmoving equipment
Steel-framed buildings of great height (skyscrapers)
Radio and phonograph technology
High-speed printing presses, enabling the production of low-cost newspapers and mass-market magazines
Low-cost appliances for the mass market that employ fractional-horsepower electric motors, such as vacuum cleaners and washing machines
Fast and comfortable long-distance travel by railways, cars, and aircraft
Development and employment of modern war machines such as tanks, aircraft, submarines and the modern battleship
Streamline designs in cars and trains, influenced by aircraft design
Social influence
The rise of mass-market advertising and consumerism
Nationwide branding and distribution of goods, replacing local arts and crafts
Nationwide cultural leveling due to exposure to films and network broadcasting
Mass-produced government propaganda through print, audio, and motion pictures
Replacement of skilled crafts with low-skilled labor
Growth of strong corporations through their ability to exploit economies of scale in materials and equipment acquisition, manufacturing, and distribution
Corporate exploitation of labor, leading to the creation of strong trade unions as a countervailing force
Aristocracy with weighted suffrage or male-only suffrage replaced by democracy with universal suffrage, parallel to one-party states
First-wave feminism
Increased economic planning, including five-year plans, public works and occasional war economy, including nationwide conscription and rationing
Environmental influence
Exploitation of natural resources with little concern for the ecological consequences; a continuation of 19th-century practices but at a larger scale
Release of synthetic dyes, artificial flavorings, and toxic materials into the consumption stream without testing for adverse health effects
Rise of petroleum as a strategic resource
International relations
Conflicts between nations regarding access to energy sources (particularly oil) and material resources (particularly iron and various metals with which it is alloyed) required to ensure national self-sufficiency.
Such conflicts contributed to two devastating world wars.
Climax of New Imperialism and beginning of decolonization
Arts and architecture
The Machine Age is considered to have influenced:
Dystopian films, including Charlie Chaplin's Modern Times and Fritz Lang's Metropolis
Streamline Moderne appliance design and architecture
Bauhaus style
Modern art
Cubism
Art Deco decorative style
Futurism
Music
See also: Second Industrial Revolution.
Race and genetics
Researchers have investigated the relationship between race and genetics as part of efforts to understand how biology may or may not contribute to human racial categorization. Today, the consensus among scientists is that race is a social construct, and that using it as a proxy for genetic differences among populations is misleading. Many constructions of race are associated with phenotypical traits and geographic ancestry, and scholars like Carl Linnaeus have proposed scientific models for the organization of race since at least the 18th century. Following the discovery of Mendelian genetics and the mapping of the human genome, questions about the biology of race have often been framed in terms of genetics. A wide range of research methods have been employed to examine patterns of human variation and their relations to ancestry and racial groups, including studies of individual traits, studies of large populations and genetic clusters, and studies of genetic risk factors for disease. Research into race and genetics has also been criticized as emerging from, or contributing to, scientific racism. Genetic studies of traits and populations have been used to justify social inequalities associated with race, despite the fact that patterns of human variation have been shown to be mostly clinal, with human genetic code being approximately 99.6%-99.9% identical between individuals and without clear boundaries between groups. Some researchers have argued that race can act as a proxy for genetic ancestry because individuals of the same racial category may share a common ancestry, but this view has fallen increasingly out of favor among experts. The mainstream view is that it is necessary to distinguish between biology and the social, political, cultural, and economic factors that contribute to conceptions of race. Phenotype may have a tangential connection to DNA, but it is still only a rough proxy that would omit various other genetic information. Today, in a somewhat similar way that "gender" is differentiated from the more clear "biological sex", scientists state that potentially "race" / phenotype can be differentiated from the more clear "ancestry". However, this system has also still come under scrutiny as it may fall into the same problems – which would be large, vague groupings with little genetic value. Overview The concept of race The concept of "race" as a classification system of humans based on visible physical characteristics emerged over the last five centuries, influenced by European colonialism. However, there is widespread evidence of what would be described in modern terms as racial consciousness throughout the entirety of recorded history. For example, in Ancient Egypt there were four broad racial divisions of human beings: Egyptians, Asiatics, Libyans, and Nubians. There was also Aristotle of Ancient Greece, who once wrote: "The peoples of Asia... lack spirit, so that they are in continuous subjection and slavery." The concept has manifested in different forms based on social conditions of a particular group, often used to justify unequal treatment. Early influential attempts to classify humans into discrete races include 4 races in Carl Linnaeus's Systema Naturae (Homo europaeus, asiaticus, americanus, and afer) and 5 races in Johann Friedrich Blumenbach's On the Natural Variety of Mankind. Notably, over the next centuries, scholars argued for anywhere from 3 to more than 60 race categories. 
Race concepts have changed within a society over time; for example, in the United States social and legal designations of "White" have been inconsistently applied to Native Americans, Arab Americans, and Asian Americans, among other groups (See main article: Definitions of whiteness in the United States). Race categories also vary worldwide; for example, the same person might be perceived as belonging to a different category in the United States versus Brazil. Because of the arbitrariness inherent in the concept of race, it is difficult to relate it to biology in a straightforward way. Race and human genetic variation There is broad consensus across the biological and social sciences that race is a social construct, not an accurate representation of human genetic variation. As more progress has been made on sequencing the human genome, it has been found that any two humans will share an average of 99.35% of their DNA based on the approximately 3.1 billion haploid base pairs. However, this number should be understood as an average, any two specific individuals can have their genomes differ by more or less than 0.65%. Additionally, this average is an estimate, subject to change as additional sequences are discovered and populations sampled. In 2010, the genome of Craig Venter was found to differ by an estimated 1.59% from a reference genome created by the National Center for Biotechnology Information. We nonetheless see wide individual variation in phenotype, which arises from both genetic differences and complex gene-environment interactions. The vast majority of this genetic variation occurs within groups; very little genetic variation differentiates between groups. Crucially, the between-group genetic differences that do exist do not map onto socially recognized categories of race. Furthermore, although human populations show some genetic clustering across geographic space, human genetic variation is "clinal", or continuous. This, in addition to the fact that different traits vary on different clines, makes it impossible to draw discrete genetic boundaries around human groups. Finally, insights from ancient DNA are revealing that no human population is "pure" – all populations represent a long history of migration and mixing. Sources of human genetic variation Genetic variation arises from mutations, from natural selection, migration between populations (gene flow) and from the reshuffling of genes through sexual reproduction. Mutations lead to a change in the DNA structure, as the order of the bases are rearranged. Resultantly, different polypeptide proteins are coded. Some mutations may be positive and can help the individual survive more effectively in their environment. Mutation is counteracted by natural selection and by genetic drift; note too the founder effect, when a small number of initial founders establish a population which hence starts with a correspondingly small degree of genetic variation. Epigenetic inheritance involves heritable changes in phenotype (appearance) or gene expression caused by mechanisms other than changes in the DNA sequence. Human phenotypes are highly polygenic (dependent on interaction by many genes) and are influenced by environment as well as by genetics. Nucleotide diversity is based on single mutations, single nucleotide polymorphisms (SNPs). The nucleotide diversity between humans is about 0.1 percent (one difference per one thousand nucleotides between two humans chosen at random). 
This amounts to approximately three million SNPs (since the human genome has about three billion nucleotides). There are an estimated ten million SNPs in the human population. Research has shown that non-SNP (structural) variation accounts for more human genetic variation than single nucleotide diversity. Structural variation includes copy-number variation and results from deletions, inversions, insertions and duplications. It is estimated that approximately 0.4 to 0.6 percent of the genomes of unrelated people differ. Genetic basis for race Much scientific research has been organized around the question of whether or not there is a genetic basis for race. In his 1994 book The History and Geography of Human Genes, Luigi Luca Cavalli-Sforza writes, "From a scientific point of view, the concept of race has failed to obtain any consensus; none is likely, given the gradual variation in existence. It may be objected that the racial stereotypes have a consistency that allows even the layman to classify individuals. However, the major stereotypes, all based on skin color, hair color and form, and facial traits, reflect superficial differences that are not confirmed by deeper analysis with more reliable genetic traits and whose origin dates from recent evolution mostly under the effect of climate and perhaps sexual selection". In 2018, geneticist David Reich reaffirmed the conclusion that the traditional views which assert a biological basis for race are wrong. In 1956, some scientists proposed that human races may be comparable to breeds within dogs. However, this theory has since been discarded, one of the main reasons being that purebred dogs have been specifically bred artificially, whereas human races developed organically. Furthermore, the genetic variation between purebred dog breeds is far greater than that between human populations: between-breed variation in dogs is roughly 27.5%, whereas between-population variation in humans is only about 10–15.6%. Including non-purebred dogs would substantially decrease the 27.5% figure, however. Mammal taxonomy is rarely defined by genetic variance alone. Research methods Scientists investigating human variation have used a series of methods to characterize how different populations vary. Early studies of traits, proteins, and genes Early racial classification attempts measured surface traits, particularly skin color, hair color and texture, eye color, and head size and shape. (Measurements of the latter through craniometry were repeatedly discredited in the late 19th and mid-20th centuries due to a lack of correlation of phenotypic traits with racial categorization.) In actuality, biological adaptation plays the biggest role in shaping these bodily features and skin type. A relative handful of genes accounts for the inherited factors shaping a person's appearance. Humans have an estimated 19,000–20,000 protein-coding genes. Richard Sturm and David Duffy describe 11 genes that affect skin pigmentation and explain most variations in human skin color, the most significant of which are MC1R, ASIP, OCA2, and TYR. There is evidence that as many as 16 different genes could be responsible for eye color in humans; however, the two main genes associated with eye color variation are OCA2 and HERC2, both located on chromosome 15. Analysis of blood proteins and between-group genetics Before the discovery of DNA, scientists used blood proteins (the human blood group systems) to study human genetic variation.
Research by Ludwik and Hanka Herschfeld during World War I found that the incidence of blood groups A and B differed by region; for example, among Europeans 15 percent were group B and 40 percent group A. Eastern Europeans and Russians had a higher incidence of group B; people from India had the greatest incidence. The Herschfelds concluded that humans comprised two "biochemical races", originating separately. It was hypothesized that these two races later mixed, resulting in the patterns of groups A and B. This was one of the first theories of racial differences to include the idea that human variation did not correlate with genetic variation. It was expected that groups with similar proportions of blood groups would be more closely related, but instead it was often found that groups separated by great distances (such as those from Madagascar and Russia) had similar incidences. It was later discovered that the ABO blood group system is not just common to humans, but shared with other primates, and likely predates all human groups. In 1972, Richard Lewontin performed an FST statistical analysis using 17 markers (including blood-group proteins). He found that the majority of genetic differences between humans (85.4 percent) were found within a population, 8.3 percent were found between populations within a race, and 6.3 percent were found to differentiate races (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines in his study). Since then, other analyses have found FST values of 6–10 percent between continental human groups, 5–15 percent between different populations on the same continent and 75–85 percent within populations. This view has since been affirmed by the American Anthropological Association and the American Association of Physical Anthropologists. Critiques of blood protein analysis While acknowledging Lewontin's observation that humans are genetically homogeneous, A. W. F. Edwards in his 2003 paper "Human Genetic Diversity: Lewontin's Fallacy" argued that information distinguishing populations from each other is hidden in the correlation structure of allele frequencies, making it possible to classify individuals using mathematical techniques. Edwards argued that even if the probability of misclassifying an individual based on a single genetic marker is as high as 30 percent (as Lewontin reported in 1972), the misclassification probability nears zero if enough genetic markers are studied simultaneously. Edwards saw Lewontin's argument as based on a political stance, denying biological differences to argue for social equality. Edwards' paper is reprinted, commented upon by experts such as Noah Rosenberg, and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a recent anthology. As noted above, Edwards criticised Lewontin's analysis because it took 17 different traits and analysed them independently, without looking at them in conjunction with one another; on Edwards' argument, this made it easy for Lewontin to reach the conclusion that racial naturalism is not tenable. Sesardic also supported Edwards' view, using an illustration involving squares and triangles to show that a single trait examined in isolation is likely to be a poor predictor of which group an individual belongs to.
In contrast, in a 2014 paper, reprinted in the 2018 Edwards Cambridge University Press volume, Rasmus Grønfeldt Winther argues that "Lewontin's Fallacy" is effectively a misnomer, as there really are two different sets of methods and questions at play in studying the genomic population structure of our species: "variance partitioning" and "clustering analysis." According to Winther, they are "two sides of the same mathematics coin" and neither "necessarily implies anything about the reality of human groups." Current studies of population genetics Researchers currently use genetic testing, which may involve hundreds (or thousands) of genetic markers or the entire genome. Structure Several methods to examine and quantify genetic subgroups exist, including cluster and principal components analysis. Genetic markers from individuals are examined to find a population's genetic structure. While subgroups overlap when examining variants of one marker only, when a number of markers are examined different subgroups have different average genetic structure. An individual may be described as belonging to several subgroups. These subgroups may be more or less distinct, depending on how much overlap there is with other subgroups. In cluster analysis, the number of clusters to search for K is determined in advance; how distinct the clusters are varies. The results obtained from cluster analyses depend on several factors: A large number of genetic markers studied facilitates finding distinct clusters. Some genetic markers vary more than others, so fewer are required to find distinct clusters. Ancestry-informative markers exhibit substantially different frequencies between populations from different geographical regions. Using AIMs, scientists can determine a person's ancestral continent of origin based solely on their DNA. AIMs can also be used to determine someone's admixture proportions. The more individuals studied, the easier it becomes to detect distinct clusters (statistical noise is reduced). Low genetic variation makes it more difficult to find distinct clusters. Greater geographic distance generally increases genetic variation, making identifying clusters easier. A similar cluster structure is seen with different genetic markers when the number of genetic markers included is sufficiently large. The clustering structure obtained with different statistical techniques is similar. A similar cluster structure is found in the original sample with a subsample of the original sample. Recent studies have been published using an increasing number of genetic markers. Focus on study of structure has been criticized for giving the general public a misleading impression of human genetic variation, obscuring the general finding that genetic variants which are limited to one region tend to be rare within that region, variants that are common within a region tend to be shared across the globe, and most differences between individuals, whether they come from the same region or different regions, are due to global variants. Distance Genetic distance is genetic divergence between species or populations of a species. It may compare the genetic similarity of related species, such as humans and chimpanzees. Within a species, genetic distance measures divergence between subgroups. Genetic distance significantly correlates to geographic distance between populations, a phenomenon sometimes known as "isolation by distance". 
Genetic distance may be the result of physical boundaries restricting gene flow such as islands, deserts, mountains or forests. Genetic distance is measured by the fixation index (FST). FST is the correlation of randomly chosen alleles in a subgroup to a larger population. It is often expressed as a proportion of genetic diversity. This comparison of genetic variability within (and between) populations is used in population genetics. The values range from 0 to 1; zero indicates the two populations are freely interbreeding, and one would indicate that two populations are separate. Many studies place the average FST distance between human races at about 0.125. Henry Harpending argued that this value implies on a world scale a "kinship between two individuals of the same human population is equivalent to kinship between grandparent and grandchild or between half siblings". In fact, the formulas derived in Harpending's paper in the "Kinship in a subdivided population" section imply that two unrelated individuals of the same race have a higher coefficient of kinship (0.125) than an individual and their mixed race half-sibling (0.109). Critiques of FST While acknowledging that FST remains useful, a number of scientists have written about other approaches to characterizing human genetic variation. Long & Kittles (2009) stated that FST failed to identify important variation and that when the analysis includes only humans, FST = 0.119, but adding chimpanzees increases it only to FST = 0.183. Mountain & Risch (2004) argued that an FST estimate of 0.10–0.15 does not rule out a genetic basis for phenotypic differences between groups and that a low FST estimate implies little about the degree to which genes contribute to between-group differences. Pearse & Crandall 2004 wrote that FST figures cannot distinguish between a situation of high migration between populations with a long divergence time, and one of a relatively recent shared history but no ongoing gene flow. In their 2015 article, Keith Hunley, Graciela Cabana, and Jeffrey Long (who had previously criticized Lewontin's statistical methodology with Rick Kittles) recalculate the apportionment of human diversity using a more complex model than Lewontin and his successors. They conclude: "In sum, we concur with Lewontin's conclusion that Western-based racial classifications have no taxonomic significance, and we hope that this research, which takes into account our current understanding of the structure of human diversity, places his seminal finding on firmer evolutionary footing." Anthropologists (such as C. Loring Brace), philosopher Jonathan Kaplan and geneticist Joseph Graves have argued that while it is possible to find biological and genetic variation roughly corresponding to race, this is true for almost all geographically distinct populations: the cluster structure of genetic data is dependent on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups, the clusters become continental; with other sampling patterns, the clusters would be different. Weiss and Fullerton note that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form; all other populations would be composed of genetic admixtures of Maori, Icelandic and Mayan material. 
Kaplan therefore concludes that, while differences in particular allele frequencies can be used to identify populations that loosely correspond to the racial categories common in Western social discourse, the differences are of no more biological significance than the differences found between any human populations (e.g., the Spanish and Portuguese). Historical and geographical analyses Current-population genetic structure does not imply that differing clusters or components indicate only one ancestral home per group; for example, a genetic cluster in the US comprises Hispanics with European, Native American and African ancestry. Geographic analyses attempt to identify places of origin, their relative importance and possible causes of genetic variation in an area. The results can be presented as maps showing genetic variation. Cavalli-Sforza and colleagues argue that if genetic variations are investigated, they often correspond to population migrations due to new sources of food, improved transportation or shifts in political power. For example, in Europe the most significant direction of genetic variation corresponds to the spread of agriculture from the Middle East to Europe between 10,000 and 6,000 years ago. Such geographic analysis works best in the absence of recent large-scale, rapid migrations. Historic analyses use differences in genetic variation (measured by genetic distance) as a molecular clock indicating the evolutionary relation of species or groups, and can be used to create evolutionary trees reconstructing population separations. Results of genetic-ancestry research are supported if they agree with research results from other fields, such as linguistics or archeology. Cavalli-Sforza and colleagues have argued that there is a correspondence between language families found in linguistic research and the population tree they found in their 1994 study. There are generally shorter genetic distances between populations using languages from the same language family. Exceptions to this rule are also found, for example Sami, who are genetically associated with populations speaking languages from other language families. The Sami speak a Uralic language, but are genetically primarily European. This is argued to have resulted from migration (and interbreeding) with Europeans while retaining their original language. Agreement also exists between research dates in archeology and those calculated using genetic distance. Self-identification studies Jorde and Wooding found that while clusters from genetic markers were correlated with some traditional concepts of race, the correlations were imperfect and imprecise due to the continuous and overlapping nature of genetic variation, noting that ancestry, which can be accurately determined, is not equivalent to the concept of race. A 2005 study by Tang and colleagues used 326 genetic markers to determine genetic clusters. The 3,636 subjects, from the United States and Taiwan, self-identified as belonging to white, African American, East Asian or Hispanic ethnic groups. The study found "nearly perfect correspondence between genetic cluster and SIRE for major ethnic groups living in the United States, with a discrepancy rate of only 0.14 percent". Paschou et al. found "essentially perfect" agreement between 51 self-identified populations of origin and the population's genetic structure, using 650,000 genetic markers. Selecting for informative genetic markers allowed a reduction to less than 650, while retaining near-total accuracy. 
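To make the notion of a "discrepancy rate" between genetic clusters and self-identified groups concrete, the sketch below uses entirely made-up labels; it is not the statistical procedure used by Tang et al. or Paschou et al. Each cluster is mapped to the self-identified label most common within it, and the fraction of individuals whose self-identification disagrees with that mapping is reported.

```python
# Illustrative only: computing a discrepancy rate between genetic-cluster
# assignments and self-identified group labels, with made-up data. This is not
# the statistical procedure used in the studies cited above.
from collections import Counter

# Hypothetical per-individual data: cluster IDs from a clustering algorithm
# (e.g. K=4) and self-identified group labels.
clusters = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3, 0]
self_id  = ["A", "A", "A", "B", "B", "C", "C", "C", "D", "D", "D", "A"]

# Map each cluster to the self-identified label most common within it.
majority = {}
for c in set(clusters):
    labels = [s for k, s in zip(clusters, self_id) if k == c]
    majority[c] = Counter(labels).most_common(1)[0][0]

# Count individuals whose self-identification disagrees with their cluster's majority label.
mismatches = sum(1 for c, s in zip(clusters, self_id) if majority[c] != s)
discrepancy_rate = mismatches / len(self_id)
print(f"discrepancy rate: {discrepancy_rate:.2%}")   # 8.33% for this toy data
```

With real data, the choice of markers, the clustering method, and the sampling scheme all strongly affect such a figure, as the critiques that follow emphasize.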
Correspondence between genetic clusters in a population (such as the current US population) and self-identified race or ethnic groups does not mean that such a cluster (or group) corresponds to only one ethnic group. African Americans have an estimated 20–25 percent European genetic admixture; Hispanics have European, Native American and African ancestry. In Brazil there has been extensive admixture between Europeans, Amerindians and Africans. As a result, skin color differences within the population are not gradual, and there are relatively weak associations between self-reported race and African ancestry. Ethnoracial self-classification in Brazilians is certainly not random with respect to individual genomic ancestry, but the strength of the association between the phenotype and the median proportion of African ancestry varies greatly across populations. Critique of genetic-distance studies and clusters Genetic distances generally increase continually with geographic distance, which makes any dividing line arbitrary. Any two neighboring settlements will exhibit some genetic difference from each other, which could be defined as a race. Therefore, attempts to classify races impose an artificial discontinuity on a naturally occurring phenomenon. This explains why studies on population genetic structure yield varying results, depending on methodology. Rosenberg and colleagues (2005) have argued, based on cluster analysis of the 52 populations in the Human Genome Diversity Panel, that populations do not always vary continuously and that a population's genetic structure is consistent if enough genetic markers (and subjects) are included. Regarding a model with five clusters corresponding to Africa, Eurasia (Europe, the Middle East, and Central/South Asia), East Asia, Oceania, and the Americas, they also noted that this applies to populations in their ancestral homes when migrations and gene flow were slow; large, rapid migrations exhibit different characteristics. Tang and colleagues (2004) wrote, "we detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population". Cluster analysis has been criticized because the number of clusters to search for is decided in advance, with different values possible (although with varying degrees of probability). Principal component analysis does not require the number of components to be decided in advance. The 2002 study by Rosenberg et al. exemplifies why the meanings of these clusterings can be disputable, though the study showed that with K=5 clusters, genetic clusterings roughly map onto the five major geographical regions. Similar results were gathered in further studies in 2005. Critique of ancestry-informative markers Ancestry-informative markers (AIMs) are a genealogy-tracing technology that has come under much criticism due to its reliance on reference populations. In a 2015 article, Troy Duster outlines how contemporary technology allows the tracing of ancestral lineage, but only along one maternal line and one paternal line. That is, of 64 total great-great-great-great-grandparents, only one on each parent's side is identified, implying the other 62 ancestors are ignored in tracing efforts.
Furthermore, the 'reference populations' used as markers for membership of a particular group are designated arbitrarily and from contemporary populations. In other words, using populations who currently reside in given places as references for certain races and ethnic groups is unreliable due to the demographic changes which have occurred over many centuries in those places. Furthermore, because ancestry-informative markers are widely shared across the whole human population, it is their frequency that is tested, not their mere presence or absence. A threshold of relative frequency therefore has to be set. According to Duster, the criteria for setting such thresholds are a trade secret of the companies marketing the tests. Thus, we cannot say anything conclusive about whether they are appropriate. Results of AIMs are extremely sensitive to where this bar is set. Given that many genetic traits are found to be very similar among many different populations, the designated threshold frequencies are very important. This can also lead to mistakes, given that many populations may share the same patterns, if not exactly the same genes. "This means that someone from Bulgaria whose ancestors go back to the fifteenth century could (and sometimes does) map as partly 'Native American'." This happens because AIMs rely on a '100% purity' assumption of reference populations. That is, they assume that a pattern of traits would ideally be a necessary and sufficient condition for assigning an individual to an ancestral reference population. Race, genetics, and medicine There are certain statistical differences between racial groups in susceptibility to certain diseases. Genes change in response to local diseases; for example, people who are Duffy-negative tend to have a higher resistance to malaria. The Duffy-negative phenotype is highly frequent in central Africa and its frequency decreases with distance away from central Africa, with higher frequencies in global populations with high degrees of recent African immigration. This suggests that the Duffy-negative genotype evolved in Sub-Saharan Africa and was subsequently positively selected for in the malaria-endemic zone. A number of genetic conditions prevalent in malaria-endemic areas may provide genetic resistance to malaria, including sickle cell disease, thalassaemias and glucose-6-phosphate dehydrogenase deficiency. Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European ancestry; a hypothesized heterozygote advantage, providing resistance to diseases formerly common in Europe, has been challenged. Scientists Michael Yudell, Dorothy Roberts, Rob DeSalle, and Sarah Tishkoff argue that using these associations in the practice of medicine has led doctors to overlook or misidentify disease: "For example, hemoglobinopathies can be misdiagnosed because of the identification of sickle-cell as a 'Black' disease and thalassemia as a 'Mediterranean' disease. Cystic fibrosis is underdiagnosed in populations of African ancestry, because it is thought of as a 'White' disease." Information about a person's population of origin may aid in diagnosis, and adverse drug responses may vary by group. Because of the correlation between self-identified race and genetic clusters, medical treatments influenced by genetics have varying rates of success between self-defined racial groups. For this reason, some physicians consider a patient's race in choosing the most effective treatment, and some drugs are marketed with race-specific instructions.
Jorde and Wooding (2004) have argued that because of genetic variation within racial groups, when "it finally becomes feasible and available, individual genetic assessment of relevant genes will probably prove more useful than race in medical decision making". However, race continues to be a factor when examining groups (such as epidemiologic research). Some doctors and scientists such as geneticist Neil Risch argue that using self-identified race as a proxy for ancestry is necessary to be able to get a sufficiently broad sample of different ancestral populations, and in turn to be able to provide health care that is tailored to the needs of minority groups. Usage in scientific journals Some scientific journals have addressed previous methodological errors by requiring more rigorous scrutiny of population variables. Since 2000, Nature Genetics requires its authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved". Editors of Nature Genetics say that "[they] hope that this will raise awareness and inspire more rigorous designs of genetic and epidemiological studies". A 2021 study that examined over 11,000 papers from 1949 to 2018 in The American Journal of Human Genetics, found that "race" was used in only 5% of papers published in the last decade, down from 22% in the first. Together with an increase in use of the terms "ethnicity," "ancestry," and location-based terms, it suggests that human geneticists have mostly abandoned the term "race." Gene-environment interactions Lorusso and Bacchini argue that self-identified race is of greater use in medicine as it correlates strongly with risk-related exposomes that are potentially heritable when they become embodied in the epigenome. They summarise evidence of the link between racial discrimination and health outcomes due to poorer food quality, access to healthcare, housing conditions, education, access to information, exposure to infectious agents and toxic substances, and material scarcity. They also cite evidence that this process can work positively – for example, the psychological advantage of perceiving oneself at the top of a social hierarchy is linked to improved health. However they caution that the effects of discrimination do not offer a complete explanation for differential rates of disease and risk factors between racial groups, and the employment of self-identified race has the potential to reinforce racial inequalities. Objections to racial naturalism Racial naturalism is the view that racial classifications are grounded in objective patterns of genetic similarities and differences. Proponents of this view have justified it using the scientific evidence described above. However, this view is controversial and philosophers of race have put forward four main objections to it. Semantic objections, such as the discreteness objection, argue that the human populations picked out in population-genetic research are not races and do not correspond to what "race" means in the United States. "The discreteness objection does not require there to be no genetic admixture in the human species in order for there to be US 'racial groups' ... rather ... what the objection claims is that membership in US racial groups is different from membership in continental populations. ... Thus, strictly speaking, Blacks are not identical to Africans, Whites are not identical to Eurasians, Asians are not identical to East Asians and so forth." 
Therefore, it could be argued that scientific research is not really about race. The next two objections, are metaphysical objections which argue that even if the semantic objections fail, human genetic clustering results do not support the biological reality of race. The 'very important objection' stipulates that races in the US definition fail to be important to biology, in the sense that continental populations do not form biological subspecies. The 'objectively real objection' states that "US racial groups are not biologically real because they are not objectively real in the sense of existing independently of human interest, belief, or some other mental state of humans." Racial naturalists, such as Quayshawn Spencer, have responded to each of these objections with counter-arguments. There are also methodological critics who reject racial naturalism because of concerns relating to the experimental design, execution, or interpretation of the relevant population-genetic research. Another semantic objection is the visibility objection which refutes the claim that there are US racial groups in human population structures. Philosophers such as Joshua Glasgow and Naomi Zack believe that US racial groups cannot be defined by visible traits, such as skin colour and physical attributes: "The ancestral genetic tracking material has no effect on phenotypes, or biological traits of organisms, which would include the traits deemed racial, because the ancestral tracking genetic material plays no role in the production of proteins it is not the kind of material that 'codes' for protein production." Spencer contends that certain racial discourses require visible groups, but disagrees that this is a requirement in all US racial discourse. A different objection states that US racial groups are not biologically real because they are not objectively real in the sense of existing independently of some mental state of humans. Proponents of this second metaphysical objection include Naomi Zack and Ron Sundstrom. Spencer argues that an entity can be both biologically real and socially constructed. Spencer states that in order to accurately capture real biological entities, social factors must also be considered. It has been argued that knowledge of a person's race is limited in value, since people of the same race vary from one another. David J. Witherspoon and colleagues have argued that when individuals are assigned to population groups, two randomly chosen individuals from different populations can resemble each other more than a randomly chosen member of their own group. They found that many thousands of genetic markers had to be used for the answer to "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "Never". This assumed three population groups, separated by large geographic distances (European, African and East Asian). The global human population is more complex, and studying a large number of groups would require an increased number of markers for the same answer. They conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes", and "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. 
It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population". This is similar to the conclusion reached by anthropologist Norman Sauer in a 1992 article on the ability of forensic anthropologists to assign "race" to a skeleton, based on craniofacial features and limb morphology. Sauer said, "the successful assignment of race to a skeletal specimen is not a vindication of the race concept, but rather a prediction that an individual, while alive, was assigned to a particular socially constructed 'racial' category. A specimen may display features that point to African ancestry. In this country that person is likely to have been labeled Black regardless of whether or not such a race actually exists in nature". Criticism of race-based medicines Troy Duster points out that genetics is often not the predominant determinant of disease susceptibilities, even though these might correlate with specific socially defined categories, because such research often lacks controls for a multiplicity of socio-economic factors. He cites data collected by King and Rewers that indicate how dietary differences play a significant role in explaining variations in diabetes prevalence between populations. Duster elaborates by putting forward the example of the Pima of Arizona, a population suffering from disproportionately high rates of diabetes. The reason, he argues, was not necessarily the prevalence of the FABP2 gene, which is associated with insulin resistance. Rather, he argues that scientists often discount lifestyle factors tied to specific socio-historical contexts. For instance, near the end of the 19th century, the Pima economy was predominantly agriculture-based. However, as the European American population settled in traditional Pima territory, Pima lifestyles became heavily Westernised. Within three decades, the incidence of diabetes increased severalfold. Government provision of free, relatively high-fat food to alleviate poverty in the population is noted as one explanation of this phenomenon. Lorusso and Bacchini argue against the assumption that "self-identified race is a good proxy for a specific genetic ancestry" on the basis that self-identified race is complex: it depends on a range of psychological, cultural and social factors, and is therefore "not a robust proxy for genetic ancestry". Furthermore, they explain that an individual's self-identified race is made up of further, collectively arbitrary factors: personal opinions about what race is and the extent to which it should be taken into consideration in everyday life. Furthermore, individuals who share a genetic ancestry may differ in their racial self-identification across historical or socioeconomic contexts. From this, Lorusso and Bacchini conclude that the accuracy of predicting genetic ancestry on the basis of self-identification is low, specifically in racially admixed populations born out of complex ancestral histories. See also: Race, identity and cranio-facial description (section 4.2); Zionism, race and genetics. Further reading: This review of current research includes chapters by Jonathan Marks, John Dupré, Sally Haslanger, Deborah A. Bolnick, Marcus W. Feldman, Richard C. Lewontin, Sarah K. Tate, David B. Goldstein, Jonathan Kahn, Duana Fullwiley, Molly J.
Dingel, Barbara A. Koenig, Mark D. Shriver, Rick A. Kittles, Henry T. Greely, Kimberly Tallbear, Alondra Nelson, Pamela Sankar, Sally Lehrman, Jenny Reardon, Jacqueline Stevens, and Sandra Soo-Jin Lee.
Paleobiology
Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth. Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees. An investigator in this field is known as a paleobiologist. Important research areas Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology. Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology. Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology. Paleovirology examines the evolutionary history of viruses on paleobiological timescales. Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic. Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life. Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism. Paleoichnology analyzes the tracks, borings, trails, burrows, impressions, and other trace fossils left by ancient organisms in order to gain insight into their behavior and ecology. Stratigraphic paleobiology studies long-term secular changes, as well as the (short-term) bed-by-bed sequence of changes, in organismal characteristics and behaviors. See also stratification, sedimentary rocks and the geologic time scale. Evolutionary developmental paleobiology examines the evolutionary aspects of the modes and trajectories of growth and development in the evolution of life – clades both extinct and extant. See also adaptive radiation, cladistics, evolutionary biology, developmental biology and phylogenetic tree. Paleobiologists The founder or "father" of modern paleobiology was Baron Franz Nopcsa (1877 to 1933), a Hungarian scientist trained at the University of Vienna. He initially termed the discipline "paleophysiology". However, credit for coining the word paleobiology itself should go to Professor Charles Schuchert. He proposed the term in 1904 so as to initiate "a broad new science" joining "traditional paleontology with the evidence and insights of geology and isotopic chemistry." On the other hand, Charles Doolittle Walcott, a Smithsonian adventurer, has been cited as the "founder of Precambrian paleobiology". Although best known as the discoverer of the mid-Cambrian Burgess shale animal fossils, in 1883 this American curator found the "first Precambrian fossil cells known to science" – a stromatolite reef then known as Cryptozoon algae. 
In 1899 he discovered the first acritarch fossil cells, a Precambrian algal phytoplankton he named Chuaria. Lastly, in 1914, Walcott reported "minute cells and chains of cell-like bodies" belonging to Precambrian purple bacteria. Later 20th-century paleobiologists have also figured prominently in finding Archaean and Proterozoic eon microfossils: In 1954, Stanley A. Tyler and Elso S. Barghoorn described 2.1-billion-year-old cyanobacteria and fungi-like microflora at their Gunflint Chert fossil site. Eleven years later, Barghoorn and J. William Schopf reported finely preserved Precambrian microflora at their Bitter Springs site in the Amadeus Basin, central Australia. In 1993, Schopf discovered O2-producing blue-green bacteria at his 3.5-billion-year-old Apex Chert site in the Pilbara Craton, Marble Bar, in the northwestern part of Western Australia. So paleobiologists were at last homing in on the origins of the Precambrian "Oxygen catastrophe". During the early part of the 21st century, two paleobiologists, Anjali Goswami and Thomas Halliday, studied the evolution of mammaliaforms during the Mesozoic and Cenozoic eras (between 299 million and 12,000 years ago). Additionally, they uncovered and studied the morphological disparity and rapid evolutionary rates of living organisms near the end and in the aftermath of the Cretaceous mass extinction (145 million to 66 million years ago).
Paleobiologic journals
Acta Palaeontologica Polonica
Biology and Geology
Historical Biology
PALAIOS
Palaeogeography, Palaeoclimatology, Palaeoecology
Paleobiology (journal)
Paleoceanography
Paleobiology in the general press
Books written for the general public on this topic include the following:
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us, written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds, written by Thomas Halliday
Introduction to Paleobiology and the Fossil Record (2020), by Michael J. Benton and David A. T. Harper
See also: History of biology; History of paleontology; History of invertebrate paleozoology; Molecular paleontology; Taxonomy of commonly fossilised invertebrates; Treatise on Invertebrate Paleontology.
Footnotes
Derek E.G. Briggs and Peter R. Crowther, eds. (2003). Palaeobiology II. Malden, Massachusetts: Blackwell Publishing. The second edition of an acclaimed British textbook.
Robert L. Carroll (1998). Patterns and Processes of Vertebrate Evolution. Cambridge Paleobiology Series. Cambridge, England: Cambridge University Press. Applies paleobiology to the adaptive radiation of fishes and quadrupeds.
Matthew T. Carrano, Timothy Gaudin, Richard Blob, and John Wible, eds. (2006). Amniote Paleobiology: Perspectives on the Evolution of Mammals, Birds and Reptiles. Chicago: University of Chicago Press. This new book describes paleobiological research into land vertebrates of the Mesozoic and Cenozoic eras.
Robert B. Eckhardt (2000). Human Paleobiology. Cambridge Studies in Biology and Evolutionary Anthropology. Cambridge, England: Cambridge University Press. This book connects paleoanthropology and archeology to the field of paleobiology.
Douglas H. Erwin (2006). Extinction: How Life on Earth Nearly Ended 250 Million Years Ago. Princeton: Princeton University Press. An investigation by a paleobiologist into the many theories as to what happened during the catastrophic Permian-Triassic transition.
Brian Keith Hall and Wendy M. Olson, eds. (2003). Keywords and Concepts in Evolutionary Biology.
Cambridge, Massachusetts: Harvard University Press. David Jablonski, Douglas H. Erwin, and Jere H. Lipps (1996). Evolutionary Paleobiology. Chicago: University of Chicago Press, 492 pages. A fine American textbook. Masatoshi Nei and Sudhir Kumar (2000). Molecular Evolution and Phylogenetics. Oxford, England: Oxford University Press. This text links DNA/RNA analysis to the evolutionary "tree of life" in paleobiology. Donald R. Prothero (2004). Bringing Fossils to Life: An Introduction to Paleobiology. New York: McGraw Hill. An acclaimed book for the novice fossil-hunter and young adults. Mark Ridley, ed. (2004). Evolution. Oxford, England: Oxford University Press. An anthology of analytical studies in paleobiology. Raymond Rogers, David Eberth, and Tony Fiorillo (2007). Bonebeds: Genesis, Analysis and Paleobiological Significance. Chicago: University of Chicago Press. A new book regarding the fossils of vertebrates, especially tetrapods on land during the Mesozoic and Cenozoic eras. Thomas J. M. Schopf, ed. (1972). Models in Paleobiology. San Francisco: Freeman, Cooper. A much-cited, seminal classic in the field discussing methodology and quantitative analysis. Thomas J.M. Schopf (1980). Paleoceanography. Cambridge, Massachusetts: Harvard University Press. A later book by the noted paleobiologist. This text discusses ancient marine ecology. J. William Schopf (2001). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press. The use of biochemical and ultramicroscopic analysis to analyze microfossils of bacteria and archaea. Paul Selden and John Nudds (2005). Evolution of Fossil Ecosystems. Chicago: University of Chicago Press. A recent analysis and discussion of paleoecology. David Sepkoski. Rereading the Fossil Record: The Growth of Paleobiology as an Evolutionary Discipline (University of Chicago Press, 2012), 432 pages. A history since the mid-19th century, with a focus on the "revolutionary" era of the 1970s and early 1980s and the work of Stephen Jay Gould and David Raup. Paul Tasch (1980). Paleobiology of the Invertebrates. New York: John Wiley & Sons. Applies statistics to the evolution of sponges, cnidarians, worms, brachiopods, bryozoa, mollusks, and arthropods. Shuhai Xiao and Alan J. Kaufman, eds. (2006). Neoproterozoic Geobiology and Paleobiology. New York: Springer Science+Business Media. This new book describes research into the fossils of the earliest multicellular animals and plants, especially the Ediacaran period invertebrates and algae. Bernard Ziegler and R. O. Muir (1983). Introduction to Palaeobiology. Chichester, England: E. Horwood. A classic, British introductory textbook. External links Paleobiology website of the National Museum of Natural History (Smithsonian) in Washington, D.C. (archived 11 March 2007) The Paleobiology Database
Herstory
Herstory is a term for history written from a feminist perspective and emphasizing the role of women, or told from a woman's point of view. It originated as an alteration of the word "history", as part of a feminist critique of conventional historiography, which, in the view of its feminist critics, is traditionally written as "his story", i.e., from the male point of view. The term is a neologism and a deliberate play on words; the word "history"—via Latin historia from the Ancient Greek word ἱστορία, a noun meaning 'knowledge obtained by inquiry'—is etymologically unrelated to the possessive pronoun his. In fact, the root word historia is grammatically feminine in Latin. Usage The Oxford English Dictionary credits Robin Morgan with first using the term "herstory" in print in her 1970 anthology Sisterhood Is Powerful. Concerning the feminist organization W.I.T.C.H., Morgan wrote: The fluidity and wit of the witches is evident in the ever-changing acronym: the basic, original title was Women's International Terrorist Conspiracy from Hell [...] and the latest heard at this writing is Women Inspired to Commit Herstory. During the 1970s and 1980s, second-wave feminists saw the study of history as a male-dominated intellectual enterprise and presented "herstory" as a means of compensation. The term, intended to be both serious and comic, became a rallying cry used on T-shirts and buttons as well as in academia. In 2017, Hridith Sudev, an inventor, environmentalist and social activist associated with various youth movements, launched 'The Herstory Movement,' an online platform to "celebrate lesser known great persons; female, queer or otherwise marginalized, who helped shape the modern World History." It is intended as an academic platform to feature stories of female historic persons and thus help facilitate more widespread knowledge about 'Great Women' History. Non-profit organizations Global G.L.O.W and LitWorld created a joint initiative called the "HerStory Campaign". This campaign works with 25 other countries to share girls' lives and stories. They encourage others to join the campaign and to "raise our voices on behalf of all world's girls". The herstory movement has spawned women-centered presses, such as Virago Press in 1973, which publishes fiction and non-fiction by noted women authors like Janet Frame and Sarah Dunant. Criticism Christina Hoff Sommers has been a vocal critic of the concept of herstory, and presented her argument against the movement in her 1994 book Who Stole Feminism? Sommers defined herstory as an attempt to infuse education with ideology at the expense of knowledge. The "gender feminists", as she called them, were the group of feminists responsible for the movement, which she felt amounted to negationism. She regarded most attempts to make historical studies more female-inclusive as being artificial in nature and an impediment to progress. Professor and author Devoney Looser has criticized the concept of herstory for overlooking the contributions that some women made as historians before the twentieth century. Author Richard Dawkins also criticized the term in The God Delusion, arguing that "the word history has not been influenced by the male pronoun". See also Feminist history Gender-neutral language History of feminism Radical feminism Women's history Womyn References Further reading Herstory: Women Who Changed the World. Daughters of Eve: A Herstory Book.
HerStory. Herstory: A Woman's View of American History.
Periods in Western art history
This is a chronological list of periods in Western art history. An art period is a phase in the development of the work of an artist, groups of artists or art movement. Ancient Classical art Minoan art Aegean art Ancient Greek art Roman art Medieval art Early Christian – 260 – 525 Migration Period – 300 – 900 Anglo-Saxon – 400 – 1066 Visigothic – 415 – 711 Pre-Romanesque – 500 – 1000 Insular – 600 – 1200 Viking – 700 – 1100 Byzantine Merovingian Carolingian Ottonian Romanesque – 1000 – 1200 Norman-Sicilian – 1100 – 1200 Gothic – 1100 – 1400 International Gothic Renaissance Italian Renaissance – late 13th century – c. 1600 – late 15th century – late 16th century Renaissance Classicism Early Netherlandish painting – 1400 – 1500 Early Cretan School – post-Byzantine art or Cretan Renaissance 1400 – 1500 Mannerism and Late Renaissance – 1520 – 1600, began in central Italy Baroque to Neoclassicism Baroque – 1600 – 1730, began in Rome Dutch Golden Age painting – 1585 – 1702 Flemish Baroque painting – 1585 – 1700 Caravaggisti – 1590 – 1650 Rococo – 1720 – 1780, began in France Neoclassicism – 1750 – 1830, began in Rome Later Cretan School, Cretan Renaissance – 1500 – 1700 Heptanese School – 1650 – 1830, began on Ionian Islands Romanticism Nazarene movement – c. 1820 – late 1840s The Ancients – 1820s – 1840s Purismo – c. 1820 – 1860s Düsseldorf school – mid-1820s – 1860s Hudson River School – 1850s – c. 1880 Luminism – 1850s – 1870s, United States Modern Greek art – 1830 – 1930s, Greece Romanticism to modern art Norwich school – 1803 – 1833, England Biedermeier – 1815 – 1848, Germany Realism – 1830 – 1870, began in France Barbizon school – 1830 – 1870, France Peredvizhniki – 1870 – 1890, Russia Abramtsevo Colony – 1870s, Russia Hague School – 1870 – 1900, Netherlands American Barbizon School 1850 – 1890s – United States Spanish Eclecticism – 1845 – 1890, Spain Macchiaioli – 1850s, Tuscany, Italy Pre-Raphaelite Brotherhood – 1848 – 1854, England Modern art Note: The countries listed are the country in which the movement or group started. Most modern art movements were international in scope. Impressionism – 1860 – 1890, France American Impressionism – 1880, United States Cos Cob Art Colony – 1890s, United States Heidelberg School – late 1880s, Australia Luminism (Impressionism) Arts and Crafts movement – 1880 – 1910, United Kingdom Tonalism – 1880 – 1920, United States Symbolism (arts) – 1880 – 1910, France/Belgium Russian Symbolism – 1884 – c. 1910, Russia Aesthetic movement – 1868 – 1901, United Kingdom Post-Impressionism – 1886 – 1905, France Les Nabis – 1888 – 1900, France Cloisonnism – c. 1885, France Synthetism – late 1880s – early 1890s, France Neo-impressionism – 1886 – 1906, France Pointillism – 1879, France Divisionism – 1880s, France Art Nouveau – 1890 – 1914, France Vienna Secession (or Secessionstil) – 1897, Austria Mir iskusstva – 1899, Russia Jugendstil – Germany, Scandinavia Modernisme – 1890 – 1910, Spain Russian avant-garde – 1890 – 1930, Russia/Soviet Union Art à la Rue – 1890s – 1905, Belgium/France Young Poland – 1890 – 1918, Poland Hagenbund – 1900 – 1930, Austria Fauvism – 1904 – 1909, France Expressionism – 1905 – 1930, Germany Die Brücke – 1905 – 1913, Germany Der Blaue Reiter – 1911, Germany Flemish Expressionism – 1911–1940, Belgium Bloomsbury Group – 1905 – c. 
1945, England Cubism – 1907 – 1914, France Jack of Diamonds – 1909 – 1917, Russia Orphism – 1912, France Purism – 1918 – 1926, France Ashcan School – 1907, United States Art Deco – 1909 – 1939, France Futurism – 1910 – 1930, Italy Russian Futurism – 1912 – 1920s, Russia Cubo-Futurism – 1912 – 1915, Russia Rayonism – 1911, Russia Synchromism – 1912, United States Universal Flowering – 1913, Russia Vorticism – 1914 – 1920, United Kingdom Biomorphism – 1915 – 1940s Suprematism – 1915 – 1925, Russia UNOVIS – 1919 – 1922, Russia Dada – 1916 – 1930, Switzerland Proletkult – 1917 – 1925, Russia Productivism – after 1917, Russia De Stijl (Neoplasticism) – 1917 – 1931, Netherlands (Utrecht) Pittura Metafisica – 1917, Italy Arbeitsrat für Kunst – 1918 – 1921 Bauhaus – 1919 – 1933, Germany The "Others" – 1919, United States Constructivism – 1920s, Russia/Soviet Union Vkhutemas – 1920 – 1926, Russia Precisionism – c. 1920, United States Surrealism – since 1920s, France Acéphale – 1936 – 1939, France Lettrism – 1942 – Les Automatistes – 1946 – 1951, Quebec, Canada Devetsil – 1920 – 1931 Group of Seven – 1920 – 1933, Canada Harlem Renaissance – 1920 – 1930s, United States American scene painting – c. 1920 – 1945, United States New Objectivity (Neue Sachlichkeit) – 1920s, Germany Grupo Montparnasse – 1922, France Northwest School – 1930s – 1940s, United States Social realism – 1929, international Socialist realism – c. 1920 – 1960, began in Soviet Union Leningrad School of Painting – 1930s – 1950s, Soviet Union Socrealism – 1949 – 1955, Poland Abstraction-Création – 1931 – 1936, France Allianz – 1937 – 1950s, Switzerland Abstract Expressionism – 1940s, Post WWII, United States Action painting – 1940s – 1950s, United States Tachisme – late-1940s – mid-1950s, France Color field painting Lyrical Abstraction COBRA – 1946 – 1952, Denmark/Belgium/The Netherlands Abstract Imagists – United States Art informel – mid-1940s – 1950s Contemporary art Contemporary art – 1946–present Note: there is overlap with what is considered "contemporary art" and "modern art." 
Contemporary Greek art – 1945, Greece Vienna School of Fantastic Realism – 1946, Austria Neo-Dada – 1950s, international International Typographic Style – 1950s, Switzerland Soviet Nonconformist Art – 1953 – 1986, Soviet Union Painters Eleven – 1954 – 1960, Canada Pop Art – mid-1950s, United Kingdom/United States Woodlands School – 1958 – 1962, Canada Situationism – 1957 – early 1970s, Italy New realism – 1960 – Magic realism – 1960s, Germany Minimalism – 1960 – Hard-edge painting – early 1960s, United States Fluxus – early 1960s – late-1970s Happening – early 1960s – Video art – early 1960s – Psychedelic art – early 1960s – Conceptual art – 1960s – Graffiti – 1960s – Junk art – 1960s – Performance art – 1960s – Op Art – 1964 – Post-painterly abstraction – 1964 – Lyrical Abstraction – mid-1960s – Process art – mid-1960s – 1970s Arte Povera – 1967 – Art and Language – 1968, United Kingdom Photorealism – late 1960s – early 1970s Land art – late-1960s – early 1970s Post-minimalism – late 1960s – 1970s Postmodern art – 1970 – present Deconstructivism Metarealism – 1970 – 1980, Soviet Union Sots Art – 1972 – 1990s, Soviet Union/Russia Installation art – 1970s – Mail art – 1970s – Maximalism – 1970s – Neo-expressionism – late 1970s – Neoism – 1979 Figuration Libre – early 1980s Street art – early 1980s Young British Artists – 1988 – Digital art – 1990 – present Toyism – 1992 – present Massurrealism – 1992 – Stuckism – 1999 – Remodernism – 1999 – Excessivism – 2015 – See also Aegean art African art Indigenous Australian art Arts of the ancient world Art of Ancient Egypt Art in Ancient Greece Asian art Buddhist art Confucian art Coptic art Hindu art Indian art Islamic art Naive Art Pre-Columbian art Pre-historic art Roman art Visigothic art Visual arts by indigenous peoples of the Americas Transgressive art Outsider art Western art
Archaic globalization
Archaic globalization is a phase in the history of globalization, and conventionally refers to globalizing events and developments from the time of the earliest civilizations until roughly 1600 (the following period is known as early modern globalization). Archaic globalization describes the relationships between communities and states and how they were created by the geographical spread of ideas and social norms at both local and regional levels. States began to interact and trade with others within close proximity as a way to acquire coveted goods that were considered a luxury. This trade led to the spread of ideas such as religion, economic structures and political ideals. Merchants became connected and aware of others in ways that had not been apparent. Archaic globalization is comparable to present-day globalization on a much smaller scale. It not only allowed the spread of goods and commodities to other regions, but it also allowed people to experience other cultures. Cities that partook in trading were bound together by sea lanes, rivers, and great overland trade routes, some of which had been in use since antiquity. Trading was broken up according to geographic location, with centers between flanking places serving as "break-in-bulk" and exchange points for goods destined for more distant markets. During this time period the subsystems were more self-sufficient than they are today and therefore less vitally dependent upon one another for everyday survival. While long-distance trading came with many trials and tribulations, a great deal of it still went on during this early time period. Linking the trade together were eight interlinked subsystems grouped into three large circuits: the western European, the Middle Eastern, and the Far Eastern. This interaction during trading was early civilization's way of communicating and spreading the ideas from which modern globalization would eventually emerge, adding a new dimension to present-day society. Defining globalization Globalization is the process of increasing interconnectedness between regions and individuals. Steps toward globalization include economic, political, technological, social, and cultural connections around the world. The term "archaic" describes early ideals and functions that were once historically apparent in society but may have disintegrated over time. There are three main prerequisites for globalization to occur. The first is the idea of Eastern Origins, which shows how Western states have adapted and implemented learned principles from the East. Without the traditional ideas from the East, Western globalization would not have emerged the way it did. The second is distance. The interactions amongst states were not on a global scale and most often were confined to Asia, North Africa, the Middle East and certain parts of Europe. With early globalization it was difficult for states to interact with others that were not within close proximity. Eventually, technological advances allowed states to learn of others' existence and another phase of globalization was able to occur. The third has to do with interdependency, stability and regularity. If a state is not dependent on another, then there is no way for the two to be mutually affected by one another. This is one of the driving forces behind global connections and trade; without either, globalization would not have emerged the way it did and states would still be dependent on their own production and resources to function. 
This is one of the arguments surrounding the idea of early globalization. It is argued that archaic globalization did not function in a similar manner to modern globalization because states were not as interdependent on others as they are today. Emergence of a world system Historians argue that a world system was in place before the rise of capitalism between the sixteenth and nineteenth centuries. This is referred to as the early age of capitalism, in which long-distance trade, market exchange and capital accumulation existed amongst states. In 800 AD Greek, Roman and Muslim empires emerged covering areas known today as China and the Middle East. Major religions such as Christianity, Islam and Buddhism spread to distant lands where many are still intact today. One of the most popular examples of distant trade routes can be seen with the silk route between China and the Mediterranean, along with the movement and trade of art and luxury goods between Arab regions, South Asia and Africa. These relationships through trade mainly formed in the east and eventually led to the development of capitalism. It was at this time that power and land shifted from the nobility and church to the bourgeoisie, and a division of labor in production emerged. During the later part of the twelfth century and the beginning of the thirteenth century an international trade system was developed between states ranging from northwestern Europe to China. During the 1500s other Asian empires emerged, which included trading over longer distances than before. During the early exchanges between states, Europe had little to offer with the exception of slaves, metals, wood and furs. The push to sell items in the east drove European production and helped integrate Europe into the exchange. The European expansion and growth of opportunities for trade made possible by the Crusades spurred a renaissance of agriculture, mining, and manufacturing. Rapid urbanization throughout Europe allowed a connection from the North Sea to Venice. Advances in industrialization, coupled with the rise in population and the growing demands of the eastern trade, led to the growth of true trading emporia with outlets to the sea. There is a 'multi-polar' nature to archaic globalization, which involved the active participation of non-Europeans. Because it predated the Great Divergence of the nineteenth century, in which Western Europe pulled ahead of the rest of the world in terms of industrial production and economic output, archaic globalization was a phenomenon that was driven not only by Europe but also by other economically developed Old World centers such as Gujarat, Bengal, coastal China and Japan. These pre-capitalist movements were regional rather than global and for the most part temporary. This idea of early globalization was proposed by the historian A.G. Hopkins in 2001. Hopkins's main points on archaic globalization can be seen in trade and the diasporas that developed from it, as well as religious ideas and empires that spread throughout the region. This new interaction amongst states created interconnections between parts of the world, which in turn led to the eventual interdependency amongst these state actors. The main actors that partook in the spreading of goods and ideas were kings, warriors, priests and traders. Hopkins also notes that during this time period mini-globalizations were prominent and that some collapsed or became more insular. 
These mini-globalizations are referred to as episodic and ruptured, with empires sometimes overreaching and having to retract. These mini-globalizations left remnants that allowed the West to adopt these new ideals, leading to the idea of Western capitalism. The adopted ideals can be seen in the Western monetary system and are central to systems like capitalism that define modernity and modern globalization. The three principles of archaic globalization Archaic globalization consists of three principles: universalizing kingship, expansion of religious movements, and medicinal understanding. The universalizing of kingship led soldiers and monarchs across great distances in search of honor and prestige. However, crossing foreign lands also gave the traveling men the opportunity to exchange prized goods. This expanded trade between distant lands, which consequently increased the extent of social and economic relations. Despite the vast distances covered by monarchs and their companies, pilgrimages remain one of the greatest global movements of people. Finally, the desire for better health was the third push behind archaic globalization. While the trading of spices, precious stones, animals, and weapons remained of major importance, people began to seek medicine from faraway lands. This established more trade routes, especially to China for its tea. Economic exchange With the increase in trade and state linkage, economic exchange extended throughout the region and caused actors to form new relationships. This early economic development can be seen in Champagne Fairs, which were outdoor markets where traveling merchants came to sell their products and make purchases. Traditionally, market fairs used barter as opposed to money; once larger itinerant merchants began to frequent them, the need for currency became greater and money changers had to be established. Some historical scholars argue that this was the beginning of the role of banker and the institution of credit. An example can be seen with one individual in need of an item the urban merchant does not ordinarily stock. The product seeker orders the item, which the merchant promises to bring him next time. The product seeker either gives credit to the merchant by paying them in advance, gets credit from the merchant by promising to pay them once the item is in stock, or some type of concession is made through a down payment. If the product seeker does not have the amount required by the merchant, he may borrow from the capital stored by the money changer, or he may mortgage part of his expected harvest, either to the money changer or to the merchant he is seeking goods from. These lengthy transactions eventually resulted in a complex economic system once the weekly market began to expand from barter to the monetized system required by long-distance trading. A higher circuit of trade developed once urban traders from outside city limits travelled from distant directions to the market center in the quest to buy or sell goods. When local individuals placed advance orders, customers in the various traders' home towns might begin to place orders for items from a distant town that their trader could obtain from a counterpart there. 
This central meeting point became the focus of long-distance trade and the basis on which it began to increase. Expansion of long-distance trade For trade to expand during this early time period, some basic functions were required of the market as well as the merchants. The first was security. Goods that were being transported began to have more value and the merchants needed to protect their coveted goods, especially since they were often traveling through poor areas where the risk of theft was high. To overcome this problem, merchants began to travel in caravans as a way to ensure their personal safety as well as the safety of their goods. The second prerequisite of early long-distance trade was an agreement on a rate of exchange. Since many of the merchants came from distant lands with different monetary systems, a system had to be put into place to enforce repayment for goods, settle previous debts and ensure contracts were upheld. Expansion was also able to thrive so long as there was a motive for exchange that promoted trade amongst foreign lands. Also, outside merchants' access to trading sites was a critical factor in trade route growth. The spread of goods and ideas The most popular goods produced were spices, which were traded over short distances, while manufactured goods were central to the system, which could not have functioned without them. The invention of money in the form of gold coins in Europe and the Middle East and paper money in China around the thirteenth century allowed trade to move more easily between the different actors. The main actors involved in this system viewed gold, silver, and copper as valuable on different levels. Nevertheless, goods were transferred, prices set, exchange rates agreed upon, contracts entered into, credit extended, partnerships formed and agreements that were made were kept on record and honored. During this time of globalization, credit was also used as a means for trading. The use of credit began in the form of blood ties but later led to the emergence of the "banker" as a profession. During this period the Republic of Genoa and the Republic of Venice emerged as prominent commercial and maritime powers in Europe and in the Mediterranean area as a whole. Genoa, strategically located in the Mediterranean, controlled crucial trade routes connecting Western Europe with the Middle East and North Africa and the Black Sea. This positioning solidified its role as a vital commercial hub, facilitating the exchange of goods and ideas across continents. Meanwhile, Venice dominated trade in the Adriatic, establishing and maintaining an extensive network of routes known as the Venetian maritime empire. These routes reached into the Byzantine and Ottoman Empires, allowing Venice to wield significant economic and political influence in the region. Both republics fiercely competed for control of territories and trade routes, shaping the economic landscape and cultural exchange of their time. With the spread of people came new ideas, religion and goods throughout the land, which had never been apparent in most societies before the movement. Also, this globalization lessened the degree of feudal life by the transition from a self-sufficient society to a money economy. Most of the trade connecting North Africa and Europe was controlled by the Middle East, China and India around 1400. 
Because of the danger and great cost of long-distance travel in the pre-modern period, archaic globalization grew out of the trade in high-value commodities which took up a small amount of space. Most of the goods that were produced and traded were considered luxuries, and those who possessed such coveted items were widely seen as occupying a higher place on the societal scale. Examples of such luxury goods would include Chinese silks, exotic herbs, coffee, cotton, iron, Indian calicoes, Arabian horses, gems and spices or drugs such as nutmeg, cloves, pepper, ambergris and opium. In the thirteenth century, as today, luxury items were favored because small high-value goods can bear high transport costs and still retain their value, whereas low-value heavy goods are not worth carrying very far. Purchases of luxury items such as these are described as archaic consumption, since trade was driven largely by these items as opposed to everyday needs. The distinction between food, drugs and materia medica is often quite blurred in regard to these substances, which were valued not only for their rarity but because they appealed to humoral theories of health and the body that were prevalent throughout premodern Eurasia. Major trade routes During the time of archaic globalization there were three major trade routes which connected Europe, China and the Middle East. The northernmost route went mostly through the Mongol Empire and was nearly 5000 miles long. Even though the route consisted mostly of vast stretches of desert with little to no resources, merchants still traveled it, because during the 13th century Kubilai Khan had united the Mongol Empire and charged only a small protective rent to travelers. Before the unification, merchants from the Middle East used the path but were stopped and taxed at nearly every village. The middle route went from the coast of Syria to Baghdad; from there the traveler could follow the land route through Persia to India, or sail to India via the Persian Gulf. Between the 8th and 10th centuries, Baghdad was a world city, but in the 11th century it began to decline due to natural disasters including floods, earthquakes, and fires. In 1258, Baghdad was taken over by the Mongols. The Mongols imposed high taxes on the citizens of Baghdad, which led to a decrease in production and caused merchants to bypass the city. The third, southernmost route went through Mamluk-controlled Egypt; after the fall of Baghdad, Cairo became the Islamic capital. Some major cities along these trading routes were wealthy and provided services for merchants and the international markets. Palmyra and Petra, which are located on the fringes of the Syrian Desert, flourished mainly as power centers of trading. They would police the trade routes and be the source of supplies for the merchants' caravans. They also became places where people of different ethnic and cultural backgrounds could meet and interact. These trading routes were the communication highways for the ancient civilizations and their societies. New inventions, religious beliefs, artistic styles, languages, and social customs, as well as goods and raw materials, were transmitted by people moving from one place to another to conduct business. Proto-globalization Proto-globalization is the period following archaic globalization, lasting from the 17th through the 19th centuries. 
The global routes established within the period of archaic globalization gave way to more distinct, expanding routes and more complex systems of trade within the period of proto-globalization. Familiar trading arrangements such as the East India Company appeared within this period, making larger-scale exchanges possible. Slave trading was especially extensive, and the associated mass production of commodities on plantations is characteristic of this time. As these higher-frequency trade routes produced a measurable number of polyethnic regions, war became prominent. Such wars include the French and Indian War, the American Revolutionary War, and the Anglo-Dutch War between England and the Dutch Republic. Modern globalization The modern form of globalization began to take shape during the 19th century. The evolving beginnings of this period were largely responsible for the expansion of the West, with capitalism and imperialism backed by the nation-state and industrial technology. This began to emerge during the 1500s, continuing to expand exponentially over time as industrialization developed in the 18th century. The conquests of the British Empire and the Opium Wars added to the industrialization and formation of the growing global society because they created vast consumer regions. The first phase of modern globalization began to take hold around World War I. V. M. Yeates argued that the economic forces of globalization were part of the cause of the war. Since World War I, globalization has expanded greatly. The evolving improvements of multinational corporations, technology, science, and mass media have all been results of extensive worldwide exchanges. In addition, institutions such as the World Bank, the World Trade Organization and many international telecommunication companies have also shaped modern globalization. The World Wide Web has also played a large role in modern globalization. The Internet provides connectivity across national and international borders, aiding in the enlargement of a global network. See also History of globalization Military globalization References
Neorealism (international relations)
Neorealism or structural realism is a theory of international relations that emphasizes the role of power politics in international relations, sees competition and conflict as enduring features and sees limited potential for cooperation. The anarchic state of the international system means that states cannot be certain of other states' intentions and their security, thus prompting them to engage in power politics. It was first outlined by Kenneth Waltz in his 1979 book Theory of International Politics. Alongside neoliberalism, neorealism is one of the two most influential contemporary approaches to international relations; the two perspectives dominated international relations theory from the 1960s to the 1990s. Neorealism emerged from the North American discipline of political science, and reformulates the classical realist tradition of E. H. Carr, Hans Morgenthau, George Kennan, and Reinhold Niebuhr. Neorealism is subdivided into defensive and offensive neorealism. Origins Neorealism is an ideological departure from Hans Morgenthau's writing on classical realism. Classical realism originally explained the machinations of international politics as being based on human nature and therefore subject to the ego and emotion of world leaders. Neorealist thinkers instead propose that structural constraints—not strategy, egoism, or motivation—will determine behavior in international relations. John Mearsheimer made significant distinctions between his version of offensive neorealism and Morgenthau in his book titled The Tragedy of Great Power Politics. Theory Structural realism holds that the nature of the international structure is defined by its ordering principle (anarchy), units of the system (states), and by the distribution of capabilities (measured by the number of great powers within the international system), with only the last being considered an independent variable with any meaningful change over time. The anarchic ordering principle of the international structure is decentralized, meaning there is no formal central authority; every sovereign state is formally equal in this system. These states act according to the logic of egoism, meaning states seek their own interest and will not subordinate their interest to the interests of other states. States are assumed at a minimum to want to ensure their own survival as this is a prerequisite to pursue other goals. This driving force of survival is the primary factor influencing their behavior and in turn ensures states develop offensive military capabilities for foreign interventionism and as a means to increase their relative power. Because states can never be certain of other states' future intentions, there is a lack of trust between states which requires them to be on guard against relative losses of power which could enable other states to threaten their survival. This lack of trust, based on uncertainty, is called the security dilemma. States are deemed similar in terms of needs but not in capabilities for achieving them. The positional placement of states in terms of abilities determines the distribution of capabilities. The structural distribution of capabilities then limits cooperation among states through fears of relative gains made by other states, and the possibility of dependence on other states. The desire and relative abilities of each state to maximize relative power constrain each other, resulting in a 'balance of power', which shapes international relations. 
It also gives rise to the 'security dilemma' that all nations face. There are two ways in which states balance power: internal balancing and external balancing. Internal balancing occurs as states grow their own capabilities by increasing economic growth and/or increasing military spending. External balancing occurs as states enter into alliances to check the power of more powerful states or alliances. Neorealism sees states as "black boxes": the structure of the international system, rather than the units within it and their unique characteristics, is treated as causal. Neorealists contend that there are essentially three possible systems according to changes in the distribution of capabilities, defined by the number of great powers within the international system. A unipolar system contains only one great power, a bipolar system contains two great powers, and a multipolar system contains more than two great powers. Neorealists conclude that a bipolar system is more stable (less prone to great power war and systemic change) than a multipolar system because balancing can only occur through internal balancing as there are no extra great powers with which to form alliances. Because there is only internal balancing in a bipolar system, rather than external balancing, there is less opportunity for miscalculations and therefore less chance of great power war. That is a simplification and a theoretical ideal. Neorealists argue that processes of emulation and competition lead states to behave in the aforementioned ways. Emulation leads states to adopt the behaviors of successful states (for example, those victorious in war), whereas competition leads states to vigilantly ensure their security and survival through the best means possible. Due to the anarchic nature of the international system and the inability of states to rely on other states or organizations, states have to engage in "self-help." For neorealists, social norms are considered largely irrelevant. This is in contrast to some classical realists, who did see norms as potentially important. Neorealists are also skeptical of the ability of international organizations to act independently in the international system and facilitate cooperation between states. Defensive realism Structural realism has become divided into two branches, defensive and offensive realism, following the publication of Mearsheimer's The Tragedy of Great Power Politics in 2001. Waltz's original formulation of neorealism is now sometimes called defensive realism, while Mearsheimer's modification of the theory is referred to as offensive realism. Both branches agree that the structure of the system is what causes states to compete, but defensive realism posits that most states concentrate on maintaining their security (i.e. states are security maximizers), while offensive realism claims that all states seek to gain as much power as possible (i.e. states are power maximizers). A foundational study in the area of defensive realism is Robert Jervis' classic 1978 article on the "security dilemma." It examines how uncertainty and the offense-defense balance may heighten or soften the security dilemma. Building on Jervis, Stephen Van Evera explores the causes of war from a defensive realist perspective. Offensive realism Offensive realism, developed by Mearsheimer, differs in the amount of power that states desire. Mearsheimer proposes that states maximize relative power, ultimately aiming for regional hegemony. 
In addition to Mearsheimer, a number of other scholars have sought to explain why states expand when opportunities to do so arise. For instance, Randall Schweller refers to states' revisionist agendas to account for their aggressive military action. Eric Labs investigates the expansion of war aims during wartime as an example of offensive behavior. Fareed Zakaria analyzes the history of US foreign relations from 1865 to 1914 and asserts that foreign interventions during this period were not motivated by worries about external threats but by a desire to expand US influence. Scholarly debate Within realist thought While neorealists agree that the structure of the international relations is the primary impetus in seeking security, there is disagreement among neorealist scholars as to whether states merely aim to survive or whether states want to maximize their relative power. The former represents the ideas of Kenneth Waltz, while the latter represents the ideas of John Mearsheimer and offensive realism. Other debates include the extent to which states balance against power (in Waltz's original neorealism and classic realism), versus the extent to which states balance against threats (as introduced in Stephen Walt's 'The Origins of Alliances' (1987)), or balance against competing interests (as introduced in Randall Schweller's 'Deadly Imbalances' (1998)). With other schools of thought Neorealists conclude that because war is an effect of the anarchic structure of the international system, it is likely to continue in the future. Indeed, neorealists often argue that the ordering principle of the international system has not fundamentally changed from the time of Thucydides to the advent of nuclear warfare. The view that long-lasting peace is not likely to be achieved is described by other theorists as a largely pessimistic view of international relations. One of the main challenges to neorealist theory is the democratic peace theory and supporting research, such as the book Never at War. Neorealists answer this challenge by arguing that democratic peace theorists tend to pick and choose the definition of democracy to achieve the desired empirical result. For example, the Germany of Kaiser Wilhelm II, the Dominican Republic of Juan Bosch, and the Chile of Salvador Allende are not considered to be "democracies of the right kind" or the conflicts do not qualify as wars according to these theorists. Furthermore, they claim several wars between democratic states have been averted only by causes other than ones covered by democratic peace theory. Advocates of democratic peace theory see the spreading of democracy as helping to mitigate the effects of anarchy. With enough democracies in the world, Bruce Russett thinks that it "may be possible in part to supersede the 'realist' principles (anarchy, the security dilemma of states) that have dominated practice since at least the seventeenth century." John Mueller believes that it is not the spreading of democracy but rather other conditions (e.g., power) that bring about democracy and peace. In consenting with Mueller's argument, Kenneth Waltz notes that "some of the major democracies—Britain in the nineteenth century and the United States in the twentieth century—have been among the most powerful states of their eras." 
One of the most notable schools contending with neorealist thought, aside from neoliberalism, is the constructivist school, which is often seen as disagreeing with the neorealist focus on power, instead emphasising ideas and identity as an explanation for trends in international relations. Recently, however, a school of thought called the English School has merged the neorealist tradition with the constructivist technique of analyzing social norms to provide a broader scope of analysis for international relations. Criticism Neorealism has been criticized from various directions. Other major paradigms of international relations scholarship, such as liberal and constructivist approaches, have criticized neorealist scholarship in terms of theory and empirics. Within realism, classical realists and neoclassical realists have also challenged some aspects of neorealism. Among the issues for which neorealism has been criticized are its neglect of domestic politics, race, gains from trade, the pacifying effects of institutions, and the relevance of regime type for foreign policy behavior. David Strang argues that neorealist predictions fail to account for transformations in sovereignty over time and across regions. These transformations in sovereignty have had implications for cooperation and competition, as polities that were recognized as sovereign have seen considerably greater stability. In response to criticisms that neorealism lacks relevance for contemporary international policy and does a poor job explaining the foreign policy behavior of major powers, Charles Glaser wrote in 2003, "this is neither surprising nor a serious problem, because scholars who use a realist lens to understand international politics can, and have, without inconsistency or contradiction also employed other theories to understand issues that fall outside realism's central focus." Notable neorealists Robert J. Art Richard K. Betts Robert Gilpin Robert W. Tucker Joseph Grieco Robert Jervis Christopher Layne Jack Snyder John Mearsheimer Stephen Walt Kenneth Waltz Stephen Van Evera Barry Posen Charles L. Glaser Marc Trachtenberg Gottfried-Karl Kindermann See also Foreign interventionism International relations theory Mercantilism Neofunctionalism Neoliberalism Realpolitik Notes References Further reading Books Waltz, Kenneth N. (1959). Man, the State, and War: A Theoretical Analysis. Walt, Stephen (1990). The Origins of Alliances Van Evera, Stephen. (2001). Causes of War Waltz, Kenneth N. (2008). Realism and International Politics Art, Robert J. (2008). America's Grand Strategy and World Politics Glaser, Charles L. (2010). Rational Theory of International Politics: The Logic of Competition and Cooperation Articles Jervis, Robert (1978). Cooperation Under the Security Dilemma (World Politics, Vol. 30, No. 2, 1978) Art, Robert J. (1998). Geopolitics Updated: The Strategy of Selective Engagement (International Security, Vol. 23, No. 3, 1998–99) Farber, Henry S.; Gowa, Joanne (1995). Polities and Peace (International Security, Vol. 20, No. 2, 1995) Gilpin, Robert (1988). The Theory of Hegemonic War (The Journal of Interdisciplinary History, Vol. 18, No. 4, 1988) Posen, Barry (2003). Command of the Commons: The Military Foundations of U.S. Hegemony (International Security, Vol. 28, No. 1, 2003) External links Theory Talks Interview with Kenneth Waltz, founder of neorealism (May 2011) Theory Talks Interview with neorealist Robert Jervis (July 2008)
Demographic history
Demographic history is the reconstructed record of human population in the past. Given the lack of population records prior to the 1950s, there are many gaps in our record of demographic history. Historical demographers must make do with estimates, models and extrapolations. For the demographic methodology, see historical demography. Historical population of the world Estimating the ancestral population of anatomically modern humans, Colin McEvedy and Richard Jones chose bounds based on gorilla and chimpanzee population densities of 1/km2 and 3-4/km2, respectively, then assumed that as Homo erectus moved up the food chain, they lost an order of magnitude in density. With a habitat of 68 million km2 ("the Old World south of latitude 50° north, minus Australia"), Homo erectus could have numbered around 1.7 million individuals. After Homo erectus was replaced by Homo sapiens, which moved into the New World and de-glaciated territory, world population was approaching four million people by 10,000 BC. McEvedy and Jones argue that, after populating the maximum available range, this was the limit of our food-gathering ancestors, with further population growth requiring food-producing activities. The initial population "upswing" began around 5000 BC. Global population gained 50% in the 5th millennium BC, and 100% each millennium until 1000 BC, reaching 50 million people. After the beginning of the Iron Age, the growth rate reached its peak with a doubling time of 500 years. However, growth slackened between 500 BC and 1 AD, before ceasing around 200 AD. This "primary cycle" was, at this time in history, confined to Europe, North Africa, and mainland Asia. McEvedy and Jones describe a secondary, "medieval cycle" being led by feudal Europe and Song China from around 900 AD. During the period from 500 to 900, world population grew slowly, but the growth rate accelerated between 900 and 1300, when the population doubled. During the 14th century, there was a fall in population associated with the Black Death that spread from Asia to Europe. This was followed by a period of restrained growth for 300 years. John F. Richards estimated world population in the early modern period, 1500–1800, as follows: 400–500 million in 1500, 500–600 million in 1600, 600–700 million in 1700, and 850–950 million in 1800. In the 18th century, world population entered a period of accelerated growth. European population reached a peak growth rate of 10 per thousand per year in the second half of the 19th century. During the 20th century, the growth rate among the European populations fell and was overtaken by a rapid acceleration in the growth rate in other continents, which reached 21 per thousand per year in the last 50 years of the millennium. Between 1900 and 2000, the population of the world increased by 277%, a fourfold increase from 1.5 billion to 6 billion. The European component increased by 124%, and the remainder by 349%. Asia India The Indian population was about 100 million in 1500. Under the Mughal Empire, the population rose to 160 million in 1700; by 1800 it had risen to 185 million. Mughal India had a relatively high degree of urbanization for its time, with 15% of its population living in urban centres, higher than the urban share of the population in Europe at the time and higher than that of British India in the 19th century. Under the British Raj, the population reached 255 million according to the census taken in 1881. 
Studies of India's population since 1881 have focused on such topics as total population, birth and death rates, growth rates, geographic distribution, literacy, the rural and urban divide, cities of a million, and the three cities with populations over eight million: Delhi, Greater Mumbai (Bombay), and Kolkata (Calcutta). Mortality rates fell in the 1920–45 era, primarily due to biological immunization. Other factors included rising incomes and better living conditions, improved nutrition, a safer and cleaner environment, and better official health policies and medical care. Severe overcrowding in the cities caused major public health problems, as noted in an official report from 1938: In the urban and industrial areas ... cramped sites, the high values of land and the necessity for the worker to live in the vicinity of his work ... all tend to intensify congestion and overcrowding. In the busiest centres houses are built close together, eave touching eave, and frequently back to back .... Indeed space is so valuable that, in place of streets and roads, winding lanes provide the only approach to the houses. Neglect of sanitation is often evidenced by heaps of rotting garbage and pools of sewage, whilst the absence of latrines enhance the general pollution of air and soil. China China has older bureaucratic records than any other country. For example, Chinese imperial examinations can be dated back to 165 AD. The British economist Angus Maddison estimated Asia's past populations through detailed analysis of China's bureaucratic records and the country's past gross domestic product. Population of Asia 1-1820 C.E. (million) Source: Maddison In the 15th century, China had a population of approximately 100 million. During the Ming (1368-1644) and Qing (1644-1911) dynasties, China experienced a high population increase. From 1749 to 1811, the population doubled from approximately 177 million to 358 million. Advances in China's agriculture made feeding such a growing population possible. However, by 1815 increased rice prices caused landless households to favor feeding male infants, which caused an increase in female infant mortality. Middle-class households, with their higher economic means, did the opposite, and their female infant mortality rate declined. The rising cost of rice also affected adult demographics: the adult male mortality rate increased more than the adult female mortality rate. The growth of China's population continued into the 21st century, and the country continued to face the difficult issue of how to feed its ever-growing population. In 1979, a drastic reform was put into place with the implementation of China's one-child policy. Early modern Europe Karl Julius Beloch (and for Russia, Yaroslav Vodarsky) estimated the population of early modern Europe circa 1600. See also Historical demography, Methodology and sources Classical demography, Ancient world Medieval demography Early modern demography Paleodemography Prehistoric demography Demographic history by country or region References Further reading Cipolla, Carlo M. The economic history of world population (1974), online free Fogel, Robert W. The Escape from Hunger and Premature Death, 1700-2100: Europe, America, and the Third World (2004) Fogel, Robert W. Explaining Long-Term Trends in Health and Longevity (2014) Lee, Ronald. "The Demographic Transition: Three Centuries of Fundamental Change," Journal of Economic Perspectives (2003) 17#4 pp. 167–190 online Livi-Bacci, Massimo. 
A concise history of world population (Wiley, 2012) excerpt McEvedy, Colin. Atlas of World Population History (1978) Basic graphs of total population for every region of the globe from 400 BC to 2000 AD, online free Wrigley, E.A. Population and History (1976) Ancient Bagnall, R.S. and Frier, B.W. The Demography of Roman Egypt (1994) Using data on family registers during the first three centuries AD, and modern demographic methods and models. Reconstructs the patterns of mortality, marriage, fertility, and migration. Scheidel, Walter, ed. Debating Roman Demography (Brill: Leiden, 2001) Scheidel, Walter. Roman Population Size: The Logic of the Debate, July 2007, Princeton/Stanford Working Papers in Classics Asia Davis, Kingsley. The Population of India and Pakistan (1951) Dyson, Tim, ed. India's Historical Demography: Studies in Famine, Disease and Society (London: Curzon, 1989) Mari Bhat, P. N. "Mortality and fertility in India, 1881–1961: a reassessment." in India's Historical Demography (1989). Hanley, Susan B., and Kozo Yamamura. Economic and demographic change in pre-industrial Japan 1600-1868 (1977). Krishnan, Parameswara. Glimpses of Indian Historical Demography (Delhi: B.R. Publishing Corporation, 2010) Lee, James Z. and Feng Wang. One Quarter of Humanity: Malthusian Mythology and Chinese Realities, 1700-2000 (2002); argues China's marital fertility was far below European levels, especially because of infanticide and abortion, as well as lower rates of marriage for men, low rates of marital fertility, and high rates of adoption Peng, Xizhe. "China's demographic history and future challenges." Science 333.6042 (2011): 581–587. Taeuber, Irene Barnes. The population of Japan (1958). Britain Eversley, D. E. C. An Introduction to English Historical Demography (1966) Houston, R. A. The Population History of Britain and Ireland 1500-1750 (1995) Lindert, Peter H. "English living standards, population growth, and Wrigley-Schofield." Explorations in Economic History 20.2 (1983): 131–155. Wrigley, Edward Anthony, and Roger S. Schofield. The population history of England 1541-1871 (Cambridge University Press, 1989) Wrigley, E. A. ed. English Population History from Family Reconstitution 1580-1837 (1997) Western Europe Cain, L.P. and D.G. Paterson. The Children of Eve: Population and Well-being in History (Wiley-Blackwell, 2012) 391 pp.; covers Europe and North America Flinn, Michael W. The European Demographic System, 1500-1820 (1981) Glass, David V. and David E.C. Eversley, Population in History: Essays in Historical Demography, London: Edward E. Arnold, 1965 Henry, Louis. "The population of France in the eighteenth century." Population in History pp. 441+ Herlihy, David. "Population, Plague and Social Change in Rural Pistoia, 1201–1430." Economic History Review (1965) 18#2 pp. 225–244. in JSTOR (www.jstor.org/stable/2592092); on Italy Imhof, Arthur E. "Historical demography as social history: possibilities in Germany." Journal of family history (1977) 2#4 pp. 305–332. Kelly, Morgan, and Cormac Ó Gráda. "Living standards and mortality since the middle ages." Economic History Review (2014) 67#2 pp. 358–381. Knodel, John. "Two and a half centuries of demographic history in a Bavarian village." Population studies 24.3 (1970): 353–376. Online Livi Bacci, Massimo et al. Population and Nutrition: An Essay on European Demographic History (Cambridge Studies in Population, Economy and Society in Past Time) (1991). Russell, Josiah Cox. "Late ancient and medieval population." 
Transactions of the American Philosophical Society (1958): 1–152. in JSTOR Walter, John W. and Roger Schofield, eds. Famine, Disease and the Social Order in Early Modern Society (1991) Eastern Europe Katus, Kalev. "Demographic trends in Estonia throughout the centuries." Yearbook of Population Research in Finland 28 (1990): 50–66. Katus, Kalev, et al. "Fertility Development in the Baltic Countries Since 1990: a Transformation in the Context of Long-term Trends." Finnish Yearbook of Population Research 44 (2009): 7-32. Lutz, Wolfgang, and Sergei Scherbov, eds. Demographic Trends and Patterns in the Soviet Union Before 1991 (1993) McCarthy, Justin. Population history of the Middle East and the Balkans (Isis Press, 2002) Latin America Cook, Noble David. Demographic Collapse: Indian Peru, 1520-1620 (Cambridge University Press, 2004) Sanchez-Albornoz, Nicolas, and W.A.R. Richardson. Population of Latin America: A History (1974) Middle East Karpat, Kemal H. Ottoman Population, 1830-1914: Demographic and Social Characteristics (1985) McCarthy, Justin. Population history of the Middle East and the Balkans (Isis Press, 2002) Todorov, Nikolai. "The Historical Demography of the Ottoman Empire: Problems and Tasks." in Dimitrije Djordjević, and Richard B. Spence, eds. Scholar, Patriot, Mentor: Historical Essays in Honor of Dimitrije Djordjevic (1992) pp: 151–171. North America Fogel, Robert W. "Nutrition and the decline in mortality since 1700: Some preliminary findings." in by Stanley L. Engerman and Robert E. Gallman, eds. Long-term factors in American economic growth (U of Chicago Press, 1986) pp 439–556. Hacker, J. David. "A census-based count of the Civil War Dead." Civil War History (2011) 57# pp: 307–348. Online Haines, Michael R. and Richard H. Steckel.. A Population History of North America (2000) Klein, Herbert S. A population history of the United States (Cambridge University Press, 2012) ) excerpt Smith, Daniel Scott. "The demographic history of colonial New England." The journal of economic history 32.01 (1972): 165–183. Online Smith, Daniel Scott, and Michael S. Hindus. "Premarital pregnancy in America 1640-1971: An overview and interpretation." The journal of interdisciplinary history 5.4 (1975): 537–570. in JSTOR Comparative Lundh, Christer and Satomi Kurosu. Similarity in Difference: Marriage in Europe and Asia, 1700-1900 (2014) External links http://www.history.ac.uk/makinghistory/themes/demographic_history.html Demography Population
Culture-historical archaeology
Culture-historical archaeology is an archaeological theory that emphasises defining historical societies into distinct ethnic and cultural groupings according to their material culture. It originated in the late nineteenth century as cultural evolutionism began to fall out of favor with many antiquarians and archaeologists. It was gradually superseded in the mid-twentieth century by processual archaeology. Cultural-historical archaeology had in many cases been influenced by a nationalist political agenda, being utilised to prove a direct cultural and/or ethnic link from prehistoric and ancient peoples to modern nation-states, something that has in many respects been disproved by later research and archaeological evidence. First developing in Germany among those archaeologists surrounding Rudolf Virchow, culture-historical ideas would later be popularised by Gustaf Kossinna. Culture-historical thought would be introduced to British archaeology by the Australian archaeologist V. Gordon Childe in the late 1920s. In the United Kingdom and United States, culture-history came to be supplanted as the dominant theoretical paradigm in archaeology during the 1960s, with the rise of processual archaeology. Nevertheless, elsewhere in the world, culture-historical ideas continue to dominate. Background Webster remarked that the defining feature of culture-historical archaeology was its "statements which reveal common notions about the nature of ancient cultures; about their qualities; about how they related to the material record; and thus about how archaeologists might effectively study them." Webster noted that the second defining feature of culture-historical thought was its emphasis on classification and typologies. Causes Culture-historical archaeology arose during a somewhat tumultuous time in European intellectual thought. The Industrial Revolution had spread across many nations, leading to the creation of large urban centres, most of which were filled with poverty stricken proletarian workers. This new urban working class had begun to develop a political voice through socialism, threatening the established political orders of many European states. Whilst some intellectuals had championed the Industrial Revolution as a progressive step forward, there were many who had seen it as a negative turn of events, disrupting the established fabric of society. This latter view was taken up by the Romanticist movement, which was largely made up of artists and writers, who popularised the idea of an idyllic ancient agrarian society. There was also a trend that was developing among the European intelligentsia that began to oppose the concept of cultural evolutionism (that culture and society gradually evolved and progressed through stages), instead taking the viewpoint that human beings were inherently resistant to change. Geographic variability and the concept of "culture" Historian of archaeology Bruce Trigger considered the development of culture-historical archaeology to be "a response to growing awareness of geographical variability in the archaeological record" at a time when the belief in cultural evolutionary archaeology was declining in western and central Europe. Throughout the 19th century, an increasing amount of archaeological material had been collected in Europe, in part as a result of land reclamation projects, increased agricultural production and construction, the foundation of museums and establishment of archaeological teaching positions at universities. 
As a result of this, archaeologists had come to increasingly realise that there was a great deal of variability in the artifacts uncovered across the continent. Many felt that this variability was not comfortably explained by preexisting evolutionary paradigms. Culture-historical archaeology adopted the concept of "culture" from anthropology, where cultural evolutionary ideas had also begun to be criticised. In the late 19th century, anthropologists like Franz Boas and Friedrich Ratzel were promoting the idea that cultures represented geographically distinct entities, each with their own characteristics that had developed largely through the chance accumulation of different traits. Similar ideas were also coming from Germany's neighbour, Austria, at around this time, namely from two anthropologist Roman Catholic priests, Fritz Graebner and Wilhelm Schmidt, as well as by the archaeologist Oswald Menghin. Nationalism and racialism Bruce Trigger also argued that the development of culture-historical archaeology was in part due to the rising tide of nationalism and racism in Europe, which emphasised ethnicity as the main factor shaping history. Such nationalistic sentiment began to be adopted within academic disciplines by intellectuals who wished to emphasise solidarity within their own nations – in the face of social unrest caused by industrialization – by blaming neighbouring states. Under such a nationalist worldview, people across Europe came to see different nationalities – such as the French, Germans and English – as being biologically different from one another, and it was argued that their behaviour was determined by these racial differences as opposed to social or economic factors. Having been inspired and influenced by European nationalism, in turn, culture-historical archaeology would be utilised in support of nationalist political causes. In many cases, nationalists used culture-historical archaeological interpretations to highlight and celebrate the prehistoric and ancient past of their ancestors, and prove an ethnic and cultural link to them. As such, many members of various European nations placed an emphasis on archaeologically proving a connection with a particular historical ethnicity, for instance the French often maintained that they were the ethnic and cultural descendants of the ancient Gauls, whilst the English did the same with the Anglo-Saxons and the Welsh and Irish with the Celts, and archaeologists in these countries were encouraged to interpret the archaeological evidence to fit these conclusions. One of the most notable examples of a nationalist movement utilising culture-historical archaeology was that of the Nazi Party, who obtained power in Germany in 1933 and established a totalitarian regime that emphasised the alleged racial supremacy of the German race and sought to unify all German speakers under a single political state. The Nazis were influenced by the culture-historical ideas of Kossinna, and used archaeology to support their claims regarding the behaviour of prehistoric Germans, in turn supporting their own policies. History Early development: 1869–1925 Culture-historical archaeology first developed in Germany in the late 19th century. In 1869, the German Society for Anthropology, Ethnology, and Prehistoric Archaeology (Urgeschichte) had been founded, an organisation that was dominated by the figure of Rudolf Virchow (1821–1902), a pathologist and leftist politician. 
He advocated the union of prehistoric archaeology with cultural anthropology and ethnology into a singular prehistoric anthropology which would identify prehistoric cultures from the material record and try to connect them to later ethnic groups who were recorded in the written, historical record. Although the archaeological work undertaken by Virchow and his fellows was cultural-historical in basis, it did not initially gain a significant following in the country's archaeological community, the majority of whom remained devoted to the dominant cultural evolutionary trend. In 1895, a librarian who was fascinated by German prehistory, Gustaf Kossinna (1858–1931), presented a lecture in which he tried to connect the tribes who had been recorded as living between the Rhine and Vistula in 100 BCE with cultures living in that region during the Neolithic. Appointed Professor of Archaeology at the University of Berlin, in 1909 he founded the German Society for Prehistory (Vorgeschichte). He would proceed to further publicise his culture-historical approach in his subsequent books, Die Herkunft der Germanen (The Origin of the Germans), which was published in 1911, and the two-volume Ursprung und Verbreitung der Germanen (Origin and Expansion of the Germans), which was published between 1926 and 1927. A staunch nationalist and racist, Kossinna lambasted fellow German archaeologists for taking an interest in non-German societies, such as those of Egypt and the Classical World, and used his publications to support his views on German nationalism. Glorifying the German peoples of prehistory, he used an explicitly culture-historical approach in understanding them, and proclaimed that these German peoples were racially superior to their Slavic neighbours to the east. Believing that an individual's ethnicity determined their behaviour, the core of Kossinna's approach was to divide Temperate Europe into three large cultural groupings: Germans, Celts and Slavs, based upon the modern linguistic groups. He then divided each of these cultural groupings into smaller "cultures", or tribes, for instance dividing the Germans up into Saxons, Vandals, Lombards and Burgundians. He believed that each of these groups had its own distinctive traditions which were present in their material culture, and that by mapping out the material culture in the archaeological record, he could trace the movement and migration of different ethnic groups, a process he called siedlungsarchäologie (settlement archaeology). Much of Kossinna's work was criticised by other German archaeologists, but nevertheless his basic culture-historical manner of interpreting the past still came to dominance in the country's archaeological community; Trigger noted that his work "marked the final replacement of an evolutionary approach to prehistory by a culture-historical one" and that for that, he must be viewed as an "innovator" whose work was "of very great importance". As it became the dominant archaeological theory within the discipline, a number of prominent cultural-historical archaeologists rose to levels of influence. The Swedish archaeologist Oscar Montelius was one of the most notable, as he studied the entirety of the European archaeological prehistoric record, and divided it into a number of distinct temporal groups based upon grouping together various forms of artifacts. Britain and the U.S. Culture-historical archaeology was first introduced into British scholarship from continental Europe by an Australian prehistorian, V. Gordon Childe. 
A keen linguist, Childe was able to master a number of European languages, including German, and was well acquainted with the works on archaeological cultures written by Kossinna. Following a period as Private Secretary to the Premier of New South Wales (NSW), Childe moved to London in 1921 for a position with the NSW Agent General, then spent a few years travelling Europe. In 1927, Childe took up a position as the Abercromby Professor of Archaeology at the University of Edinburgh. Two years later he published The Danube in Prehistory (1929), in which Childe examined the archaeology along the Danube river, recognising it as the natural boundary dividing the Near East from Europe; he believed that it was via the Danube that various new technologies had travelled westward in antiquity. In The Danube in Prehistory, Childe introduced the concept of an archaeological culture (which until then had been largely confined to German academics) to his British counterparts. This concept would revolutionise the way in which archaeologists understood the past, and would come to be widely accepted in future decades.

Concepts
Distinct historical cultures
The core point of culture-historical archaeology was its belief that the human species could be subdivided into various "cultures" that were in many cases distinct from one another. Usually, each of these cultures was seen as representing a different ethnicity. From an archaeological perspective, it was believed that each of these cultures could be distinguished by its material culture, such as the style of pottery that it produced or the forms of burial that it practiced. A number of culture-historical archaeologists subdivided and named separate cultures within their field of expertise: for instance, archaeologists working on the Aegean Bronze Age divided it among such cultures as the Minoan, Helladic and Cycladic.

Diffusion and migration
Within culture-historical archaeology, changes in the culture of a historical society were typically explained by the diffusion of ideas from one culture into another, or by the migration of members of one society into a new area, sometimes by invasion. This was at odds with the theories held by cultural evolutionary archaeologists, who, whilst accepting diffusion and migration as reasons for cultural change, also accepted the concept that independent cultural development could occur within a society, something culture-historical archaeologists typically refused to accept. A number of culture-historical archaeologists put forward the idea that all knowledge and technology in the ancient world had diffused from a single source in the Middle East, which had then been spread across much of the world by merchants. The Australian Grafton Elliot Smith, for instance, in his works The Children of the Sun (1923) and The Growth of Civilisation (1924), put forward the idea that agriculture, architecture, religion and government had all developed in Ancient Egypt, where the conditions were perfect for the development of such things, and that these ideas were then diffused into other cultures. A similar theory was proposed by Lord Raglan in 1939, but he believed Mesopotamia to be the source rather than Egypt.

Inductive reasoning
Culture history uses inductive reasoning, unlike its main rival, processual archaeology, which stresses the importance of the hypothetico-deductive method. To work best it requires a historical record to support it.
As much of early archaeology focused on the Classical World it naturally came to rely on and mirror the information provided by ancient historians who could already explain many of the events and motivations which would not necessarily survive in the archaeological record. The need to explain prehistoric societies, without this historical record, could initially be dealt with using the paradigms established for later periods but as more and more material was excavated and studied, it became clear that culture history could not explain it all. Manufacturing techniques and economic behaviour can be easily explained through cultures and culture history approaches but more complex events and explanations, involving less concrete examples in the material record are harder for it to explain. In order to interpret prehistoric religious beliefs for example, an approach based on cultures provides little to go on. Culture historians could catalogue items but in order to look beyond the material record, towards anthropology and the scientific method, they would have had to abandon their reliance on material, 'inhuman,' cultures. Such approaches were the intent of processual archaeology. Culture history is by no means useless or surpassed by more effective methods of thinking. Indeed, diffusionist explanations are still valid in many cases and the importance of describing and classifying finds has not gone away. Post-processual archaeologists stress the importance of recurring patterns in material culture, echoing culture history's approach. In many cases it can be argued that any explanation is only one factor within a whole network of influences. Criticism Another criticism of this particular archaeological theory was that it often placed an emphasis on studying peoples from the Neolithic and later ages, somewhat ignoring the earliest human era, the Palaeolithic, where distinct cultural groups and differences are less noticeable in the archaeological record. See also List of archaeological periods Nationalism and archaeology References Footnotes Bibliography Archaeological theory Nationalism and archaeology sv:Kulturhistorisk arkeologi
Genetic history of the British Isles
The genetic history of the British Isles is the subject of research within the larger field of human population genetics. It has developed in parallel with DNA testing technologies capable of identifying genetic similarities and differences between both modern and ancient populations. The conclusions of population genetics regarding the British Isles in turn draw upon and contribute to the larger field of understanding the history of the human occupation of the area, complementing work in linguistics, archaeology, history and genealogy. Research concerning the most important routes of migration into the British Isles is the subject of debate. Apart from the most obvious route across the narrowest point of the English Channel into Kent, other routes may have been important over the millennia, including a land bridge in the Mesolithic period, as well as maritime connections along the Atlantic coasts. The periods of the most important migrations are contested. The Neolithic introduction of farming technologies from mainland Europe is frequently proposed as a period of major change in the British Isles. Such technology could either have been learned by locals from a small number of immigrants or have been introduced by colonists who significantly changed the population. Other potentially important historical periods of migration that have been subject to consideration in this field include the introduction of Celtic languages and technologies (during the Bronze and Iron Ages), the Roman era, the period of early Germanic influx, the Viking era, the Norman invasion of 1066, and the era of the European wars of religion. History of research Early studies by Luigi Cavalli-Sforza used polymorphisms from proteins found within human blood (such as the ABO blood groups, Rhesus blood antigens, HLA loci, immunoglobulins, G6PD isoenzymes, amongst others). One of the lasting proposals of this study with regards to Europe is that within most of the continent the majority of genetic diversity may best be explained by immigration coming from the southeast towards the northwest or in other words from the Middle East towards Britain and Ireland. Cavalli-Sforza proposed at the time that the invention of agriculture might be the best explanation for this. With the advent of DNA analysis modern populations were sampled for mitochondrial DNA to study the female line of descent and Y chromosome DNA to study male descent. As opposed to large scale sampling within the autosomal DNA, Y DNA and mitochondrial DNA represent specific types of genetic descent and can therefore reflect only particular aspects of past human movement. Later projects began to use autosomal DNA to gather a more complete picture of an individual's genome. For Britain, major research projects aimed at collecting data include the Oxford Genetic Atlas Project (OGAP) and more recently the People of the British Isles, also associated with Oxford. Owing to the difficulty of modelling the contributions of historical migration events to modern populations based purely on modern genetic data, such studies often varied significantly in their conclusions. One early Y DNA study estimated a complete genetic replacement by the Anglo-Saxons, whilst another argued that it was impossible to distinguish between the contributions of the Anglo-Saxons and Vikings and that the contribution of the latter may even have been higher. A third study argued that there was no Viking influence on British populations at all outside Orkney. 
Stephen Oppenheimer and Bryan Sykes, meanwhile, claimed that the majority of the DNA in the British Isles had originated from a prehistoric migration from the Iberian peninsula and that subsequent invasions had had little genetic input. In the last decade, improved technologies for extracting ancient DNA have allowed researchers to study the genetic impacts of these migrations in more detail. This led to Oppenheimer and Sykes' conclusions about the origins of the British being seriously challenged, since later research demonstrated that the majority of the DNA of much of continental Europe, including Britain and Ireland, is ultimately derived from Steppe invaders from the east rather than Iberia. This research has also suggested that subsequent migrations, such as that of the Anglo-Saxons, did have large genetic effects (though these effects varied from place to place). Analyses of nuclear and ancient DNA Paleolithic After the Last Glacial Maximum, there is evidence of repopulation of Britain and Ireland during the late Upper Paleolithic from c. 13,500 BC. Human skeletal remains from this period are rare. They include a female from Gough’s Cave, an individual who is genetically similar to the c. 15,000 year old individual ('Goyet-Q2') from Goyet Caves, Belgium. The female from Gough’s Cave carried mtDNA U8a, which is found in several individuals of the Magdalenian culture in Europe, but not in any other early ancient individuals from Britain. A second individual from Kendrick's Cave, a c. 12,000 BC male, was found to be genetically similar to the Villabruna cluster, also known as Western Hunter-Gatherer ancestry. This ancestry is found in later British Mesolithic individuals. The Kendrick’s Cave individual's mtDNA U5a2 is also found in several British Mesolithic samples. Most British people have Neanderthal ancestry, dating back 50,000 years or longer. Mesolithic population British Mesolithic hunter-gatherers, such as the famous Cheddar Man, were closely related to other Mesolithic people throughout Western Europe (the so-called Western Hunter Gatherer cluster) This population probably had blue or green eyes, lactose intolerance, dark hair and dark to very dark skin. British Mesolithic people probably contribute negligible ancestry to modern British people. Continental Neolithic farmers The change to the Neolithic in the British Isles went along with a significant population shift. Neolithic individuals were close to Iberian and Central European Early and Middle Neolithic populations, modelled as having about 75% ancestry from Anatolian farmers with the rest coming from Western Hunter-Gatherers (WHG) in continental Europe. This suggests that farming was brought to the British Isles by sea from north-west mainland Europe, by a population that was, or became in succeeding generations, relatively large. In some regions, British Neolithic individuals had a small amount (about 10%) of WHG excess ancestry when compared with Iberian Early Neolithic farmers, suggesting that there was an additional gene flow from British Mesolithic hunter-gatherers into the newly arrived farmer population: while Neolithic individuals from Wales have no detectable admixture of local Western hunter-gatherer genes, those from South East England and Scotland show the highest additional admixture of local WHG genes, and those from South-West and Central England are intermediate. Bronze Age According to Olalde et al. 
(2018), the spread of the Bell Beaker culture to Britain from the lower Rhine area in the early Bronze Age introduced high levels of steppe-related ancestry, resulting in a near-complete change of the local gene pool within a few centuries and replacing about 90% of the local Neolithic-derived lineages between 2400 BC and 2000 BC. The people exhibiting the Beaker culture were likely an offshoot of the Corded Ware culture, as they had little genetic affinity to the Iberian Beaker people. With their large steppe-derived component, they had a smaller proportion of continental Neolithic and Western Hunter Gatherer DNA. An earlier study had estimated that the modern English population derived just over half of its ancestry from a combination of Neolithic and Western Hunter Gatherer ancestry, with the steppe-derived (Yamnaya-like) element making up the remainder. Scotland was found to have both more steppe and more Western Hunter Gatherer ancestry than England. These proportions are similar to those of other Northwest European populations. Genetic evidence suggests that there was significant migration to southern Britain of people from the adjacent mainland at the end of the Bronze Age, around 1000 BC, roughly a millennium after the initial Bell Beaker migration. This migration may have introduced the Celtic languages to Britain. Patterson et al. (2021) believe that these migrants were "genetically most similar to ancient individuals from France" and had higher levels of Early European Farmer ancestry.

Anglo-Saxons
Researchers have used ancient DNA to determine the nature of the Anglo-Saxon settlement, as well as its impact on modern populations in the British Isles. One 2016 study, using Iron Age and Anglo-Saxon era DNA found at grave sites in Cambridgeshire, calculated that ten modern-day eastern English samples had 38% Anglo-Saxon ancestry on average whilst ten Welsh and Scottish samples each had 30% Anglo-Saxon ancestry, with a large statistical spread in all cases. However, the authors noted that the similarity observed between the various sample groups was possibly due to more recent internal migration. Another 2016 study, conducted using evidence from burials found in northern England, found that a significant genetic difference was present between bodies from the Iron Age and the Roman period on the one hand and the Anglo-Saxon period on the other. Samples from modern-day Wales were found to be similar to those from the Iron Age and Roman burials, whilst samples from much of modern England, East Anglia in particular, were closer to the Anglo-Saxon-era burials. This was found to demonstrate a "profound impact" from the Anglo-Saxon migrations on the modern English gene pool, though no specific percentages were given in the study. A third study combined the ancient data from both of the preceding studies and compared it to a large number of modern samples from across Britain and Ireland. This study concluded that modern southern, central and eastern English populations were of "a predominantly Anglo-Saxon-like ancestry" whilst those from northern and southwestern England had a greater degree of indigenous origin. A 2022 study focusing specifically on the question of the Anglo-Saxon settlement sampled 460 northwestern European individuals dated to the medieval period.
The study concluded that in eastern England, large-scale immigration, including both men and women, occurred in the post-Roman era, with up to 76% of the ancestry of these individuals deriving from the North Sea zone of continental Europe (i.e. medieval north Germans and Danish). The authors also noted that while a large proportion of the ancestry of the present-day English derives from the Anglo-Saxon migration event, it has been diluted by later migration from a population source similar to that of Iron Age France, Belgium and western Germany, which probably "resulted from pulses of immigration or continuous gene flow between eastern England and its neighbouring regions", but which entered northern and eastern England after the arrival of the Anglo-Saxons. As a result, it is estimated that the ancestry of the present-day English ranges between 25% and 47% Continental North European (similar to historical northern Germans and Danish), 11% to 57% similar to the British Late Iron Age, and 14% to 43% IA-like (similar to France, Belgium and neighbouring parts of Germany). Welsh people The post-Roman period saw a significant alteration in the genetic makeup of southern Britain due to the arrival of the Anglo-Saxons; however, historical evidence suggests that Wales was little affected by these migrations. A study published in 2016 compared samples from modern Britain and Ireland with DNA found in skeletons from Iron Age, Roman and Anglo-Saxon era Yorkshire. The study found that most of the Iron Age and Roman era Britons showed strong similarities with both each other and modern-day Welsh populations, while modern southern and eastern English groups were closer to a later Anglo-Saxon burial. Another study, using Iron Age and Anglo-Saxon samples from Cambridgeshire, concluded that modern Welsh people carry a 30% genetic contribution from Anglo-Saxon settlers in the post-Roman period; however, this could have been brought about due to later migration from England into Wales. A third study, published in 2020 and based on Viking era data from across Europe, suggested that the Welsh trace, on average, 58% of their ancestry to the Brythonic people, up to 22% from a Danish-like source interpreted as largely representing the Anglo-Saxons, 3% from Norwegian Vikings, and 13% from further south in Europe such as Italy, to a lesser extent, Spain and can possibly be related to French immigration during the Norman period. A 2015 genetic survey of modern British population groups found a distinct genetic difference between those from northern and southern Wales, which was interpreted as the legacy of Little England beyond Wales. A study of a diverse sample of 2,039 individuals from the United Kingdom allowed the creation of a genetic map and the suggestion that there was a substantial migration of peoples from Europe prior to Roman times forming a strong ancestral component across England, Scotland, and Northern Ireland, but which had little impact in Wales. Wales forms a distinct genetic group, followed by a further division between north and south Wales, although there was evidence of a genetic difference between north and south Pembrokeshire as separated by the Landsker line. Speaking of these results, Professor Peter Donnelly, of the University of Oxford, said that the Welsh carry DNA which could be the most ancient in UK and that people from Wales are genetically relatively distinct. 
Vikings
Historical and toponymic evidence suggests a substantial Viking migration to many parts of northern Britain; however, particularly in the case of the Danish settlers, differentiating their genetic contribution to modern populations from that of the Anglo-Saxons has posed difficulties. A study published in 2020, which used ancient DNA from across the Viking world in addition to modern data, noted that ancient samples from Denmark showed similarities to samples from both modern Denmark and modern England. Whilst most of this similarity was attributed to the earlier settlement of the Anglo-Saxons, the authors of the study noted that British populations also carried a small amount of "Swedish-like" ancestry that was present in the Danish Vikings but unlikely to have been associated with the Anglo-Saxons. From this, it was calculated that the modern English population has approximately 6% Danish Viking ancestry, with Scottish and Irish populations having up to 16%. Additionally, populations from all areas of Britain and Ireland were found to have 3–4% Norwegian Viking ancestry.

Comparison between modern British and Irish populations
A 2015 study using data from the Neolithic and Bronze Ages showed a considerable genetic difference between individuals from the two periods, which was interpreted as the result of a migration from the Pontic steppes. The individuals from the latter period, with significant steppe ancestry, showed strong similarities to modern Irish population groups. The study concluded that "these findings together suggest the establishment of central aspects of the Irish genome 4,000 years ago." Another study, using modern autosomal data, found a large degree of genetic similarity between populations from northeastern Ireland, southern Scotland and Cumbria. This was interpreted as reflecting the legacy of the Plantation of Ulster in the 17th century. According to a 2024 study, Neolithic farmer ancestries are highest in modern southern and eastern England but lower in Scotland, Wales and Cornwall. Steppe-related ancestries are inversely distributed, peaking in Scotland, the Outer Hebrides and Ireland. WHG-related ancestries are also much higher in central and northern England. In general, hunter-gatherer ancestries like WHG increase the likelihood of darker skin and hair, Alzheimer's disease and traits related to cholesterol, blood pressure and diabetes among British people, but they decrease the likelihood of anxiety, guilty feelings and irritability compared with Neolithic farmer ancestries. However, it should be cautioned that this "makes no direct reference to ancient phenotypes".

Haplogroups
Mitochondrial DNA
Bryan Sykes broke mitochondrial results into twelve haplogroups for various regions of the isles: H, I, J, T, V, W, X and U, and, within U, the subgroups U2, U3, U4 and U5. Sykes found that the maternal haplogroup pattern was similar throughout England but with a distinct trend from east and north to west and south. Minor haplogroups were mainly found in the east of England. Sykes found Haplogroup H to be dominant in Ireland and Wales, though a few differences were found between north, mid and south Wales; there was a closer link between north and mid Wales than either had with the south. Studies of ancient DNA have demonstrated that ancient Britons and Anglo-Saxon settlers carried a variety of mtDNA haplogroups, though type H was common in both.
Y chromosome DNA
Sykes also designated five main Y-DNA haplogroups for various regions of Britain and Ireland: R1b, R1a, I, E1b1b and J. Haplogroup R1b is dominant throughout Western Europe. While it was once seen as a lineage connecting Britain and Ireland to Iberia, where it is also common, it is now believed that both R1b and R1a entered Europe with Indo-European migrants likely originating around the Black Sea; R1a and R1b are now the most common haplotypes in Europe. One common R1b subclade in Britain is R1b-U106, which reaches its highest frequencies in North Sea areas such as southern and eastern England, the Netherlands and Denmark. Due to its distribution, this subclade is often associated with the Anglo-Saxon migrations. Ancient DNA has shown that it was also present in Roman Britain, possibly among descendants of Germanic mercenaries. Ireland, Scotland, Wales and northwestern England are dominated by R1b-L21, which is also found in northwestern France (Brittany), the north coast of Spain (Galicia), and western Norway. This lineage is often associated with the historic Celts, as most of the regions where it is predominant have had a significant Celtic language presence into the modern period and associate with a Celtic cultural identity in the present day. It was also present among Celtic Britons in eastern England prior to the Anglo-Saxon and Viking invasions, as well as Roman soldiers in York who were of native descent. There are various smaller and geographically well-defined Y-DNA haplogroups under R1b in Western Europe. Haplogroup R1a, a close cousin of R1b, is most common in Eastern Europe. In Britain, it has been linked to Scandinavian immigration during periods of Viking settlement. 25% of men in Norway belong to this haplogroup; it is much more common in Norway than in the rest of Scandinavia. Around 9% of all Scottish men belong to the Norwegian R1a subclade, which peaks at over 30% in Shetland and Orkney. Haplogroup I is a grouping of several quite distantly related lineages. Within Britain, the most common subclade is I1, which also occurs frequently in northwestern continental Europe and southern Scandinavia, and has thus been associated with the settlement of the Anglo-Saxons and Vikings. An Anglo-Saxon male from northern England who died between the seventh and tenth centuries was determined to have belonged to haplogroup I1. Haplogroups E1b1b and J in Europe are regarded as markers of Neolithic movements from the Middle East to Southern Europe and likely to Northern Europe from there. These haplogroups are found most often in Southern Europe and North Africa. Both are rare in Northern Europe; E1b1b is found in 1% of Norwegian men, 1.5% of Scottish, 2% of English, 2.5% of Danish, 3% of Swedish and 5.5% of German men. It reaches its peak in Europe in Kosovo at 47.5% and Greece at 30%.

Uncommon Y haplogroups
Geneticists have found that seven men with the surname Revis, which originates in Yorkshire, carry a genetic signature previously found only in people of West African origin. All of the men belonged to Haplogroup A1a (M31), a subclade of Haplogroup A, which geneticists believe originated in Eastern or Southern Africa. The men are not regarded as phenotypically African and there are no documents, anecdotal evidence or oral traditions suggesting that the Revis family has African ancestry.
It has been conjectured that the presence of this haplogroup may date from the Roman era, when both Africans and Romans of African descent are known to have settled in Britain. According to Bryan Sykes, "although the Romans ruled from AD 43 until 410, they left a tiny genetic footprint." The genetics of some visibly white (European) people in England suggests that they are "descended from north African, Middle Eastern and Roman clans". Geneticists have shown that former American president Thomas Jefferson, who might have been of Welsh descent, carried the rare Y chromosome marker T (formerly called K2), as did two of 85 British men with the surname Jefferson. This marker is typically found in East Africa and the Middle East. Haplogroup T is extremely rare in Europe; phylogenetic network analysis of its Y-STR (short tandem repeat) haplotype shows that it is most closely related to an Egyptian T haplotype, but the presence of scattered and diverse European haplotypes within the network is nonetheless consistent with Jefferson's patrilineage belonging to an ancient and rare indigenous European type.

See also
Prehistoric Britain; Historical immigration to Great Britain; Anglo-Saxon settlement of Britain; Nordic migration to Britain; List of haplogroups of historical and famous figures. Other locations: Genetic history of the Middle East; Genetic history of indigenous peoples of the Americas; Genetic history of Europe; Genetic history of Italy; Genetics and archaeogenetics of South Asia

References

Bibliography

Further reading
Gretzinger, J., Sayer, D., Justeau, P. et al. "The Anglo-Saxon migration and the formation of the early English gene pool". Nature (21 September 2022). https://doi.org/10.1038/s41586-022-05247-2 Malmström et al. 2009 Mithen, Steven. After the Ice: A Global Human History 20,000-5000 BC. Phoenix (Orion Books Ltd.), London, 2003. Patterson, N., Isakov, M., Booth, T. et al. "Large-scale migration into Britain during the Middle to Late Bronze Age". Nature (2021). Stringer, Chris. Homo Britannicus. Penguin Books Ltd., London, 2006.
Tradition
A tradition is a system of beliefs or behaviors (folk custom) passed down within a group of people or society, with symbolic meaning or special significance and with origins in the past. A component of cultural expressions and folklore, common examples include holidays or impractical but socially meaningful clothes (like lawyers' wigs or military officers' spurs), but the idea has also been applied to social norms and behaviors such as greetings. Traditions can persist and evolve for thousands of years; the word tradition itself derives from the Latin word tradere, literally meaning to transmit, to hand over, to give for safekeeping. While it is commonly assumed that traditions have an ancient history, many traditions have been invented on purpose, whether political or cultural, over short periods of time. Various academic disciplines also use the word in a variety of ways. The phrase "according to tradition" or "by tradition" usually means that the information that follows is known only through oral tradition, and is not supported (and perhaps may be refuted) by physical documentation, artifacts, or other reliable evidence. "Tradition" here refers to the quality or origin of the information being discussed. For example, "According to tradition, Homer was born on Chios, but many other locales have historically claimed him as theirs." This tradition may never be proven or disproved. In another example, "King Arthur, according to legend a true British king, has inspired many well-loved stories." Whether they are documented fact or not does not decrease their value as cultural history and literature. Traditions are a subject of study in several academic fields of learning, especially in the humanities and social sciences, such as anthropology, archaeology, history, and sociology. The conceptualization of tradition, as the notion of holding on to a previous time, is also found in political and philosophical discourse. For example, it is the basis of the political concept of traditionalism, and also of strands of many world religions, including traditional Catholicism. In artistic contexts, tradition is used to decide the correct display of an art form. For example, in the performance of traditional genres (such as traditional dance), adherence to guidelines dictating how an art form should be composed is given greater importance than the performer's own preferences. A host of factors can exacerbate the loss of tradition, including industrialization, globalization, and the assimilation or marginalization of specific cultural groups. In response, tradition-preservation attempts and initiatives have been started in many countries around the world, focusing on aspects such as traditional languages. Tradition is usually contrasted with the goal of modernity and should be differentiated from customs, conventions, laws, norms, routines, rules and similar concepts.

Definition
The English word tradition comes from the Latin traditio via French, the noun from the verb tradere (to transmit, to hand over, to give for safekeeping); it was originally used in Roman law to refer to the concept of legal transfers and inheritance. According to Anthony Giddens and others, the modern meaning of tradition evolved during the Enlightenment period, in opposition to modernity and progress. As with many other generic terms, there are many definitions of tradition.
The concept includes a number of interrelated ideas; the unifying one is that tradition refers to beliefs, objects or customs performed or believed in the past, originating in it, transmitted through time by being taught by one generation to the next, and performed or believed in the present. Tradition can also refer to beliefs or customs that are prehistoric, with lost or arcane origins, existing from time immemorial. Originally, traditions were passed orally, without the need for a writing system. Tools to aid this process include poetic devices such as rhyme, epic stories and alliteration. The stories thus preserved are also referred to as tradition, or as part of an oral tradition. Even such traditions, however, are presumed to have originated (been "invented" by humans) at some point. Traditions are often presumed to be ancient, unalterable, and deeply important, though they may sometimes be much less "natural" than is presumed. It is presumed that at least two transmissions over three generations are required for a practice, belief or object to be seen as traditional. Some traditions were deliberately introduced for one reason or another, often to highlight or enhance the importance of a certain institution or truth. Traditions may also be adapted to suit the needs of the day, and the changes can become accepted as a part of the ancient tradition. Tradition changes slowly, and changes from one generation to the next are rarely perceived as significant; thus, those carrying out the traditions will not be consciously aware of the change, and even if a tradition undergoes major changes over many generations, it will be seen as unchanged. There are various origins and fields of tradition; it can refer to the forms of artistic heritage of a particular culture; to beliefs or customs instituted and maintained by societies and governments, such as national anthems and national holidays (for example, federal holidays in the United States); or to beliefs or customs maintained by religious denominations and Church bodies that share history, customs, culture and, to some extent, a body of teachings. For example, one can speak of Islam's tradition or Christianity's tradition. Many objects, beliefs and customs can be traditional. Rituals of social interaction can be traditional, with phrases and gestures such as saying "thank you", sending birth announcements, greeting cards, etc. Tradition can also refer to larger concepts practiced by groups (family traditions at Christmas), organizations (a company picnic) or societies, such as the practice of national and public holidays. Some of the oldest traditions include monotheism (three millennia) and citizenship (two millennia). Tradition can also include material objects, such as buildings, works of art or tools. Tradition is often used as an adjective, in contexts such as traditional music, traditional medicine, traditional values and others. In such constructions tradition refers to specific values and materials particular to the discussed context, passed through generations.

Invention of tradition
The term "invention of tradition", introduced by E. J. Hobsbawm, refers to situations when a new practice or object is introduced in a manner that implies a connection with the past that is not necessarily present.
A tradition may be deliberately created and promulgated for personal, commercial, political, or national self-interest, as was done in colonial Africa; or it may be adopted rapidly based on a single highly publicized event, rather than developing and spreading organically in a population, as in the case of the white wedding dress, which only became popular after Queen Victoria wore a white gown at her wedding to Albert of Saxe-Coburg. An example of an invention of tradition is the rebuilding of the Palace of Westminster (location of the British Parliament) in the Gothic style. Similarly, most of the traditions associated with the monarchy of the United Kingdom, seen as rooted deep in history, actually date to the 19th century. Other examples include the invention of tradition in Africa and other colonial holdings by the occupying forces. Requiring legitimacy, the colonial power would often invent a "tradition" which it could use to legitimize its own position. For example, a certain succession to a chiefdom might be recognized by a colonial power as traditional in order to favour its own candidates for the job. Often these inventions were based on some form of tradition, but were exaggerated, distorted, or biased toward a particular interpretation. Invented traditions are central components of modern national cultures, providing a commonality of experience and promoting the unified national identity espoused by nationalism. Common examples include public holidays (particularly those unique to a particular nation), the singing of national anthems, and traditional national cuisine (see national dish). Expatriate and immigrant communities may continue to practice the national traditions of their home nation.

In scholarly discourse
In science, tradition is often used in the literature in order to define the relationship of an author's thoughts to those of his or her field. In 1948, the philosopher of science Karl Popper suggested that there should be a "rational theory of tradition" applied to science which was fundamentally sociological. For Popper, each scientist who embarks on a certain research trend inherits the tradition of the scientists before them as he or she inherits their studies and any conclusions that superseded them. Unlike myth, which is a means of explaining the natural world through means other than logical criticism, scientific tradition was inherited from Socrates, who proposed critical discussion, according to Popper. For Thomas Kuhn, who presented his thoughts in a paper published in 1977, such a critical inheritance of tradition is, historically, what sets apart the best scientists who change their fields. Traditions are a subject of study in several academic fields in the social sciences (chiefly anthropology, archaeology, and biology), with somewhat different meanings in different fields. The concept is also used in varying contexts in other fields, such as history, psychology and sociology. Social scientists and others have worked to refine the commonsense concept of tradition to make it into a useful concept for scholarly analysis. In the 1970s and 1980s, Edward Shils explored the concept in detail. Since then, a wide variety of social scientists have criticized traditional ideas about tradition; meanwhile, "tradition" has come into usage in biology as applied to nonhuman animals. Tradition as a concept variously defined in different disciplines should not be confused with various traditions (perspectives, approaches) in those disciplines.
Anthropology
Tradition is one of the key concepts in anthropology; it can be said that anthropology is the study of "tradition in traditional societies". There is, however, no "theory of tradition", as for most anthropologists the need to discuss what tradition is seems unnecessary: defining tradition appears both unnecessary (everyone can be expected to know what it is) and unimportant (as small differences in definition would be merely technical). There are, however, dissenting views; scholars such as Pascal Boyer argue that defining tradition and developing theories about it are important to the discipline.

Archaeology
In archaeology, the term tradition refers to a set of cultures or industries which appear to develop from one another over a period of time. The term is especially common in the study of American archaeology.

Biology
Biologists, when examining groups of non-humans, have observed repeated behaviors which are taught within communities from one generation to the next. Tradition is defined in biology as "a behavioral practice that is relatively enduring (i.e., is performed repeatedly over a period of time), that is shared among two or more members of a group, that depends in part on socially aided learning for its generation in new practitioners", and has been called a precursor to "culture" in the anthropological sense. Behavioral traditions have been observed in groups of fish, birds, and mammals. Groups of orangutans and chimpanzees, in particular, may display large numbers of behavioral traditions, and in chimpanzees, transfer of traditional behavior from one group to another (not just within a group) has been observed. Such behavioral traditions may have evolutionary significance, allowing adaptation at a faster rate than genetic change.

Musicology and ethnomusicology
In the fields of musicology and ethnomusicology, tradition refers to the belief systems, repertoire, techniques, style and culture that are passed down through subsequent generations. Tradition in music suggests a historical context within which one can perceive distinguishable patterns. Along with a sense of history, traditions have a fluidity that causes them to evolve and adapt over time. While both musicology and ethnomusicology are defined as 'the scholarly study of music', they differ in their methodology and subject of research. 'Tradition, or traditions, can be presented as a context in which to study the work of a specific composer or as a part of a wide-ranging historical perspective.'

Sociology
The concept of tradition, in early sociological research (around the turn of the 19th and 20th centuries), referred to that of the traditional society, as contrasted with the more modern industrial society. This approach was most notably portrayed in Max Weber's concepts of traditional authority and modern rational-legal authority. One hundred years later, in more modern works, sociology sees tradition as a social construct used to contrast the past with the present, and as a form of rationality used to justify certain courses of action. Traditional society is characterized by a lack of distinction between family and business, a division of labor influenced primarily by age, gender, and status, the high position of custom in the system of values, self-sufficiency, a preference for saving and the accumulation of capital instead of productive investment, and relative autarky. Early theories positing the simple, unilineal evolution of societies from a traditional to an industrial model are now seen as too simplistic.
In 1981, Edward Shils, in his book Tradition, put forward a definition of tradition that became universally accepted. According to Shils, tradition is anything which is transmitted or handed down from the past to the present. Another important sociological aspect of tradition is the one that relates to rationality. This aspect is also related to the works of Max Weber (see theories of rationality), and it was popularized and redefined in 1992 by Raymond Boudon in his book Action. In this context tradition refers to a mode of thinking and action justified as "it has always been that way". This line of reasoning forms the basis of the logical flaw of the appeal to tradition (or argumentum ad antiquitatem), which takes the form "this is right because we've always done it this way." In most cases such an appeal can be refuted on the grounds that the "tradition" being advocated may no longer be desirable, or, indeed, may never have been despite its previous popularity.

Philosophy
The idea of tradition is important in philosophy. Twentieth-century philosophy is often divided between an 'analytic' tradition, dominant in Anglophone and Scandinavian countries, and a 'continental' tradition, dominant in German- and Romance-speaking Europe. Increasingly central to continental philosophy is the project of deconstructing what its proponents, following Martin Heidegger, call 'the tradition', which began with Plato and Aristotle. In contrast, some continental philosophers, most notably Hans-Georg Gadamer, have attempted to rehabilitate the tradition of Aristotelianism. This move has been replicated within analytic philosophy by Alasdair MacIntyre. However, MacIntyre has himself deconstructed the idea of 'the tradition', instead posing Aristotelianism as one philosophical tradition in rivalry with others.

In political and religious discourse
The concepts of tradition and traditional values are frequently used in political and religious discourse to establish the legitimacy of a particular set of values. In the United States in the twentieth and twenty-first centuries, the concept of tradition has been used to argue for the centrality and legitimacy of conservative religious values. Similarly, strands of orthodox theological thought from a number of world religions openly identify themselves as wanting a return to tradition. For example, the term "traditionalist Catholic" refers to those, such as Archbishop Lefebvre, who want the worship and practices of the Church to be as they were before the Second Vatican Council of 1962–65. Likewise, Sunni Muslims are referred to as Ahl el-Sunnah wa Al-Jamā‘ah, literally "people of the tradition [of Muhammad] and the community", emphasizing their attachment to religious and cultural tradition. More generally, tradition has been used as a way of determining positions on the political spectrum, with right-wing parties having a stronger affinity to certain ways of the past than left-wing ones. Here, the concept of adherence to tradition is embodied by the political philosophy of traditionalist conservatism (or simply traditionalism), which emphasizes the need for the principles of natural law and transcendent moral order, hierarchy and organic unity, agrarianism, classicism and high culture, and the intersecting spheres of loyalty. Traditionalists would therefore reject the notions of individualism, liberalism, modernity, and social progress, but promote cultural and educational renewal, and revive interest in the Church, the family, the State and the local community.
This view has been criticised for including in its notion of tradition practices which are no longer considered to be desirable, for example, stereotypical views of the place of women in domestic affairs. In other societies, especially ones experiencing rapid social change, the idea of what is "traditional" may be widely contested, with different groups striving to establish their own values as the legitimate traditional ones. Defining and enacting traditions in some cases can be the means of building unity between subgroups in a diverse society; in other cases, tradition is a means of othering and keeping groups distinct from one another. In artistic discourse In artistic contexts, in the performance of traditional genres (such as traditional dance), adherence to traditional guidelines is of greater importance than performer's preferences. It is often the unchanging form of certain arts that leads to their perception as traditional. For artistic endeavors, tradition has been used as a contrast to creativity, with traditional and folk art associated with unoriginal imitation or repetition, in contrast to fine art, which is valued for being original and unique. More recent philosophy of art, however, considers interaction with tradition as integral to the development of new artistic expression. Relationship to other concepts In the social sciences, tradition is often contrasted with modernity, particularly in terms of whole societies. This dichotomy is generally associated with a linear model of social change, in which societies progress from being traditional to being modern. Tradition-oriented societies have been characterized as valuing filial piety, harmony and group welfare, stability, and interdependence, while a society exhibiting modernity would value "individualism (with free will and choice), mobility, and progress." Another author discussing tradition in relationship to modernity, Anthony Giddens, sees tradition as something bound to ritual, where ritual guarantees the continuation of tradition. Gusfield and others, though, criticize this dichotomy as oversimplified, arguing that tradition is dynamic, heterogeneous, and coexists successfully with modernity even within individuals. Tradition should be differentiated from customs, conventions, laws, norms, routines, rules and similar concepts. Whereas tradition is supposed to be invariable, they are seen as more flexible and subject to innovation and change. Whereas justification for tradition is ideological, the justification for other similar concepts is more practical or technical. Over time, customs, routines, conventions, rules and such can evolve into traditions, but that usually requires that they stop having (primarily) a practical purpose. For example, wigs worn by lawyers were at first common and fashionable; spurs worn by military officials were at first practical but now are both impractical and traditional. Preservation The legal protection of tradition includes a number of international agreements and national laws. In addition to the fundamental protection of cultural property, there is also cooperation between the United Nations, UNESCO and Blue Shield International in the protection or recording of traditions and customs. The protection of culture and traditions is becoming increasingly important nationally and internationally. In many countries, concerted attempts are being made to preserve traditions that are at risk of being lost. 
A number of factors can exacerbate the loss of tradition, including industrialization, globalization, and the assimilation or marginalization of specific cultural groups. Customary celebrations and lifestyles are among the traditions that such efforts seek to preserve. Likewise, the concept of tradition has been used to defend the preservation and reintroduction of minority languages such as Cornish under the auspices of the European Charter for Regional or Minority Languages. Specifically, the charter holds that these languages "contribute to the maintenance and development of Europe's cultural wealth and traditions". The Charter goes on to call for "the use or adoption... of traditional and correct forms of place-names in regional or minority languages". Similarly, UNESCO includes both "oral tradition" and "traditional manifestations" in its definition of a country's cultural properties and heritage; it therefore works to preserve tradition in countries such as Brazil. In Japan, certain artworks, structures, craft techniques and performing arts are considered by the Japanese government to be a precious legacy of the Japanese people, and are protected under the Japanese Law for the Protection of Cultural Properties. This law also identifies people skilled at traditional arts as "National Living Treasures" and encourages the preservation of their craft. For native peoples like the Māori in New Zealand, there is conflict between the fluid identity assumed as part of modern society and the traditional identity with the obligations that accompany it; the loss of language heightens the feeling of isolation and damages the ability to perpetuate tradition.

Traditional cultural expressions
The phrase "traditional cultural expressions" is used by the World Intellectual Property Organization to refer to "any form of artistic and literary expression in which traditional culture and knowledge are embodied. They are transmitted from one generation to the next, and include handmade textiles, paintings, stories, legends, ceremonies, music, songs, rhythms and dance."

See also
Folklore
Origin myth
Perennial philosophy
Sacred tradition

References
Citations
Works cited
Hobsbawm, E. J. (1983). "Introduction: Inventing Traditions", in Hobsbawm, E. J. and Ranger, T. (eds.), The Invention of Tradition. Cambridge: Cambridge University Press.

Further reading
Sowell, T. (1980). Knowledge and Decisions. Basic Books.
Polanyi, M. (1964). Personal Knowledge: Towards a Post-Critical Philosophy.
Pelikan, Jaroslav (1984). The Vindication of Tradition. New Haven, Conn.: Yale University Press. pbk.
Klein, Ernest (2000). A Comprehensive Etymological Dictionary of the English Language: Dealing with the Origin of Words and Their Sense Development thus Illustrating the History and Civilization of Culture. 7th ed. Oxford: Elsevier.

External links
'Parampara – Nagar Chaurasi', presented as an expression of unity in diversity (in Hindi)

Political culture Social agreement Social philosophy Sociological terminology
Trifunctional hypothesis
The trifunctional hypothesis of prehistoric Proto-Indo-European society postulates a tripartite ideology ("idéologie tripartite") reflected in the existence of three classes or castes—priests, warriors, and commoners (farmers or tradesmen)—corresponding to the three functions of the sacral, the martial and the economic, respectively. The trifunctional thesis is primarily associated with the French mythographer Georges Dumézil, who proposed it in 1929 in the book Flamen-Brahman, and later in Mitra-Varuna.

Three-way division
According to Georges Dumézil (1898–1986), Proto-Indo-European society had three main groups, corresponding to three distinct functions:
Sovereignty, which fell into two distinct and complementary sub-parts: one formal, juridical and priestly but worldly; the other powerful, unpredictable and priestly but rooted in the supernatural world.
Military, connected with force, the military and war.
Productivity, herding, farming and crafts; ruled by the other two.
In the Proto-Indo-European mythology, each social group had its own god or family of gods to represent it, and the function of the god or gods matched the function of the group. Many such divisions occur in the history of Indo-European societies:
Southern Russia: Bernard Sergent associates the Indo-European language family with certain archaeological cultures in Southern Russia and reconstructs an Indo-European religion based upon the tripartite functions.
Early Baltic society: Norbertas Vėlius, in his book Senovės baltų pasaulėžiūra (The Ancient Baltic Worldview), identified three regions with three classes. The priestly class was centered in Prussia, the warrior class was prominent in the outer highlands, and the farming class predominated in the intermediate flatlands.
Early Germanic society: the supposed division between the king, the nobility and regular freemen in early Germanic society.
Norse mythology: Odin (sovereignty), Týr (law and justice), the Vanir (fertility). Odin has been interpreted as a death-god and connected to cremations, and has also been associated with ecstatic practices.
Classical Greece: the three divisions of the ideal society as described by Socrates in Plato's The Republic. Bernard Sergent examined the trifunctional hypothesis in Greek epic, lyric and dramatic poetry.
India: the three Hindu castes, the Brahmins or priests; the Kshatriya, the warriors and military; and the Vaishya, the agriculturalists, cattle rearers and traders. The Shudra, a fourth Indian caste, consisted of peasants and serfs. Researchers believe that Indo-European speakers entered India in the Late Bronze Age, mixed with local Indus Valley civilisation populations and may have established a caste system, with themselves primarily in higher castes.

Reception
Supporters of the hypothesis include scholars such as Émile Benveniste, Bernard Sergent and Iaroslav Lebedynsky, the last of whom concludes that "the basic idea seems proven in a convincing way". The hypothesis was embraced outside the field of Indo-European studies by some mythographers, anthropologists and historians such as Mircea Eliade, Claude Lévi-Strauss, Marshall Sahlins, Rodney Needham, Jean-Pierre Vernant and Georges Duby. On the other hand, Nicholas Allen concludes that the tripartite division may be an artefact and a selection effect, rather than an organising principle that was used in the societies themselves. Benjamin W.
Fortson reports a sense that Dumézil blurred the lines between the three functions and the examples that he gave often had contradictory characteristics, which had caused his detractors to reject his categories as nonexistent. John Brough surmises that societal divisions are common outside Indo-European societies as well and so the hypothesis has only limited utility in illuminating prehistoric Indo-European society. Cristiano Grottanelli states that while Dumézilian trifunctionalism may be seen in modern and medieval contexts, its projection onto earlier cultures is mistaken. Belier is strongly critical. The hypothesis has been criticised by the historians Carlo Ginzburg, Arnaldo Momigliano and Bruce Lincoln as being based on Dumézil's sympathies with the political right. Guy Stroumsa sees those criticisms as unfounded. See also Arthashastra Caste Comparative mythology Estates of the realm Mythography Proto-Indo-European religion Proto-Indo-European society Social class Trinity Triple deity Notes References Sources Anthropology Proto-Indo-Europeans Mythological archetypes Comparative mythology Social classes Sociological theories 1929 introductions
Bronze Age
The Bronze Age was a historical period characterised principally by the use of bronze tools and the development of complex urban societies, as well as the adoption of writing in some areas. The Bronze Age is the middle principal period of the three-age system, following the Stone Age and preceding the Iron Age. Conceived as a global era, the Bronze Age follows the Neolithic, with a transition period between the two known as the Chalcolithic. The final decades of the Bronze Age in the Mediterranean basin are often characterized as a period of widespread societal collapse known as the Late Bronze Age collapse, although its severity and scope are debated among scholars. An ancient civilisation is deemed to be part of the Bronze Age if it either produced bronze by smelting its own copper and alloying it with tin, arsenic, or other metals, or traded other items for bronze from producing areas elsewhere. Bronze Age cultures were the first to develop writing. According to archaeological evidence, cultures in Mesopotamia, which used cuneiform script, and Egypt, which used hieroglyphs, developed the earliest practical writing systems.

Metal use
Bronze Age civilizations gained a technological advantage because bronze was harder and more durable than the other metals available at the time. While terrestrial iron is naturally abundant, the higher temperature required for smelting it (roughly 1,250 °C), in addition to the greater difficulty of working with it, placed it out of reach of common use until the end of the second millennium BC. Tin's lower melting point of about 232 °C and copper's moderate melting point of about 1,085 °C placed both these metals within the capabilities of Neolithic pottery kilns, which date to 6,000 BC and were able to produce temperatures of at least about 900 °C. Copper and tin ores are rare, as reflected in the fact that there were no tin bronzes in West Asia before trading in bronze began in the 3rd millennium BC. The Bronze Age is characterized by the widespread use of bronze (even if only by elites in the early years), though the introduction and development of bronze technology were not universally synchronous. Tin bronze technology requires systematic techniques: tin must be mined (mainly as the tin ore cassiterite) and smelted separately, then added to hot copper to make bronze alloy. The Bronze Age was a time of extensive use of metals and the development of trade networks. A 2013 report suggests that the earliest tin-alloy bronze was a foil dated to the mid-5th millennium BC from a Vinča culture site in Pločnik, Serbia, although this culture is not conventionally considered part of the Bronze Age; however, the dating of the foil has been disputed.

Near East
West Asia and the Near East were the first regions to enter the Bronze Age, beginning with the rise of the Mesopotamian civilization of Sumer in the mid-4th millennium BC. Cultures in the ancient Near East practised intensive year-round agriculture; developed writing systems; invented the potter's wheel; created centralized governments (usually in the form of hereditary monarchies); formulated written law codes; developed city-states, nation-states and empires; embarked on advanced architectural projects; introduced social stratification, economic and civil administration and slavery; and practised organized warfare, medicine, and religion. Societies in the region laid the foundations for astronomy, mathematics, and astrology. The following dates are approximate. For details, consult linked articles.
Near East Bronze Age divisions
The Bronze Age in the Near East can be divided into Early, Middle and Late periods. The dates and phases below apply solely to the Near East, not universally. However, some archaeologists propose a "high chronology", which extends periods such as the Intermediate Bronze Age by 300 to 500–600 years, based on material analysis of the southern Levant in cities such as Hazor, Jericho, and Beit She'an.

Early Bronze Age (EBA): 3300–2100 BC
3300–3000: EBA I
3000–2700: EBA II
2700–2200: EBA III
2200–2100: EBA IV
Middle Bronze Age (MBA) or Intermediate Bronze Age (IBA): 2100–1550 BC
2100–2000: MBA I
2000–1750: MBA II A
1750–1650: MBA II B
1650–1550: MBA II C
Late Bronze Age (LBA): 1550–1200 BC
1550–1400: LBA I
1400–1300: LBA II A
1300–1200: LBA II B (Bronze Age collapse)

Anatolia
The Hittite Empire was established during the 18th century BC in Hattusa, northern Anatolia. At its height in the 14th century BC, the Hittite Kingdom encompassed central Anatolia, southwestern Syria as far as Ugarit, and upper Mesopotamia. After 1180 BC, amid general turmoil in the Levant, which is conjectured to have been associated with the sudden arrival of the Sea Peoples, the kingdom disintegrated into several independent "Neo-Hittite" city-states, some of which survived into the 8th century BC. Arzawa, in Western Anatolia, during the second half of the second millennium BC, likely extended along southern Anatolia in a belt from near the Turkish Lakes Region to the Aegean coast. Arzawa was the western neighbor of the Middle and New Hittite Kingdoms, at times a rival and, at other times, a vassal. The Assuwa league was a confederation of states in western Anatolia defeated by the Hittites under the earlier Tudhaliya I, around 1400 BC. Arzawa has been associated with the more obscure Assuwa generally located to its north. It probably bordered it, and may have been an alternative term for it (at least during some periods).

Egypt
Early Bronze dynasties
In Ancient Egypt, the Bronze Age began in the Protodynastic Period, 3150 BC. The archaic Early Bronze Age of Egypt, known as the Early Dynastic Period of Egypt, immediately followed the unification of Lower and Upper Egypt, 3100 BC. It is generally taken to include the First and Second Dynasties, lasting from the Protodynastic Period until about 2686 BC, or the beginning of the Old Kingdom. With the First Dynasty, the capital moved from Abydos to Memphis with a unified Egypt ruled by an Egyptian god-king. Abydos remained the major holy land in the south. The hallmarks of ancient Egyptian civilization, such as art, architecture, and religion, took shape in the Early Dynastic Period. Memphis, in the Early Bronze Age, was the largest city of the time. The Old Kingdom of the regional Bronze Age is the name given to the period in the 3rd millennium BC when Egyptian civilization attained its first continuous peak of complexity and achievement—the first of three "Kingdom" periods which marked the high points of civilization in the lower Nile Valley (the others being the Middle Kingdom and the New Kingdom). The First Intermediate Period of Egypt, often described as a "dark period" in ancient Egyptian history, spanned about 100 years after the end of the Old Kingdom, from about 2181 to 2055 BC. Very little monumental evidence survives from this period, especially from the early part of it.
The First Intermediate Period was a dynamic time when the rule of Egypt was roughly divided between two areas: Heracleopolis in Lower Egypt and Thebes in Upper Egypt. These two kingdoms eventually came into conflict, and the Theban kings conquered the north, reunifying Egypt under a single ruler during the second part of the Eleventh Dynasty. Nubia The Bronze Age in Nubia started as early as 2300 BC. Egyptians introduced copper smelting to the Nubian city of Meroë in modern-day Sudan around 2600 BC. A furnace for bronze casting found in Kerma has been dated to 2300–1900 BC. Middle Bronze dynasties The Middle Kingdom of Egypt lasted from 2055 to 1650 BC. During this period, the Osiris funerary cult rose to dominate Egyptian popular religion. The period comprises two phases: the 11th Dynasty, which ruled from Thebes, and the 12th and 13th Dynasties, centred on el-Lisht. The unified kingdom was previously considered to comprise the 11th and 12th Dynasties, but historians now consider at least part of the 13th Dynasty to have belonged to the Middle Kingdom. During the Second Intermediate Period, Ancient Egypt fell into disarray a second time between the end of the Middle Kingdom and the start of the New Kingdom, best known for the Hyksos, whose reign comprised the 15th and 16th dynasties. The Hyksos first appeared in Egypt during the 11th Dynasty, began their climb to power in the 13th Dynasty, and emerged from the Second Intermediate Period in control of Avaris and the Delta. By the 15th Dynasty, they ruled lower Egypt. They were expelled at the end of the 17th Dynasty. Late Bronze dynasties The New Kingdom of Egypt, also referred to as the Egyptian Empire, lasted from the 16th to the 11th century BC. The New Kingdom followed the Second Intermediate Period and was succeeded by the Third Intermediate Period. It was Egypt's most prosperous time and marked the peak of Egypt's power. The later New Kingdom, i.e. the 19th and 20th Dynasties (1292–1069 BC), is also known as the Ramesside period, after the eleven pharaohs who took the name of Ramesses. Iranian plateau Elam was a pre-Iranian ancient civilization located east of Mesopotamia. In the Old Elamite period (Middle Bronze Age), Elam consisted of kingdoms on the Iranian plateau, centred in Anshan. From the mid-2nd millennium BC, Elam was centred in Susa in the Khuzestan lowlands. Its culture played a crucial role in both the Gutian Empire and the Iranian Achaemenid dynasty that succeeded it. The Oxus civilization was a Bronze Age Central Asian culture dated to 2300–1700 BC and centred on the upper Amu Darya (Oxus). In the Early Bronze Age, the culture of the Kopet Dag oases and Altyndepe developed a proto-urban society. This corresponds to level IV at Namazga-Tepe. Altyndepe was a major centre even then. Pottery was wheel-turned. Grapes were grown. The height of this urban development was reached in the Middle Bronze Age 2300 BC, corresponding to level V at Namazga-Depe. This Bronze Age culture is called the Bactria–Margiana Archaeological Complex. The Kulli culture, similar to that of the Indus Valley civilisation, was located in southern Balochistan (Gedrosia) 2500–2000 BC. The economy was agricultural. Dams were found in several places, providing evidence for a highly developed water management system. Konar Sandal is associated with the hypothesized "Jiroft culture", a 3rd-millennium-BC culture postulated based on a collection of artefacts confiscated in 2001. 
Levant In modern scholarship, the chronology of the Bronze Age Levant is divided into: Early/Proto Syrian; corresponding to the Early Bronze Age Old Syrian; corresponding to the Middle Bronze Age Middle Syrian; corresponding to the Late Bronze Age The term Neo-Syria is used to designate the early Iron Age. The old Syrian period was dominated by the Eblaite first kingdom, Nagar and the Mariote second kingdom. The Akkadians conquered large areas of the Levant and were followed by the Amorite kingdoms, 2000–1600 BC, which arose in Mari, Yamhad, Qatna, and Assyria. From the 15th century BC onward, the term Amurru is usually applied to the region extending north of Canaan as far as Kadesh on the Orontes River. The earliest-known contact of Ugarit with Egypt (and the first exact dating of Ugaritic civilization) comes from a carnelian bead identified with the Middle Kingdom pharaoh Senusret I, dating to 1971–1926 BC. A stela and a statuette of the Egyptian pharaohs Senusret III and Amenemhet III have also been found. However, it is unclear when they first arrived at Ugarit. In the Amarna letters, messages from Ugarit 1350 BC written by Ammittamru I, Niqmaddu II, and his queen have been discovered. From the 16th to the 13th century BC, Ugarit remained in constant contact with Egypt and Cyprus (Alashiya). Mitanni was a loosely organized state in northern Syria and south-east Anatolia from 1500–1300 BC. Founded by an Indo-Aryan ruling class that governed a predominantly Hurrian population, Mitanni came to be a regional power after the Hittite destruction of Kassite Babylon created a power vacuum in Mesopotamia. At its beginning, Mitanni's major rival was Egypt under the Thutmosids. However, with the ascent of the Hittite empire, Mitanni and Egypt allied to protect their mutual interests from the threat of Hittite domination. At the height of its power during the 14th century BC, Mitanni had outposts centred on its capital, Washukanni, which archaeologists have located on the headwaters of the Khabur River. Eventually, Mitanni succumbed to the Hittites and later Assyrian attacks, eventually being reduced to a province of the Middle Assyrian Empire. The Israelites were an ancient Semitic-speaking people of the Ancient Near East who inhabited part of Canaan during the tribal and monarchic periods (15th to 6th centuries BC), and lived in the region in smaller numbers after the fall of the monarchy. The name "Israel" first appears 1209 BC, at the end of the Late Bronze Age and the very beginning of the Iron Age, on the Merneptah Stele raised by the Egyptian pharaoh Merneptah. The Arameans were a Northwest Semitic semi-nomadic pastoral people who originated in what is now modern Syria (Biblical Aram) during the Late Bronze and early Iron Age. Large groups migrated to Mesopotamia, where they intermingled with the native Akkadian (Assyrian and Babylonian) population. The Aramaeans never had a unified empire; they were divided into independent kingdoms all across the Near East. After the Bronze Age collapse, their political influence was confined to Syro-Hittite states, which were entirely absorbed into the Neo-Assyrian Empire by the 8th century BC. Mesopotamia The Mesopotamian Bronze Age began about 3500 BC and ended with the Kassite period ( 1500 BC – 1155 BC). The usual tripartite division into an Early, Middle and Late Bronze Age is not used in the context of Mesopotamia. Instead, a division primarily based on art and historical characteristics is more common. 
The cities of the Ancient Near East housed several tens of thousands of people. Ur, Kish, Isin, Larsa, and Nippur in the Middle Bronze Age and Babylon, Calah, and Assur in the Late Bronze Age similarly had large populations. The Akkadian Empire (2335–2154 BC) became the dominant power in the region. After its fall, the Sumerians enjoyed a renaissance with the Neo-Sumerian Empire. Assyria, along with the Old Assyrian Empire ( 1800–1600 BC), became a regional power under the Amorite king Shamshi-Adad I. The earliest mention of Babylon (then a small administrative town) appears on a tablet from the reign of Sargon of Akkad in the 23rd century BC. The Amorite dynasty established the city-state of Babylon in the 19th century BC. Over 100 years later, it briefly took over the other city-states and formed the short-lived First Babylonian Empire during what is also called the Old Babylonian Period. Akkad, Assyria, and Babylonia used the written East Semitic Akkadian language for official use and as a spoken language. By that time, the Sumerian language was no longer spoken, but was still in religious use in Assyria and Babylonia, and would remain so until the 1st century AD. The Akkadian and Sumerian traditions played a major role in later Assyrian and Babylonian culture. Despite this, Babylonia, unlike the more militarily powerful Assyria, was founded by non-native Amorites and often ruled by other non-indigenous peoples such as the Kassites, Aramaeans and Chaldeans, as well as by its Assyrian neighbours. Asia Central Asia Agropastoralism For many decades, scholars made superficial reference to Central Asia as the "pastoral realm" or alternatively, the "nomadic world", in what researchers have come to call the "Central Asian void": a 5,000-year span that was neglected in studies of the origins of agriculture. Foothill regions and glacial melt streams supported Bronze Age agropastoralists who developed complex east–west trade routes between Central Asia and China that introduced wheat and barley to China and millet to Central Asia. Bactria–Margiana Archaeological Complex The Bactria–Margiana Archaeological Complex (BMAC), also known as the Oxus civilization, was a Bronze Age civilization in Central Asia, dated to c. 2400–1600 BC, located in present-day northern Afghanistan, eastern Turkmenistan, southern Uzbekistan and western Tajikistan, centred on the upper Amu Darya (Oxus River). Its sites were discovered and named by the Soviet archaeologist Viktor Sarianidi (1976). Bactria was the Greek name for the area of Bactra (modern Balkh), in what is now northern Afghanistan, and Margiana was the Greek name for the Persian satrapy of Marguš, the capital of which was Merv, in modern-day southeastern Turkmenistan. A wealth of information indicates that the BMAC had close international relations with the Indus Valley, the Iranian plateau, and possibly even indirectly with Mesopotamia. All civilizations were familiar with lost wax casting. According to a 2019 study, the BMAC was not a primary contributor to later South-Asian genetics. Seima-Turbino phenomenon The Altai Mountains, in what is now southern Russia and central Mongolia, have been identified as the point of origin of a cultural enigma termed the Seima-Turbino Phenomenon. 
It is conjectured that changes in climate in this region around 2000 BC, and the ensuing ecological, economic, and political changes, triggered a rapid and massive migration westward into northeast Europe, eastward into China, and southward into Vietnam and Thailand, across a frontier several thousand kilometres long. This migration took place in just five to six generations and led to peoples from Finland in the west to Thailand in the east employing the same metalworking technology and, in some areas, horse breeding and riding. However, recent genetic testing of sites in southern Siberia and Kazakhstan (the Andronovo horizon) instead supports a spread of bronze technology eastwards via Indo-European migrations, as this technology had long been well known in western regions. It is further conjectured that the same migrations spread the Uralic group of languages across Europe and Asia: some 39 languages of this group still exist, including Hungarian, Finnish and Estonian.

East Asia
China
In China, the earliest bronze artefacts have been found at the Majiayao culture site (between 3100 and 2700 BC). The term "Bronze Age" has been transferred to the archaeology of China from that of Western Eurasia, and there is no consensus or universally used convention delimiting the "Bronze Age" in the context of Chinese prehistory. The "Early Bronze Age" in China is sometimes taken as equivalent to the "Shang dynasty" period (16th to 11th centuries BC), and the "Later Bronze Age" as equivalent to the "Zhou dynasty" period (11th to 3rd centuries BC; from the 5th century also called the "Iron Age"), although there is an argument to be made that the "Bronze Age" proper never ended in China, as there is no recognizable transition to an "Iron Age". Together with the jade art that precedes it, bronze was seen as a "fine" material for ritual art when compared with iron or stone. Bronze metallurgy in China originated in what is referred to as the Erlitou period, which some historians argue places it within the Shang dynasty. Others believe the Erlitou sites belong to the preceding Xia dynasty. The U.S. National Gallery of Art defines the Chinese Bronze Age as the "period between about 2000 BC and 771 BC", a period that begins with the Erlitou culture and ends abruptly with the disintegration of Western Zhou rule. There is reason to believe that bronze work developed inside China independently of outside influence. However, the discovery of Europoid mummies in Xinjiang has caused some archaeologists such as Johan Gunnar Andersson, Jan Romgard, and An Zhimin to suggest a possible route of transmission from the West eastwards. According to An Zhimin, "It can be imagined that initially, bronze and iron technology took its rise in West Asia, first influenced the Xinjiang region, and then reached the Yellow River valley, providing external impetus for the rise of the Shang and Zhou civilizations." According to Jan Romgard, "bronze and iron tools seem to have traveled from west to east as well as the use of wheeled wagons and the domestication of the horse." There are also possible links to the Seima-Turbino culture, "a transcultural complex across northern Eurasia," the Eurasian steppe, and the Urals. However, the oldest bronze objects found in China so far were discovered at the Majiayao site in Gansu rather than in Xinjiang. The Shang dynasty (also known as the Yin dynasty) of the Yellow River Valley rose to power after the Xia dynasty around 1600 BC.
While some direct information about the Shang dynasty comes from Shang-era inscriptions on bronze artefacts, most comes from oracle bones—turtle shells, cattle scapulae, or other bones—which bear glyphs that form the first significant corpus of recorded Chinese characters. The production of Erlitou in Henan represents the earliest large-scale metallurgy industry in the Central Plains of China. The influence of the Saima-Turbino metalworking tradition from the north is supported by a series of recent discoveries in China of many unique perforated spearheads with downward hooks and small loops on the same or opposite side of the socket, which could be associated with the Seima-Turbino visual vocabulary of southern Siberia. The metallurgical centres of northwestern China, especially Qijia in Gansu and Kexingzhuang culture in Shaanxi, played an intermediary role in this process. Iron has been found from the Zhou dynasty, but its use was minimal. Chinese literature dating to the 6th century BC attests to knowledge of iron smelting, yet bronze continues to occupy the seat of significance in the archaeological and historical record for some time after this. Historian W. C. White argues that iron did not supplant bronze "at any period before the end of the Zhou dynasty (256 BC)" and that bronze vessels make up the majority of metal vessels through the Later Han period, or to 221 BC. The Chinese bronze artefacts generally are either utilitarian, like spear points or adze heads, or "ritual bronzes", which are more elaborate versions in precious materials of everyday vessels, as well as tools and weapons. Examples are the numerous large sacrificial tripods known as dings in Chinese; there are many other distinct shapes. Surviving identified Chinese ritual bronzes tend to be highly decorated, often with the taotie motif, which involves stylised animal faces. These appear in three main motif types: those of demons, symbolic animals, and abstract symbols. Many large bronzes also bear cast inscriptions that are the bulk of the surviving body of early Chinese writing and have helped historians and archaeologists piece together the history of China, especially during the Zhou dynasty (1046–256 BC). The bronzes of the Western Zhou dynasty document large portions of history not found in the extant texts that were often composed by persons of varying rank and possibly even social class. Further, the medium of cast bronze lends the record they preserve a permanence not enjoyed by manuscripts. These inscriptions can commonly be subdivided into four parts: a reference to the date and place, the naming of the event commemorated, the list of gifts given to the artisan in exchange for the bronze, and a dedication. The relative points of reference these vessels provide have enabled historians to place most of the vessels within a certain time frame of the Western Zhou period, allowing them to trace the evolution of the vessels and the events they record. Japan The Japanese archipelago saw the introduction of bronze during the beginning of the Early Yayoi period (≈300 BC), which saw the introduction of metalworking and agricultural practices brought by settlers arriving from the continent. Bronze and iron smelting techniques spread to the Japanese archipelago through contact with other ancient East Asian civilizations, particularly immigration and trade from the ancient Korean peninsula, and ancient mainland China. 
Iron was mainly used for agricultural and other tools, whereas ritual and ceremonial artefacts were mainly made of bronze.

Korea
On the Korean peninsula, the Bronze Age began around 1000–800 BC. Initially centred around Liaoning and southern Manchuria, Korean Bronze Age culture exhibits unique typology and styles, especially in ritual objects. The Mumun pottery period is named after the Korean name for the undecorated or plain cooking and storage vessels that form a large part of the pottery assemblage over the entire length of the period, but especially 850–550 BC. The Mumun period is known for the origins of intensive agriculture and complex societies in both the Korean Peninsula and the Japanese Archipelago. The Middle Mumun pottery period culture of the southern Korean Peninsula gradually adopted bronze production (c. 700–600 BC) after a period when Liaoning-style bronze daggers and other bronze artefacts were exchanged as far as the interior part of the Southern Peninsula (c. 900–700 BC). The bronze daggers lent prestige and authority to the personages who wielded and were buried with them in high-status megalithic burials at south-coastal centres such as the Igeum-dong site. Bronze was an important element in ceremonies and for mortuary offerings until 100 BC.

South Asia
(Dates are approximate; consult linked articles for details.)

Indus Valley
The Bronze Age on the Indian subcontinent began around 3300 BC with the beginning of the Indus Valley Civilization. Inhabitants of the Indus Valley, the Harappans, developed new techniques in metallurgy and produced copper, bronze, lead, and tin. The Late Harappan culture, which dates from 1900 to 1400 BC, overlapped the transition from the Bronze Age to the Iron Age; thus it is difficult to date this transition accurately. It has been claimed that a 6,000-year-old copper amulet manufactured in Mehrgarh in the shape of a wheel spoke is the earliest example of lost-wax casting in the world. The civilization's cities were noted for their urban planning, baked brick houses, elaborate drainage systems, water supply systems, clusters of large non-residential buildings, and new techniques in handicraft (carnelian products, seal carving) and metallurgy (copper, bronze, lead, and tin). The large cities of Mohenjo-daro and Harappa likely grew to contain between 30,000 and 60,000 people, and the civilization during its florescence may have contained between one and five million people.

Southeast Asia
The Vilabouly Complex in Laos is a significant archaeological site for dating the origin of bronze metallurgy in Southeast Asia.

Thailand
In Ban Chiang, Thailand (Southeast Asia), bronze artefacts have been discovered dating to 2100 BC. However, according to the radiocarbon dating of the human and pig bones at Ban Chiang, some scholars propose that the initial Bronze Age in Ban Chiang began in the late 2nd millennium BC. In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artefacts; dating is still broad (3500–500 BC). Ban Non Wat, excavated by Charles Higham, was a rich site with over 640 graves excavated, which yielded many complex bronze items that may have had social value connected to them. Ban Chiang, however, is the most thoroughly documented site and has the clearest evidence of early metallurgy in Southeast Asia. With a rough date range from the late 3rd millennium BC to the first millennium AD, this site has artefacts such as burial pottery (dating from 2100 to 1700 BC) and fragments of bronze and copper-base bangles.
The evidence suggests that casting took place on site from the beginning; this on-site casting supports the theory that bronze working was first introduced to Southeast Asia from elsewhere. Some scholars believe that copper-based metallurgy was disseminated from northwest and central China southwards and southwestwards via areas such as Guangdong and Yunnan provinces, finally reaching Southeast Asia around 1000 BC. Archaeology also suggests that Bronze Age metallurgy may not have been as significant a catalyst in social stratification and warfare in Southeast Asia as in other regions, and that social organization shifted away from chiefdom-states towards a heterarchical network. Data analyses of sites such as Ban Lum Khao, Ban Na Di, Non-Nok Tha, Khok Phanom Di, and Nong Nor have consistently led researchers to conclude that there was no entrenched hierarchy.

Vietnam
Dating to the Neolithic Age, the first bronze drums, called the Dong Son drums, were uncovered in and around the Red River Delta regions of Northern Vietnam and Southern China. These relate to the Dong Son culture of Vietnam. Archaeological research in Northern Vietnam indicates an increase in rates of infectious disease following the advent of metallurgy; skeletal fragments in sites dating to the early and mid-Bronze Age show a greater proportion of lesions than those in sites of earlier periods. There are a few possible implications of this. One is increased contact with bacterial and/or fungal pathogens due to increased population density and land clearing and cultivation. Another is decreased levels of immunocompetence in the Metal Age due to changes in diet caused by agriculture. The last is that there may have been an emergence of infectious diseases that evolved into more virulent forms in the metal period.

Europe
A few examples of named Bronze Age cultures in Europe are given below, in roughly relative order (dates are approximate; consult linked articles for details). The cultures chosen overlapped in time, and the indicated periods do not fully correspond to their estimated extents.

Southeast Europe
Radivojevic et al. (2013) reported the discovery of a tin bronze foil from the Pločnik archaeological site dated to c. 4650 BC, as well as 14 other artefacts from Serbia and Bulgaria dated to before 4000 BC, showing that early tin bronze was more common than previously thought and developed independently in Europe 1,500 years before the first tin bronze alloys in the Near East. The production of complex tin bronzes lasted for about 500 years in the Balkans. The authors reported that evidence for the production of such complex bronzes disappears at the end of the 5th millennium, coinciding with the "collapse of large cultural complexes in north-eastern Bulgaria and Thrace in the late fifth millennium BC". Tin bronzes using cassiterite tin were reintroduced to the area some 1,500 years later. The oldest golden artefacts in the world (4600–4200 BC) were found in the Necropolis of Varna; they are on display in the Varna Archaeological Museum. The Dabene Treasure was unearthed from 2004 to 2007 near Karlovo, Plovdiv Province, central Bulgaria. The treasure consists of 20,000 gold jewellery items of 18 to 23 carats. The most important of them was a dagger made of gold and platinum with an unusual edge. The treasure was dated to the end of the 3rd millennium BC. Scientists suggest that the Karlovo valley used to be a major crafts centre that exported golden jewellery all over Europe.
It is considered one of the largest prehistoric golden treasures in the world. Aegean The Aegean Bronze Age began around 3200 BC, when civilizations first established a far-ranging trade network. This network imported tin and charcoal to Cyprus, where copper was mined and alloyed with tin to produce bronze. Bronze objects were then exported far and wide. Isotopic analysis of tin in some Mediterranean bronze artefacts suggests that they may have originated from Great Britain. Knowledge of navigation was well-developed by this time and reached a peak of skill not exceeded (except perhaps by Polynesian sailors) until 1730 when the invention of the chronometer enabled the precise determination of longitude. The Minoan civilization based in Knossos on the island of Crete appears to have coordinated and defended its Bronze Age trade. Ancient empires valued luxury goods in contrast to staple foods, leading to famine. Aegean collapse Bronze Age collapse theories have described aspects of the end of the Bronze Age in this region. At the end of the Bronze Age in the Aegean region, the Mycenaean administration of the regional trade empire followed the decline of Minoan primacy. Several Minoan client states lost much of their population to famine and pestilence. This would indicate that the trade network may have failed, preventing the trade that would previously have relieved such famines and prevented illness caused by malnutrition. It is also known that in this era, the breadbasket of the Minoan empire—the area north of the Black Sea—also suddenly lost much of its population and thus probably some capacity to cultivate crops. Drought and famine in Anatolia may have also led to the Aegean collapse by disrupting trade networks, therefore preventing the Aegean from accessing bronze and luxury goods. The Aegean collapse has been attributed to the exhaustion of the Cypriot forests causing the end of the bronze trade. These forests are known to have existed in later times, and experiments have shown that charcoal production on the scale necessary for the bronze production of the late Bronze Age would have exhausted them in less than fifty years. The Aegean collapse has also been attributed to the fact that as iron tools became more common, the main justification for the tin trade ended, and that trade network ceased to function as it did formerly. The colonies of the Minoan empire then suffered drought, famine, war, or some combination of the three, and had no access to the distant resources of an empire by which they could easily recover. The Thera eruption occurred 1600 BC, north of Crete. Speculation includes that a tsunami from Thera (more commonly known today as Santorini) destroyed Cretan cities. A tsunami may have destroyed the Cretan navy in its home harbour, which then lost crucial naval battles; so that in the LMIB/LMII event ( 1450 BC) the cities of Crete burned and the Mycenaean civilization conquered Knossos. If the eruption occurred in the late 17th century BC (as most chronologists now believe), then its immediate effects belong to the Middle to Late Bronze Age transition, and not to the end of the Late Bronze Age, but it could have triggered the instability that led to the collapse first of Knossos and then of Bronze Age society overall. One such theory highlights the role of Cretan expertise in administering the empire, post-Thera. If this expertise was concentrated in Crete, then the Mycenaeans may have made political and commercial mistakes in administering the Cretan empire. 
Archaeological findings, including some on the island of Thera, suggest that the centre of the Minoan civilization at the time of the eruption was actually on Thera rather than on Crete. According to this theory, the catastrophic loss of the political, administrative and economic centre due to the eruption, as well as the damage wrought by the tsunami to the coastal towns and villages of Crete, precipitated the decline of the Minoans. A weakened political entity with a reduced economic and military capability and fabled riches would have then been more vulnerable to conquest. Indeed, the Santorini eruption is usually dated to 1630 BC, while the Mycenaean Greeks first enter the historical record a few decades later, around 1600 BC. The later Mycenaean assaults on Crete (c. 1450 BC) and Troy (c. 1250 BC) would have been a continuation of the steady encroachment of the Greeks upon the weakened Minoan world.

Central Europe
In Central Europe, the early Bronze Age Unetice culture (2300–1600 BC) includes numerous smaller groups like the Straubing, Adlerberg and Hatvan cultures. Some very rich burials, such as the one located at Leubingen with grave gifts crafted from gold, point to an increase of social stratification already present in the Unetice culture. All in all, cemeteries of this period are small and rare. The Unetice culture was followed by the middle Bronze Age (1600–1200 BC) tumulus culture, characterized by inhumation burials in tumuli (barrows). In the eastern Hungarian Körös tributaries, the early Bronze Age first saw the introduction of the Mako culture, followed by the Otomani and Gyulavarsand cultures. The late Bronze Age Urnfield culture (1300–700 BC) was characterized by cremation burials. It included the Lusatian culture in eastern Germany and Poland (1300–500 BC) that continues into the Iron Age. The Central European Bronze Age was followed by the Iron Age Hallstatt culture (700–450 BC). Important sites include:
Biskupin (Poland)
Nebra (Germany)
Vráble (Slovakia)
Zug-Sumpf, Zug, Switzerland
German prehistorian Paul Reinecke described the Bronze A1 (Bz A1) period (2300–2000 BC: triangular daggers, flat axes, stone wrist-guards, flint arrowheads) and the Bronze A2 (Bz A2) period (1950–1700 BC: daggers with metal hilt, flanged axes, halberds, pins with perforated spherical heads, solid bracelets), as well as phases Hallstatt A and B (Ha A and B).

Southern Europe
The Apennine culture (also called the Italian Bronze Age) is a technology complex of central and southern Italy spanning the Chalcolithic and Bronze Age proper. The Camuni were an ancient people of uncertain origin (according to Pliny the Elder, they were Euganei; according to Strabo, they were Rhaetians) who lived in Val Camonica—in what is now northern Lombardy—during the Iron Age, although groups of hunters, shepherds, and farmers are known to have lived in the area since the Neolithic. Located in Sardinia and Corsica, the Nuragic civilization lasted from the early Bronze Age (18th century BC) to the 2nd century AD, when the islands were already Romanized. They take their name from the characteristic Nuragic towers, which evolved from the pre-existing megalithic culture, which built dolmens and menhirs. The towers are unanimously considered the best-preserved and largest megalithic remains in Europe. Their purpose is still debated: some scholars consider them monumental tombs, others Houses of the Giants, others fortresses, ovens for metal fusion, prisons, or, finally, temples for a solar cult.
Around the end of the 3rd millennium BC, Sardinia exported to Sicily a culture that built small dolmens, trilithic or polygonal in shape, which served as tombs, as in the Sicilian dolmen of "Cava dei Servi". From this region they reached Malta and other countries of the Mediterranean basin. The Terramare was an early Indo-European civilization in the area of what is now Pianura Padana (in northern Italy) before the arrival of the Celts, and in other parts of Europe. They lived in square villages of wooden stilt houses. These villages were built on land, but generally near a stream, with roads that crossed each other at right angles. The whole complex was of the nature of a fortified settlement. The Terramare culture was widespread in the Pianura Padana, especially along the Panaro river, between Modena and Bologna, and in the rest of Europe. The civilization developed in the Middle and Late Bronze Age, between the 17th and the 13th centuries BC. The Castellieri culture developed in Istria during the Middle Bronze Age. It lasted for more than a millennium, from the 15th century BC until the Roman conquest in the 3rd century BC. It takes its name from the fortified boroughs (Castellieri, Friulian: cjastelir) that characterized the culture. The Canegrate culture developed from the mid-Bronze Age (13th century BC) until the Iron Age in the Pianura Padana, in what are now western Lombardy, eastern Piedmont, and Ticino. It takes its name from the township of Canegrate, where, in the 20th century, some fifty tombs with ceramics and metal objects were found. The Canegrate culture migrated from the northwest part of the Alps and descended to the Pianura Padana through the Swiss Alpine passes and the Ticino. The Golasecca culture developed starting from the late Bronze Age in the Po plain. It takes its name from Golasecca, a locality next to the Ticino where, in the early 19th century, an abbot excavated its first findings (some fifty tombs with ceramics and metal objects). Remains of the Golasecca culture span a broad area south of the Alps, between the Po, Sesia, and Serio rivers, and date from the 9th to the 4th century BC.

West Europe
Great Britain
In Great Britain, the Bronze Age is considered to have been the period from around 2100 to 750 BC. Migration brought new people to the islands from the continent. Tooth enamel isotope research on bodies found in early Bronze Age graves around Stonehenge indicates that at least some of the migrants came from the area of modern Switzerland. Another example site is Must Farm near Whittlesey, host to the most complete Bronze Age wheel ever found. The Beaker culture displayed different behaviours from earlier Neolithic people, and cultural change was significant. Integration is thought to have been peaceful, as many of the early henge sites were seemingly adopted by the newcomers. The rich Wessex culture developed in southern Britain at this time. Additionally, the climate was deteriorating; where once the weather was warm and dry, it became much wetter as the Bronze Age continued, forcing the population away from easily defended sites in the hills and into the fertile valleys. Large livestock farms developed in the lowlands and appear to have contributed to economic growth and inspired increasing forest clearances. The Deverel-Rimbury culture began to emerge in the second half of the Middle Bronze Age (c. 1400–1100 BC) to exploit these conditions.
Devon and Cornwall were major sources of tin for much of western Europe and copper was extracted from sites such as the Great Orme mine in northern Wales. Social groups appear to have been tribal but with growing complexity and hierarchies becoming apparent. The burials, which until this period had usually been communal, became more individual. For example, whereas in the Neolithic a large chambered cairn or long barrow housed the dead, Early Bronze Age people buried their dead in individual barrows (commonly known and marked on modern British Ordnance Survey maps as tumuli), or sometimes in cists covered with cairns. The greatest quantities of bronze objects in England were discovered in East Cambridgeshire, with the most important finds recovered in Isleham (more than 6500 pieces). Alloying of copper with zinc or tin to make brass or bronze was practised soon after the discovery of copper. One copper mine at Great Orme in North Wales, reached a depth of 70 metres. At Alderley Edge in Cheshire, carbon dating has established mining at around 2280 to 1890 BC (95% probability). The earliest identified metalworking site (Sigwells, Somerset) came much later, dated by globular urn-style pottery to approximately the 12th century BC. The identifiable sherds from over 500 mould fragments included a perfect fit of the hilt of a sword in the Wilburton style held in Somerset County Museum. Atlantic Bronze Age The Atlantic Bronze Age as cultural geographic region is a cultural complex (-/800/700 cal. BC) that includes different cultures in the context of the Atlantic Iberian Peninsula (Portugal, Andalucía, Galicia, Asturias, Cantabria, País Vasco, Navarra and Castilla and León), the Atlantic France, Britain and Ireland, while the Atlantic Bronze Age as cultural complex of the final phase of the Bronze Age period is dated between and 700 BC. It is marked by economic and cultural exchange. Commercial contacts extend to Denmark and the Mediterranean. The Atlantic Bronze Age was defined by many distinct regional centres of metal production, unified by a regular maritime exchange of products. Ireland The Bronze Age in Ireland commenced around 2000 BC when copper was alloyed with tin and used to manufacture Ballybeg type flat axes and associated metalwork. The preceding period is known as the Copper Age and is characterised by the production of flat axes, daggers, halberds and awls in copper. The period is divided into three phases: Early Bronze Age (2000–1500 BC), Middle Bronze Age (1500–1200 BC), and Late Bronze Age (1200– 500 BC). Ireland is known for a relatively large number of Early Bronze Age burials. The country's stone circles and stone rows were built during this period. One of the characteristic types of artefacts of the Early Bronze Age in Ireland is the flat axe. There are five main types of flat axes: Lough Ravel crannog ( 2200 BC), Ballybeg ( 2000 BC), Killaha ( 2000 BC), Ballyvalley ( 2000–1600 BC), Derryniggin ( 1600 BC), and a number of metal ingots in the shape of axes. Northern Europe The Bronze Age in Northern Europe spans the entire 2nd millennium BC, (Unetice culture, Urnfield culture, Tumulus culture, Terramare culture and Lusatian culture) lasting until 600 BC. The Northern Bronze Age was both a period and a Bronze Age culture in Scandinavian pre-history, 1700–500 BC, with sites as far east as Estonia. Succeeding the Late Neolithic culture, its ethnic and linguistic affinities are unknown in the absence of written sources. It was followed by the Pre-Roman Iron Age. 
Even though Northern European Bronze Age cultures came relatively late, and came into existence via trade, sites present rich and well-preserved objects made of wool, wood and imported Central European bronze and gold. Many rock carvings depict ships, and the large stone burial monuments known as stone ships suggest that shipping played an important role. Thousands of rock carvings depict ships, most probably representing sewn plank-built canoes for warfare, fishing, and trade. These may have a history as far back as the Neolithic period and continue into the Pre-Roman Iron Age, as shown by the Hjortspring boat. There are many mounds and rock carving sites from the period. Numerous artefacts of bronze and gold are found. No written language existed in the Nordic countries during the Bronze Age. The rock carvings have been dated through comparison with depicted artefacts.

Eastern Europe
The Yamnaya culture (c. 3300–2600 BC) was a Late Copper Age/Early Bronze Age culture of the Pontic-Caspian steppe associated with early Indo-Europeans. It was followed on the steppe by the Catacomb culture (c. 2800–2200 BC) and the Poltavka culture (c. 2800–2200 BC). The closely related Corded Ware culture in the forest-steppe region to the north (c. 3000–2350 BC) spread eastwards with the Fatyanovo culture (c. 2900–2050 BC), which subsequently developed into the Abashevo culture (c. 2200–1850 BC) and the Sintashta culture (c. 2200–1750 BC). The earliest known chariots have been found in Sintashta burials, and there is earlier evidence for chariot use in the Abashevo culture. The Sintashta culture expanded further eastwards into Central Asia, becoming the Andronovo culture, whilst the Srubnaya culture (c. 1900–1200 BC) continued the use of chariots in eastern Europe.

Caucasus
Arsenical bronze artefacts of the Maykop culture in the North Caucasus have been dated to around the 4th millennium BC. This innovation resulted in the circulation of arsenical bronze technology through southern and eastern Europe.

Africa
Sub-Saharan Africa
Iron and copper smelting appeared around the same time in most parts of Africa. As such, most African civilizations outside Egypt did not experience a distinct Bronze Age. Evidence for iron smelting appears earlier than, or around the same time as, copper smelting in Nigeria (by around 800 BC), in Rwanda and Burundi (by around 500 BC), and in Tanzania. There is a longstanding debate about whether copper and iron metallurgy were independently developed in sub-Saharan Africa or introduced from the outside across the Sahara Desert from North Africa or across the Indian Ocean. Evidence for theories of independent development and of outside introduction is scarce and the subject of active scholarly debate. Scholars have suggested that both the relative dearth of archaeological research in sub-Saharan Africa and long-standing prejudices have limited or biased our understanding of prehistoric metallurgy on the continent. One scholar characterized the state of historical knowledge: "To say that the history of metallurgy in sub-Saharan Africa is complicated is perhaps an understatement."

West Africa
Copper smelting took place in West Africa prior to the appearance of iron smelting in the region. Evidence for copper smelting furnaces found near Agadez, Niger, has been dated as early as 2200 BC. However, evidence for copper production in this region before 1000 BC is debated. Evidence of copper mining and smelting found at Akjoujt, Mauritania, suggests small-scale production from about 800 to 400 BC.
Americas
The Moche civilization of South America independently discovered and developed bronze smelting. Bronze technology was developed further by the Incas and widely used both for utilitarian objects and for sculpture. A later appearance of limited bronze smelting in western Mexico suggests either contact of that region with Andean cultures or a separate discovery of the technology. The Calchaquí people of northwestern Argentina also had bronze technology.

Trade
Trade and industry played a major role in the development of Bronze Age civilizations. With artefacts of the Indus Valley Civilization found in ancient Mesopotamia and Egypt, it is clear that these civilizations were not only in touch with one another but also trading. Early long-distance trade was limited almost exclusively to luxury goods like spices, textiles and precious metals. Not only did this make cities with ample amounts of these products rich, but it also led to an intermingling of cultures for the first time in history. Trade routes were not only on land. The first and most extensive trade routes ran along rivers such as the Nile, the Tigris and the Euphrates, which led to the growth of cities on the banks of these rivers. The later domestication of camels also helped encourage overland trade routes, linking the Indus Valley with the Mediterranean. This in turn led to the appearance of towns at caravan stops and at ports where goods were transferred between caravan and ship.

See also
Dover Bronze Age Boat
Ferriby Boats
Hillfort
Langdon Bay (Kent) hoard
Middle Bronze Age migrations (ancient Near East)
Oxhide ingot
Shropshire bulla
Timeline of human evolution
Tollense valley battlefield

External links
Links to the Bronze Age in Europe and beyond – commented web index, geographically structured (private website)
Bronze Age Experimental Archeology and Museum Reproductions
Umha Aois – reconstructed Bronze Age metal casting
Umha Aois – ancient bronze casting videoclip
Aegean and Balkan Prehistory – articles, site reports and a bibliography database concerning the Aegean, the Balkans and Western Anatolia
"The Transmission of Early Bronze Technology to Thailand: New Perspectives"
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016)
Seafaring
Divers unearth Bronze Age hoard off the coast of Devon
Historical method
Historical method is the collection of techniques and guidelines that historians use to research and write histories of the past. Secondary sources, primary sources and material evidence such as that derived from archaeology may all be drawn on, and the historian's skill lies in identifying these sources, evaluating their relative authority, and combining their testimony appropriately in order to construct an accurate and reliable picture of past events and environments. In the philosophy of history, the question of the nature, and the possibility, of a sound historical method is raised within the sub-field of epistemology. The study of historical method and of different ways of writing history is known as historiography. Though historians agree in very general and basic principles, in practice "specific canons of historical proof are neither widely observed nor generally agreed upon" among professional historians. Some scholars of history have observed that there are no particular standards for historical fields such as religion, art, science, democracy, and social justice as these are by their nature 'essentially contested' fields, such that they require diverse tools particular to each field beforehand in order to interpret topics from those fields. Source criticism Source criticism (or information evaluation) is the process of evaluating the qualities of an information source, such as its validity, reliability, and relevance to the subject under investigation. Gilbert J. Garraghan and Jean Delanglez (1946) divide source criticism into six inquiries: When was the source, written or unwritten, produced (date)? Where was it produced (localization)? By whom was it produced (authorship)? From what pre-existing material was it produced (analysis)? In what original form was it produced (integrity)? What is the evidential value of its contents (credibility)? The first four are known as higher criticism; the fifth, lower criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism. Together, this inquiry is known as source criticism. R. J. Shafer on external criticism: "It sometimes is said that its function is negative, merely saving us from using false evidence; whereas internal criticism has the positive function of telling us how to use authenticated evidence." Noting that few documents are accepted as completely reliable, Louis Gottschalk sets down the general rule, "for each particular of a document the process of establishing credibility should be separately undertaken regardless of the general credibility of the author". An author's trustworthiness in the main may establish a background probability for the consideration of each statement, but each piece of evidence extracted must be weighed individually. Procedures for contradictory sources Bernheim (1889) and Langlois & Seignobos (1898) proposed a seven-step procedure for source criticism in history: If the sources all agree about an event, historians can consider the event proven. However, majority does not rule; even if most sources relate events in one way, that version will not prevail unless it passes the test of critical textual analysis. The source whose account can be confirmed by reference to outside authorities in some of its parts can be trusted in its entirety if it is impossible similarly to confirm the entire text. 
When two sources disagree on a particular point, the historian will prefer the source with the most "authority", that is, the source created by the expert or by the eyewitness. Eyewitnesses are, in general, to be preferred, especially in circumstances where the ordinary observer could have accurately reported what transpired and, more specifically, when they deal with facts known by most contemporaries. If two independently created sources agree on a matter, the reliability of each is measurably enhanced. When two sources disagree and there is no other means of evaluation, then historians take the source which seems to accord best with common sense.
Subsequent descriptions of historical method, outlined below, have attempted to overcome the credulity built into the first step formulated by the nineteenth-century historiographers by stating principles not merely by which different reports can be harmonized but instead by which a statement found in a source may be considered to be unreliable or reliable as it stands on its own.

Core principles for determining reliability
The following core principles of source criticism were formulated by two Scandinavian historians, Olden-Jørgensen (1998) and Torsten Thurén (1997): Human sources may be relics, such as a fingerprint, or narratives, such as a statement or a letter. Relics are more credible sources than narratives. Any given source may be forged or corrupted; strong indications of the originality of the source increase its reliability. The closer a source is to the event which it purports to describe, the more one can trust it to give an accurate historical description of what actually happened. An eyewitness is more reliable than testimony at second hand, which is more reliable than hearsay at further remove, and so on. If a number of independent sources contain the same message, the credibility of the message is strongly increased. The tendency of a source is its motivation for providing some kind of bias; tendencies should be minimized or supplemented with opposite motivations. If it can be demonstrated that the witness or source has no direct interest in creating bias, then the credibility of the message is increased.

Criteria of authenticity
Historians sometimes have to decide what is genuine and what is not in a source. For such circumstances, there are external and internal "criteria of authenticity" that can be applied. These are technical tools for evaluating sources and separating genuine sources or content from forgeries or manipulation. External criteria involve issues relating to establishing the authorship of a source or range of sources: whether the author wrote the text themselves, whether other sources attribute the work to the same author, and whether independent manuscript copies agree on the content of the source. Internal criteria involve an author's formalities, style and language; whether a source diverges from the environment in which it was produced; inconsistencies of time or chronology; the textual transmission of the source; and interpolations, insertions or deletions in the source.

Eyewitness evidence
R. J. Shafer (1974) offers this checklist for evaluating eyewitness testimony: Is the real meaning of the statement different from its literal meaning? Are words used in senses not employed today? Is the statement meant to be ironic (i.e., to mean other than it says)? How well could the author observe the thing he reports? Were his senses equal to the observation? Was his physical location suitable to sight, hearing, touch?
Did he have the proper social ability to observe: did he understand the language, have other expertise required (e.g., law, military); was he not being intimidated by his wife or the secret police? How did the author report, and what was his ability to do so? Regarding his ability to report, was he biased? Did he have proper time for reporting? Proper place for reporting? Adequate recording instruments? When did he report in relation to his observation? Soon? Much later? Fifty years is much later, as most eyewitnesses are dead and those who remain may have forgotten relevant material. What was the author's intention in reporting? For whom did he report? Would that audience be likely to require or suggest distortion to the author? Are there additional clues to intended veracity? Was he indifferent on the subject reported, thus probably not intending distortion? Did he make statements damaging to himself, thus probably not seeking to distort? Did he give incidental or casual information, almost certainly not intended to mislead? Do his statements seem inherently improbable: e.g., contrary to human nature, or in conflict with what we know? Remember that some types of information are easier to observe and report on than others. Are there inner contradictions in the document?
Louis Gottschalk adds a further consideration: "Even when the fact in question may not be well-known, certain kinds of statements are both incidental and probable to such a degree that error or falsehood seems unlikely. If an ancient inscription on a road tells us that a certain proconsul built that road while Augustus was princeps, it may be doubted without further corroboration that that proconsul really built the road, but it would be harder to doubt that the road was built during the principate of Augustus. If an advertisement informs readers that 'A and B Coffee may be bought at any reliable grocer's at the unusual price of fifty cents a pound,' all the inferences of the advertisement may well be doubted without corroboration except that there is a brand of coffee on the market called 'A and B Coffee.'"

Indirect witnesses
Garraghan (1946) says that most information comes from "indirect witnesses", people who were not present on the scene but heard of the events from someone else. Gottschalk says that a historian may sometimes use hearsay evidence when no primary texts are available. He writes, "In cases where he uses secondary witnesses...he asks: (1) On whose primary testimony does the secondary witness base his statements? (2) Did the secondary witness accurately report the primary testimony as a whole? (3) If not, in what details did he accurately report the primary testimony? Satisfactory answers to the second and third questions may provide the historian with the whole or the gist of the primary testimony upon which the secondary witness may be his only means of knowledge. In such cases the secondary source is the historian's 'original' source, in the sense of being the 'origin' of his knowledge. Insofar as this 'original' source is an accurate report of primary testimony, he tests its credibility as he would that of the primary testimony itself." Gottschalk adds, "Thus hearsay evidence would not be discarded by the historian, as it would be by a law court merely because it is hearsay."

Oral tradition
Gilbert Garraghan (1946) maintains that oral tradition may be accepted if it satisfies either two "broad conditions" or six "particular conditions", as follows:
Broad conditions stated.
The tradition should be supported by an unbroken series of witnesses, reaching from the immediate and first reporter of the fact to the living mediate witness from whom we take it up, or to the one who was the first to commit it to writing. There should be several parallel and independent series of witnesses testifying to the fact in question. Particular conditions formulated. The tradition must report a public event of importance, such as would necessarily be known directly to a great number of persons. The tradition must have been generally believed, at least for a definite period of time. During that definite period it must have gone without protest, even from persons interested in denying it. The tradition must be one of relatively limited duration. [Elsewhere, Garraghan suggests a maximum limit of 150 years, at least in cultures that excel in oral remembrance.] The critical spirit must have been sufficiently developed while the tradition lasted, and the necessary means of critical investigation must have been at hand. Critical-minded persons who would surely have challenged the tradition – had they considered it false – must have made no such challenge. Other methods of verifying oral tradition may exist, such as comparison with the evidence of archaeological remains. More recent evidence concerning the potential reliability or unreliability of oral tradition has come out of fieldwork in West Africa and Eastern Europe. Anonymous sources Historians do allow for the use of anonymous texts to establish historical facts. Synthesis: historical reasoning Once individual pieces of information have been assessed in context, hypotheses can be formed and established by historical reasoning. Argument to the best explanation C. Behan McCullagh (1984) lays down seven conditions for a successful argument to the best explanation: The statement, together with other statements already held to be true, must imply yet other statements describing present, observable data. (We will henceforth call the first statement 'the hypothesis', and the statements describing observable data, 'observation statements'.) The hypothesis must be of greater explanatory scope than any other incompatible hypothesis about the same subject; that is, it must imply a greater variety of observation statements. The hypothesis must be of greater explanatory power than any other incompatible hypothesis about the same subject; that is, it must make the observation statements it implies more probable than any other. The hypothesis must be more plausible than any other incompatible hypothesis about the same subject; that is, it must be implied to some degree by a greater variety of accepted truths than any other, and be implied more strongly than any other; and its probable negation must be implied by fewer beliefs, and implied less strongly than any other. The hypothesis must be less ad hoc than any other incompatible hypothesis about the same subject; that is, it must include fewer new suppositions about the past which are not already implied to some extent by existing beliefs. It must be disconfirmed by fewer accepted beliefs than any other incompatible hypothesis about the same subject; that is, when conjoined with accepted truths it must imply fewer observation statements and other statements which are believed to be false. 
It must exceed other incompatible hypotheses about the same subject by so much, in characteristics 2 to 6, that there is little chance of an incompatible hypothesis, after further investigation, soon exceeding it in these respects.
McCullagh sums up, "if the scope and strength of an explanation are very great, so that it explains a large number and variety of facts, many more than any competing explanation, then it is likely to be true".

Statistical inference
McCullagh (1984) states this form of argument as follows: There is probability (of the degree p1) that whatever is an A is a B. It is probable (to the degree p2) that this is an A. Therefore, (relative to these premises) it is probable (to the degree p1 × p2) that this is a B. McCullagh gives this example: In thousands of cases, the letters V.S.L.M. appearing at the end of a Latin inscription on a tombstone stand for Votum Solvit Libens Merito. From all appearances the letters V.S.L.M. are on this tombstone at the end of a Latin inscription. Therefore, these letters on this tombstone stand for Votum Solvit Libens Merito. This is a syllogism in probabilistic form, making use of a generalization formed by induction from numerous examples (as the first premise).

Argument from analogy
The structure of the argument is as follows: One thing (object, event, or state of affairs) has properties p1, ..., pn and pn+1. Another thing has properties p1, ..., pn. So the latter has property pn+1. McCullagh says that an argument from analogy, if sound, is either a "covert statistical syllogism" or better expressed as an argument to the best explanation. It is a statistical syllogism when it is "established by a sufficient number and variety of instances of the generalization"; otherwise, the argument may be invalid because properties 1 through n are unrelated to property n+1, unless property n+1 is the best explanation of properties 1 through n. Analogy, therefore, is uncontroversial only when used to suggest hypotheses, not as a conclusive argument.

See also
Antiquarian
Archaeology
Archival research
Auxiliary sciences of history
Chinese whispers
Historical criticism
Historical significance
Historiography
List of history journals
Philosophy of history
Recorded history
Scholarly method
Scientific method
Source criticism
Unwitting testimony

References
Gilbert J. Garraghan, A Guide to Historical Method, Fordham University Press: New York (1946).
Louis Gottschalk, Understanding History: A Primer of Historical Method, Alfred A. Knopf: New York (1950).
Martha Howell and Walter Prevenier, From Reliable Sources: An Introduction to Historical Methods, Cornell University Press: Ithaca (2001).
C. Behan McCullagh, Justifying Historical Descriptions, Cambridge University Press: New York (1984).
Presnell, J. L. (2019). The Information-Literate Historian: A Guide to Research for History Students (3rd ed.). Oxford University Press.
R. J. Shafer, A Guide to Historical Method, The Dorsey Press: Illinois (1974).

External links
Introduction to Historical Method by Marc Comtois
Philosophy of History by Paul Newall
Federal Rules of Evidence in United States law
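The multiplication of probabilities in the statistical syllogism above can be made concrete with a small numerical sketch. The figures below are hypothetical, chosen only to show how the strength of the generalization (p1) and the confidence in the particular observation (p2) combine; they are not drawn from any actual epigraphic survey.

    # Hypothetical illustration of the probabilistic syllogism described above.
    # p1: assumed probability that a Latin tombstone inscription ending in
    #     "V.S.L.M." stands for "Votum Solvit Libens Merito".
    # p2: assumed probability that this particular stone really does carry
    #     those letters (allowing, say, for weathering or misreading).
    p1 = 0.99
    p2 = 0.95

    # Probability, relative to these premises, that this inscription
    # stands for "Votum Solvit Libens Merito".
    p_conclusion = p1 * p2
    print(f"p1 * p2 = {p_conclusion:.2f}")  # prints roughly 0.94

Because the conclusion's probability is the product of the two premises, it can never exceed either of them, which echoes Gottschalk's rule, quoted earlier, that credibility should be established separately for each particular of a document rather than inherited from the general credibility of its author.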
A priori and a posteriori
A priori ('from the earlier') and a posteriori ('from the later') are Latin phrases used in philosophy to distinguish types of knowledge, justification, or argument by their reliance on experience. A priori knowledge is independent from any experience; examples include mathematics, tautologies and deduction from pure reason. A posteriori knowledge depends on empirical evidence; examples include most fields of science and aspects of personal knowledge. The terms originate from the analytic methods found in Organon, a collection of works by Aristotle. Prior analytics is about deductive logic, which comes from definitions and first principles. Posterior analytics is about inductive logic, which comes from observational evidence. Both terms appear in Euclid's Elements and were popularized by Immanuel Kant's Critique of Pure Reason, an influential work in the history of philosophy. Both terms are primarily used as modifiers to the noun "knowledge" (e.g., a priori knowledge). A priori can be used to modify other nouns such as "truth". Philosophers may use apriority, apriorist and aprioricity as nouns referring to the quality of being a priori.

Examples

A priori
Consider the proposition: "If George V reigned at least four days, then he reigned more than three days." This is something that one knows a priori, because it expresses a statement that one can derive by reason alone.

A posteriori
Consider the proposition: "George V reigned from 1910 to 1936." This is something that (if true) one must come to know a posteriori, because it expresses an empirical fact unknowable by reason alone.

Aprioricity, analyticity and necessity

Relation to the analytic–synthetic distinction
Several philosophers, in reaction to Immanuel Kant, sought to explain a priori knowledge without appealing to, as Paul Boghossian describes it, "a special faculty [intuition]... that has never been described in satisfactory terms." One theory, popular among the logical positivists of the early 20th century, is what Boghossian calls the "analytic explanation of the a priori." The distinction between analytic and synthetic propositions was first introduced by Kant. While his original distinction was primarily drawn in terms of conceptual containment, the contemporary version of the distinction primarily involves, as the American philosopher W. V. O. Quine put it, the notions of "true by virtue of meanings and independently of fact." Analytic propositions are considered true by virtue of their meaning alone, while a posteriori propositions are true by virtue of their meaning and of certain facts about the world. According to the analytic explanation of the a priori, all a priori knowledge is analytic; so a priori knowledge need not require a special faculty of pure intuition, since it can be accounted for simply by one's ability to understand the meaning of the proposition in question. More simply, proponents of this explanation claimed to have reduced a dubious metaphysical faculty of pure reason to a legitimate linguistic notion of analyticity. The analytic explanation of a priori knowledge has undergone several criticisms. Most notably, Quine argues that the analytic–synthetic distinction is illegitimate: "But for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith." Although the soundness of Quine's proposition remains uncertain, it had a powerful effect on the project of explaining the a priori in terms of the analytic.
Relation to the necessary truths and contingent truths The metaphysical distinction between necessary and contingent truths has also been related to a priori and a posteriori knowledge. A proposition that is necessarily true is one in which its negation is self-contradictory; it is true in every possible world. For example, considering the proposition "all bachelors are unmarried:" its negation (i.e. the proposition that some bachelors are married) is incoherent due to the concept of being unmarried (or the meaning of the word "unmarried") being tied to part of the concept of being a bachelor (or part of the definition of the word "bachelor"). To the extent that contradictions are impossible, self-contradictory propositions are necessarily false as it is impossible for them to be true. The negation of a self-contradictory proposition is, therefore, supposed to be necessarily true. By contrast, a proposition that is contingently true is one in which its negation is not self-contradictory. Thus, it is said not to be true in every possible world. As Jason Baehr suggests, it seems plausible that all necessary propositions are known a priori, because "[s]ense experience can tell us only about the actual world and hence about what is the case; it can say nothing about what must or must not be the case." Following Kant, some philosophers have considered the relationship between aprioricity, analyticity and necessity to be extremely close. According to Jerry Fodor, "positivism, in particular, took it for granted that a priori truths must be necessary." Since Kant, the distinction between analytic and synthetic propositions has slightly changed. Analytic propositions were largely taken to be "true by virtue of meanings and independently of fact", while synthetic propositions were not—one must conduct some sort of empirical investigation, looking to the world, to determine the truth-value of synthetic propositions. Separation Aprioricity, analyticity and necessity have since been more clearly separated from each other. American philosopher Saul Kripke (1972), for example, provides strong arguments against this position, whereby he contends that there are necessary a posteriori truths. For example, the proposition that water is H2O (if it is true): According to Kripke, this statement is both necessarily true, because water and H2O are the same thing, they are identical in every possible world, and truths of identity are logically necessary; and a posteriori, because it is known only through empirical investigation. Following such considerations of Kripke and others (see Hilary Putnam), philosophers tend to distinguish the notion of aprioricity more clearly from that of necessity and analyticity. Kripke's definitions of these terms diverge in subtle ways from Kant's. Taking these differences into account, Kripke's controversial analysis of naming as contingent and a priori would, according to Stephen Palmquist, best fit into Kant's epistemological framework by calling it "analytic a posteriori." Aaron Sloman presented a brief defence of Kant's three distinctions (analytic/synthetic, apriori/empirical and necessary/contingent), in that it did not assume "possible world semantics" for the third distinction, merely that some part of this world might have been different. The relationship between aprioricity, necessity and analyticity is not easy to discern. 
Most philosophers at least seem to agree that while the various distinctions may overlap, the notions are clearly not identical: the a priori/a posteriori distinction is epistemological; the analytic/synthetic distinction is linguistic; and the necessary/contingent distinction is metaphysical. History Early uses The term a priori is Latin for 'from what comes before' (or, less literally, 'from first principles, before experience'). In contrast, the term a posteriori is Latin for 'from what comes later' (or 'after experience'). They appear in Latin translations of Euclid's Elements, a work widely considered during the early European modern period as the model for precise thinking. An early philosophical use of what might be considered a notion of a priori knowledge (though not called by that name) is Plato's theory of recollection, related in the dialogue Meno, according to which something like a priori knowledge is knowledge inherent, intrinsic in the human mind. Albert of Saxony, a 14th-century logician, wrote on both a priori and a posteriori. The early modern Thomistic philosopher John Sergeant differentiates the terms by the direction of inference regarding proper causes and effects. To demonstrate something a priori is to "Demonstrate Proper Effects from Proper Efficient Causes" and likewise to demonstrate a posteriori is to demonstrate "Proper Efficient Causes from Proper Effects", according to his 1696 work The Method to Science Book III, Lesson IV, Section 7. G. W. Leibniz introduced a distinction between a priori and a posteriori criteria for the possibility of a notion in his (1684) short treatise "Meditations on Knowledge, Truth, and Ideas". A priori and a posteriori arguments for the existence of God appear in his Monadology (1714). George Berkeley outlined the distinction in his 1710 work A Treatise Concerning the Principles of Human Knowledge (para. XXI). Immanuel Kant The 18th-century German philosopher Immanuel Kant (1781) advocated a blend of rationalist and empiricist theories. Kant says, "Although all our cognition begins with experience, it does not follow that it arises from [is caused by] experience." According to Kant, a priori cognition is transcendental, or based on the form of all possible experience, while a posteriori cognition is empirical, based on the content of experience: It is quite possible that our empirical knowledge is a compound of that which we receive through impressions, and that which the faculty of cognition supplies from itself sensuous impressions [sense data] giving merely the occasion [opportunity for a cause to produce its effect]. Contrary to contemporary usages of the term, Kant believes that a priori knowledge is not entirely independent of the content of experience. Unlike the rationalists, Kant thinks that a priori cognition, in its pure form, that is without the admixture of any empirical content, is limited to the deduction of the conditions of possible experience. These a priori, or transcendental, conditions are seated in one's cognitive faculties, and are not provided by experience in general or any experience in particular (although an argument exists that a priori intuitions can be "triggered" by experience). Kant nominated and explored the possibility of a transcendental logic with which to consider the deduction of the a priori in its pure form. Space, time and causality are considered pure a priori intuitions. 
Kant reasoned that the pure a priori intuitions are established via his transcendental aesthetic and transcendental logic. He claimed that the human subject would not have the kind of experience that it has were these a priori forms not in some way constitutive of him as a human subject. For instance, a person would not experience the world as an orderly, rule-governed place unless time, space and causality were determinant functions in the form of perceptual faculties, i.e., there can be no experience in general without space, time or causality as particular determinants thereon. The claim is more formally known as Kant's transcendental deduction, and it is the central argument of his major work, the Critique of Pure Reason. The transcendental deduction argues that time, space and causality are ideal as much as real. In consideration of a possible logic of the a priori, this most famous of Kant's deductions has made a successful case for the fact of subjectivity: what constitutes subjectivity and what relation it holds with objectivity and the empirical.

Johann Fichte
After Kant's death, a number of philosophers saw themselves as correcting and expanding his philosophy, leading to the various forms of German Idealism. One of these philosophers was Johann Fichte. His student (and critic), Arthur Schopenhauer, accused him of rejecting the distinction between a priori and a posteriori knowledge.

See also
A priori probability
A posteriori necessity
Ab initio
Abductive reasoning
Deductive reasoning
Inductive reasoning
Off the verandah
Relativized a priori
Tabula rasa
Transcendental empiricism
Transcendental hermeneutic phenomenology
Transcendental nominalism

External links
A priori / a posteriori – in the Philosophical Dictionary online.
"Rationalism vs. Empiricism" – an article by Peter Markie in the Stanford Encyclopedia of Philosophy.
Middle Paleolithic
The Middle Paleolithic (or Middle Palaeolithic) is the second subdivision of the Paleolithic or Old Stone Age as it is understood in Europe, Africa and Asia. The term Middle Stone Age is used as an equivalent or a synonym for the Middle Paleolithic in African archeology. The Middle Paleolithic broadly spanned from 300,000 to 50,000 years ago, with considerable dating differences between regions. It was succeeded by the Upper Paleolithic subdivision, which first began between 50,000 and 40,000 years ago. Pettitt and White date the Early Middle Paleolithic in Great Britain to about 325,000 to 180,000 years ago (late Marine Isotope Stage 9 to late Marine Isotope Stage 7), and the Late Middle Paleolithic to about 60,000 to 35,000 years ago. The Middle Paleolithic falls within the geological Chibanian (Middle Pleistocene) and Late Pleistocene ages. According to the theory of the recent African origin of modern humans, anatomically modern humans began migrating out of Africa during the Middle Stone Age/Middle Paleolithic around 125,000 years ago and began to replace earlier pre-existing Homo species such as the Neanderthals and Homo erectus.

Origin of behavioral modernity
The earliest evidence of behavioral modernity first appears during the Middle Paleolithic; undisputed evidence of behavioral modernity, however, only becomes common during the following Upper Paleolithic period. Middle Paleolithic burials at sites such as Krapina in Croatia (dated to c. 130,000 BP) and the Qafzeh and Es Skhul caves in Israel (c. 100,000 BP) have led some anthropologists and archeologists (such as Philip Lieberman) to believe that Middle Paleolithic cultures may have possessed a developing religious ideology which included concepts such as an afterlife; other scholars suggest the bodies were buried for secular reasons. According to recent archeological findings from Homo heidelbergensis sites in the Atapuerca Mountains, the practice of intentional burial may have begun much earlier, during the late Lower Paleolithic, but this theory is widely questioned in the scientific community. Cut-marks on Neandertal bones from various sites – such as Combe Grenal and the Moula rock shelter in France – may imply that Neanderthals, like some contemporary human cultures, practiced excarnation for presumably religious reasons (see Neanderthal behavior § Cannibalism or ritual defleshing?). The earliest undisputed evidence of artistic expression during the Paleolithic period comes from Middle Paleolithic/Middle Stone Age sites such as Blombos Cave in the form of bracelets, beads, rock art and ochre used as body paint and perhaps in ritual, though earlier examples of artistic expression, such as the Venus of Tan-Tan and the patterns found on elephant bones from Bilzingsleben in Thuringia, may have been produced by Acheulean tool-users such as Homo erectus prior to the start of the Middle Paleolithic period. Activities such as catching large fish and hunting large game animals with specialized tools indicate increased group-wide cooperation and more elaborate social organization. In addition to developing advanced cultural traits, humans also first began to take part in long-distance trade between groups for rare commodities, such as ochre (which was often used for religious purposes such as ritual), and raw materials during the Middle Paleolithic, as early as 120,000 years ago.
Inter-group trade may have appeared during the Middle Paleolithic because trade between bands would have helped ensure their survival by allowing them to exchange resources and commodities, such as raw materials, during times of relative scarcity (i.e., famine or drought).

Social stratification
Evidence from archeology and comparative ethnography indicates that Middle Paleolithic people lived in small, egalitarian band societies similar to those of Upper Paleolithic societies and of some modern hunter-gatherers such as the ǃKung and Mbuti peoples. Both Neanderthal and modern human societies took care of the elderly members of their societies during the Middle Paleolithic. Christopher Boehm (1999) has hypothesized that egalitarianism may have arisen in Middle Paleolithic societies because of a need to distribute resources such as food and meat equally to avoid famine and ensure a stable food supply. It has usually been assumed that women gathered plants and firewood and men hunted and scavenged dead animals throughout the Paleolithic. However, Steven L. Kuhn and Mary Stiner from the University of Arizona suggest that this sex-based division of labor did not exist prior to the Upper Paleolithic. The sexual division of labor may have evolved after 45,000 years ago to allow humans to acquire food and other resources more efficiently.

Nutrition
Although gathering and hunting comprised most of the food supply during the Middle Paleolithic, people began to supplement their diet with seafood and began smoking and drying meat to preserve and store it. For instance, the Middle Stone Age inhabitants of the region now occupied by the Democratic Republic of the Congo hunted large catfish with specialized barbed fishing points as early as 90,000 years ago, and Neandertals and Middle Paleolithic Homo sapiens in Africa began to catch shellfish for food, as revealed by shellfish cooking at Neanderthal sites in Italy about 110,000 years ago and at Middle Paleolithic Homo sapiens sites at Pinnacle Point, in Africa. Anthropologists such as Tim D. White suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic, based on the large amount of "butchered human" bones found at Neandertal and other Middle Paleolithic sites. Cannibalism in the Middle Paleolithic may have occurred because of food shortages. However, it is also possible that Middle Paleolithic cannibalism occurred for religious reasons, which would coincide with the development of religious practices thought to have occurred during the Upper Paleolithic. Nonetheless, it remains possible that Middle Paleolithic societies never practiced cannibalism and that the damage to recovered human bones was either the result of excarnation or predation by carnivores such as saber-toothed cats, lions and hyenas.

Technology
Around 200,000 BP, Middle Paleolithic stone tool manufacturing spawned a tool-making technique known as the prepared-core technique, which was more elaborate than previous Acheulean techniques. Wallace and Shea split the core artifacts into two different types: formal cores and expedient cores. Formal cores are designed to extract the maximum amount from the raw material, while expedient cores are based more upon functional need. This method increased efficiency by permitting the creation of more controlled and consistent flakes.
This method allowed Middle Paleolithic humans to create stone-tipped spears, which were the earliest composite tools, by hafting sharp, pointed stone flakes onto wooden shafts. Paleolithic groups such as the Neanderthals, who possessed a Middle Paleolithic level of technology, appear to have hunted large game just as well as Upper Paleolithic modern humans, and the Neanderthals in particular may likewise have hunted with projectile weapons. Nonetheless, Neanderthal use of projectile weapons in hunting occurred very rarely (or perhaps never), and the Neanderthals hunted large game animals mostly by ambushing them and attacking them with mêlée weapons such as thrusting spears rather than attacking them from a distance with projectile weapons. An ongoing controversy about the nature of Middle Paleolithic tools is whether there was a series of functionally specific and preconceived tool forms or whether there was a simple continuum of tool morphology reflecting the extent of edge maintenance, as Harold L. Dibble has suggested. The use of fire became widespread for the first time in human prehistory during the Middle Paleolithic, and humans began to cook their food c. 250,000 years ago. Some scientists have hypothesized that hominids began cooking food to defrost frozen meat, which would help ensure their survival in cold regions. Robert K. Wayne, a molecular biologist, has controversially claimed, based on a comparison of canine DNA, that dogs may have been first domesticated during the Middle Paleolithic, around or even before 100,000 BCE.

Sites

Cave sites

Western Europe
Axlor, Spain
Grotte de Spy, Spy, Belgium
La Cotte de St Brelade, Jersey
Le Moustier, France – see also Mousterian
Neandertal (valley), Germany
Petralona, Greece

Middle East and Africa
Aterian, North Africa
Bisitun Cave, Iran
Daş Salahlı, Azerbaijan
Wezmeh, Iran

Open-air sites
Biache-Saint-Vaast, France
Maastricht-Belvédère, The Netherlands
Veldwezelt-Hezerwater, Belgium

See also
Early human migrations
Recent African origin of modern humans
Timeline of prehistory

External links
Veldwezelt-Hezerwater
Picture Gallery of the Paleolithic (reconstructional palaeoethnology), Libor Balák at the Czech Academy of Sciences, the Institute of Archaeology in Brno, The Center for Paleolithic and Paleoethnological Research
Regionalisation
Regionalisation is the tendency to form decentralised regions. Regionalisation, or land classification, can be observed in various disciplines:
In agriculture, see Agricultural Land Classification.
In biogeography, see Biogeography#Biogeographic units.
In ecology, see Ecological land classification.
In geography, the term has two senses: the process of delineating the Earth, its smaller areas or other units into regions, and the resulting state of such a delineation.
In globalisation discourse, it represents a world that becomes less interconnected, with a stronger regional focus.
In politics, it is the process of dividing a political entity or country into smaller jurisdictions (administrative divisions or subnational units) and transferring power from the central government to the regions; the opposite of unitarisation. See Regionalism (politics).
In sport, it is when a team has multiple "home" venues in different cities. Examples of regionalized teams include a few teams in the defunct American Basketball Association, and the Green Bay Packers, who played home games in both Green Bay and Milwaukee from 1933 to 1994.
In linguistics, it is when a prestige language adopts features of a regional language; for example, in medieval times Church Latin developed regional pronunciation differences in the countries in which it was used, including Italy, France, Spain, Portugal, England, Germany, Denmark, Hungary, and the Slavic countries.

See also
Regionalism
Regional autonomy
Autonomous administrative division
Regions
Decentralization
Qualitative research
Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research.

Background
Qualitative research has been informed by several strands of philosophical thought and examines aspects of human life, including culture, expression, beliefs, morality, life stress, and imagination. Contemporary qualitative research has been influenced by a number of branches of philosophy, for example, positivism, postpositivism, critical theory, and constructivism. The historical transitions or 'moments' in qualitative research, together with the notion of 'paradigms' (Denzin & Lincoln, 2005), have received widespread popularity over the past decades. However, some scholars have argued that the adoption of paradigms may be counterproductive and lead to less philosophically engaged communities.

Approaches to inquiry
The use of nonquantitative material as empirical data has been growing in many areas of the social sciences, including the learning sciences, developmental psychology and cultural psychology. Several philosophical and psychological traditions have influenced investigators' approaches to qualitative research, including phenomenology, social constructionism, symbolic interactionism, and positivism.

Philosophical traditions
Phenomenology refers to the philosophical study of the structure of an individual's consciousness and general subjective experience. Approaches to qualitative research based on constructionism, such as grounded theory, pay attention to how the subjectivity of both the researcher and the study participants can affect the theory that develops out of the research. The symbolic interactionist approach to qualitative research examines how individuals and groups develop an understanding of the world. Traditional positivist approaches to qualitative research seek a more objective understanding of the social world. Qualitative researchers have also been influenced by the sociology of knowledge and the work of Alfred Schütz, Peter L. Berger, Thomas Luckmann, and Harold Garfinkel.

Sources of data
Qualitative researchers use different sources of data to understand the topic they are studying. These data sources include interview transcripts, videos of social interactions, notes, verbal reports and artifacts such as books or works of art. The case study method exemplifies qualitative researchers' preference for depth, detail, and context. Data triangulation is also a strategy used in qualitative research.
Autoethnography, the study of self, is a qualitative research method in which the researcher uses his or her personal experience to understand an issue. Grounded theory is an inductive type of research, based on ("grounded" in) a very close look at the empirical observations a study yields. Thematic analysis involves analyzing patterns of meaning. Conversation analysis is primarily used to analyze spoken conversations. Biographical research is concerned with the reconstruction of life histories, based on biographical narratives and documents. Narrative inquiry studies the narratives that people use to describe their experience. Data collection Qualitative researchers may gather information through observations, note-taking, interviews, focus groups (group interviews), documents, images and artifacts. Interviews Research interviews are an important method of data collection in qualitative research. An interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers, to elicit information. Compared to something like a written survey, qualitative interviews allow for a significantly higher degree of intimacy, with participants often revealing personal information to their interviewers in a real-time, face-to-face setting. As such, this technique can evoke an array of significant feelings and experiences within those being interviewed. Sociologists Bredal, Stefansen and Bjørnholt identified three "participant orientations", that they described as "telling for oneself", "telling for others" and "telling for the researcher". They also proposed that these orientations implied "different ethical contracts between the participant and researcher". Participant observation In participant observation ethnographers get to understand a culture by directly participating in the activities of the culture they study. Participant observation extends further than ethnography and into other fields, including psychology. For example, by training to be an EMT and becoming a participant observer in the lives of EMTs, Palmer studied how EMTs cope with the stress associated with some of the gruesome emergencies they deal with. Recursivity In qualitative research, the idea of recursivity refers to the emergent nature of research design. In contrast to standardized research methods, recursivity embodies the idea that the qualitative researcher can change a study's design during the data collection phase. Recursivity in qualitative research procedures contrasts to the methods used by scientists who conduct experiments. From the perspective of the scientist, data collection, data analysis, discussion of the data in the context of the research literature, and drawing conclusions should be each undertaken once (or at most a small number of times). In qualitative research however, data are collected repeatedly until one or more specific stopping conditions are met, reflecting a nonstatic attitude to the planning and design of research activities. An example of this dynamism might be when the qualitative researcher unexpectedly changes their research focus or design midway through a study, based on their first interim data analysis. The researcher can even make further unplanned changes based on another interim data analysis. Such an approach would not be permitted in an experiment. 
Qualitative researchers would argue that recursivity in developing the relevant evidence enables the researcher to be more open to unexpected results and emerging new constructs. Data analysis Qualitative researchers have a number of analytic strategies available to them. Coding In general, coding refers to the act of associating meaningful ideas with the data of interest. In the context of qualitative research, interpretative aspects of the coding process are often explicitly recognized and articulated; coding helps to produce specific words or short phrases believed to be useful abstractions from the data. Pattern thematic analysis Data may be sorted into patterns for thematic analyses as the primary basis for organizing and reporting the study findings. Content analysis According to Krippendorf, "Content analysis is a research technique for making replicable and valid inference from data to their context" (p. 21). It is applied to documents and written and oral communication. Content analysis is an important building block in the conceptual analysis of qualitative data. It is frequently used in sociology. For example, content analysis has been applied to research on such diverse aspects of human life as changes in perceptions of race over time, the lifestyles of contractors, and even reviews of automobiles. Issues Computer-assisted qualitative data analysis software (CAQDAS) Contemporary qualitative data analyses can be supported by computer programs (termed computer-assisted qualitative data analysis software). These programs have been employed with or without detailed hand coding or labeling. Such programs do not supplant the interpretive nature of coding. The programs are aimed at enhancing analysts' efficiency at applying, retrieving, and storing the codes generated from reading the data. Many programs enhance efficiency in editing and revising codes, which allow for more effective work sharing, peer review, data examination, and analysis of large datasets. Common qualitative data analysis software includes: ATLAS.ti Dedoose (mixed methods) MAXQDA (mixed methods) NVivo QDA MINER A criticism of quantitative coding approaches is that such coding sorts qualitative data into predefined (nomothetic) categories that are reflective of the categories found in objective science. The variety, richness, and individual characteristics of the qualitative data are reduced or, even, lost. To defend against the criticism that qualitative approaches to data are too subjective, qualitative researchers assert that by clearly articulating their definitions of the codes they use and linking those codes to the underlying data, they preserve some of the richness that might be lost if the results of their research boiled down to a list of predefined categories. Qualitative researchers also assert that their procedures are repeatable, which is an idea that is valued by quantitatively oriented researchers. Sometimes researchers rely on computers and their software to scan and reduce large amounts of qualitative data. At their most basic level, numerical coding schemes rely on counting words and phrases within a dataset; other techniques involve the analysis of phrases and exchanges in analyses of conversations. A computerized approach to data analysis can be used to aid content analysis, especially when there is a large corpus to unpack. Trustworthiness A central issue in qualitative research is trustworthiness (also known as credibility or, in quantitative studies, validity). 
There are many ways of establishing trustworthiness, including member check, interviewer corroboration, peer debriefing, prolonged engagement, negative case analysis, auditability, confirmability, bracketing, and balance. Data triangulation and eliciting examples of interviewee accounts are two of the most commonly used methods of establishing the trustworthiness of qualitative studies. Transferability of results has also been considered as an indicator of validity. Limitations of qualitative research Qualitative research is not without limitations. These limitations include participant reactivity, the potential for a qualitative investigator to over-identify with one or more study participants, "the impracticality of the Glaser-Strauss idea that hypotheses arise from data unsullied by prior expectations," the inadequacy of qualitative research for testing cause-effect hypotheses, and the Baconian character of qualitative research. Participant reactivity refers to the fact that people often behave differently when they know they are being observed. Over-identifying with participants refers to a sympathetic investigator studying a group of people and ascribing, more than is warranted, a virtue or some other characteristic to one or more participants. Compared to qualitative research, experimental research and certain types of nonexperimental research (e.g., prospective studies), although not perfect, are better means for drawing cause-effect conclusions. Glaser and Strauss, influential members of the qualitative research community, pioneered the idea that theoretically important categories and hypotheses can emerge "naturally" from the observations a qualitative researcher collects, provided that the researcher is not guided by preconceptions. The ethologist David Katz wrote "a hungry animal divides the environment into edible and inedible things....Generally speaking, objects change...according to the needs of the animal." Karl Popper carrying forward Katz's point wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests. This rule applied not only to animals but also to scientists." Popper made clear that observation is always selective, based on past research and the investigators' goals and motives and that preconceptionless research is impossible. The Baconian character of qualitative research refers to the idea that a qualitative researcher can collect enough observations such that categories and hypotheses will emerge from the data. Glaser and Strauss developed the idea of theoretical sampling by way of collecting observations until theoretical saturation is obtained and no additional observations are required to understand the character of the individuals under study. Bertrand Russell suggested that there can be no orderly arrangement of observations such that a hypothesis will jump out of those ordered observations; some provisional hypothesis usually guides the collection of observations. In psychology Community psychology Autobiographical narrative research has been conducted in the field of community psychology. A selection of autobiographical narratives of community psychologists can be found in the book Six Community Psychologists Tell Their Stories: History, Contexts, and Narrative. Educational psychology Edwin Farrell used qualitative methods to understand the social reality of at-risk high school students. 
Later he used similar methods to understand the reality of successful high school students who came from the same neighborhoods as the at-risk students he wrote about in his previously mentioned book.

Health psychology

In the field of health psychology, qualitative methods have become increasingly employed in research on understanding health and illness and how health and illness are socially constructed in everyday life. A broad range of qualitative methods has since been adopted by health psychologists, including discourse analysis, thematic analysis, narrative analysis, and interpretative phenomenological analysis. In 2015, the journal Health Psychology published a special issue on qualitative research (Gough, B., & Deatrick, J. A. (Eds.) (2015). Qualitative research in health psychology [special issue]. Health Psychology, 34(4)).

Industrial and organizational psychology

According to Doldor and colleagues, organizational psychologists extensively use qualitative research "during the design and implementation of activities like organizational change, training needs analyses, strategic reviews, and employee development plans."

Occupational health psychology

Although research in the field of occupational health psychology (OHP) has predominantly been quantitatively oriented, some OHP researchers have employed qualitative methods. Qualitative research efforts, if directed properly, can provide advantages for quantitatively oriented OHP researchers. These advantages include help with (1) theory and hypothesis development, (2) item creation for surveys and interviews, (3) the discovery of stressors and coping strategies not previously identified, (4) interpreting difficult-to-interpret quantitative findings, (5) understanding why some stress-reduction interventions fail and others succeed, and (6) providing rich descriptions of the lived lives of people at work (Schonfeld, I. S., & Farrell, E. (2010). Qualitative methods can enrich quantitative research on occupational stress: An example from one occupational group. In D. C. Ganster & P. L. Perrewé (Eds.), Research in occupational stress and wellbeing series, Vol. 8: New developments in theoretical and conceptual approaches to job stress (pp. 137–197). Bingley, UK: Emerald). Some OHP investigators have united qualitative and quantitative methods within a single study (e.g., Elfering et al., 2005); these investigators have used qualitative methods to assess job stressors that are difficult to ascertain using standard measures, and well-validated standardized instruments to assess coping behaviors and dependent variables such as mood.

Social media psychology

Since the advent of social media in the early 2000s, formerly private accounts of personal experiences have become widely shared with the public by millions of people around the world. Disclosures are often made openly, which has contributed to social media's key role in movements like the #MeToo movement. The abundance of self-disclosure on social media has presented an unprecedented opportunity for qualitative and mixed methods researchers; mental health problems can now be investigated qualitatively more widely, at a lower cost, and with no intervention by the researchers. To take advantage of these data, researchers need to have mastered the tools for conducting qualitative research.
Academic journals

Consumption Markets & Culture
Journal of Consumer Research
Qualitative Inquiry
Qualitative Market Research
Qualitative Research
The Qualitative Report

See also

Computer-assisted qualitative data analysis software (CAQDAS)

References

Further reading

Adler, P. A. & Adler, P. (1987).
Context and meaning in social inquiry, edited by Richard Jessor, Anne Colby, and Richard A. Shweder.
Baškarada, S. (2014). "Qualitative Case Study Guidelines", The Qualitative Report, 19(40): 1–25.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed method approaches. Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2000). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE Handbook of qualitative research (4th ed.). Los Angeles: Sage Publications.
DeWalt, K. M. & DeWalt, B. R. (2002). Participant observation. Walnut Creek, CA: AltaMira Press.
Fischer, C. T. (Ed.) (2005). Qualitative research methods for psychologists: Introduction through empirical studies. Academic Press.
Franklin, M. I. (2012). Understanding Research: Coping with the Quantitative-Qualitative Divide. London/New York: Routledge.
Giddens, A. (1990). The consequences of modernity. Stanford, CA: Stanford University Press.
Gubrium, J. F. and J. A. Holstein (2000). The New Language of Qualitative Method. New York: Oxford University Press.
Gubrium, J. F. and J. A. Holstein (2009). Analyzing Narrative Reality. Thousand Oaks, CA: Sage.
Gubrium, J. F. and J. A. Holstein, eds. (2000). Institutional Selves: Troubled Identities in a Postmodern World. New York: Oxford University Press.
Hammersley, M. (2008). Questioning Qualitative Inquiry. London: Sage.
Hammersley, M. (2013). What is qualitative research? London: Bloomsbury.
Holliday, A. R. (2007). Doing and Writing Qualitative Research (2nd ed.). London: Sage Publications.
Holstein, J. A. and J. F. Gubrium, eds. (2012). Varieties of Narrative Analysis. Thousand Oaks, CA: Sage.
Kaminski, Marek M. (2004). Games Prisoners Play. Princeton University Press.
Malinowski, B. (1922/1961). Argonauts of the Western Pacific. New York: E. P. Dutton.
Miles, M. B. & Huberman, A. M. (1994). Qualitative Data Analysis. Thousand Oaks, CA: Sage.
Maykut, Pamela & Morehouse, Richard (1994). Beginning Qualitative Research. Falmer Press.
Pernecky, T. (2016). Epistemology and Metaphysics for Qualitative Research. London, UK: Sage Publications.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Pawluch, D., Shaffir, W. & Miall, C. (2005). Doing Ethnography: Studying Everyday Life. Toronto, ON: Canadian Scholars' Press.
Racino, J. (1999). Policy, Program Evaluation and Research in Disability: Community Support for All. New York, NY: Haworth Press (now a Routledge imprint, Taylor & Francis, 2015).
Ragin, C. C. (1994). Constructing Social Research: The Unity and Diversity of Method. Pine Forge Press.
Riessman, Catherine K. (1993). Narrative Analysis. Thousand Oaks, CA: Sage.
Rosenthal, Gabriele (2018). Interpretive Social Research: An Introduction. Göttingen, Germany: Universitätsverlag Göttingen.
Savin-Baden, M. and Major, C. (2013). Qualitative research: The essential guide to theory and practice. London: Routledge.
Silverman, David (ed.) (2011). Qualitative Research: Issues of Theory, Method and Practice (3rd ed.). London, Thousand Oaks, New Delhi: Sage Publications.
Stebbins, Robert A. (2001). Exploratory Research in the Social Sciences. Thousand Oaks, CA: Sage.
Taylor, Steven J. & Bogdan, Robert (1998). Introduction to Qualitative Research Methods. Wiley.
Van Maanen, J. (1988). Tales of the Field: On Writing Ethnography. Chicago: University of Chicago Press.
Wolcott, H. F. (1995). The Art of Fieldwork. Walnut Creek, CA: AltaMira Press.
Wolcott, H. F. (1999). Ethnography: A Way of Seeing. Walnut Creek, CA: AltaMira Press.
Ziman, John (2000). Real Science: What It Is, and What It Means. Cambridge, UK: Cambridge University Press.

External links

Qualitative Philosophy
C. Wright Mills, On Intellectual Craftsmanship, The Sociological Imagination, 1959
Participant Observation, Qualitative research methods: a Data collector's field guide
Analyzing and Reporting Qualitative Market Research
Overview of available QDA Software

Videos
Critical theory
A critical theory is any approach to humanities and social philosophy that focuses on society and culture to attempt to reveal, critique, and challenge or dismantle power structures. With roots in sociology and literary criticism, it argues that social problems stem more from social structures and cultural assumptions than from individuals. Some hold it to be an ideology, others argue that ideology is the principal obstacle to human liberation. Critical theory finds applications in various fields of study, including psychoanalysis, film theory, literary theory, cultural studies, history, communication theory, philosophy, and feminist theory. Critical Theory (capitalized) is a school of thought practiced by the Frankfurt School theoreticians Herbert Marcuse, Theodor Adorno, Walter Benjamin, Erich Fromm, and Max Horkheimer. Horkheimer described a theory as critical insofar as it seeks "to liberate human beings from the circumstances that enslave them". Although a product of modernism, and although many of the progenitors of Critical Theory were skeptical of postmodernism, Critical Theory is one of the major components of both modern and postmodern thought, and is widely applied in the humanities and social sciences today. In addition to its roots in the first-generation Frankfurt School, critical theory has also been influenced by György Lukács and Antonio Gramsci. Some second-generation Frankfurt School scholars have been influential, notably Jürgen Habermas. In Habermas's work, critical theory transcended its theoretical roots in German idealism and progressed closer to American pragmatism. Concern for social "base and superstructure" is one of the remaining Marxist philosophical concepts in much contemporary critical theory. The legacy of Critical Theory as a major offshoot of Marxism is controversial. The common thread linking Marxism and Critical theory is an interest in struggles to dismantle structures of oppression, exclusion, and domination. Philosophical approaches within this broader definition include feminism, critical race theory, post-structuralism, queer theory and forms of postcolonialism. History Max Horkheimer first defined critical theory in his 1937 essay "Traditional and Critical Theory", as a social theory oriented toward critiquing and changing society as a whole, in contrast to traditional theory oriented only toward understanding or explaining it. Wanting to distinguish critical theory as a radical, emancipatory form of Marxist philosophy, Horkheimer critiqued both the model of science put forward by logical positivism, and what he and his colleagues saw as the covert positivism and authoritarianism of orthodox Marxism and Communism. He described a theory as critical insofar as it seeks "to liberate human beings from the circumstances that enslave them". Critical theory involves a normative dimension, either by criticizing society in terms of some general theory of values or norms (oughts), or by criticizing society in terms of its own espoused values (i.e. immanent critique). Significantly, critical theory not only conceptualizes and critiques societal power structures, but also establishes an empirically grounded model to link society to the human subject. It defends the universalist ambitions of the tradition, but does so within a specific context of social-scientific and historical research. 
The core concepts of critical theory are that it should: be directed at the totality of society in its historical specificity (i.e., how it came to be configured at a specific point in time) improve understanding of society by integrating all the major social sciences, including geography, economics, sociology, history, political science, anthropology, and psychology Postmodern critical theory is another major product of critical theory. It analyzes the fragmentation of cultural identities in order to challenge modernist-era constructs such as metanarratives, rationality, and universal truths, while politicizing social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". Marx Marx explicitly developed the notion of critique into the critique of ideology, linking it with the practice of social revolution, as stated in the 11th section of his Theses on Feuerbach: "The philosophers have only interpreted the world, in various ways; the point is to change it." In early works, including The German Ideology, Marx developed his concepts of false consciousness and of ideology as the interests of one section of society masquerading as the interests of society as a whole. Adorno and Horkheimer One of the distinguishing characteristics of critical theory, as Theodor W. Adorno and Max Horkheimer elaborated in their Dialectic of Enlightenment (1947), is an ambivalence about the ultimate source or foundation of social domination, an ambivalence that gave rise to the "pessimism" of the new critical theory about the possibility of human emancipation and freedom. This ambivalence was rooted in the historical circumstances in which the work was originally produced, particularly the rise of Nazism, state capitalism, and culture industry as entirely new forms of social domination that could not be adequately explained in the terms of traditional Marxist sociology. For Adorno and Horkheimer, state intervention in the economy had effectively abolished the traditional tension between Marxism's "relations of production" and "material productive forces" of society. The market (as an "unconscious" mechanism for the distribution of goods) had been replaced by centralized planning. Contrary to Marx's prediction in the Preface to a Contribution to the Critique of Political Economy, this shift did not lead to "an era of social revolution" but to fascism and totalitarianism. As a result, critical theory was left, in Habermas's words, without "anything in reserve to which it might appeal, and when the forces of production enter into a baneful symbiosis with the relations of production that they were supposed to blow wide open, there is no longer any dynamism upon which critique could base its hope". For Adorno and Horkheimer, this posed the problem of how to account for the apparent persistence of domination in the absence of the very contradiction that, according to traditional critical theory, was the source of domination itself. Habermas In the 1960s, Habermas, a proponent of critical social theory, raised the epistemological discussion to a new level in his Knowledge and Human Interests (1968), by identifying critical knowledge as based on principles that differentiated it either from the natural sciences or the humanities, through its orientation to self-reflection and emancipation. 
Although unsatisfied with Adorno and Horkheimer's thought in Dialectic of Enlightenment, Habermas shares the view that, in the form of instrumental rationality, the era of modernity marks a move away from the liberation of enlightenment and toward a new form of enslavement. In Habermas's work, critical theory transcended its theoretical roots in German idealism, and progressed closer to American pragmatism. Habermas's ideas about the relationship between modernity and rationalization are in this sense strongly influenced by Max Weber. He further dissolved the elements of critical theory derived from Hegelian German idealism, though his epistemology remains broadly Marxist. Perhaps his two most influential ideas are the concepts of the public sphere and communicative action, the latter arriving partly as a reaction to new post-structural or so-called "postmodern" challenges to the discourse of modernity. Habermas engaged in regular correspondence with Richard Rorty, and a strong sense of philosophical pragmatism may be felt in his thought, which frequently traverses the boundaries between sociology and philosophy. Modern critical theorists Contemporary philosophers and researchers who have focused on understanding and critiquing critical theory include Nancy Fraser, Axel Honneth, Judith Butler, and Rahel Jaeggi. Honneth is known for his works Pathology of Reason and The Legacy of Critical Theory, in which he attempts to explain critical theory's purpose in a modern context. Jaeggi focuses on both critical theory's original intent and a more modern understanding that some argue has created a new foundation for modern usage of critical theory. Butler contextualizes critical theory as a way to rhetorically challenge oppression and inequality, specifically concepts of gender. Honneth established a theory that many use to understand critical theory, the theory of recognition. In this theory, he asserts that in order for someone to be responsible for themselves and their own identity they must be also recognized by those around them: without recognition in this sense from peers and society, individuals can never become wholly responsible for themselves and others, nor experience true freedom and emancipation—i.e., without recognition, the individual cannot achieve self-actualization. Like many others who put stock in critical theory, Jaeggi is vocal about capitalism's cost to society. Throughout her writings, she has remained doubtful about the necessity and use of capitalism in regard to critical theory. Most of Jaeggi's interpretations of critical theory seem to work against the foundations of Habermas and follow more along the lines of Honneth in terms of how to look at the economy through the theory's lens. She shares many of Honneth's beliefs, and many of her works try to defend them against criticism Honneth has received. To provide a dialectical opposite to Jaeggi's conception of alienation as 'a relation of relationlessness', Hartmut Rosa has proposed the concept of resonance. Rosa uses this term to refer to moments when late modern subjects experience momentary feelings of self-efficacy in society, bringing them into a temporary moment of relatedness with some aspect of the world. Rosa describes himself as working within the critical theory tradition of the Frankfurt School, providing an extensive critique of late modernity through his concept of social acceleration. 
However, his resonance theory has been questioned for moving too far beyond the Adornoian tradition of "looking coldly at society".

Schools and derivatives

Postmodern critical social theory

Focusing on language, symbolism, communication, and social construction, critical theory has been applied in the social sciences as a critique of social construction and postmodern society. While modernist critical theory (as described above) concerns itself with "forms of authority and injustice that accompanied the evolution of industrial and corporate capitalism as a political-economic system", postmodern critical theory politicizes social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". Meaning itself is seen as unstable due to the rapid transformation of social structures. As a result, research focuses on local manifestations rather than broad generalizations.

Postmodern critical research is also characterized by the crisis of representation, which rejects the idea that a researcher's work is an "objective depiction of a stable other". Instead, many postmodern scholars have adopted "alternatives that encourage reflection about the 'politics and poetics' of their work. In these accounts, the embodied, collaborative, dialogic, and improvisational aspects of qualitative research are clarified."

The term critical theory is often appropriated when an author works in sociological terms, yet attacks the social or human sciences, thus attempting to remain "outside" those frames of inquiry. Michel Foucault has been described as one such author. Jean Baudrillard has also been described as a critical theorist to the extent that he was an unconventional and critical sociologist; this appropriation is similarly casual, holding little or no relation to the Frankfurt School. In contrast, Habermas is one of the key critics of postmodernism.

Communication studies

When, in the 1970s and 1980s, Habermas redefined critical social theory as a study of communication, with communicative competence and communicative rationality on the one hand, and distorted communication on the other, the two versions of critical theory began to overlap to a much greater degree than before.

Critical legal studies

Immigration studies

Critical theory can be used to interpret the right of asylum and immigration law.

Critical finance studies

Critical finance studies apply critical theory to financial markets and central banks.

Critical management studies

Critical international relations theory

Critical race theory

Critical pedagogy

Critical theorists have widely credited Paulo Freire with the first applications of critical theory to education and pedagogy, considering his best-known work, Pedagogy of the Oppressed, a seminal text in what is now known as the philosophy and social movement of critical pedagogy. Dedicated to the oppressed and based on his own experience helping Brazilian adults learn to read and write, Freire includes a detailed class analysis in his exploration of the relationship between the colonizer and the colonized. In the book, he calls traditional pedagogy the "banking model of education", because it treats the student as an empty vessel to be filled with knowledge. He argues that pedagogy should instead treat the learner as a co-creator of knowledge.
In contrast to the banking model, the teacher in the critical-theory model is not the dispenser of all knowledge, but a participant who learns with and from the students, in conversation with them, even as they learn from the teacher. The goal is to liberate the learner from an oppressive construct of teacher versus student, a dichotomy analogous to colonizer and colonized. It is not enough for the student to analyze societal power structures and hierarchies, to merely recognize imbalance and inequity; critical theory pedagogy must also empower the learner to reflect and act on that reflection to challenge an oppressive status quo.

Critical consciousness

Critical university studies

Critical psychology

Critical criminology

Critical animal studies

Critical social work

Critical ethnography

Critical data studies

Critical environmental justice

Critical environmental justice applies critical theory to environmental justice.

Criticism

While critical theorists have often been called Marxist intellectuals, their tendency to denounce some Marxist concepts and to combine Marxian analysis with other sociological and philosophical traditions has resulted in accusations of revisionism by Orthodox Marxists and by Marxist–Leninist philosophers. Martin Jay has said that the first generation of critical theory is best understood not as promoting a specific philosophical agenda or ideology, but as "a gadfly of other systems".

Critical theory has been criticized for not offering any clear road map to political action (praxis), often explicitly repudiating any solutions. Those objections mostly apply to the first-generation Frankfurt School, while the issue of politics is addressed in a much more assertive way in contemporary theory. Another criticism of critical theory "is that it fails to provide rational standards by which it can show that it is superior to other theories of knowledge, science, or practice." Rex Gibson argues that critical theory suffers from being cliquish, conformist, elitist, immodest, anti-individualist, naive, too critical, and contradictory. Hughes and Hughes argue that Habermas' theory of ideal public discourse "says much about rational talkers talking, but very little about actors acting: Felt, perceptive, imaginative, bodily experience does not fit these theories". Some feminists argue that critical theory "can be as narrow and oppressive as the rationalization, bureaucratization, and cultures they seek to unmask and change."

Critical theory's language has been criticized as being too dense to understand, although "Counter arguments to these issues of language include claims that a call for clearer and more accessible language is anti-intellectual, a new 'language of possibility' is needed, and oppressed peoples can understand and contribute to new languages." Bruce Pardy, writing for the National Post, argued that any challenges to the "legitimacy [of critical theory] can be interpreted as a demonstration of their [critical theory's proponents'] thesis: the assertion of reason, logic and evidence is a manifestation of privilege and power. Thus, any challenger risks the stigma of a bigoted oppressor." Robert Danisch, writing for The Conversation, argued that critical theory, and the modern humanities more broadly, focus too much on criticizing the current world rather than trying to make a better world.
See also Modernism Antipositivism Critical military studies Cultural studies Information criticism Marxist cultural analysis Outline of critical theory Popular culture studies Outline of organizational theory Postcritique Lists List of critical theorists List of works in critical theory Journals Constellations Representations Critical Inquiry Telos Law and Critique References Footnotes Works cited Bibliography "Problematizing Global Knowledge." Theory, Culture & Society 23(2–3). 2006. . Calhoun, Craig. 1995. Critical Social Theory: Culture, History, and the Challenge of Difference. Blackwell. – A survey of and introduction to the current state of critical social theory. Charmaz, K. 1995. "Between positivism and postmodernism: Implications for methods." Studies in Symbolic Interaction 17:43–72. Conquergood, D. 1991. "Rethinking ethnography: Towards a critical cultural politics." Communication Monographs 58(2):179–94. . Corchia, Luca. 2010. La logica dei processi culturali. Jürgen Habermas tra filosofia e sociologia. Genova: Edizioni ECIG. . Dahms, Harry, ed. 2008. No Social Science Without Critical Theory, (Current Perspectives in Social Theory 25). Emerald/JAI. Gandler, Stefan. 2009. Fragmentos de Frankfurt. Ensayos sobre la Teoría crítica. México: 21st Century Publishers/Universidad Autónoma de Querétaro. . Geuss, Raymond. 1981. The Idea of a Critical Theory. Habermas and the Frankfurt School. Cambridge University Press. . Honneth, Axel. 2006. La société du mépris. Vers une nouvelle Théorie critique, La Découverte. . Horkheimer, Max. 1982. Critical Theory Selected Essays. New York: Continuum Publishing. Morgan, Marcia. 2012. Kierkegaard and Critical Theory. New York: Lexington Books. Rolling, James H. 2008. "Secular blasphemy: Utter(ed) transgressions against names and fathers in the postmodern era." Qualitative Inquiry 14(6):926–48. – An example of critical postmodern work. Sim, Stuart, and Borin Van Loon. 2001. Introducing Critical Theory. . – A short introductory volume with illustrations. Thomas, Jim. 1993. Doing Critical Ethnography. London: Sage. pp. 1–5 & 17–25. Tracy, S. J. 2000. "Becoming a character for commerce: Emotion labor, self subordination and discursive construction of identity in a total institution." Management Communication Quarterly 14(1):90–128. – An example of critical qualitative research. Willard, Charles Arthur. 1982. Argumentation and the Social Grounds of Knowledge. University of Alabama Press. — 1989. A Theory of Argumentation. University of Alabama Press. — 1996. Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. Chicago: University of Chicago Press. Chapter 9. Critical Theory External links Gerhardt, Christina. "Frankfurt School". The International Encyclopedia of Revolution and Protest. Ness, Immanuel (ed). Blackwell Publishing, 2009. Blackwell Reference Online. "Theory: Death Is Not the End" N+1 magazine's short history of academic Critical Theory. Critical Legal Thinking A Critical Legal Studies website which uses Critical Theory in an analysis of law and politics. L. Corchia, Jürgen Habermas. A Bibliography: works and studies (1952–2013), Pisa, Edizioni Il Campano – Arnus University Books, 2013, 606 pages. Sim, S.; Van Loon, B. (2009). Introducing Critical Theory: A Graphic Guide. Icon Books Ltd. Archival collections Guide to the Critical Theory Offprint Collection. 
Special Collections and Archives, The UC Irvine Libraries, Irvine, California.
Guide to the Critical Theory Institute Audio and Video Recordings, University of California, Irvine. Special Collections and Archives, The UC Irvine Libraries, Irvine, California.
University of California, Irvine, Critical Theory Institute Manuscript Materials. Special Collections and Archives, The UC Irvine Libraries, Irvine, California.
A Study of History
A Study of History is a 12-volume universal history by the British historian Arnold J. Toynbee, published from 1934 to 1961. It received enormous popular attention but according to historian Richard J. Evans, "enjoyed only a brief vogue before disappearing into the obscurity in which it has languished." Toynbee's goal was to trace the development and decay of 19 or 21 world civilizations in the historical record, applying his model to each of these civilizations, detailing the stages through which they all pass: genesis, growth, time of troubles, universal state, and disintegration.

The 19 (or 21) major civilizations, as Toynbee sees them, are: Egyptian, Andean, Sumerian, Babylonic, Hittite, Minoan, Indic, Hindu, Syriac, Hellenic, Western, Orthodox Christian (having two branches: the main or Byzantine body and the Russian branch), Far Eastern (having two branches: the main or Chinese body and the Japanese-Korean branch), Islamic (having two branches which later merged: Arabic and Iranic), Mayan, Mexican and Yucatec. Moreover, there are three "abortive civilizations" (Abortive Far Western Christian, Abortive Far Eastern Christian, Abortive Scandinavian) and five "arrested civilizations" (Polynesian, Eskimo, Nomadic, Ottoman, Spartan), for a total of 27 or 29.

Titles of the volumes

The 12-volume work contains more than 3 million words and about 7,000 pages, plus 412 pages of indices.

Publication of A Study of History

Vol I: Introduction: The Geneses of Civilizations, part one (Oxford University Press, 1934)
Vol II: The Geneses of Civilizations, part two (Oxford University Press, 1934)
Vol III: The Growths of Civilizations (Oxford University Press, 1934)
Vol IV: The Breakdowns of Civilizations (Oxford University Press, 1939)
Vol V: The Disintegrations of Civilizations, part one (Oxford University Press, 1939)
Vol VI: The Disintegrations of Civilizations, part two (Oxford University Press, 1939)
Vol VII: Universal States; Universal Churches (Oxford University Press, 1954) [as two volumes in paperback]
Vol VIII: Heroic Ages; Contacts between Civilizations in Space (Encounters between Contemporaries) (Oxford University Press, 1954)
Vol IX: Contacts between Civilizations in Time (Renaissances); Law and Freedom in History; The Prospects of the Western Civilization (Oxford University Press, 1954)
Vol X: The Inspirations of Historians; A Note on Chronology (Oxford University Press, 1954)
Vol XI: Historical Atlas and Gazetteer (Oxford University Press, 1959)
Vol XII: Reconsiderations (Oxford University Press, 1961)

Abridgements by D. C. Somervell:
A Study of History: Abridgement of Vols I–VI, with a preface by Toynbee (Oxford University Press, 1946)
A Study of History: Abridgement of Vols VII–X (Oxford University Press, 1957)
A Study of History: Abridgement of Vols I–X in One Volume, with new preface by Toynbee & new tables (Oxford Univ. Press, 1960)

Abridgement by the author and Jane Caplan:
A Study of History: The Abridged One-Volume Edition, with new foreword by Toynbee & a new chapter (Oxford University Press and Thames & Hudson, 1972)

Genesis and Growth

Toynbee argues that civilizations are born out of more primitive societies, not as the result of racial or environmental factors, but as a response to challenges, such as hard country, new ground, blows and pressures from other civilizations, and penalization. He argues that for civilizations to be born, the challenge must be a golden mean; that excessive challenge will crush the civilization, and too little challenge will cause it to stagnate.
He argues that civilizations continue to grow only when they meet one challenge only to be met by another, in a continuous cycle of "Challenge and Response". He argues that civilizations develop in different ways due to their different environments and different approaches to the challenges they face. He argues that growth is driven by "Creative Minorities": those who find solutions to the challenges, who inspire (rather than compel) others to follow their innovative lead. This is done through the "faculty of mimesis." Creative minorities find solutions to the challenges a civilization faces, while the great mass follow these solutions by imitation, solutions they otherwise would be incapable of discovering on their own. In 1939, Toynbee wrote, "The challenge of being called upon to create a political world-order, the framework for an economic world-order … now confronts our Modern Western society." Breakdown and Disintegration Toynbee does not see the breakdown of civilizations as caused by loss of control over the physical environment, by loss of control over the human environment, or by attacks from outside. Rather, it comes from the deterioration of the "Creative Minority", which eventually ceases to be creative and degenerates into merely a "Dominant Minority". He argues that creative minorities deteriorate due to a worship of their "former self," by which they become prideful and fail adequately to address the next challenge they face. Results of the breakdown The final breakdown results in "positive acts of creation;" the dominant minority seeks to create a Universal state to preserve its power and influence, and the internal proletariat seeks to create a Universal church to preserve its spiritual values and cultural norms. Universal state He argues that the ultimate sign a civilization has broken down is when the dominant minority forms a "universal state", which stifles political creativity within the existing social order. The classic example of this is the Roman Empire, though many other imperial regimes are cited as examples. Toynbee writes: "First the Dominant Minority attempts to hold by force—against all right and reason—a position of inherited privilege which it has ceased to merit; and then the Proletariat repays injustice with resentment, fear with hate, and violence with violence. Yet the whole movement ends in positive acts of creation. The Dominant Minority creates a universal state, the Internal Proletariat a universal church, and the External Proletariat a bevy of barbarian war-bands." Universal church Toynbee developed his concept of an "internal proletariat" and an "external proletariat" to describe quite different opposition groups within and outside the frontiers of a civilization. These groups, however, find themselves bound to the fate of the civilization. During its decline and disintegration, they are increasingly disenfranchised or alienated, and thus lose their immediate sense of loyalty or of obligation. Nonetheless an "internal proletariat," untrusting of the dominant minority, may form a "universal church" which survives the civilization's demise, co-opting the useful structures such as marriage laws of the earlier time while creating a new philosophical or religious pattern for the next stage of history. Before the process of disintegration, the dominant minority had held the internal proletariat in subjugation within the confines of the civilization, causing these oppressed to grow bitter. 
The external proletariat, living outside the civilization in poverty and chaos, grows envious. Then, in the social stress resulting from the failure of the civilization, the bitterness and envy increase markedly. Toynbee argues that as civilizations decay, there is a "schism" within the society. In this environment of discord, people resort to archaism (idealization of the past), futurism (idealization of the future), detachment (removal of oneself from the realities of a decaying world), and transcendence (meeting the challenges of the decaying civilization with new insight, e.g., by following a new religion). From among members of an "internal proletariat" who transcend the social decay a "church" may arise. Such an association would contain new and stronger spiritual insights, around which a subsequent civilization may begin to form. Toynbee here uses the word "church" in a general sense, e.g., to refer to a collective spiritual bond found in common worship, or the unity found in an agreed social order. Predictions It remains to be seen what will come of the four remaining civilizations of the 21st century: Western civilization, Islamic society, Hindu society, and the Far East. Toynbee argues two possibilities: they might all merge with Western Civilization, or Western civilization might develop a 'Universal State' after its 'Time of Troubles', decay, and die. List of civilizations The following table lists the 23 civilizations identified by Toynbee in vol. Vii. This table does not include what Toynbee terms primitive societies, arrested civilizations, or abortive civilizations. Civilizations are shown in boldface. Toynbee's "Universal Churches" are written in italic and are chronologically located between second- and third- generation civilizations, as is described in volume VII. Reception Historian Carroll Quigley expanded upon Toynbee's notion of civilizational collapse in The Evolution of Civilizations (1961, 1979). He argued that societal disintegration involves the metamorphosis of social instruments, set up to meet actual needs, into institutions, which serve their own interest at the expense of social needs. Social scientist Ashley Montagu assembled 29 other historians' articles to form a symposium on Toynbee's A Study of History, published as Toynbee and History: Critical Essays and Reviews. The book includes three of Toynbee's own essays: "What I am Trying to Do" (originally published in International Affairs vol. 31, 1955); What the Book is For: How the Book Took Shape (a pamphlet written upon completion of the final volumes of A Study of History) and a comment written in response to the articles by Edward Fiess and Pieter Geyl (originally published in Journal of the History of Ideas, vol. 16, 1955.) David Wilkinson suggests that there is an even larger unit than civilisation. Using the ideas drawn from "World Systems Theory" he suggests that since at least 1500 BC that there was a connection established between a number of formerly separate civilisations to form a single interacting "Central Civilisation", which expanded to include formerly separate civilisations such as India, the Far East, and eventually Western Europe and the Americas into a single "World System". In some ways, it resembles what William H. McNeill calls the "Closure of the Eurasian Ecumene, 500 B.C.-200 A.D." Legacy After 1960, Toynbee's ideas faded both in academia and the media, to the point of seldom being cited today. 
Toynbee's approach to history, his style of civilizational analysis, faced skepticism from mainstream historians who thought it put an undue emphasis on the divine, which led to his academic reputation declining, though for a time, Toynbee's Study remained popular outside academia. Nevertheless, interest revived decades later with the publication of The Clash of Civilizations (1997) by political scientist Samuel P. Huntington. Huntington viewed human history as broadly the history of civilizations and posited that the world after the end of the Cold War will be a multi-polar one of competing major civilizations divided by "fault lines." In popular culture, Toynbee's theories of historical cycles and civilisational collapse are said to have been a major inspiration for Isaac Asimov's seminal science-fiction novels, the Foundation series. Jews and Armenians as a "fossil society" In the introduction of his work Toynbee refers to a number of "fossilized relics" of societies, among others he mentions the Armenians, who according to Toynbee played a similar role to that of the Jews in the world of Islam. "...and yet another Syriac remnant, the Armenian Gregorian Monophysites, have played much the same part in the World of Islam." Volume 1 of the book, written in the 1930s, contains a discussion of Jewish culture which begins with the sentence "There remains the case where victims of religious discrimination represent an extinct society which only survives as a fossil. .... by far the most notable is one of the fossil remnants of the Syriac Society, the Jews." That sentence has been the subject of controversy, and some reviewers have interpreted the line as antisemitic (notably after 1945). In later printings, a footnote was appended which read "Mr. Toynbee wrote this part of the book before the Nazi persecution of the Jews opened a new and terrible chapter of the story...". The subject is extensively debated with input from critics in Vol XII, Reconsiderations, published in 1961. References Further reading Costello, Paul. World Historians and Their Goals: Twentieth-Century Answers to Modernism (1993). Compares Toynbee with H. G. Wells, Oswald Spengler, Pitirim Sorokin, Christopher Dawson, Lewis Mumford, and William H. McNeill Hutton, Alexander. "‘A belated return for Christ?’: the reception of Arnold J. Toynbee's A Study of History in a British context, 1934–1961." European Review of History 21.3 (2014): 405-424. Lang, Michael. "Globalization and Global History in Toynbee," Journal of World History 22#4 Dec 2011 pp. 747–783 in project MUSE McIntire, C. T. and Marvin Perry, eds. Toynbee: Reappraisals (1989) 254pp McNeill, William H. Arnold J. Toynbee: a life (Oxford UP, 1989). The standard scholarly biography. Montagu, Ashley M. F., ed. Toynbee and History: Critical Essays and Reviews (1956) online edition Toynbee, Arnold J. A Study of History abridged edition by D. C. Somervell (2 vol 1947); 617pp online edition of vol 1, covering vol 1–6 of the original; A Study of History online edition External links A Study of History https://archive.org/details/in.ernet.dli.2015.12118/page/n5 (first volume of A Study of History) 1934 non-fiction books History books about civilization English-language books English non-fiction books Universal history books Book series introduced in 1934
Interdisciplinarity
Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several fields like sociology, anthropology, psychology, economics, etc. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station or mobile phone or other project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings. The term interdisciplinary is applied within education and training pedagogies to describe studies that use methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming requires understanding of diverse disciplines to solve complex problems. Interdisciplinary may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields. The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. Interdisciplinary education fosters cognitive flexibility and prepares students to tackle complex, real-world problems by integrating knowledge from multiple fields. This approach emphasizes active learning, critical thinking, and problem-solving skills, equipping students with the adaptability needed in an increasingly interconnected world. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics. Development Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth century terms, the concept has historical antecedents, most notably Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, material science, logistics and several other disciplines. Any broadminded humanist project involves interdisciplinarity, and history shows a crowd of cases, as seventeenth-century Leibniz's task to create a system of universal justice, which required linguistics, economics, management, ethics, law philosophy, politics, and even sinology. 
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, combining molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres; often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies. At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study—that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen in complementary relation to one another. Barriers Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differences of perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines who may associate quantitative approaches with difficulty grasp the broader dimensions of a problem and lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from those disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; interdisciplinary researchers may experience difficulty getting funding for their research. 
In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure. Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as a competition for diminishing funds. Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as organizational and social entities concerned with education, they are practically facing complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles. Interdisciplinary studies and studies of interdisciplinarity An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009). 
The US's research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but is closed as of 1 September 2014, the result of administrative decisions at the University of North Texas. An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge. In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, with the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'. Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis—that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions. While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works—and does not—in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account. Politics of interdisciplinary studies Since 1998, there has been an ascendancy in the value of interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005 according to data from the National Center of Educational Statistics (NECS). In addition, educational leaders from the Boyer Commission to Carnegie's President Vartan Gregorian to Alan I. 
Leshner, CEO of the American Association for the Advancement of Science have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than single-researcher, single-discipline ones. At the same time, many thriving longstanding bachelor's in interdisciplinary studies programs in existence for 30 or more years, have been closed down, in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others such as the Department of Interdisciplinary Studies at Appalachian State University, and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry. This is due to threat perceptions seemingly based on the ascendancy of interdisciplinary studies against traditional academia. Examples Communication science: Communication studies takes up theories, models, concepts, etc. of other, independent disciplines such as sociology, political science and economics and thus decisively develops them. Environmental science: Environmental science is an interdisciplinary earth science aimed at addressing environmental issues such as global warming and pollution, and involves the use of a wide range of scientific disciplines including geology, chemistry, physics, ecology, and oceanography. Faculty members of environmental programs often collaborate in interdisciplinary teams to solve complex global environmental problems. Those who study areas of environmental policy such as environmental law, sustainability, and environmental justice, may also seek knowledge in the environmental sciences to better develop their expertise and understanding in their fields. Knowledge management: Knowledge management discipline exists as a cluster of divergent schools of thought under an overarching knowledge management umbrella by building on works in computer science, economics, human resource management, information systems, organizational behavior, philosophy, psychology, and strategic management. Liberal arts education: A select realm of disciplines that cut across the humanities, social sciences, and hard sciences, initially intended to provide a well-rounded education. Several graduate programs exist in some form of Master of Arts in Liberal Studies to continue to offer this interdisciplinary course of study. Materials science: Field that combines the scientific and engineering aspects of materials, particularly solids. It covers the design, discovery and application of new materials by incorporating elements of physics, chemistry, and engineering. Permaculture: A holistic design science that provides a framework for making design decisions in any sphere of human endeavor, but especially in land use and resource security. Provenance research: Interdisciplinary research comes into play when clarifying the path of artworks into public and private art collections and also in relation to human remains in natural history collections. 
Sports science: Sport science is an interdisciplinary science that researches the problems and manifestations in the field of sport and movement in cooperation with a number of other sciences, such as sociology, ethics, biology, medicine, biomechanics, and pedagogy.

Transport sciences: Transport sciences deal with the relevant problems and events of the world of transport, that is, the movement of people, goods, and messages, and cooperate with the specialised legal, ecological, technical, psychological, and pedagogical disciplines (Hendrik Ammoser, Mirko Hoppe: Glossary of Transport and Transport Sciences, Discussion Papers from the Institute of Economics and Transport, Technische Universität Dresden, 2006).

Venture research: Venture research is an interdisciplinary research area located in the human sciences that deals with the conscious entering into and experiencing of borderline situations. For this purpose, findings from evolutionary theory, cultural anthropology, the social sciences, behavioral research, differential psychology, ethics, and pedagogy are cooperatively processed and evaluated (Siegbert A. Warwitz: Vom Sinn des Wagens. Why people take on dangerous challenges. In: German Alpine Association (ed.): Berg 2006. Tyrolia, Munich-Innsbruck-Bolzano, pp. 96–111).

Historical examples

There are many examples of a particular idea arising, almost in the same period, in different disciplines. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective) to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in painting (with cubism), physics, poetry, communication, and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to the era shaped by the instant speed of electricity, which brought simultaneity.

Efforts to simplify and defend the concept

An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity. In turn, the interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: the number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration (a toy scoring sketch follows the quoted list below). Interdisciplinary knowledge and research are important because: "Creativity often requires interdisciplinary knowledge. Immigrants often make important contributions to their new field. Disciplinarians often commit errors which can be best detected by people familiar with two or more disciplines. Some worthwhile topics of research fall in the interstices among the traditional disciplines. Many intellectual, social, and practical problems require interdisciplinary approaches. Interdisciplinary knowledge and research serve to remind us of the unity-of-knowledge ideal. Interdisciplinarians enjoy greater flexibility in their research. More so than narrow disciplinarians, interdisciplinarians often treat themselves to the intellectual equivalent of traveling in new lands. Interdisciplinarians may help breach communication gaps in the modern academy, thereby helping to mobilize its enormous intellectual resources in the cause of greater social rationality and justice. By bridging fragmented disciplines, interdisciplinarians might play a role in the defense of academic freedom."
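The four-variable ranking just described can be read as a simple composite score. The sketch below is only a hypothetical illustration of that reading, not a metric proposed in the article: the class, the 0–1 scales, the equal weights, and the example projects are all assumptions introduced here for demonstration.

```python
# Hypothetical sketch: combining the four "richness" variables into one score.
# Scales, weights, and example data are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    n_disciplines: int   # number of disciplines involved
    distance: float      # 0 = neighboring fields .. 1 = very distant fields
    novelty: float       # 0 = common pairing .. 1 = unprecedented pairing
    integration: float   # 0 = merely juxtaposed .. 1 = fully synthesized

def richness(x: Instance, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted composite: more disciplines, greater distance, higher novelty,
    and deeper integration all raise the score."""
    w_n, w_d, w_nov, w_int = weights
    # Map the discipline count onto 0..1 (2 disciplines -> 0, 6 or more -> 1).
    n_scaled = min(max((x.n_disciplines - 2) / 4, 0.0), 1.0)
    return w_n * n_scaled + w_d * x.distance + w_nov * x.novelty + w_int * x.integration

projects = [
    Instance("biochemistry module", 2, 0.2, 0.1, 0.8),
    Instance("neuro-aesthetics seminar", 3, 0.8, 0.7, 0.5),
]
for p in sorted(projects, key=richness, reverse=True):
    print(f"{p.name}: {richness(p):.2f}")
```

Any real ranking would have to justify how the "distance" and "novelty" judgments are made; the point here is only to show how the four variables can be combined into a single ordering.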
Quotations

See also
Commensurability (philosophy of science)
Double degree
Encyclopedism
Holism
Holism in science
Integrative learning
Interdiscipline
Interdisciplinary arts
Interdisciplinary teaching
Interprofessional education
Meta-functional expertise
Methodology
Polymath
Science of team science
Social ecological model
Science and technology studies (STS)
Synoptic philosophy
Systems theory
Thematic learning
Periodic table of human sciences in Tinbergen's four questions
Transdisciplinarity

References

Further reading
Association for Interdisciplinary Studies
Center for the Study of Interdisciplinarity
Centre for Interdisciplinary Research in the Arts (University of Manchester)
College for Interdisciplinary Studies, University of British Columbia, Vancouver, British Columbia, Canada
Frank, Roberta (1988). "'Interdisciplinarity': The First Half Century". Issues in Integrative Studies 6: 139–151.
Frodeman, R., Klein, J.T., and Mitcham, C. (2010). Oxford Handbook of Interdisciplinarity. Oxford University Press.
The Evergreen State College, Olympia, Washington
Gram Vikas (2007). Annual Report, p. 19.
Hang Seng Centre for Cognitive Studies
Indiresan, P.V. (1990). Managing Development: Decentralisation, Geographical Socialism and Urban Replication. India: Sage.
Interdisciplinary Arts Department, Columbia College Chicago
Interdisciplinarity and tenure
Interdisciplinary Studies Project, Harvard University School of Education, Project Zero
Klein, Julie Thompson (1996). Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities. University Press of Virginia.
Klein, Julie Thompson (2006). "Resources for interdisciplinary studies". Change (March/April): 52–58.
Klein, Julie Thompson and Thorsten Philipp (2023). "Interdisciplinarity". In Handbook Transdisciplinary Learning, eds. Thorsten Philipp and Tobias Schmohl, 195–204. Bielefeld: transcript. doi: 10.14361/9783839463475-021.
Kockelmans, Joseph J., ed. (1979). Interdisciplinarity and Higher Education. The Pennsylvania State University Press.
Ma, Yifang, Roberta Sinatra, and Michael Szell (November 2018). "Interdisciplinarity: A Nobel Opportunity".
Medicus, Gerhard (2017). Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB.
Moran, Joe (2002). Interdisciplinarity.
Morson, Gary Saul and Morton O. Schapiro (2017). Cents and Sensibility: What Economics Can Learn from the Humanities. Princeton University Press.
NYU Gallatin School of Individualized Study, New York, NY
Poverty Action Lab
Rhoten, D. (2003). A multi-method analysis of the social and technical conditions for interdisciplinary collaboration.
School of Social Ecology at the University of California, Irvine
Siskin, L.S. and Little, J.W. (1995). The Subjects in Question. Teachers College Press. (On the departmental organization of high schools and efforts to change it.)
Stiglitz, Joseph (2002). Globalisation and its Discontents. United States of America: W.W. Norton and Company.
Sumner, A. and M. Tribe (2008). International Development Studies: Theories and Methods in Research and Practice. London: Sage.
Thorbecke, Eric (2006). "The Evolution of the Development Doctrine, 1950–2005". UNU-WIDER Research Paper No. 2006/155. United Nations University, World Institute for Development Economics Research.
Trans- & inter-disciplinary science approaches – a guide to on-line resources on integration and trans- and inter-disciplinary approaches.
Truman State University's Interdisciplinary Studies Program
Weingart, Peter and Nico Stehr, eds. (2000). Practicing Interdisciplinarity. University of Toronto Press.

External links
Association for Interdisciplinary Studies
National Science Foundation Workshop Report: Interdisciplinary Collaboration in Innovative Science and Engineering Fields
Rethinking Interdisciplinarity online conference, organized by the Institut Nicod, CNRS, Paris
Center for the Study of Interdisciplinarity at the University of North Texas
Labyrinthe. Atelier interdisciplinaire, a journal (in French), with a special issue on La Fin des Disciplines?
Rupkatha Journal on Interdisciplinary Studies in Humanities: An Online Open Access E-Journal, publishing articles on a number of areas
Article about interdisciplinary modeling (in French with an English abstract)
Wolf, Dieter. Unity of Knowledge, an interdisciplinary project
Soka University of America has no disciplinary departments and emphasizes interdisciplinary concentrations in the Humanities, Social and Behavioral Sciences, International Studies, and Environmental Studies.
SystemsX.ch – The Swiss Initiative in Systems Biology
Tackling Your Inner 5-Year-Old: Saving the world requires an interdisciplinary perspective
Field research
Field research, field studies, or fieldwork is the collection of raw data outside a laboratory, library, or workplace setting. The approaches and methods used in field research vary across disciplines. For example, biologists who conduct field research may simply observe animals interacting with their environments, whereas social scientists conducting field research may interview or observe people in their natural environments to learn their languages, folklore, and social structures.

Field research involves a range of well-defined, although variable, methods: informal interviews, direct observation, participation in the life of the group, collective discussions, analyses of personal documents produced within the group, self-analysis, results from activities undertaken off- or on-line, and life histories. Although the method generally is characterized as qualitative research, it may (and often does) include quantitative dimensions.

History

Field research has a long history. Cultural anthropologists have long used field research to study other cultures. Although the cultures do not have to be different, this has often been the case in the past with the study of so-called primitive cultures, and even in sociology the cultural differences have been ones of class. The work is done in "fields", that is, circumscribed areas of study which have been the subject of social research. Fields could be education, industrial settings, or Amazonian rain forests. Field research may be conducted by ethologists such as Jane Goodall. Alfred Radcliffe-Brown (1910) and Bronisław Malinowski (1922) were early anthropologists who set the models for future work.

Conducting field research

The quality of results obtained from field research depends on the data gathered in the field. The data, in turn, depend upon the field worker, their level of involvement, and their ability to see and visualize things that other individuals visiting the area of study may fail to notice. The more open researchers are to new ideas, concepts, and things which they may not have seen in their own culture, the better will be the absorption of those ideas. Better grasping of such material means a better understanding of the forces of culture operating in the area and the ways they modify the lives of the people under study. Social scientists (i.e. anthropologists, social psychologists, etc.) have always been taught to be free from ethnocentrism (i.e. the belief in the superiority of one's own ethnic group) when conducting any type of field research. When humans themselves are the subject of study, protocols must be devised to reduce the risk of observer bias and the acquisition of too theoretical or idealized explanations of the workings of a culture. Participant observation, data collection, and survey research are examples of field research methods, in contrast to what is often called experimental or lab research.

Field notes

When conducting field research, keeping an ethnographic record is essential to the process. Field notes are a key part of the ethnographic record. The process of taking field notes begins as the researcher participates in local scenes and experiences in order to make observations that will later be written up. The field researcher tries first to take mental notes of certain details in order that they can be written down later.

Kinds of field notes

Field note chart

Interviewing

Another method of data collection is interviewing, specifically interviewing in the qualitative paradigm.
Interviewing can be done in different formats, depending on individual researcher preferences, the research purpose, and the research question asked.

Analyzing data

In qualitative research, there are many ways of analyzing data gathered in the field. Two of the most common methods of data analysis are thematic analysis and narrative analysis. As mentioned before, the type of analysis a researcher decides to use depends on the research question asked, the researcher's field, and the researcher's personal method of choice.

Field research across different disciplines

Anthropology

In anthropology, field research is organized so as to produce a kind of writing called ethnography. Ethnography can refer to both a methodology and a product of research, namely a monograph or book. Ethnography is a grounded, inductive method that heavily relies on participant-observation. Participant observation is a structured type of research strategy. It is a widely used methodology in many disciplines, particularly cultural anthropology, but also sociology, communication studies, and social psychology. Its aim is to gain a close and intimate familiarity with a given group of individuals (such as a religious, occupational, or subcultural group, or a particular community) and their practices through an intensive involvement with people in their natural environment, usually over an extended period of time.

The method originated in the field work of social anthropologists, especially the students of Franz Boas in the United States, and in the urban research of the Chicago School of sociology. Max Gluckman noted that Bronisław Malinowski significantly developed the idea of fieldwork, but that it originated with Alfred Cort Haddon in England and Franz Boas in the United States. Robert G. Burgess concluded that "it is Malinowski who is usually credited with being the originator of intensive anthropological field research".

Anthropological fieldwork uses an array of methods and approaches that include, but are not limited to: participant observation, structured and unstructured interviews, archival research, collecting demographic information from the community the anthropologist is studying, and data analysis. Traditional participant observation is usually undertaken over an extended period of time, ranging from several months to many years, and even generations. An extended research time period means that the researcher is able to obtain more detailed and accurate information about the individuals, community, and/or population under study. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time. A strength of observation and interaction over extended periods of time is that researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior.

Archaeology

Field research lies at the heart of archaeological research. It may include the undertaking of broad area surveys (including aerial surveys); of more localised site surveys (including photographic, drawn, and geophysical surveys, and exercises such as fieldwalking); and of excavation.
Biology and ecology

In biology, field research typically involves the study of free-living wild animals in which the subjects are observed in their natural habitat, without changing, harming, or materially altering the setting or behavior of the animals under study. Field research is an indispensable part of biological science. Animal migration tracking (including bird ringing/banding) is a frequently used field technique, allowing field scientists to track migration patterns and routes, and animal longevity in the wild. Knowledge about animal migrations is essential to accurately determining the size and location of protected areas. Field research also can involve study of other kingdoms of life, such as Plantae, fungi, and microbes, as well as ecological interactions among species. Field courses have been shown to be effective at generating long-term interest in and commitment to STEM among undergraduate students, but the number of field courses has not kept pace with demand, and cost has been a barrier to student participation.

Consumer research

In applied business disciplines, such as marketing, fieldwork is a standard research method both for commercial purposes, like market research, and for academic research. For instance, researchers have used ethnography, netnography, and in-depth interviews within Consumer Culture Theory, a field that aims to understand the particularities of contemporary consumption. Several academic journals, such as Consumption Markets & Culture and the Journal of Consumer Research, regularly publish qualitative research studies that use fieldwork.

Earth and atmospheric sciences

In geology, fieldwork is considered an essential part of training and remains an important component of many research projects. In other disciplines of the Earth and atmospheric sciences, field research refers to field experiments (such as the VORTEX projects) utilizing in situ instruments. Permanent observation networks are also maintained for other uses but are not necessarily considered field research, nor are permanent remote sensing installations.

Economics

The objective of field research in economics is to get beneath the surface, to contrast observed behaviour with the prevailing understanding of a process, and to relate language and description to behavior (Deirdre McCloskey, 1985). The 2009 Nobel Prize winners in Economics, Elinor Ostrom and Oliver Williamson, have advocated mixed methods and complex approaches in economics and hinted implicitly at the relevance of field research approaches in economics. In a recent interview, Oliver Williamson and Elinor Ostrom discussed the importance of examining institutional contexts when performing economic analyses. Both Ostrom and Williamson agree that "top-down" panaceas or "cookie cutter" approaches to policy problems don't work. They believe that policymakers need to give local people a chance to shape the systems used to allocate resources and resolve disputes. Sometimes, Ostrom points out, local solutions can be the most efficient and effective options. This is a point of view that fits very well with anthropological research, which has for some time shown us the logic of local systems of knowledge, and the damage that can be done when "solutions" to problems are imposed from outside or above without adequate consultation. Elinor Ostrom, for example, combines field case studies and experimental lab work in her research.
Using this combination, she contested longstanding assumptions about the possibility that groups of people could cooperate to solve common pool problems, as opposed to being regulated by the state or governed by the market.

Edward J. Nell argued in 1998 that there are two types of field research in economics. One kind can give us a carefully drawn picture of institutions and practices, general in that it applies to all activities of a certain kind in a particular society or social setting, but still specialized to that society or setting. Although institutions and practices are intangibles, such a picture will be objective, a matter of fact, independent of the state of mind of the particular agents reported on. Approaching the economy from a different angle, another kind of fieldwork can give us a picture of the state of mind of economic agents (their true motivations, their beliefs, their state of knowledge, their expectations, their preferences and values).

Business use of field research is an applied form of anthropology and is as likely to be advised by sociologists or statisticians in the case of surveys. Consumer marketing field research is the primary marketing technique used by businesses to research their target market.

Ethnomusicology

Fieldwork in ethnomusicology has changed greatly over time. Alan P. Merriam cites the evolution of fieldwork as a constant interplay between the musicological and ethnological roots of the discipline. Before the 1950s, before ethnomusicology resembled what it is today, fieldwork and research were considered separate tasks. Scholars focused on analyzing music outside of its context through a scientific lens, drawing from the field of musicology. Notable scholars include Carl Stumpf and Erich von Hornbostel, who started as Stumpf's assistant. They are known for making countless recordings and establishing a library of music to be analyzed by other scholars. Methodologies began to shift in the early 20th century. George Herzog, an anthropologist and ethnomusicologist, published a seminal paper titled "Plains Ghost Dance and Great Basin Music", reflecting the increased importance of fieldwork through his extended residency in the Great Basin and his attention to cultural contexts. Herzog also raised the question of how the formal qualities of the music he was studying demonstrated the social function of the music itself. Ethnomusicology today relies heavily on the relationship between the researcher and their teachers and consultants. Many ethnomusicologists have assumed the role of student in order to fully learn an instrument and its role in society. Research in the discipline has grown to consider music as a cultural product that cannot be understood without consideration of its context.

Law

Legal researchers conduct field research to understand how legal systems work in practice. Social, economic, cultural, and other factors influence how legal processes, institutions, and the law work (or do not work).

Management

Mintzberg played a crucial role in the popularization of field research in management. The tremendous amount of work that Mintzberg put into the findings earned him the title of leader of a new school of management, the descriptive school, as opposed to the prescriptive and normative schools that preceded his work. The earlier schools of thought, deriving from Taylor, Henri Fayol, Lyndall Urwick, Herbert A. Simon, and others, endeavored to prescribe and expound norms to show what managers must or should do.
With the arrival of Mintzberg, the question was no longer what must or should be done, but what a manager actually does during the day. More recently, in his 2004 book Managers Not MBAs, Mintzberg examined what he believes to be wrong with management education today. Aktouf (2006, p. 198) summed up Mintzberg's observations about what takes place in the field: "First, the manager's job is not ordered, continuous, and sequential, nor is it uniform or homogeneous. On the contrary, it is fragmented, irregular, choppy, extremely changeable and variable. This work is also marked by brevity: no sooner has a manager finished one activity than he or she is called upon to jump to another, and this pattern continues nonstop. Second, the manager's daily work is not a series of self-initiated, willful actions transformed into decisions, after examining the circumstances. Rather, it is an unbroken series of reactions to all sorts of requests that come from all around the manager, from both the internal and external environments. Third, the manager deals with the same issues several times, for short periods of time; he or she is far from the traditional image of the individual who deals with one problem at a time, in a calm and orderly fashion. Fourth, the manager acts as a focal point, an interface, or an intersection between several series of actors in the organization: external and internal environments, collaborators, partners, superiors, subordinates, colleagues, and so forth. He or she must constantly ensure, achieve, or facilitate interactions between all these categories of actors to allow the firm to function smoothly."

Public health

In public health, the term field research refers to epidemiology, or the study of epidemics, through the gathering of data about the epidemic (such as the pathogen and vector(s), as well as social or sexual contacts, depending upon the situation).

Sociology

Pierre Bourdieu played a crucial role in popularizing fieldwork in sociology. During the Algerian War, in 1958–1962, Bourdieu undertook ethnographic research into the clash through a study of the Kabyle people (a subgroup of the Berbers), which provided the groundwork for his anthropological reputation. His first book, Sociologie de l'Algérie (The Algerians), was successful in France and published in America in 1962. A follow-up, Algeria 1960: The Disenchantment of the World: The Sense of Honour: The Kabyle House or the World Reversed: Essays, published in English in 1979 by Cambridge University Press, established him as a significant figure in the field of ethnology and a pioneering advocate of more intensive fieldwork in the social sciences. The book was based on his decade of work as a participant-observer in Algerian society. One of the outstanding qualities of his work has been his innovative combination of different methods and research strategies, as well as his analytical skill in interpreting the obtained data.

Throughout his career, Bourdieu sought to connect his theoretical ideas with empirical research grounded in everyday life. His work can be seen as a sociology of culture, which Bourdieu labeled a "theory of practice". His contributions to sociology were both empirical and theoretical. His conceptual apparatus is based on three key terms: habitus, capital, and field. Furthermore, Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate.
Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic—a practical sense—and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field).

Bourdieu's anthropological work was focused on the analysis of the mechanisms of reproduction of social hierarchies. Bourdieu criticized the primacy given to economic factors, and stressed that the capacity of social actors to actively impose and engage their cultural productions and symbolic systems plays an essential role in the reproduction of social structures of domination. Bourdieu's empirical work played a crucial role in the popularization of correspondence analysis and particularly multiple correspondence analysis. Bourdieu held that these geometric techniques of data analysis are, like his sociology, inherently relational. In the preface to his book The Craft of Sociology, Bourdieu argued: "I use Correspondence Analysis very much, because I think that it is essentially a relational procedure whose philosophy fully expresses what in my view constitutes social reality. It is a procedure that 'thinks' in relations, as I try to do it with the concept of field."
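To make the "relational" character of these techniques concrete, the sketch below performs a plain, single-table correspondence analysis with NumPy. It is a minimal illustration of the standard textbook formulation, not Bourdieu's own procedure or software, and the toy contingency table of education level versus preferred musical genre is invented for the example.

```python
# Minimal correspondence analysis (CA) of a two-way contingency table.
# Illustrative sketch only; the toy data below are invented.
import numpy as np

def correspondence_analysis(N):
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                           # correspondence matrix
    r = P.sum(axis=1)                         # row masses
    c = P.sum(axis=0)                         # column masses
    # Standardized residuals measure departure from row/column independence.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]     # principal row coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]  # principal column coordinates
    return rows, cols, sv ** 2                # sv**2 are the principal inertias

# Toy table: counts of preferred genre (columns: folk, pop, classical)
# by education level (rows: primary, secondary, tertiary). Invented numbers.
table = [[30, 15, 5],
         [20, 30, 10],
         [5, 20, 40]]
rows, cols, inertias = correspondence_analysis(table)
print("row coordinates:\n", rows.round(2))
print("column coordinates:\n", cols.round(2))
```

Plotting the first two columns of the row and column coordinates on the same axes gives the familiar CA map, in which row and column categories that co-occur more often than independence predicts appear near one another.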
One of the classic ethnographies in sociology is the book Ain't No Makin' It: Aspirations & Attainment in a Low-Income Neighborhood by Jay MacLeod. The study addresses the reproduction of social inequality among low-income male teenagers. The researcher spent time studying two groups of teenagers in a housing project in a Northeastern city of the United States. The study concludes that three different levels of analysis play their part in the reproduction of social inequality: the individual, the cultural, and the structural.

An additional perspective in sociology is interactionism. This point of view focuses on understanding people's actions based on their experience of the world around them. Similar to Bourdieu's work, this perspective gathers statements, observations, and facts from real-world situations to create more robust research outcomes.

Notable field-workers

In anthropology
Napoleon Chagnon - ethnographer of the Yanomamö people of the Amazon
Georg Forster - ethnographer (1772–1775) to Captain James Cook
George M. Foster
Clifford Geertz
Alfred Cort Haddon
Claude Lévi-Strauss
Bronislaw Malinowski
Margaret Mead
Alfred Reginald Radcliffe-Brown
W.H.R. Rivers
Renato Rosaldo
James C. Scott
Colin Turnbull
Victor Turner

In sociology
William Foote Whyte
Erving Goffman
Pierre Bourdieu
Harriet Martineau

In management
Henry Mintzberg

In economics
Truman Bewley
Alan Blinder
Trygve Haavelmo
John Johnston
Lawrence Klein
Wassily Leontief
Edward J. Nell
Robert M. Townsend

In music
Alan Lomax
John Peel (with his Peel Sessions)

See also
Citizen science
Empirical research
Exploration
Observational study
Participant observation
Public Health Advisor
Wildlife observation
Market research
Usability
Industrial design
Requirements analysis

References

Further reading
Mason, Peter (2013). "Scientists and Scholars in the Field. Studies in the History of Fieldwork and Expeditions". Journal of the History of Collections 25 (November): 428–430.
Robben, Antonius C.G.M. and Jeffrey A. Sluka, eds. (2012). Ethnographic Fieldwork: An Anthropological Reader. Oxford: Wiley-Blackwell.
Nelson, Katie (2019). "Doing Fieldwork: Methods in Cultural Anthropology". In Perspectives: An Open Invitation to Cultural Anthropology, 2nd edition, edited by Nina Brown, Thomas McIlwraith, and Laura Tubelle de González. Arlington: American Anthropological Association. pp. 45–69.

External links
Modernism
Modernism was an early 20th-century movement in literature, visual arts, and music that emphasized experimentation, abstraction, and subjective experience. Philosophy, politics, architecture, and social issues were all aspects of this movement. Modernism centered around beliefs in a "growing alienation" from prevailing "morality, optimism, and convention" and a desire to change how "human beings in a society interact and live together". The modernist movement emerged during the late 19th century in response to significant changes in Western culture, including secularization and the growing influence of science. It is characterized by a self-conscious rejection of tradition and the search for newer means of cultural expression. Modernism was influenced by widespread technological innovation, industrialization, and urbanization, as well as the cultural and geopolitical shifts that occurred after World War I. Artistic movements and techniques associated with modernism include abstract art, literary stream-of-consciousness, cinematic montage, musical atonality and twelve-tonality, modernist architecture, and urban planning. Modernism took a critical stance towards the Enlightenment concept of rationalism. The movement also rejected the concept of absolute originality — the idea of "creation from nothingness" — upheld in the 19th century by both realism and Romanticism, replacing it with techniques of collage, reprise, incorporation, rewriting, recapitulation, revision, and parody. Another feature of modernism was reflexivity about artistic and social convention, which led to experimentation highlighting how works of art are made as well as the material from which they are created. Debate about the timeline of modernism continues, with some scholars arguing that it evolved into late modernism or high modernism. Postmodernism, meanwhile, rejects many of the principles of modernism. Overview and definition Modernism was a cultural movement that impacted the arts as well as the broader Zeitgeist. It is commonly described as a system of thought and behavior marked by self-consciousness or self-reference, prevalent within the avant-garde of various arts and disciplines. It is also often perceived, especially in the West, as a socially progressive movement that affirms the power of human beings to create, improve, and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology. From this perspective, modernism encourages the re-examination of every aspect of existence. Modernists analyze topics to find the ones they believe to be holding back progress, replacing them with new ways of reaching the same end. According to historian Roger Griffin, modernism can be defined as a broad cultural, social, or political initiative sustained by the ethos of "the temporality of the new". Griffin believed that modernism aspired to restore a "sense of sublime order and purpose to the contemporary world, thereby counteracting the (perceived) erosion of an overarching 'nomos', or 'sacred canopy', under the fragmenting and secularizing impact of modernity". Therefore, phenomena apparently unrelated to each other such as "Expressionism, Futurism, Vitalism, Theosophy, Psychoanalysis, Nudism, Eugenics, Utopian town planning and architecture, modern dance, Bolshevism, Organic Nationalism — and even the cult of self-sacrifice that sustained the Hecatomb of the First World War — disclose a common cause and psychological matrix in the fight against (perceived) decadence." 
All of them embody bids to access a "supra-personal experience of reality" in which individuals believed they could transcend their mortality and eventually that they would cease to be victims of history to instead become its creators.

Modernism, Romanticism, Philosophy and Symbol

Literary modernism is often summed up in a line from W. B. Yeats: "Things fall apart; the centre cannot hold" (in 'The Second Coming'). Modernists often search for a metaphysical 'centre' but experience its collapse. (Postmodernism, by way of contrast, celebrates that collapse, exposing the failure of metaphysics, such as Jacques Derrida's deconstruction of metaphysical claims.) Philosophically, the collapse of metaphysics can be traced back to the Scottish philosopher David Hume (1711–1776), who argued that we never actually perceive one event causing another. We only experience the 'constant conjunction' of events, and do not perceive a metaphysical 'cause'. Similarly, Hume argued that we never know the self as object, only the self as subject, and we are thus blind to our true natures. Moreover, if we only 'know' through sensory experience—such as sight, touch and feeling—then we cannot 'know' and neither can we make metaphysical claims. Thus, modernism can be driven emotionally by the desire for metaphysical truths, while understanding their impossibility.

Some modernist novels, for instance, feature characters like Marlow in Heart of Darkness or Nick Carraway in The Great Gatsby who believe that they have encountered some great truth about nature or character, truths that the novels themselves treat ironically while offering more mundane explanations. Similarly, many poems of Wallace Stevens convey a struggle with the sense of nature's significance, falling under two headings: poems in which the speaker denies that nature has meaning, only for nature to loom up by the end of the poem; and poems in which the speaker claims nature has meaning, only for that meaning to collapse by the end of the poem.

Modernism often rejects nineteenth-century realism, if the latter is understood as focusing on the embodiment of meaning within a naturalistic representation. At the same time, some modernists aim at a more 'real' realism, one that is uncentered. Picasso's proto-Cubist painting Les Demoiselles d'Avignon of 1907 does not present its subjects from a single point of view (that of a single viewer), but instead presents a flat, two-dimensional picture plane. 'The Poet' of 1911 is similarly decentred, presenting the body from multiple points of view. As the Peggy Guggenheim Collection website puts it, 'Picasso presents multiple views of each object, as if he had moved around it, and synthesizes them into a single compound image'.

Modernism, with its sense that 'things fall apart,' can be seen as the apotheosis of romanticism, if romanticism is the (often frustrated) quest for metaphysical truths about character, nature, a higher power and meaning in the world. Modernism often yearns for a romantic or metaphysical centre, but later finds its collapse. This distinction between modernism and romanticism extends to their respective treatments of 'symbol'. The romantics at times see an essential relation (the 'ground') between the symbol (or the 'vehicle', in I.A. Richards's terms) and its 'tenor' (its meaning)—for example in Coleridge's description of nature as 'that eternal language which thy God / Utters'.
But while some romantics may have perceived nature and its symbols as God's language, for other romantic theorists it remains inscrutable. As Goethe (not himself a romantic) said, 'the idea [or meaning] remains eternally and infinitely active and inaccessible in the image'. This was extended in modernist theory which, drawing on its symbolist precursors, often emphasizes the inscrutability and failure of symbol and metaphor. For example, Wallace Stevens seeks and fails to find meaning in nature, even if he at times seems to sense such a meaning. As such, symbolists and modernists at times adopt a mystical approach to suggest a non-rational sense of meaning. For these reasons, modernist metaphors may be unnatural, as for instance in T.S. Eliot's description of an evening 'spread out against the sky / Like a patient etherized upon a table'. Similarly, for many later modernist poets nature is unnaturalized and at times mechanized, as for example in Stephen Oliver's image of the moon busily 'hoisting' itself into consciousness.

Origins and early history

Romanticism and realism

Modernism developed out of Romanticism's revolt against the effects of the Industrial Revolution and bourgeois values. Literary scholar Gerald Graff argues that "The ground motive of modernism was criticism of the 19th-century bourgeois social order and its world view; the modernists, carrying the torch of Romanticism." While J. M. W. Turner (1775–1851), one of the most notable landscape painters of the 19th century, was a member of the Romantic movement, his pioneering work in the study of light, color, and atmosphere "anticipated the French Impressionists" and therefore modernism "in breaking down conventional formulas of representation; though unlike them, he believed that his works should always express significant historical, mythological, literary, or other narrative themes."

However, the modernists were critical of the Romantics' belief that art serves as a window into the nature of reality. They argued that since each viewer interprets art through their own subjective perspective, it can never convey the ultimate metaphysical truth that the Romantics sought. Nonetheless, the modernists did not completely reject the idea of art as a means of understanding the world. To them, it was a tool for challenging and disrupting the viewer's point of view, rather than a direct means of accessing a higher reality. Modernism often rejects 19th-century realism when the latter is understood as focusing on the embodiment of meaning within a naturalistic representation. Instead, some modernists aim at a more 'real' realism, one that is uncentered. For instance, Picasso's 1907 proto-Cubist painting Les Demoiselles d'Avignon does not present its subjects from a single point of view, instead presenting a flat, two-dimensional picture plane. The Poet of 1911 is similarly decentered, presenting the body from multiple points of view. As the Peggy Guggenheim Collection comments, "Picasso presents multiple views of each object, as if he had moved around it, and synthesizes them into a single compound image." Modernism, with its sense that "things fall apart," is often seen as the apotheosis of Romanticism. As August Wilhelm Schlegel, an early German Romantic, described it, while Romanticism searches for metaphysical truths about character, nature, higher power, and meaning in the world, modernism, although yearning for such a metaphysical center, only finds its collapse.
The early 19th century

In the context of the Industrial Revolution (~1760–1840), influential innovations included steam-powered industrialization, especially the development of railways starting in Britain in the 1830s, and the subsequent advancements in physics, engineering, and architecture they led to. A major 19th-century engineering achievement was the Crystal Palace, the huge cast-iron and plate-glass exhibition hall built for the Great Exhibition of 1851 in London. Glass and iron were used in a similar monumental style in the construction of major railway terminals throughout the city, including King's Cross station (1852) and Paddington Station (1854). These technological advances spread abroad, leading to later structures such as the Brooklyn Bridge (1883) and the Eiffel Tower (1889), the latter of which broke all previous limitations on how tall man-made objects could be. While such engineering feats radically altered the 19th-century urban environment and the daily lives of people, the human experience of time itself was altered with the development of the electric telegraph in 1837, as well as the adoption of "standard time" by British railway companies from 1845, a concept which would be adopted throughout the rest of the world over the next fifty years.

Despite continuing technological advances, the ideas that history and civilization were inherently progressive and that such advances were always good came under increasing attack in the 19th century. Arguments arose that the values of the artist and those of society were not merely different, but in fact oftentimes opposed, and that society's current values were antithetical to further progress; therefore, civilization could not move forward in its present form. Early in the century, the philosopher Arthur Schopenhauer (1788–1860), in The World as Will and Representation (1819/20), called into question previous optimism. His ideas had an important influence on later thinkers, including Friedrich Nietzsche (1844–1900). Similarly, Søren Kierkegaard (1813–1855) and Nietzsche both later rejected the idea that reality could be understood through a purely objective lens, a rejection that had a significant influence on the development of existentialism and nihilism.

Around 1850, the Pre-Raphaelite Brotherhood (a group of English poets, painters, and art critics) began to challenge the dominant trends of industrial Victorian England in "opposition to technical skill without inspiration." They were influenced by the writings of the art critic John Ruskin (1819–1900), who had strong feelings about the role of art in helping to improve the lives of the urban working classes in the rapidly expanding industrial cities of Britain. Art critic Clement Greenberg described the Pre-Raphaelite Brotherhood as proto-modernists: "There the proto-modernists were, of all people, the Pre-Raphaelites (and even before them, as proto-proto-modernists, the German Nazarenes). The Pre-Raphaelites foreshadowed Manet (1832–1883), with whom modernist painting most definitely begins. They acted on a dissatisfaction with painting as practiced in their time, holding that its realism wasn't truthful enough."

Two of the most significant thinkers of the mid-19th century were biologist Charles Darwin (1809–1882), author of On the Origin of Species by Means of Natural Selection (1859), and political economist Karl Marx (1818–1883), author of Das Kapital (1867). Despite coming from different fields, both of their theories threatened the established order.
Darwin's theory of evolution by natural selection undermined religious certainty and the idea of human uniqueness; in particular, the notion that human beings are driven by the same impulses as "lower animals" proved to be difficult to reconcile with the idea of an ennobling spirituality. Meanwhile, Marx's arguments that there are fundamental contradictions within the capitalist system and that workers are anything but free led to the formulation of Marxist theory.

The late 19th century

Art historians have suggested various dates as starting points for modernism. Historian William Everdell argued that modernism began in the 1870s, when metaphorical (or ontological) continuity began to yield to the discrete with mathematician Richard Dedekind's (1831–1916) Dedekind cut and Ludwig Boltzmann's (1844–1906) statistical thermodynamics. Everdell also believed modernism in painting began in 1885–1886 with post-Impressionist artist Georges Seurat's development of Divisionism, the "dots" used to paint A Sunday Afternoon on the Island of La Grande Jatte. On the other hand, visual art critic Clement Greenberg called German philosopher Immanuel Kant (1724–1804) "the first real modernist", although he also wrote, "What can be safely called modernism emerged in the middle of the last century—and rather locally, in France, with Charles Baudelaire (1821–1867) in literature and Manet in painting, and perhaps with Gustave Flaubert (1821–1880), too, in prose fiction. (It was a while later, and not so locally, that modernism appeared in music and architecture)." The poet Baudelaire's Les Fleurs du mal (The Flowers of Evil) and the author Flaubert's Madame Bovary were both published in 1857. Baudelaire's essay "The Painter of Modern Life" (1863) inspired young artists to break away from tradition and innovate new ways of portraying their world in art.

Beginning in the 1860s, two approaches in the arts and letters developed separately in France. The first was Impressionism, a school of painting that initially focused on work done not in studios, but outdoors (en plein air). Impressionist paintings attempted to convey that human beings do not see objects, but instead see light itself. The school gathered adherents despite internal divisions among its leading practitioners and became increasingly influential. Initially rejected from the most important commercial show of the time, the government-sponsored Paris Salon, the Impressionists organized yearly group exhibitions in commercial venues during the 1870s and 1880s, timing them to coincide with the official Salon. In 1863, the Salon des Refusés, created by Emperor Napoleon III, displayed all of the paintings rejected by the Paris Salon. While most were in standard styles by inferior artists, the work of Manet attracted attention and opened commercial doors to the movement. The second French school was symbolism, which literary historians see beginning with Charles Baudelaire and including the later poets Arthur Rimbaud (1854–1891) with Une Saison en Enfer (A Season in Hell, 1873), Paul Verlaine (1844–1896), Stéphane Mallarmé (1842–1898), and Paul Valéry (1871–1945). The symbolists "stressed the priority of suggestion and evocation over direct description and explicit analogy," and were especially interested in "the musical properties of language."
Cabaret, which gave birth to so many of the arts of modernism, including the immediate precursors of film, may be said to have begun in France in 1881 with the opening of the Black Cat in Montmartre, the beginning of the ironic monologue, and the founding of the Society of Incoherent Arts.

The theories of Sigmund Freud (1856–1939), Krafft-Ebing, and other sexologists were influential in the early days of modernism. Freud's first major work was Studies on Hysteria (with Josef Breuer, 1895). Central to Freud's thinking is the idea "of the primacy of the unconscious mind in mental life", so that all subjective reality was based on the interactions between basic drives and instincts, through which the outside world was perceived. Freud's description of subjective states involved an unconscious mind full of primal impulses, and counterbalancing self-imposed restrictions derived from social values. The works of Friedrich Nietzsche (1844–1900) were another major precursor of modernism, with a philosophy in which psychological drives, specifically the "will to power" (Wille zur Macht), were of central importance: "Nietzsche often identified life itself with 'will to power', that is, with an instinct for growth and durability." Henri Bergson (1859–1941), on the other hand, emphasized the difference between scientific, clock time and the direct, subjective human experience of time. His work on time and consciousness "had a great influence on 20th-century novelists", especially those modernists who used the "stream of consciousness" technique, such as Dorothy Richardson, James Joyce, and Virginia Woolf (1882–1941). Also important in Bergson's philosophy was the idea of élan vital, the life force, which "brings about the creative evolution of everything." His philosophy also placed a high value on intuition, though without rejecting the importance of the intellect.

Important literary precursors of modernism included esteemed writers such as Fyodor Dostoevsky (1821–1881), whose novels include Crime and Punishment (1866) and The Brothers Karamazov (1880); Walt Whitman (1819–1892), who published the poetry collection Leaves of Grass (1855–1891); and August Strindberg (1849–1912), especially his later plays, including the trilogy To Damascus (1898–1901), A Dream Play (1902), and The Ghost Sonata (1907). Henry James has also been suggested as a significant precursor to modernism in works as early as The Portrait of a Lady (1881).

Modernism emerges: 1901 to 1930

Out of the collision of ideals derived from Romanticism and an attempt to find a way for knowledge to explain that which was as yet unknown, came the first wave of modernist works in the opening decade of the 20th century. Although their authors considered them to be extensions of existing trends in art, these works broke the implicit understanding the general public had of art: that artists were the interpreters and representatives of bourgeois culture and ideas. These "modernist" landmarks include the atonal ending of Arnold Schoenberg's Second String Quartet in 1908, the Expressionist paintings of Wassily Kandinsky starting in 1903, and culminating with his first abstract painting and the founding of the Blue Rider group in Munich in 1911, and the rise of fauvism and the inventions of Cubism from the studios of Henri Matisse, Pablo Picasso, Georges Braque, and others, in the years between 1900 and 1910.
An important aspect of modernism is how it relates to tradition through its adoption of techniques like reprise, incorporation, rewriting, recapitulation, revision, and parody in new forms. T. S. Eliot made significant comments on the relation of the artist to tradition, including: "[W]e shall often find that not only the best, but the most individual parts of [a poet's] work, may be those in which the dead poets, his ancestors, assert their immortality most vigorously." However, the relationship of modernism with tradition was complex, as literary scholar Peter Childs indicates: "There were paradoxical if not opposed trends towards revolutionary and reactionary positions, fear of the new and delight at the disappearance of the old, nihilism and fanatical enthusiasm, creativity, and despair."

An example of how modernist art can apply older traditions while also incorporating new techniques can be found within the music of the composer Arnold Schoenberg. On the one hand, he rejected traditional tonal harmony, the hierarchical system of organizing works of music that had guided musical composition for at least a century and a half. Schoenberg believed he had discovered a wholly new way of organizing sound based on the use of twelve-note rows. Yet, while this was indeed a wholly new technique, its origins can be traced back to the work of earlier composers such as Franz Liszt, Richard Wagner, Gustav Mahler, Richard Strauss, and Max Reger.

In the world of art, in the first decade of the 20th century, young painters such as Pablo Picasso and Henri Matisse caused much controversy and attracted great criticism with their rejection of traditional perspective as the means of structuring paintings, though the Impressionist Claude Monet had already been innovative in his use of perspective. In 1907, as Picasso was painting Les Demoiselles d'Avignon, Oskar Kokoschka was writing Mörder, Hoffnung der Frauen (Murderer, Hope of Women), the first Expressionist play (produced with scandal in 1909), and Arnold Schoenberg was composing his String Quartet No. 2 in F sharp minor (1908), his first composition without a tonal center.

A primary influence that led to Cubism was the representation of three-dimensional form in the late works of Paul Cézanne, which were displayed in a retrospective at the 1907 Salon d'Automne. In Cubist artwork, objects are analyzed, broken up, and reassembled in an abstract form; instead of depicting objects from one viewpoint, the artist depicts the subject from a multitude of viewpoints to represent the subject in a greater context. Cubism was brought to the attention of the general public for the first time in 1911 at the Salon des Indépendants in Paris (held 21 April – 13 June). Jean Metzinger, Albert Gleizes, Henri Le Fauconnier, Robert Delaunay, Fernand Léger, and Roger de La Fresnaye were shown together in Room 41, provoking a 'scandal' out of which Cubism emerged and spread throughout Paris and beyond. Also in 1911, Kandinsky painted Bild mit Kreis (Picture with a Circle), which he later called the first abstract painting. In 1912, Metzinger and Gleizes wrote the first (and only) major Cubist manifesto, Du "Cubisme", published in time for the Salon de la Section d'Or, the largest Cubist exhibition to date. In 1912 Metzinger painted and exhibited his enchanting La Femme au Cheval (Woman with a Horse) and Danseuse au Café (Dancer in a Café). Albert Gleizes painted and exhibited his Les Baigneuses (The Bathers) and his monumental Le Dépiquage des Moissons (Harvest Threshing).
This work, along with La Ville de Paris (City of Paris) by Robert Delaunay, was the largest and most ambitious Cubist painting undertaken during the pre-war Cubist period. In 1905, a group of four German artists, led by Ernst Ludwig Kirchner, formed Die Brücke (The Bridge) in the city of Dresden. This was arguably the founding organization for the German Expressionist movement, though they did not use the word itself. A few years later, in 1911, a like-minded group of young artists formed Der Blaue Reiter (The Blue Rider) in Munich. The name came from Wassily Kandinsky's Der Blaue Reiter painting of 1903. Among their members were Kandinsky, Franz Marc, Paul Klee, and August Macke. However, the term "Expressionism" did not firmly establish itself until 1913. Though initially mainly a German artistic movement, most predominant in painting, poetry and the theatre between 1910 and 1930, most precursors of the movement were not German. Furthermore, there have been Expressionist writers of prose fiction, as well as non-German speaking Expressionist writers, and, while the movement had declined in Germany with the rise of Adolf Hitler in the 1930s, there were subsequent Expressionist works. Expressionism is notoriously difficult to define, in part because it "overlapped with other major 'isms' of the modernist period: with Futurism, Vorticism, Cubism, Surrealism and Dada." Richard Murphy also comments: "[The] search for an all-inclusive definition is problematic to the extent that the most challenging Expressionists," such as the novelist Franz Kafka, poet Gottfried Benn, and novelist Alfred Döblin were simultaneously the most vociferous anti-Expressionists. What, however, can be said, is that it was a movement that developed in the early 20th century mainly in Germany in reaction to the dehumanizing effect of industrialization and the growth of cities, and that "one of the central means by which Expressionism identifies itself as an avant-garde movement, and by which it marks its distance to traditions and the cultural institution as a whole is through its relationship to realism and the dominant conventions of representation." More explicitly: the Expressionists rejected the ideology of realism. There was a concentrated Expressionist movement in early 20th-century German theater, of which Georg Kaiser and Ernst Toller were the most famous playwrights. Other notable Expressionist dramatists included Reinhard Sorge, Walter Hasenclever, Hans Henny Jahnn, and Arnolt Bronnen. They looked back to Swedish playwright August Strindberg and German actor and dramatist Frank Wedekind as precursors of their dramaturgical experiments. Oskar Kokoschka's Murderer, the Hope of Women was the first fully Expressionist work for the theater, which opened on 4 July 1909 in Vienna. The extreme simplification of characters to mythic types, choral effects, declamatory dialogue and heightened intensity would become characteristic of later Expressionist plays. The first full-length Expressionist play was The Son by Walter Hasenclever, which was published in 1914 and first performed in 1916. Futurism is another modernist movement. In 1909, the Parisian newspaper Le Figaro published F. T. Marinetti's first manifesto. Soon afterward, a group of painters (Giacomo Balla, Umberto Boccioni, Carlo Carrà, Luigi Russolo, and Gino Severini) co-signed the Futurist Manifesto. Modeled on Marx and Engels' famous "Communist Manifesto" (1848), such manifestos put forward ideas that were meant to provoke and to gather followers. 
However, arguments in favor of geometric or purely abstract painting were, at this time, largely confined to "little magazines" which had only tiny circulations. Modernist primitivism and pessimism were controversial, and the mainstream in the first decade of the 20th century was still inclined towards a faith in progress and liberal optimism. Abstract artists, taking as their examples the Impressionists, as well as Paul Cézanne (1839–1906) and Edvard Munch (1863–1944), began with the assumption that color and shape, not the depiction of the natural world, formed the essential characteristics of art. Western art had been, from the Renaissance up to the middle of the 19th century, underpinned by the logic of perspective and an attempt to reproduce an illusion of visible reality. The arts of cultures other than the European had become accessible and showed alternative ways of describing visual experience to the artist. By the end of the 19th century, many artists felt a need to create a new kind of art that encompassed the fundamental changes taking place in technology, science and philosophy. The sources from which individual artists drew their theoretical arguments were diverse and reflected the social and intellectual preoccupations in all areas of Western culture at that time. Wassily Kandinsky, Piet Mondrian, and Kazimir Malevich all believed in redefining art as the arrangement of pure color. The use of photography, which had rendered much of the representational function of visual art obsolete, strongly affected this aspect of modernism. Modernist architects and designers, such as Frank Lloyd Wright and Le Corbusier, believed that new technology rendered old styles of building obsolete. Le Corbusier thought that buildings should function as "machines for living in", analogous to cars, which he saw as machines for traveling in. Just as cars had replaced the horse, so modernist design should reject the old styles and structures inherited from Ancient Greece or the Middle Ages. Following this machine aesthetic, modernist designers typically rejected decorative motifs in design, preferring to emphasize the materials used and pure geometrical forms. The skyscraper is the archetypal modernist building, and the Wainwright Building, a 10-story office building completed in 1891 in St. Louis, Missouri, United States, is among the first skyscrapers in the world. Ludwig Mies van der Rohe's Seagram Building in New York (1956–1958) is often regarded as the pinnacle of this modernist high-rise architecture. Many aspects of modernist design persist within the mainstream of contemporary architecture, though previous dogmatism has given way to a more playful use of decoration, historical quotation, and spatial drama. In 1913—which was the year of philosopher Edmund Husserl's Ideas, physicist Niels Bohr's quantized atom, Ezra Pound's founding of imagism, the Armory Show in New York, and in Saint Petersburg the "first futurist opera", Mikhail Matyushin's Victory over the Sun—another Russian composer, Igor Stravinsky, composed The Rite of Spring, a ballet that depicts human sacrifice and has a musical score full of dissonance and primitive rhythm. This caused an uproar on its first performance in Paris. At this time, though modernism was still "progressive", it increasingly saw traditional forms and social arrangements as hindering progress and recast the artist as a revolutionary, engaged in overthrowing rather than enlightening society. 
Also in 1913, a less violent event occurred in France with the publication of the first volume of Marcel Proust's important novel sequence À la recherche du temps perdu (1913–1927) (In Search of Lost Time). This is often presented as an early example of a writer using the stream-of-consciousness technique, but Robert Humphrey comments that Proust "is concerned only with the reminiscent aspect of consciousness" and that he "was deliberately recapturing the past for the purpose of communicating; hence he did not write a stream-of-consciousness novel." Stream of consciousness was an important modernist literary innovation, and it has been suggested that Arthur Schnitzler (1862–1931) was the first to make full use of it in his short story "Leutnant Gustl" ("None but the brave") (1900). Dorothy Richardson was the first English writer to use it, in the early volumes of her novel sequence Pilgrimage (1915–1967). Other modernist novelists that are associated with the use of this narrative technique include James Joyce in Ulysses (1922) and Italo Svevo in La coscienza di Zeno (1923). However, with the coming of the Great War of 1914–1918 (World War I) and the Russian Revolution of 1917, the world was drastically changed, and doubt was cast on the beliefs and institutions of the past. The failure of the previous status quo seemed self-evident to a generation that had seen millions die fighting over scraps of earth: before 1914, it had been argued that no one would fight such a war, since the cost was too high. The birth of a machine age, which had made major changes in the conditions of daily life in the 19th century had now radically changed the nature of warfare. The traumatic nature of recent experience altered basic assumptions, and a realistic depiction of life in the arts seemed inadequate when faced with the fantastically surreal nature of trench warfare. The view that mankind was making steady moral progress now seemed ridiculous in the face of the senseless slaughter, described in works such as Erich Maria Remarque's novel All Quiet on the Western Front (1929). Therefore, modernism's view of reality, which had been a minority taste before the war, became more generally accepted in the 1920s. In literature and visual art, some modernists sought to defy expectations mainly to make their art more vivid or to force the audience to take the trouble to question their own preconceptions. This aspect of modernism has often seemed a reaction to consumer culture, which developed in Europe and North America in the late 19th century. Whereas most manufacturers try to make products that will be marketable by appealing to preferences and prejudices, high modernists reject such consumerist attitudes to undermine conventional thinking. The art critic Clement Greenberg expounded this theory of modernism in his essay Avant-Garde and Kitsch. Greenberg labeled the products of consumer culture "kitsch", because their design aimed simply to have maximum appeal, with any difficult features removed. For Greenberg, modernism thus formed a reaction against the development of such examples of modern consumer culture as commercial popular music, Hollywood, and advertising. Greenberg associated this with the revolutionary rejection of capitalism. Some modernists saw themselves as part of a revolutionary culture that included political revolution. In Russia after the 1917 Revolution, there was indeed initially a burgeoning of avant-garde cultural activity, which included Russian Futurism. 
However, others rejected conventional politics as well as artistic conventions, believing that a revolution of political consciousness had greater importance than a change in political structures. But many modernists saw themselves as apolitical. Others, such as T. S. Eliot, rejected mass popular culture from a conservative position. Some even argue that Modernism in literature and art functioned to sustain an elite culture that excluded the majority of the population. Surrealism, which originated in the early 1920s, came to be regarded by the public as the most extreme form of modernism, or "the avant-garde of modernism". The word "surrealist" was coined by Guillaume Apollinaire and first appeared in the preface to his play Les Mamelles de Tirésias, which was written in 1903 and first performed in 1917. Major surrealists include Paul Éluard, Robert Desnos, Max Ernst, Hans Arp, Antonin Artaud, Raymond Queneau, Joan Miró, and Marcel Duchamp. By 1930, modernism had won a place in the political and artistic establishment, although by this time modernism itself had changed. Modernism continues: 1930–1945 Modernism continued to evolve during the 1930s. Between 1930 and 1932 composer Arnold Schoenberg worked on Moses und Aron, one of the first operas to make use of the twelve-tone technique, Pablo Picasso painted in 1937 Guernica, his cubist condemnation of fascism, while in 1939 James Joyce pushed the boundaries of the modern novel further with Finnegans Wake. Also by 1930 modernism began to influence mainstream culture, so that, for example, The New Yorker magazine began publishing work, influenced by modernism, by young writers and humorists like Dorothy Parker, Robert Benchley, E. B. White, S. J. Perelman, and James Thurber, amongst others. Perelman is highly regarded for his humorous short stories that he published in magazines in the 1930s and 1940s, most often in The New Yorker, which are considered to be the first examples of surrealist humor in America. Modern ideas in art also began to appear more frequently in commercials and logos, an early example of which, from 1916, is the famous London Underground logo designed by Edward Johnston. One of the most visible changes of this period was the adoption of new technologies into the daily lives of ordinary people in Western Europe and North America. Electricity, the telephone, the radio, the automobile—and the need to work with them, repair them and live with them—created social change. The kind of disruptive moment that only a few knew in the 1880s became a common occurrence. For example, the speed of communication reserved for the stock brokers of 1890 became part of family life, at least in middle class North America. Associated with urbanization and changing social mores also came smaller families and changed relationships between parents and their children. Another strong influence at this time was Marxism. After the generally primitivistic/irrationalism aspect of pre-World War I modernism (which for many modernists precluded any attachment to merely political solutions) and the neoclassicism of the 1920s (as represented most famously by T. S. Eliot and Igor Stravinsky—which rejected popular solutions to modern problems), the rise of fascism, the Great Depression, and the march to war helped to radicalize a generation. Bertolt Brecht, W. H. Auden, André Breton, Louis Aragon, and the philosophers Antonio Gramsci and Walter Benjamin are perhaps the most famous exemplars of this modernist form of Marxism. 
There were, however, also modernists explicitly of 'the right', including Salvador Dalí, Wyndham Lewis, T. S. Eliot, Ezra Pound, the Dutch author Menno ter Braak and others. Significant modernist literary works continued to be created in the 1920s and 1930s, including further novels by Marcel Proust, Virginia Woolf, Robert Musil, and Dorothy Richardson. The American modernist dramatist Eugene O'Neill's career began in 1914, but his major works appeared in the 1920s, 1930s and early 1940s. Two other significant modernist dramatists writing in the 1920s and 1930s were Bertolt Brecht and Federico García Lorca. D. H. Lawrence's Lady Chatterley's Lover was privately published in 1928, while another important landmark for the history of the modern novel came with the publication of William Faulkner's The Sound and the Fury in 1929. In the 1930s, in addition to further major works by Faulkner, Samuel Beckett published his first major work, the novel Murphy (1938). Then in 1939 James Joyce's Finnegans Wake appeared. This is written in a largely idiosyncratic language, consisting of a mixture of standard English lexical items and neologistic multilingual puns and portmanteau words, which attempts to recreate the experience of sleep and dreams. In poetry T. S. Eliot, E. E. Cummings, and Wallace Stevens were writing from the 1920s until the 1950s. While modernist poetry in English is often viewed as an American phenomenon, with leading exponents including Ezra Pound, T. S. Eliot, Marianne Moore, William Carlos Williams, H.D., and Louis Zukofsky, there were important British modernist poets, including David Jones, Hugh MacDiarmid, Basil Bunting, and W. H. Auden. European modernist poets include Federico García Lorca, Anna Akhmatova, Constantine Cavafy, and Paul Valéry. The modernist movement continued during this period in Soviet Russia. In 1930 composer Dimitri Shostakovich's (1906–1975) opera The Nose was premiered, in which he uses a montage of different styles, including folk music, popular song and atonality. Among his influences was Alban Berg's (1885–1935) opera Wozzeck (1925), which "had made a tremendous impression on Shostakovich when it was staged in Leningrad." However, from 1932 socialist realism began to oust modernism in the Soviet Union, and in 1936 Shostakovich was attacked and forced to withdraw his 4th Symphony. Alban Berg wrote another significant, though incomplete, modernist opera, Lulu, which premiered in 1937. Berg's Violin Concerto was first performed in 1935. Like Shostakovich, other composers faced difficulties in this period. In Germany Arnold Schoenberg (1874–1951) was forced to flee to the U.S. when Hitler came to power in 1933, because of his modernist atonal style as well as his Jewish ancestry. His major works from this period are a Violin Concerto, Op. 36 (1934/36), and a Piano Concerto, Op. 42 (1942). Schoenberg also wrote tonal music in this period with the Suite for Strings in G major (1935) and the Chamber Symphony No. 2 in E minor, Op. 38 (begun in 1906, completed in 1939). During this time Hungarian modernist Béla Bartók (1881–1945) produced a number of major works, including Music for Strings, Percussion and Celesta (1936) and the Divertimento for String Orchestra (1939), String Quartet No. 5 (1934), and No. 6 (his last, 1939). But he too left for the US in 1940, because of the rise of fascism in Hungary. 
Igor Stravinsky (1882–1971) continued writing in his neoclassical style during the 1930s and 1940s, producing works like the Symphony of Psalms (1930), Symphony in C (1940), and Symphony in Three Movements (1945). He also emigrated to the US because of World War II. Olivier Messiaen (1908–1992), however, served in the French army during the war and was imprisoned at Stalag VIII-A by the Germans, where he composed his famous Quatuor pour la fin du temps ("Quartet for the End of Time"). The quartet was first performed in January 1941 to an audience of prisoners and prison guards.

In painting, during the 1920s and 1930s and the Great Depression, modernism was defined by Surrealism, late Cubism, Bauhaus, De Stijl, Dada, German Expressionism, and modernist and masterful color painters like Henri Matisse and Pierre Bonnard, as well as the abstractions of artists like Piet Mondrian and Wassily Kandinsky, which characterized the European art scene. In Germany, Max Beckmann, Otto Dix, George Grosz and others politicized their paintings, foreshadowing the coming of World War II, while in America, modernism appeared in the form of American Scene painting and in the social realism and Regionalism movements, which contained both political and social commentary and dominated the art world. Artists like Ben Shahn, Thomas Hart Benton, Grant Wood, George Tooker, John Steuart Curry, Reginald Marsh, and others became prominent.

Modernism is defined in Latin America by painters Joaquín Torres-García from Uruguay and Rufino Tamayo from Mexico, while the muralist movement with Diego Rivera, David Siqueiros, José Clemente Orozco, Pedro Nel Gómez and Santiago Martínez Delgado, and Symbolist paintings by Frida Kahlo, began a renaissance of the arts for the region, characterized by a freer use of color and an emphasis on political messages. Diego Rivera is perhaps best known to the public for his 1933 mural, Man at the Crossroads, in the lobby of the RCA Building at Rockefeller Center. When his patron Nelson Rockefeller discovered that the mural included a portrait of Vladimir Lenin and other communist imagery, he fired Rivera, and the unfinished work was eventually destroyed by Rockefeller's staff.

Frida Kahlo's works are often characterized by their stark portrayals of pain. Kahlo was deeply influenced by indigenous Mexican culture, which is apparent in her paintings' bright colors and dramatic symbolism. Christian and Jewish themes are often depicted in her work as well; she combined elements of the classic religious Mexican tradition, which were often bloody and violent. Frida Kahlo's Symbolist works relate strongly to surrealism and to the magic realism movement in literature.

Political activism was an important piece of David Siqueiros' life, and frequently inspired him to set aside his artistic career. His art was deeply rooted in the Mexican Revolution. The period from the 1920s to the 1950s is known as the Mexican Renaissance, and Siqueiros was active in the attempt to create an art that was at once Mexican and universal. The young Jackson Pollock attended Siqueiros' experimental workshop in New York and helped build parade floats. During the 1930s, radical leftist politics characterized many of the artists connected to surrealism, including Pablo Picasso. On 26 April 1937, during the Spanish Civil War, the Basque town of Gernika was bombed by Nazi Germany's Luftwaffe. The Germans were attacking to support the efforts of Francisco Franco to overthrow the Basque government and the Spanish Republican government. 
Pablo Picasso painted his mural-sized Guernica to commemorate the horrors of the bombing.

During the Great Depression of the 1930s and through the years of World War II, American art was characterized by social realism and American Scene painting, in the work of Grant Wood, Edward Hopper, Ben Shahn, Thomas Hart Benton, and several others. Nighthawks (1942) is a painting by Edward Hopper that portrays people sitting in a downtown diner late at night. It is not only Hopper's most famous painting, but one of the most recognizable in American art. The scene was inspired by a diner in Greenwich Village. Hopper began painting it immediately after the attack on Pearl Harbor. After this event there was a widespread feeling of gloom over the country, a feeling that is portrayed in the painting. The urban street is empty outside the diner, and inside none of the three patrons appears to be looking at or talking to the others; each seems lost in their own thoughts. This portrayal of modern urban life as empty or lonely is a common theme throughout Hopper's work.

American Gothic is a 1930 painting by Grant Wood portraying a pitchfork-holding farmer and a younger woman in front of a house of Carpenter Gothic style. It is one of the most familiar images in 20th-century American art. Art critics had favorable opinions about the painting; like Gertrude Stein and Christopher Morley, they assumed the painting was meant to be a satire of rural small-town life. It was thus seen as part of the trend towards increasingly critical depictions of rural America, along the lines of Sherwood Anderson's 1919 Winesburg, Ohio, Sinclair Lewis's 1920 Main Street, and Carl Van Vechten's The Tattooed Countess in literature. However, with the onset of the Great Depression, the painting came to be seen as a depiction of steadfast American pioneer spirit.

The situation for artists in Europe during the 1930s deteriorated rapidly as the Nazis' power in Germany and across Eastern Europe increased. Degenerate art was a term adopted by the Nazi regime in Germany for virtually all modern art. Such art was banned because it was un-German or Jewish Bolshevist in nature, and those identified as degenerate artists were subjected to sanctions. These included being dismissed from teaching positions, being forbidden to exhibit or to sell their art, and in some cases being forbidden to produce art entirely. Degenerate Art was also the title of an exhibition, mounted by the Nazis in Munich in 1937. The climate became so hostile for artists and art associated with modernism and abstraction that many left for the Americas. German artist Max Beckmann and scores of others fled Europe for New York. In New York City a new generation of young and exciting modernist painters led by Arshile Gorky, Willem de Kooning, and others was just beginning to come of age.

Arshile Gorky's portrait of someone who might be Willem de Kooning is an example of the evolution of Abstract Expressionism from the context of figure painting, Cubism and Surrealism. Along with his friends de Kooning and John D. Graham, Gorky created biomorphically shaped and abstracted figurative compositions that by the 1940s evolved into totally abstract paintings. Gorky's work seems to be a careful analysis of memory, emotion and shape, using line and color to express feeling and nature.

Attacks on early modernism

Modernism's stress on freedom of expression, experimentation, radicalism, and primitivism disregarded conventional expectations. 
In many art forms this often meant startling and alienating audiences with bizarre and unpredictable effects, as in the strange and disturbing combinations of motifs in Surrealism or the use of extreme dissonance and atonality in modernist music. In literature this often involved the rejection of intelligible plots or characterization in novels, or the creation of poetry that defied clear interpretation.

Within the Catholic Church, the specter of Protestantism and Martin Luther was at play in anxieties over modernism and the notion that doctrine develops and changes over time. From 1932, socialist realism began to oust modernism in the Soviet Union, which had previously endorsed Russian Futurism, Constructivism, and the homegrown philosophy of Suprematism. The Nazi government of Germany deemed modernism narcissistic and nonsensical, as well as "Jewish" (see Antisemitism) and "Negro". The Nazis exhibited modernist paintings alongside works by the mentally ill in an exhibition entitled "Degenerate Art". Accusations of "formalism" could lead to the end of a career, or worse. For this reason, many modernists of the post-war generation felt that they were the most important bulwark against totalitarianism, the "canary in the coal mine", whose repression by a government or other group with supposed authority represented a warning that individual liberties were being threatened. Louis A. Sass compared madness, specifically schizophrenia, and modernism in a far less hostile spirit, noting their shared disjunctive narratives, surreal images, and incoherence.

After 1945

While The Oxford Encyclopedia of British Literature states that modernism ended by c. 1939 with regard to British and American literature, "When (if) modernism petered out and postmodernism began has been contested almost as hotly as when the transition from Victorianism to modernism occurred." Clement Greenberg sees modernism ending in the 1930s, with the exception of the visual and performing arts, but with regard to music, Paul Griffiths notes that, while modernism "seemed to be a spent force" by the late 1920s, after World War II "a new generation of composers—Boulez, Barraqué, Babbitt, Nono, Stockhausen, Xenakis" revived modernism. In fact, many literary modernists lived into the 1950s and 1960s, though generally they were no longer producing major works. The term "late modernism" is also sometimes applied to modernist works published after 1930. Among the modernists (or late modernists) still publishing after 1945 were Wallace Stevens, Gottfried Benn, T. S. Eliot, Anna Akhmatova, William Faulkner, Dorothy Richardson, John Cowper Powys, and Ezra Pound. Basil Bunting, born in 1901, published his most important modernist poem, Briggflatts, in 1965. In addition, Hermann Broch's The Death of Virgil was published in 1945 and Thomas Mann's Doctor Faustus in 1947. Samuel Beckett, who died in 1989, has been described as a "later modernist". Beckett is a writer with roots in the Expressionist tradition of modernism, who produced works from the 1930s until the 1980s, including Molloy (1951), Waiting for Godot (1953), Happy Days (1961), and Rockaby (1981). The terms "minimalist" and "post-modernist" have also been applied to his later works. The poets Charles Olson (1910–1970) and J. H. Prynne (born 1936) are among the writers in the second half of the 20th century who have been described as late modernists. 
More recently, the term "late modernism" has been redefined by at least one critic and used to refer to works written after 1945, rather than 1930. With this usage goes the idea that the ideology of modernism was significantly re-shaped by the events of World War II, especially the Holocaust and the dropping of the atom bomb. The post-war period left the capitals of Europe in upheaval, with an urgency to economically and physically rebuild and to politically regroup. In Paris (the former center of European culture and the former capital of the art world), the climate for art was a disaster. Important collectors, dealers, and modernist artists, writers, and poets fled Europe for New York and America. The surrealists and modern artists from every cultural center of Europe had fled the onslaught of the Nazis for safe haven in the United States. Many of those who did not flee perished. A few artists, notably Pablo Picasso, Henri Matisse, and Pierre Bonnard, remained in France and survived. The 1940s in New York City heralded the triumph of American Abstract Expressionism, a modernist movement that combined lessons learned from Henri Matisse, Pablo Picasso, Surrealism, Joan Miró, Cubism, Fauvism, and early modernism via great teachers in America like Hans Hofmann and John D. Graham. American artists benefited from the presence of Piet Mondrian, Fernand Léger, Max Ernst and the André Breton group, Pierre Matisse's gallery, and Peggy Guggenheim's gallery The Art of This Century, as well as other factors. Paris, moreover, recaptured much of its luster in the 1950s and 1960s as the center of a machine art florescence, with both of the leading machine art sculptors Jean Tinguely and Nicolas Schöffer having moved there to launch their careers—and which florescence, in light of the technocentric character of modern life, may well have a particularly long-lasting influence. Theatre of the Absurd The term "Theatre of the Absurd" is applied to plays, written primarily by Europeans, that express the belief that human existence has no meaning or purpose and therefore all communication breaks down. Logical construction and argument gives way to irrational and illogical speech and to its ultimate conclusion, silence. While there are significant precursors, including Alfred Jarry (1873–1907), the Theatre of the Absurd is generally seen as beginning in the 1950s with the plays of Samuel Beckett. Critic Martin Esslin coined the term in his 1960 essay "Theatre of the Absurd". He related these plays based on a broad theme of the absurd, similar to the way Albert Camus uses the term in his 1942 essay, The Myth of Sisyphus. The Absurd in these plays takes the form of man's reaction to a world apparently without meaning, and/or man as a puppet controlled or menaced by invisible outside forces. Though the term is applied to a wide range of plays, some characteristics coincide in many of the plays: broad comedy, often similar to vaudeville, mixed with horrific or tragic images; characters caught in hopeless situations forced to do repetitive or meaningless actions; dialogue full of clichés, wordplay, and nonsense; plots that are cyclical or absurdly expansive; either a parody or dismissal of realism and the concept of the "well-made play". 
Playwrights commonly associated with the Theatre of the Absurd include Samuel Beckett (1906–1989), Eugène Ionesco (1909–1994), Jean Genet (1910–1986), Harold Pinter (1930–2008), Tom Stoppard (born 1937), Alexander Vvedensky (1904–1941), Daniil Kharms (1905–1942), Friedrich Dürrenmatt (1921–1990), Alejandro Jodorowsky (born 1929), Fernando Arrabal (born 1932), Václav Havel (1936–2011) and Edward Albee (1928–2016). Pollock and abstract influences During the late 1940s, Jackson Pollock's radical approach to painting revolutionized the potential for all contemporary art that followed him. To some extent, Pollock realized that the journey toward making a work of art was as important as the work of art itself. Like Pablo Picasso's innovative reinventions of painting and sculpture in the early 20th century via Cubism and constructed sculpture, Pollock redefined the way art is made. His move away from easel painting and conventionality was a liberating signal to the artists of his era and to all who came after. Artists realized that Jackson Pollock's process—placing unstretched raw canvas on the floor where it could be attacked from all four sides using artistic and industrial materials; dripping and throwing linear skeins of paint; drawing, staining, and brushing; using imagery and non-imagery—essentially blasted art-making beyond any prior boundary. Abstract Expressionism generally expanded and developed the definitions and possibilities available to artists for the creation of new works of art. The other Abstract Expressionists followed Pollock's breakthrough with new breakthroughs of their own. In a sense the innovations of Jackson Pollock, Willem de Kooning, Franz Kline, Mark Rothko, Philip Guston, Hans Hofmann, Clyfford Still, Barnett Newman, Ad Reinhardt, Robert Motherwell, Peter Voulkos and others opened the floodgates to the diversity and scope of all the art that followed them. Re-readings into abstract art by art historians such as Linda Nochlin, Griselda Pollock and Catherine de Zegher critically show, however, that pioneering women artists who produced major innovations in modern art had been ignored by official accounts of its history. International figures from British art Henry Moore (1898–1986) emerged after World War II as Britain's leading sculptor. He was best known for his semi-abstract monumental bronze sculptures which are located around the world as public works of art. His forms are usually abstractions of the human figure, typically depicting mother-and-child or reclining figures, usually suggestive of the female body, apart from a phase in the 1950s when he sculpted family groups. His forms are generally pierced or contain hollow spaces. In the 1950s, Moore began to receive increasingly significant commissions, including a reclining figure for the UNESCO building in Paris in 1958. With many more public works of art, the scale of Moore's sculptures grew significantly. The last three decades of Moore's life continued in a similar vein, with several major retrospectives taking place around the world, notably a prominent exhibition in the summer of 1972 in the grounds of the Forte di Belvedere overlooking Florence. By the end of the 1970s, there were some 40 exhibitions a year featuring his work. On the campus of the University of Chicago in December 1967, 25 years to the minute after the team of physicists led by Enrico Fermi achieved the first controlled, self-sustaining nuclear chain reaction, Moore's Nuclear Energy was unveiled. 
Also in Chicago, Moore commemorated science with a large bronze sundial, locally named Man Enters the Cosmos (1980), which was commissioned to recognize the space exploration program.

The "London School" of figurative painters, including Francis Bacon (1909–1992), Lucian Freud (1922–2011), Frank Auerbach (born 1931), Leon Kossoff (born 1926), and Michael Andrews (1928–1995), has received widespread international recognition. Francis Bacon was an Irish-born British figurative painter known for his bold, graphic and emotionally raw imagery. His painterly but abstracted figures typically appear isolated in glass or steel geometrical cages set against flat, nondescript backgrounds. Bacon began painting during his early 20s but worked only sporadically until his mid-30s. His breakthrough came with the 1944 triptych Three Studies for Figures at the Base of a Crucifixion, which sealed his reputation as a uniquely bleak chronicler of the human condition. His output can be crudely described as consisting of sequences or variations on a single motif: beginning with the 1940s male heads isolated in rooms, the early 1950s screaming popes, and mid-to-late 1950s animals and lone figures suspended in geometric structures. These were followed by his early 1960s modern variations of the crucifixion in the triptych format. From the mid-1960s to early 1970s, Bacon mainly produced strikingly compassionate portraits of friends. Following the suicide of his lover George Dyer in 1971, his art became more personal, inward-looking, and preoccupied with themes and motifs of death. During his lifetime, Bacon was equally reviled and acclaimed.

Lucian Freud was a German-born British painter, known chiefly for his thickly impastoed portrait and figure paintings, who was widely considered the pre-eminent British artist of his time. His works are noted for their psychological penetration, and for their often discomforting examination of the relationship between artist and model. According to William Grimes of The New York Times, "Lucian Freud and his contemporaries transformed figure painting in the 20th century. In paintings like Girl with a White Dog (1951–1952), Freud put the pictorial language of traditional European painting in the service of an anti-romantic, confrontational style of portraiture that stripped bare the sitter's social facade. Ordinary people—many of them his friends—stared wide-eyed from the canvas, vulnerable to the artist's ruthless inspection."

After Abstract Expressionism

In abstract painting during the 1950s and 1960s, several new directions like hard-edge painting and other forms of geometric abstraction began to appear in artist studios and in radical avant-garde circles as a reaction against the subjectivism of Abstract Expressionism. Clement Greenberg became the voice of post-painterly abstraction when he curated an influential exhibition of new painting that toured important art museums throughout the United States in 1964. Color field painting, hard-edge painting, and lyrical abstraction emerged as radical new directions. By the late 1960s, however, postminimalism, process art, and Arte Povera also emerged as revolutionary concepts and movements that encompassed both painting and sculpture, via lyrical abstraction and the post-minimalist movement, and in early conceptual art. Process art, as inspired by Pollock, enabled artists to experiment with and make use of a diverse encyclopaedia of style, content, material, placement, sense of time, and plastic and real space. 
Nancy Graves, Ronald Davis, Howard Hodgkin, Larry Poons, Jannis Kounellis, Brice Marden, Colin McCahon, Bruce Nauman, Richard Tuttle, Alan Saret, Walter Darby Bannard, Lynda Benglis, Dan Christensen, Larry Zox, Ronnie Landfield, Eva Hesse, Keith Sonnier, Richard Serra, Pat Lipsky, Sam Gilliam, Mario Merz and Peter Reginato were some of the younger artists who emerged during the era of late modernism that spawned the heyday of the art of the late 1960s. Pop art In 1962, the Sidney Janis Gallery mounted The New Realists, the first major pop art group exhibition in an uptown art gallery in New York City. Janis mounted the exhibition in a 57th Street storefront near his gallery. The show had a great impact on the New York School as well as the greater worldwide art scene. Earlier in England in 1958 the term "Pop Art" was used by Lawrence Alloway to describe paintings associated with the consumerism of the post World War II era. This movement rejected Abstract Expressionism and its focus on the hermeneutic and psychological interior in favor of art that depicted material consumer culture, advertising, and the iconography of the mass production age. The early works of David Hockney and the works of Richard Hamilton and Eduardo Paolozzi (who created the ground-breaking I was a Rich Man's Plaything, 1947) are considered seminal examples in the movement. Meanwhile, in the downtown scene in New York's East Village 10th Street galleries, artists were formulating an American version of pop art. Claes Oldenburg had his storefront, and the Green Gallery on 57th Street began to show the works of Tom Wesselmann and James Rosenquist. Later Leo Castelli exhibited the works of other American artists, including those of Andy Warhol and Roy Lichtenstein for most of their careers. There is a connection between the radical works of Marcel Duchamp and Man Ray, the rebellious Dadaists with a sense of humor, and pop artists like Claes Oldenburg, Andy Warhol, and Roy Lichtenstein, whose paintings reproduce the look of Ben-Day dots, a technique used in commercial reproduction . Minimalism Minimalism describes movements in various forms of art and design, especially visual art and music, wherein artists intend to expose the essence or identity of a subject through eliminating all nonessential forms, features, or concepts. Minimalism is any design or style wherein the simplest and fewest elements are used to create the maximum effect. As a specific movement in the arts, it is identified with developments in post–World War II Western art, most strongly with American visual arts in the 1960s and early 1970s. Prominent artists associated with this movement include Donald Judd, John McCracken, Agnes Martin, Dan Flavin, Robert Morris, Ronald Bladen, Anne Truitt, and Frank Stella. It derives from the reductive aspects of modernism and is often interpreted as a reaction against Abstract Expressionism and a bridge to Post minimal art practices. By the early 1960s, minimalism emerged as an abstract movement in art (with roots in the geometric abstraction of Kazimir Malevich, the Bauhaus and Piet Mondrian) that rejected the idea of relational and subjective painting, the complexity of Abstract Expressionist surfaces, and the emotional zeitgeist and polemics present in the arena of action painting. Minimalism argued that extreme simplicity could capture all of the sublime representation needed in art. Minimalism is variously construed either as a precursor to postmodernism, or as a postmodern movement itself. 
In the latter perspective, early Minimalism yielded advanced modernist works, but the movement partially abandoned this direction when some artists, like Robert Morris, turned instead to the anti-form movement. Hal Foster, in his essay The Crux of Minimalism, examines the extent to which Donald Judd and Robert Morris both acknowledge and exceed Greenbergian modernism in their published definitions of minimalism. He argues that minimalism is not a "dead end" of modernism, but a "paradigm shift toward postmodern practices that continue to be elaborated today."

Minimal music

The terms have expanded to encompass a movement in music that features such repetition and iteration as those of the compositions of La Monte Young, Terry Riley, Steve Reich, Philip Glass, and John Adams. Minimalist compositions are sometimes known as systems music. The term 'minimal music' is generally used to describe a style of music that developed in America in the late 1960s and 1970s, and that was initially connected with the composers just mentioned. Besides these composers, other, lesser-known pioneers included Pauline Oliveros, Phill Niblock, and Richard Maxfield. In Europe, minimalism is represented in the music of Louis Andriessen, Karel Goeyvaerts, Michael Nyman, Howard Skempton, Eliane Radigue, Gavin Bryars, Steve Martland, Henryk Górecki, Arvo Pärt, and John Tavener.

Postminimalism

In the late 1960s, Robert Pincus-Witten coined the term "postminimalism" to describe minimalist-derived art which had content and contextual overtones that minimalism rejected. The term was applied by Pincus-Witten to the work of Eva Hesse, Keith Sonnier, Richard Serra and new work by former minimalists Robert Smithson, Robert Morris, Sol LeWitt, Barry Le Va, and others. Other minimalists, including Donald Judd, Dan Flavin, Carl Andre, Agnes Martin, and John McCracken, continued to produce late modernist paintings and sculpture for the remainder of their careers. Since then, many artists have embraced minimal or post-minimal styles, and the label "postmodern" has been attached to them.

Collage, assemblage, installations

Related to Abstract Expressionism was the emergence of works combining manufactured items with artists' materials, moving away from previous conventions of painting and sculpture. The work of Robert Rauschenberg exemplifies this trend. His "combines" of the 1950s were forerunners of pop art and installation art, and used assemblages of large physical objects, including stuffed animals, birds and commercial photographs. Rauschenberg, Jasper Johns, Larry Rivers, John Chamberlain, Claes Oldenburg, George Segal, Jim Dine, and Edward Kienholz were among important pioneers of both abstraction and pop art. Creating new conventions of art-making, they made acceptable in serious contemporary art circles the radical inclusion in their works of unlikely materials. Another pioneer of collage was Joseph Cornell, whose more intimately scaled works were seen as radical because of both his personal iconography and his use of found objects.

Neo-Dada

In 1917, Marcel Duchamp submitted a urinal as a sculpture for the inaugural exhibition of the Society of Independent Artists, which was to be staged at the Grand Central Palace in New York. He professed his intent that people look at the urinal as if it were a work of art because he said it was a work of art. This urinal, named Fountain, was signed with the pseudonym "R. Mutt". It is also an example of what Duchamp would later call "readymades". 
This and Duchamp's other works are generally labelled as Dada. Duchamp can be seen as a precursor to conceptual art, other famous examples being John Cage's 4′33″, which is four minutes and thirty-three seconds of silence, and Rauschenberg's Erased de Kooning Drawing. Many conceptual works take the position that art is the result of the viewer viewing an object or act as art, not of the intrinsic qualities of the work itself. In choosing "an ordinary article of life" and creating "a new thought for that object", Duchamp invited onlookers to view Fountain as a sculpture.

Marcel Duchamp famously gave up "art" in favor of chess. Avant-garde composer David Tudor created a piece, Reunion (1968), written jointly with Lowell Cross, that features a chess game in which each move triggers a lighting effect or projection. Duchamp and Cage played the game at the work's premiere.

Steven Best and Douglas Kellner identify Rauschenberg and Jasper Johns as part of the transitional phase, influenced by Duchamp, between modernism and postmodernism. Both used images of ordinary objects, or the objects themselves, in their work, while retaining the abstraction and painterly gestures of high modernism.

Performance and happenings

During the late 1950s and 1960s artists with a wide range of interests began to push the boundaries of contemporary art. Yves Klein in France, Carolee Schneemann, Yayoi Kusama, Charlotte Moorman and Yoko Ono in New York City, and Joseph Beuys, Wolf Vostell and Nam June Paik in Germany were pioneers of performance-based works of art. Groups like The Living Theatre with Julian Beck and Judith Malina collaborated with sculptors and painters to create environments, radically changing the relationship between audience and performer, especially in their piece Paradise Now. The Judson Dance Theater, located at the Judson Memorial Church in New York, and the Judson dancers, notably Yvonne Rainer, Trisha Brown, Elaine Summers, Sally Gross, Simone Forti, Deborah Hay, Lucinda Childs, Steve Paxton, and others, collaborated with artists Robert Morris, Robert Whitman, John Cage, and Robert Rauschenberg, and with engineers like Billy Klüver. Park Place Gallery was a center for musical performances by electronic composers Steve Reich and Philip Glass, and for other notable performance artists, including Joan Jonas.

These performances were intended as works of a new art form combining sculpture, dance, and music or sound, often with audience participation. They were characterized by the reductive philosophies of Minimalism and the spontaneous improvisation and expressivity of Abstract Expressionism. Images of Schneemann's performances of pieces meant to create shock within the audience are occasionally used to illustrate these kinds of art, and she is often photographed while performing her piece Interior Scroll. However, according to modernist philosophy surrounding performance art, it works at cross-purposes to publish images of her performing this piece, for performance artists reject publication entirely: the performance itself is the medium. Thus, other media cannot illustrate performance art; performance is momentary, evanescent, and personal, not for capturing; representations of performance art in other media, whether by image, video, narrative, or otherwise, select certain points of view in space or time or otherwise involve the inherent limitations of each medium. The artists deny that recordings illustrate the medium of performance as art. 
During the same period, various avant-garde artists created Happenings, mysterious and often spontaneous and unscripted gatherings of artists and their friends and relatives in various specified locations, often incorporating exercises in absurdity, physicality, costuming, spontaneous nudity, and various random or seemingly disconnected acts. Notable creators of happenings included Allan Kaprow—who first used the term in 1958, Claes Oldenburg, Jim Dine, Red Grooms, and Robert Whitman. Intermedia, multi-media Another trend in art which has been associated with the term postmodern is the use of a number of different media together. Intermedia is a term coined by Dick Higgins and meant to convey new art forms along the lines of Fluxus, concrete poetry, found objects, performance art, and computer art. Higgins was the publisher of the Something Else Press, a concrete poet married to artist Alison Knowles and an admirer of Marcel Duchamp. Ihab Hassan includes "Intermedia, the fusion of forms, the confusion of realms," in his list of the characteristics of postmodern art. One of the most common forms of "multi-media art" is the use of video-tape and CRT monitors, termed video art. While the theory of combining multiple arts into one art is quite old, and has been revived periodically, the postmodern manifestation is often in combination with performance art, where the dramatic subtext is removed, and what is left is the specific statements of the artist in question or the conceptual statement of their action. Fluxus Fluxus was named and loosely organized in 1962 by George Maciunas (1931–1978), a Lithuanian-born American artist. Fluxus traces its beginnings to John Cage's 1957 to 1959 Experimental Composition classes at The New School for Social Research in New York City. Many of his students were artists working in other media with little or no background in music. Cage's students included Fluxus founding members Jackson Mac Low, Al Hansen, George Brecht and Dick Higgins. Fluxus encouraged a do-it-yourself aesthetic and valued simplicity over complexity. Like Dada before it, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Fluxus artists preferred to work with whatever materials were at hand, and either created their own work or collaborated in the creation process with their colleagues. Andreas Huyssen criticizes attempts to claim Fluxus for postmodernism as "either the master-code of postmodernism or the ultimately unrepresentable art movement—as it were, postmodernism's sublime." Instead he sees Fluxus as a major Neo-Dadaist phenomenon within the avant-garde tradition. It did not represent a major advance in the development of artistic strategies, though it did express a rebellion against "the administered culture of the 1950s, in which a moderate, domesticated modernism served as ideological prop to the Cold War." Avant-garde popular music Modernism had an uneasy relationship with popular forms of music (both in form and aesthetic) while rejecting popular culture. Despite this, Stravinsky used jazz idioms on his pieces like "Ragtime" from his 1918 theatrical work Histoire du Soldat and 1945's Ebony Concerto. In the 1960s, as popular music began to gain cultural importance and question its status as commercial entertainment, musicians began to look to the post-war avant-garde for inspiration. 
In 1959, music producer Joe Meek recorded I Hear a New World (1960), which Tiny Mix Tapes Jonathan Patrick calls a "seminal moment in both electronic music and avant-pop history [...] a collection of dreamy pop vignettes, adorned with dubby echoes and tape-warped sonic tendrils" which would be largely ignored at the time. Other early Avant-pop productions included the Beatles's 1966 song "Tomorrow Never Knows", which incorporated techniques from musique concrète, avant-garde composition, Indian music, and electro-acoustic sound manipulation into a 3-minute pop format, and the Velvet Underground's integration of La Monte Young's minimalist and drone music ideas, beat poetry, and 1960s pop art. Late period The continuation of Abstract Expressionism, color field painting, lyrical abstraction, geometric abstraction, minimalism, abstract illusionism, process art, pop art, postminimalism, and other late 20th-century modernist movements in both painting and sculpture continued through the first decade of the 21st century and constitute radical new directions in those mediums. At the turn of the 21st century, well-established artists such as Sir Anthony Caro, Lucian Freud, Cy Twombly, Robert Rauschenberg, Jasper Johns, Agnes Martin, Al Held, Ellsworth Kelly, Helen Frankenthaler, Frank Stella, Kenneth Noland, Jules Olitski, Claes Oldenburg, Jim Dine, James Rosenquist, Alex Katz, Philip Pearlstein, and younger artists including Brice Marden, Chuck Close, Sam Gilliam, Isaac Witkin, Sean Scully, Mahirwan Mamtani, Joseph Nechvatal, Elizabeth Murray, Larry Poons, Richard Serra, Walter Darby Bannard, Larry Zox, Ronnie Landfield, Ronald Davis, Dan Christensen, Pat Lipsky, Joel Shapiro, Tom Otterness, Joan Snyder, Ross Bleckner, Archie Rand, Susan Crile, and others continued to produce vital and influential paintings and sculpture. Modern architecture Many skyscrapers in Hong Kong and Frankfurt have been inspired by Le Corbusier and modernist architecture, and his style is still used as influence for buildings worldwide. Modernism in Asia The terms "modernism" and "modernist", according to scholar William J. Tyler, "have only recently become part of the standard discourse in English on modern Japanese literature and doubts concerning their authenticity vis-à-vis Western European modernism remain". Tyler finds this odd, given "the decidedly modern prose" of such "well-known Japanese writers as Kawabata Yasunari, Nagai Kafu, and Jun'ichirō Tanizaki". However, "scholars in the visual and fine arts, architecture, and poetry readily embraced "modanizumu" as a key concept for describing and analysing Japanese culture in the 1920s and 1930s". In 1924, various young Japanese writers, including Kawabata and Riichi Yokomitsu started a literary journal Bungei Jidai ("The Artistic Age"). This journal was "part of an 'art for art's sake' movement, influenced by European Cubism, Expressionism, Dada, and other modernist styles". Japanese modernist architect Kenzō Tange (1913–2005) was one of the most significant architects of the 20th century, combining traditional Japanese styles with modernism, and designing major buildings on five continents. Tange was also an influential patron of the Metabolist movement. 
He said: "It was, I believe, around 1959 or at the beginning of the sixties that I began to think about what I was later to call structuralism", He was influenced from an early age by the Swiss modernist, Le Corbusier, Tange gained international recognition in 1949 when he won the competition for the design of Hiroshima Peace Memorial Park. In China, the "New Sensationists" (新感觉派, Xīn Gǎnjué Pài) were a group of writers based in Shanghai who in the 1930s and 1940s, were influenced, to varying degrees, by Western and Japanese modernism. They wrote fiction that was more concerned with the unconscious and with aesthetics than with politics or social problems. Among these writers were Mu Shiying and Shi Zhecun. In India, the Progressive Artists' Group was a group of modern artists, mainly based in Mumbai, India formed in 1947. Though it lacked any particular style, it synthesized Indian art with European and North America influences from the first half of the 20th century, including Post-Impressionism, Cubism and Expressionism. Modernism in Africa Peter Kalliney suggests that "Modernist concepts, especially aesthetic autonomy, were fundamental to the literature of decolonization in anglophone Africa." In his opinion, Rajat Neogy, Christopher Okigbo, and Wole Soyinka, were among the writers who "repurposed modernist versions of aesthetic autonomy to declare their freedom from colonial bondage, from systems of racial discrimination, and even from the new postcolonial state". Relationship with postmodernism By the early 1980s, the postmodern movement in art and architecture began to establish its position through various conceptual and intermedia formats. Postmodernism in music and literature began to take hold earlier. In music, postmodernism is described in one reference work as a "term introduced in the 1970s", while in British literature, The Oxford Encyclopaedia of British Literature sees modernism "ceding its predominance to postmodernism" as early as 1939. However, dates are highly debatable, especially as, according to Andreas Huyssen: "one critic's postmodernism is another critic's modernism." This includes those who are critical of the division between the two, see them as two aspects of the same movement, and believe that late modernism continues. Modernism is an all-encompassing label for a wide variety of cultural movements. Postmodernism is essentially a centralized movement that named itself, based on socio-political theory, although the term is now used in a wider sense to refer to activities from the 20th century onwards which exhibit awareness of and reinterpret the modern. Postmodern theory asserts that the attempt to canonize modernism "after the fact" is doomed to unresolvable contradictions. And since the crux of postmodernism critiques any claim to a single discernible truth, postmodernism and modernism conflict on the existence of truth. Where modernists approach the issue of 'truth' with different theories (correspondence, coherence, pragmatist, semantic, etc.), postmodernists approach the issue of truth negatively by disproving the very existence of an accessible truth. In a narrower sense, what was modernist was not necessarily also postmodernist. Those elements of modernism which accentuated the benefits of rationality and socio-technological progress were only modernist. Modernist reactions against postmodernism include remodernism, which rejects the cynicism and deconstruction of postmodern art in favor of reviving early modernist aesthetic currents. 
Criticism of late modernity

Although artistic modernism tended to reject capitalist values such as consumerism, 20th century civil society embraced global mass production and the proliferation of cheap and accessible commodities. This period of social development is known as "late or high modernity" and originates in advanced Western societies. The German sociologist Jürgen Habermas, in The Theory of Communicative Action (1981), developed the first substantive critique of the culture of late modernity. Another important early critique of late modernity is the American sociologist George Ritzer's The McDonaldization of Society (1993). Ritzer describes how late modernity became saturated with fast-food consumer culture. Other authors have demonstrated how modernist devices appeared in popular cinema, and later on in music videos. Modernist design has entered the mainstream of popular culture, as simplified and stylized forms became popular, often associated with dreams of a space-age high-tech future. In 2008, Jane Bennett published "Modernity and Its Critics" in The Oxford Handbook of Political Theory.

The merging of consumer and high-end versions of modernist culture led to a radical transformation of the meaning of "modernism". First, it implied that a movement based on the rejection of tradition had become a tradition of its own. Second, it demonstrated that the distinction between elite modernist and mass consumerist culture had lost its precision. Modernism had become so institutionalized that it was now "post avant-garde", indicating that it had lost its power as a revolutionary movement. Many have interpreted this transformation as the beginning of the phase that became known as postmodernism. For others, such as art critic Robert Hughes, postmodernism represents an extension of modernism.

"Anti-Modern" or "Counter-Modern" movements seek to emphasize holism, connection and spirituality as remedies or antidotes to modernism. Such movements see modernism as reductionist, and therefore subject to an inability to see systemic and emergent effects. Some traditionalist artists like Alexander Stoddart reject modernism generally as the product of "an epoch of false money allied with false culture".

In some fields, the effects of modernism have remained stronger and more persistent than in others. Visual art has made the most complete break with its past. Most major capital cities have museums devoted to modern art as distinct from post-Renaissance art. Examples include the Museum of Modern Art in New York, the Tate Modern in London, and the Centre Pompidou in Paris. These galleries make no distinction between modernist and postmodernist phases, seeing both as developments within modern art.

References

Sources

John Barth (1979) The Literature of Replenishment, later republished in The Friday Book (1984).
Eco, Umberto (1990) Interpreting Serials in The limits of interpretation, pp. 83–100.
Everdell, William R. (1997) The First Moderns: Profiles in the Origins of Twentieth Century Thought (Chicago: University of Chicago Press).
Orton, Fred and Pollock, Griselda (1996) Avant-Gardes and Partisans Reviewed, Manchester University.
Steiner, George (1998) After Babel, ch. 6 Topologies of culture, 3rd revised edition.
Art Berman (1994) Preface to Modernism, University of Illinois Press.

Further reading

Robert Archambeau. "The Avant-Garde in Babel. Two or Three Notes on Four or Five Words", Action-Yes vol. 1, issue 8, Autumn 2008. 
Armstrong, Carol and de Zegher, Catherine (eds.), Women Artists as the Millennium, Cambridge, MA: October Books, MIT Press, 2006. . Aspray, William & Philip Kitcher, eds., History and Philosophy of Modern Mathematics, Minnesota Studies in the Philosophy of Science vol. XI, Minneapolis: University of Minnesota Press, 1988 Bäckström, Per (ed.), Centre-Periphery. The Avant-Garde and the Other , Nordlit. University of Tromsø, no. 21, 2007. Bäckström, Per. "One Earth, Four or Five Words. The Peripheral Concept of 'Avant-Garde'" , Action-Yes vol. 1, issue 12 Winter 2010 Bäckström, Per & Bodil Børset (eds.), Norsk avantgarde (Norwegian Avant-Garde), Oslo: Novus, 2011. Bäckström, Per & Benedikt Hjartarson (eds.), Decentring the Avant-Garde , Amsterdam & New York: Rodopi, Avantgarde Critical Studies, 2014. Bäckström, Per and Benedikt Hjartarson. "Rethinking the Topography of the International Avant-Garde", in Decentring the Avant-Garde , Per Bäckström & Benedikt Hjartarson (eds.), Amsterdam & New York: Rodopi, Avantgarde Critical Studies, 2014. Baker, Houston A. Jr., Modernism and the Harlem Renaissance, Chicago: University of Chicago Press, 1987 Berman, Marshall, All That Is Solid Melts into Air: The Experience of Modernity. Second ed. London: Penguin, 1982. . Bradbury, Malcolm, & James McFarlane (eds.), Modernism: A Guide to European Literature 1890–1930 (Penguin "Penguin Literary Criticism" series, 1978, ). Brush, Stephen G., The History of Modern Science: A Guide to the Second Scientific Revolution, 1800–1950, Ames, IA: Iowa State University Press, 1988 Centre Georges Pompidou, Face a l'Histoire, 1933–1996. Flammarion, 1996. . Crouch, Christopher, Modernism in art design and architecture, New York: St. Martin's Press, 2000 Eysteinsson, Astradur, The Concept of Modernism, Ithaca, NY: Cornell University Press, 1992 Friedman, Julia . Beyond Symbolism and Surrealism: Alexei Remizov's Synthetic Art, Northwestern University Press, 2010. (Trade Cloth) Frascina, Francis, and Charles Harrison (eds.). Modern Art and Modernism: A Critical Anthology. Published in association with The Open University. London: Harper and Row, Ltd. Reprinted, London: Paul Chapman Publishing, Ltd., 1982. Gates, Henry Louis. The Norton Anthology of African American Literature. W.W. Norton & Company, Inc., 2004. Hughes, Robert, The Shock of the New: Art and the Century of Change (Gardners Books, 1991, ). Kenner, Hugh, The Pound Era (1971), Berkeley, CA: University of California Press, 1973 Kern, Stephen, The Culture of Time and Space, Cambridge, MA: Harvard University Press, 1983 Klein, Jürgen, On Modernism, Berlin, Bruxelles, Lausanne, New York Oxford: Peter Lang, 2022 ISBN 978-3-631-87869-9. Kolocotroni, Vassiliki et al., ed.,Modernism: An Anthology of Sources and Documents (Edinburgh: Edinburgh University Press, 1998). Levenson, Michael, (ed.), The Cambridge Companion to Modernism (Cambridge University Press, "Cambridge Companions to Literature" series, 1999, ). Lewis, Pericles. The Cambridge Introduction to Modernism (Cambridge: Cambridge University Press, 2007). Nicholls, Peter, Modernisms: A Literary Guide (Hampshire and London: Macmillan, 1995). Pevsner, Nikolaus, Pioneers of Modern Design: From William Morris to Walter Gropius (New Haven, CT: Yale University Press, 2005, ). The Sources of Modern Architecture and Design (Thames & Hudson, "World of Art" series, 1985, ). Pollock, Griselda, Generations and Geographies in the Visual Arts. (Routledge, London, 1996. ). 
Pollock, Griselda, and Florence, Penny, Looking Back to the Future: Essays by Griselda Pollock from the 1990s. (New York: G&B New Arts Press, 2001. ) Sass, Louis A. (1992). Madness and Modernism: Insanity in the Light of Modern Art, Literature, and Thought. New York: Basic Books. Cited in Bauer, Amy (2004). "Cognition, Constraints, and Conceptual Blends in Modernist Music", in The Pleasure of Modernist Music. . Schorske, Carl. Fin-de-Siècle Vienna: Politics and Culture. Vintage, 1980. . Schwartz, Sanford, The Matrix of Modernism: Pound, Eliot, and Early Twentieth Century Thought, Princeton, NJ: Princeton University Press, 1985 Tyler, William J., ed. Modanizumu: Modernist Fiction from Japan, 1913–1938. University of Hawai'i Press, 2008. Van Loo, Sofie (ed.), Gorge(l). Royal Museum of Fine Arts, Antwerp, 2006. . Weir, David, Decadence and the Making of Modernism, 1995, University of Massachusetts Press, . Weston, Richard, Modernism (Phaidon Press, 2001, ). de Zegher, Catherine, Inside the Visible. (Cambridge, MA: MIT Press, 1996). External links Ballard, J. G., on Modernism. Denzer, Anthony S., PhD, Masters of Modernism. Hoppé, E. O., photographer, Edwardian Modernists. Malady of Writing. Modernism you can dance to An online radio show that presents a humorous version of Modernism Modernism Lab @ Yale University Modernism/Modernity , official publication of the Modernist Studies Association Modernism vs. Postmodernism Aesthetics Architectural styles Art movements Modernism Theories of aesthetics
0.767515
0.999289
0.76697
People's history
A people's history, or history from below, is a type of historical narrative which attempts to account for historical events from the perspective of common people rather than leaders. There is an emphasis on the disenfranchised, the oppressed, the poor, the nonconformists, and otherwise marginal groups. The authors typically have a Marxist model in mind, as in the approach of the History Workshop movement in Britain in the 1960s. "History from below" and "people's history" Georges Lefebvre first used the phrase ("history seen from below and not from above") in 1932 when praising Albert Mathiez for seeking to tell the history of the masses and not of the stars. It was also used in the title of A. L. Morton's 1938 book, A People's History of England. Yet it was E. P. Thompson's essay History from Below in The Times Literary Supplement (1966) which brought the phrase to the forefront of historiography from the 1970s. Thompson himself did not use the phrase in the piece; "History From Below" appeared as the title of the article, put there by an anonymous editor. It was popularized among non-historians by Howard Zinn's 1980 book, A People's History of the United States. Description A people's history presents history as the story of mass movements and of outsiders. Individuals not included in the past in other types of writing about history are part of history-from-below theory's primary focus, which includes the disenfranchised, the oppressed, the poor, the nonconformists, the subaltern and the otherwise forgotten people. This theory also usually focuses on events occurring in the fullness of time, or when an overwhelming wave of smaller events causes certain developments to occur. This approach to writing history is in direct opposition to methods which tend to emphasize single great figures in history, referred to as the Great Man theory; it argues that the driving factor of history is the daily life of ordinary people, their social status and profession. These are the factors that "push and pull" on opinions and allow for trends to develop, as opposed to great people introducing ideas or initiating events. In his book A People's History of the United States, Howard Zinn wrote: "The history of any country, presented as the history of a family, conceals fierce conflicts of interest (sometimes exploding, most often repressed) between conquerors and conquered, masters and slaves, capitalists and workers, dominators and dominated in race and sex. And in such a world of conflict, a world of victims and executioners, it is the job of thinking people, as Albert Camus suggested, not to be on the side of the executioners." Criticism Historian Guy Beiner wrote that "the Neo-Marxist flag-bearers of history from below have at times resorted to idealized and insufficiently sophisticated notions of 'the people', unduly ascribing to them innate progressive values. In practice, democratic history is by no means egalitarian". See also Social history Canada: A People's History (television documentary series) The Assassination of Julius Caesar: A People's History of Ancient Rome Montaillou (book) George Rudé Chris Harman Marxist historiography New labor history Subaltern (postcolonialism) References Further reading A People's History of England by A. L. 
Morton (Victor Gollancz: London, 1938) An Indigenous Peoples' History of the United States by Roxanne Dunbar-Ortiz (Washington; Beacon Press, 2014) A People's History of the United States (in 8 volumes) by Page Smith (New York: McGraw-Hill, 1976–1987) A People's History of the Supreme Court by Peter Irons (New York: Viking, 1999) A People's History of the World by Chris Harman (London: Bookmarks, 1999) A People's History of the Second World War by Donny Gluckstein (Pluto Press, 2012) A People's History of World War II by Marc Favreau (New Press, 2011) The Hundred Years War: A People's History by David Green (Yale University Press, 2014) A People's History of the American Revolution: How Common People Shaped the Fight for Independence by Ray Raphael (New York: New Press, 2001) The Congo: From Leopold to Kabila: A People's History by Georges Nzongola-Ntalaja (London, NY: Zed, 2002) A People's History of the Vietnam War by Jonathan Neale (New York: New Press, 2003) The Assassination of Julius Caesar: A People's History of Ancient Rome by Michael Parenti (New York: New Press, 2003) A History of the Swedish People, Vol. 1: From Prehistory to the Renaissance by Vilhelm Moberg (Minneapolis: University of Minnesota Press, 2005) A History of the Swedish People, Vol. 2: From Renaissance to Revolution by Vilhelm Moberg (Minneapolis: University of Minnesota Press, 2005) A People's History of Science: Miners, Midwives, and "Low Mechaniks" by Clifford D. Conner (New York: Nation, 2005) A People's History of the Civil War: Struggles for the Meaning of Freedom by David Williams (New York: New Press, 2005) A People's Tragedy: The Russian Revolution: 1891–1924 by Orlando Figes (Penguin Books, 1998) A People's History of the Mexican Revolution by Adolfo Gilly (New York, NY: New Press, 2005) A People's History of the French Revolution by Eric Hazan (Verso, 2014) A People's History of Christianity: The Other Side of the Story by Diana Butler Bass (Harper One, 2010) Christian Origins: A People's History of Christianity, Vol. 1 by Richard A. Horsley (Minneapolis: Fortress Press, 2005) Late Ancient Christianity: A People's History of Christianity, Vol. 2 by Virginia Burrus (Minneapolis: Fortress Press, 2005) The English Civil War: A People's History by Diane Purkiss (New York: Basic Books, 2006) Reformation Christianity: A People's History of Christianity by Peter Matheson and Denis R. Janz (Minneapolis: Fortress Press, 2007) The Darker Nations: A People's History of the Third World by Vijay Prashad (New York: New Press: W.W. Norton, 2007) A History of the Arab Peoples by Albert Hourani (Warner Books, 1992) Hearts and Minds: A People's History of Counterinsurgency by Hannah Gurman (New Press, 2013) A People's History of the U.S. Military by Michael A. Bellesiles (New Press, 2013) A People's History of Poverty in America by Stephen Pimpare (New York: New Press; London: Turnaround, 2008) A People's History of Environmentalism in the United States by Chad Montrie (Bloomsbury Academic, 2011) For All the People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America by John Curl (PM Press, 2012) Collective Courage: A History of African American Cooperative Economic Thought and Practice by Jessica Gordon Nembhard (Penn State University Press, 2014) A People's History of Sports in the United States by Dave Zirin (New York; London: New Press, c. 
2008) A People's Art History of the United States by Nicolas Lampert (New Press, 2010) Downwind: A People's History of the Nuclear West by Sarah Alisabeth Fox (Bison Books, 2014) A People's History of London by Lindsey German & John Rees (Verso, 2012) The Blood Never Dried: A People's History of the British Empire by John Newsinger (London: Bookmarks, 2009) A Renegade History of the United States by Thaddeus Russell (New York: Free Press, 2010) A People's History of Scotland by Chris Bambery (Verso, 2014) Montaillou: Cathars and Catholics in a French village: 1294–1324 by Emmanuel Le Roy Ladurie (Penguin Books Ltd, 2013) External links (formerly peopleshistory.co.uk) – a people's history website World history Historiography
0.779823
0.983509
0.766963
Paleoconservatism
Paleoconservatism is a political philosophy and a paternalistic strain of conservatism in the United States stressing American nationalism, Christian ethics, regionalism, traditionalist conservatism, and non-interventionism. Paleoconservatism's concerns overlap with those of the Old Right that opposed the New Deal in the 1930s and 1940s as well as with paleolibertarianism. By the start of the 21st century, the movement had begun to focus more on issues of race. The terms neoconservative and paleoconservative were coined following the outbreak of the Vietnam War and a divide in American conservatism between the interventionists and the isolationists. Those in favor of the Vietnam War then became known as the neoconservatives (interventionists), as they marked a decisive split from the nationalist-isolationism that the traditionalist conservatives (isolationists) had subscribed to up until this point. Paleoconservatives press for restrictions on immigration, a rollback of multicultural programs and large-scale demographic change, the decentralization of federal policy, the restoration of controls upon free trade, a greater emphasis upon economic nationalism and non-interventionism in the conduct of American foreign policy. Historian George Hawley states that although influenced by paleoconservatism, Donald Trump is not a paleoconservative, but rather a nationalist and a right-wing populist. Hawley also argued in 2017 that paleoconservatism was an exhausted force in American politics, but that for a time it represented the most serious right-wing threat to the mainstream conservative movement. Regardless of how Trump himself is categorized, others regard the movement known as Trumpism as supported by, if not a rebranding of, paleoconservatism. From this view, the followers of the Old Right did not fade away so easily and continue to have significant influence in the Republican Party and the entire country. Terminology The prefix paleo derives from the Greek root παλαιός (palaiós), meaning "ancient" or "old". It is somewhat tongue-in-cheek and refers to the paleoconservatives' claim to represent a more historic, authentic conservative tradition than that found in neoconservatism. Adherents of paleoconservatism often describe themselves simply as "paleo". Rich Lowry of National Review claims the prefix "is designed to obscure the fact that it is a recent ideological creation of post-Cold War politics". Samuel T. Francis, Thomas Fleming, and some other paleoconservatives de-emphasized the conservative part of the paleoconservative label, saying that they do not want the status quo preserved. Fleming and Paul Gottfried called such thinking "stupid tenacity" and described it as "a series of trenches dug in defense of last year's revolution". Francis defined authentic conservatism as "the survival and enhancement of a particular people and its institutionalized cultural expressions". Ideology Paleoconservatives support restrictions on immigration, decentralization, trade tariffs and protectionism, economic nationalism, isolationism, and a return to traditional conservative ideals relating to gender, race, sexuality, culture, and society. Paleoconservatism differs from neoconservatism in opposing free trade and promoting republicanism. Paleoconservatives see neoconservatives as imperialists and themselves as defenders of the republic. Paleoconservatives tend to oppose abortion, gay marriage, and LGBTQ rights. 
Human nature, tradition, and reason Paleoconservatives believe that tradition is a form of reason, rather than a competing force. Mel Bradford wrote that certain questions are settled before any serious deliberation concerning a preferred course of conduct may begin. This ethic is based in a "culture of families, linked by friendship, common enemies, and common projects", so a good conservative keeps "a clear sense of what Southern grandmothers have always meant in admonishing children, 'we don't do that'". Pat Buchanan argues that a good politician must "defend the moral order rooted in the Old and New Testament and Natural Law" and that "the deepest problems in our society are not economic or political, but moral". Southern traditionalism According to historian Paul V. Murphy, paleoconservatives developed a focus on localism and states' rights. From the mid-1980s onward, Chronicles promoted a Southern traditionalist worldview focused on national identity, regional particularity, and skepticism of abstract theory and centralized power. According to Hague, Beirich, and Sebesta (2009), the antimodernism of the paleoconservative movement defined the neo-Confederate movement of the 1980s and 1990s. During this time, notable paleoconservatives argued that desegregation, welfare, tolerance of gay rights, and church-state separation had been damaging to local communities, and that these issues had been imposed by federal legislation and think tanks. Paleoconservatives also claimed the Southern Agrarians as forebears in this regard. Opposition to Israel Paleoconservatives are generally strong opponents of Israel and supporters of the Arab cause in the Israeli-Palestinian conflict; they have argued that supporting the country damages foreign relations with the Islamic world and American interests abroad. Buchanan has asserted that "Capitol Hill is Israeli occupied territory". Russell Kirk argued that "Not seldom has it seemed... as if some eminent Neoconservatives mistook Tel Aviv for the capital of the United States". During the Israel-Hamas War, paleoconservative Tucker Carlson argued Israel was guilty of war crimes, and that President Joe Biden's support of the country risked American complicity in those actions. Notable people Philosophers and scholars Mel Bradford (1934–1993) Paul Gottfried (born 1941) E. Christian Kopff (born 1946) William S. Lind (born 1947) Clyde N. Wilson (born 1941) Commentators and columnists Pat Buchanan (born 1938), White House Communications Director (1985–1987), 1992 and 1996 Republican presidential candidate, 2000 Reform Party presidential nominee Peter Brimelow (born 1947) Tucker Carlson (born 1969) John Derbyshire (born 1945) Nick Fuentes (born 1998) Thomas Fleming (born 1945) Samuel T. Francis (1947–2005) Alex Jones (born 1974) Razib Khan (born 1977) Robert Novak (1931–2009) Steve Sailer (born 1958) Joseph Sobran (1946–2010) Taki Theodoracopulos (born 1936) Notable organizations and outlets Organizations Abbeville Institute John Birch Society Periodicals and websites The American Conservative Chronicles (magazine) Observer & Review Intercollegiate Review Taki's Magazine See also References Bibliography External links American nationalism Anti-communism in the United States Criticism of neoconservatism Criticism of multiculturalism Non-interventionism Paleoconservatism Reactionary Right-wing populism
0.768145
0.998351
0.766878
Estates of the realm
The estates of the realm, or three estates, were the broad orders of social hierarchy used in Christendom (Christian Europe) from the Middle Ages to early modern Europe. Different systems for dividing society members into estates developed and evolved over time. The best known system is the French Ancien Régime (Old Regime), a three-estate system which was made up of a First Estate of clergy, a Second Estate of titled nobles, and a Third Estate of all other subjects (both peasants and bourgeoisie). In some regions, notably Sweden and Russia, burghers (the urban merchant class) and rural commoners were split into separate estates, creating a four-estate system with rural commoners ranking the lowest as the Fourth Estate. In Norway, the taxpaying classes were considered as one, and, with a very small aristocracy, this class/estate was as powerful as the monarchy itself. In Denmark, however, only owners of large tracts of land had any influence. Furthermore, the non-landowning poor could be left outside the estates, leaving them without political rights. In England, a two-estate system evolved that combined nobility and clergy into one lordly estate with "commons" as the second estate. This system produced the two houses of parliament, the House of Commons and the House of Lords. In southern Germany, a three-estate system of nobility (princes and high clergy), knights, and burghers was used; this system excluded lower clergy and peasants altogether. In Scotland, the Three Estates were the Clergy (First Estate), Nobility (Second Estate), and Shire Commissioners, or "burghers" (Third Estate), representing the bourgeoisie and lower commoners. The Estates made up the Scottish Parliament. Today, the terms three estates and estates of the realm may sometimes be re-interpreted to refer to the modern separation of powers in government into the legislature, administration, and the judiciary. The modern term the fourth estate invokes medieval three-estate systems, and usually refers to some particular force outside that medieval power structure, most commonly the independent press or the mass media. Social mobility During the Middle Ages, advancing to a different social class was uncommon and difficult, and when it did happen, it generally took the form of a gradual increase in status over several generations of a family rather than within a single lifetime. One field in which commoners could appreciably advance within a single lifetime was the Church. The medieval Church was an institution where social mobility was most likely achieved up to a certain level (generally to that of vicar general or abbot/abbess for commoners). Typically, only nobility were appointed to the highest church positions (bishops, archbishops, heads of religious orders, etc.), although low nobility could aspire to the highest church positions. Since clergy could not marry, such mobility was theoretically limited to one generation. Nepotism was common in this period. Dynamics Johan Huizinga observed that "Medieval political speculation is imbued to the marrow with the idea of a structure of society based upon distinct orders". The virtually synonymous terms estate and order designated a great variety of social realities, not at all limited to a class, Huizinga concluded, applying to every social function, every trade, every recognisable grouping. This static view of society was predicated on inherited positions. Commoners were universally considered the lowest order. 
The higher estates' necessary dependency on the commoners' production, however, often further divided the otherwise equal common people into burghers (also known as bourgeoisie) of the realm's cities and towns, and the peasants and serfs of the realm's surrounding lands and villages. A person's estate and position within it were usually inherited from the father and his occupation, similar to a caste within that system. In many regions and realms, there also existed population groups born outside these specifically defined resident estates. Legislative bodies or advisory bodies to a monarch were traditionally grouped along lines of these estates, with the monarch above all three estates. Meetings of the estates of the realm became early legislative and judicial parliaments. Monarchs often sought to legitimize their power by requiring oaths of fealty from the estates. Today, in most countries, the estates have lost all their legal privileges, and are mainly of historical interest. The nobility may be an exception, for instance due to legislation against false titles of nobility. One of the earliest political pamphlets to address these ideas was called "What Is the Third Estate?" It was written by Abbé Emmanuel Joseph Sieyès in January 1789, shortly before the start of the French Revolution. Background After the fall of the Western Roman Empire, numerous geographic and ethnic kingdoms developed among the endemic peoples of Europe, affecting their day-to-day secular lives; along with those, the growing influence of the Catholic Church and its Papacy affected the ethical, moral and religious lives and decisions of all. This led to mutual dependency between the secular and religious powers for guidance and protection, but over time and with the growing power of the kingdoms, competing secular realities increasingly diverged from religious idealism and Church decisions. The new lords of the land identified themselves primarily as warriors, but because new technologies of warfare were expensive, and the fighting men required substantial material resources and considerable leisure to train, these needs had to be filled. The economic and political transformation of the countryside in the period was marked by a large growth in population, agricultural production, technological innovations and urban centers; movements of reform and renewal that attempted to sharpen the distinction between clerical and lay status and power, recognized by the Church, also had their effect. In his book The Three Orders: Feudal Society Imagined, the French medievalist Georges Duby has shown that in the period 1023–1025 the first theorist who justified the division of European society into the three estates of the realm was Gerard of Florennes, the bishop of Cambrai. As a result of the Investiture Controversy of the late 11th and early 12th centuries, the powerful office of Holy Roman Emperor lost much of its religious character and retained a more nominal universal preeminence over other rulers, though it varied. The struggle over investiture and the reform movement also legitimized all secular authorities, partly on the grounds of their obligation to enforce discipline. In the 11th and 12th centuries, thinkers argued that human society consisted of three orders: those who pray, those who fight, and those who labour. The structure of the first order, the clergy, was in place by 1200 and remained singly intact until the religious reformations of the 16th century. 
The second order, those who fight, was the rank of the politically powerful, ambitious, and dangerous. Kings took pains to ensure that it did not resist their authority. The general category of those who labour (specifically, those who were not knightly warriors or nobles) diversified rapidly after the 11th century into the lively and energetic worlds of peasants, skilled artisans, merchants, financiers, lay professionals, and entrepreneurs, which together drove the European economy to its greatest achievements. By the 12th century, most European political thinkers agreed that monarchy was the ideal form of governance. This was because it imitated on earth the model set by God for the universe; it was the form of government of the ancient Hebrews and the Christian Biblical basis, the later Roman Empire, and also the peoples who succeeded Rome after the 4th century. Kingdom of France France under the Ancien Régime (before the French Revolution) divided society into three estates: the First Estate (clergy); the Second Estate (nobility); and the Third Estate (commoners). The king was not part of any estate. First Estate The First Estate comprised the entire clergy and the religious orders, traditionally divided into "higher" and "lower" clergy. Although there was no formally recognized demarcation between the two categories, the upper clergy were, effectively, clerical nobility, from Second Estate families. In the time of Louis XVI, every bishop in France was a nobleman, a situation that had not existed before the 18th century. The "lower clergy" (about equally divided between parish priests, monks, and nuns) constituted about 90 percent of the First Estate, which in 1789 numbered around 130,000 (about 0.5% of the population). Second Estate The Second Estate was the French nobility and (technically, though not in common use) royalty, other than the monarch himself, who stood outside of the system of estates. The Second Estate is traditionally divided into the noblesse d'épée ("nobility of the sword") and the noblesse de robe ("nobility of the robe"), the magisterial class that administered royal justice and civil government. The Second Estate constituted approximately 1.5% of France's population. Under the ancien régime ("old rule/old government", i.e. before the revolution), the Second Estate were exempt from the corvée (forced labor on the roads) and from most other forms of taxation such as the gabelle (salt tax) and, most important, the taille (France's oldest form of direct taxation). This exemption from paying taxes was a major reason for their opposition to political reform. Third Estate The Third Estate comprised all of those who were not members of either of the above, and can be divided into two groups, urban and rural, together making up over 98% of France's population. The urban included wage-labourers. The rural included free peasants (who owned their own land and could be prosperous) and villeins (serfs, or peasants working on a noble's land). The free peasants paid disproportionately high taxes compared to the other Estates and were unhappy because they wanted more rights. In addition, the First and Second Estates relied on the labour of the Third, which made the latter's inferior status all the more glaring. There were an estimated 27 million in the Third Estate when the French Revolution started. They had a hard life of physical labour and food shortages. Most people were born in this group, and most remained in it for their entire lives. 
It was extremely rare for people of this ascribed status to become part of another estate; those who did were usually being rewarded for extraordinary bravery in battle, or entering religious life. A few commoners were able to marry into the Second Estate, but this was a rare occurrence. Estates General The Estates General (not to be confused with a "class of citizen") was a general citizens' assembly that was first called by Philip IV in 1302 and then met intermittently at the request of the King until 1614; after that, it was not called again for over 170 years. In the period leading up to the Estates General of 1789, France was in the grip of an unmanageable public debt. In May 1776, finance minister Turgot was dismissed, after failing to enact reforms. The next year, Jacques Necker, a foreigner, was appointed Controller-General of Finances. (He could not officially be made the finance minister because he was a Protestant.) Drastic inflation and simultaneous scarcity of food created a major famine in the winter of 1788–89. This led to widespread discontent, and produced a group of Third Estate representatives (612 exactly) pressing a comparatively radical set of reforms, much of it in alignment with the goals of acting finance minister Jacques Necker, but very much against the wishes of Louis XVI's court and many of the hereditary nobles who were his Second Estate allies (allies at least to the extent that they were against being taxed themselves and in favour of maintaining high taxation for commoners). Louis XVI called a meeting of the Estates General to deal with the economic problems and quell the growing discontent, but when he could not persuade them to rubber-stamp his 'ideal program', he sought to dissolve the assembly and take legislation into his own hands. However, the Third Estate held out for their right to representation. The lower clergy (and some nobles and upper clergy) eventually sided with the Third Estate, and the King was forced to yield. Thus, the 1789 Estates General meeting became an invitation to revolution. By June, when continued impasses led to further deterioration in relations, the Estates General was reconstituted in a different form, first as the National Assembly (June 17, 1789), seeking a solution independent of the King's management. (The Estates General, under that name and directed by the King, did continue to meet occasionally.) These independently-organized meetings are now seen as the epoch event of the French Revolution, during which – after several more weeks of civil unrest – the body assumed a new status as a revolutionary legislature, the National Constituent Assembly (July 9, 1789). This unitary body of former representatives of the three estates began governing, along with an emergency committee, in the power vacuum after the Bourbon monarchy fled from Paris. Among the Assembly was Maximilien Robespierre, an influential president of the Jacobins who would years later become instrumental in the turbulent period of violence and political upheaval in France known as the Reign of Terror (5 September 1793 – 28 July 1794). Great Britain and Ireland Whilst the estates were never formulated in a way that prevented social mobility, the English (subsequently the British) parliament was formed along the classic estate lines, being composed of the "Lords Spiritual and Temporal, and Commons". The tradition where the Lords Spiritual and Temporal sat separately from the Commons began during the reign of Edward III in the 14th century. 
Notwithstanding the House of Lords Act 1999, the British Parliament still recognises the existence of the three estates: the Commons in the House of Commons, the nobility (Lords Temporal) in the House of Lords, and the clergy in the form of the Church of England bishops also entitled to sit in the upper House as the Lords Spiritual. Scotland The members of the Parliament of Scotland were collectively referred to as the Three Estates (Older Scots: Thre Estaitis), also known as the community of the realm, and until 1690 composed of: the first estate of prelates (bishops and abbots) the second estate of lairds (dukes, earls, parliamentary peers (after 1437) and lay tenants-in-chief) the third estate of burgh commissioners (representatives chosen by the royal burghs) The First Estate was overthrown during the Glorious Revolution and the accession of William III. The Second Estate was then split into two to retain the division into three. A shire commissioner was the closest equivalent of the English office of Member of Parliament, namely a commoner or member of the lower nobility. Because the Parliament of Scotland was unicameral, all members sat in the same chamber, as opposed to the separate English House of Lords and House of Commons. The parliament also had university constituencies (see Ancient universities of Scotland). The system was also adopted by the Parliament of England when James VI ascended to the English throne. It was believed that the universities were affected by the decisions of Parliament and ought therefore to have representation in it. This continued in the Parliament of Great Britain after 1707 and the Parliament of the United Kingdom until 1950. Ireland After the 12th-century Norman invasion of Ireland, administration of the Anglo-Norman Lordship of Ireland was modelled on that of the Kingdom of England. As in England, the Parliament of Ireland evolved out of the Magnum Concilium "great council" summoned by the chief governor of Ireland, attended by the council (curia regis), magnates (feudal lords), and prelates (bishops and abbots). Membership was based on fealty to the king, and the preservation of the king's peace, and so the fluctuating number of autonomous Irish Gaelic kings were outside of the system; they had their own local brehon law taxation arrangements. Elected representatives are first attested in 1297 and continually from the later 14th century. In 1297, counties were first represented by elected knights of the shire (sheriffs had previously represented them). In 1299, towns were represented. From the 14th century, a distinction from the English parliament was that deliberations on church funding were held in Parliament rather than in Convocation. The separation of the Irish House of Lords from the elected Irish House of Commons had developed by the fifteenth century. The clerical proctors elected by the lower clergy of each diocese formed a separate house or estate until 1537, when they were expelled for their opposition to the Irish Reformation. The Parliament of Ireland was dissolved after the Act of Union 1800, and instead Ireland was joined to the Kingdom of Great Britain to form the United Kingdom; 100 Irish MPs instead represented the Third Estate in the House of Commons in London, while a selection of hereditary peers (typically about 28 representative peers) represented the Irish nobility in the House of Lords. 
In addition, four seats as Lords Spiritual were reserved for Church of Ireland clergy: one archbishop and three bishops at a time, alternating place after each legislative session. After the disestablishment of the Church of Ireland in 1871, no more seats were created for Irish bishops. Sweden and Finland The Estates in Sweden (including Finland) and later also Russia's Grand Duchy of Finland were the two higher estates, nobility and clergy, and the two lower estates, burghers and land-owning peasants. Each were free men, and had specific rights and responsibilities, and the right to send representatives to the Riksdag of the Estates. The Riksdag, and later the Diet of Finland was tetracameral: at the Riksdag, each Estate voted as a single body. Since early 18th century, a bill needed the approval of at least three Estates to pass, and constitutional amendments required the approval of all Estates. Prior to the 18th century, the King had the right to cast a deciding vote if the Estates were split evenly. After Russia's conquest of Finland in 1809, the estates in Finland swore an oath to the Emperor in the Diet of Porvoo. A Finnish House of Nobility was codified in 1818 in accordance with the old Swedish law of 1723. However, after the Diet of Porvoo, the Diet of Finland was reconvened only in 1863. In the meantime, for a period of 54 years, the country was governed only administratively. There was also a population outside the estates. Unlike in other areas, people had no "default" estate, and were not peasants unless they came from a land-owner's family. A summary of this division is: Nobility (see Finnish nobility and Swedish nobility) was exempt from tax, had an inherited rank and the right to keep a fief, and had a tradition of military service and government. Nobility was codified in 1280 with the Swedish king granting exemption from taxation (frälse) to land-owners that could equip a cavalryman (or be one themselves) for the king's army. Around 1400, letters patent were introduced, in 1561 the ranks of Count and Baron were added, and in 1625 the House of Nobility was codified as the First Estate of the land. Following Axel Oxenstierna's reform, higher government offices were open only to nobles. However, the nobility still owned only their own property, not the peasants or their land as in much of Europe. Heads of the noble houses were hereditary members of the assembly of nobles. The Nobility is divided into titled nobility (counts and barons) and lower nobility. Until the 18th century, the lower nobility was in turn divided into Knights and Esquires such that each of the three classes would first vote internally, giving one vote per class in the assembly. This resulted in great political influence for the higher nobility. Clergy, or priests, were exempt from tax, and collected tithes for the church. After the Swedish Reformation, the church became Lutheran. In later centuries, the estate included teachers of universities and certain state schools. The estate was governed by the state church which consecrated its ministers and appointed them to positions with a vote in choosing diet representatives. Burghers were city-dwellers, tradesmen and craftsmen. Trade was allowed only in the cities when the mercantilistic ideology had got the upper hand, and the burghers had the exclusive right to conduct commerce within the framework of guilds. Entry to this Estate was controlled by the autonomy of the towns themselves. 
Peasants were allowed to sell their produce within the city limits, but any further trade, particularly foreign trade, was allowed only for burghers. In order for a settlement to become a city, a royal charter granting market right was required, and foreign trade required royally chartered staple port rights. After the annexation of Finland into Imperial Russia in 1809, mill-owners and other proto-industrialists would gradually be included in this estate. Peasants were land-owners of land-taxed farms and their families (comparable in status to yeomen in England), which represented the majority in medieval times. Since most of the population were independent farmer families until the 19th century, not serfs nor villeins, there is a remarkable difference in tradition compared to other European countries. Entry was controlled by ownership of farmland, which was not generally for sale but a hereditary property. After 1809, Swedish tenants renting a large enough farm (ten times larger than what was required of peasants owning their own farm) were included as well as non-nobility owning tax-exempt land. Their representatives to the Diet were elected indirectly: each municipality sent electors to elect the representative of an electoral district. To no estate belonged propertyless cottagers, villeins, tenants of farms owned by others, farmhands, servants, some lower administrative workers, rural craftsmen, travelling salesmen, vagrants, and propertyless and unemployed people (who sometimes lived in strangers' houses). To reflect how the people belonging to the estates saw them, the Finnish word for "obscene", säädytön, has the literal meaning "estateless". They had no political rights and could not vote. Their mobility was severely limited by the policy of "legal protection" (Finnish: laillinen suojelu): every estateless person had to be employed by a taxed citizen from the estates, or they could be charged with vagrancy and sentenced to forced labor. In Finland, this policy lasted until 1883. In Sweden, the Riksdag of the Estates existed until it was replaced with a bicameral Riksdag in 1866, which gave political rights to anyone with a certain income or property. Nevertheless, many of the leading politicians of the 19th century continued to be drawn from the old estates, in that they were either noblemen themselves, or represented agricultural and urban interests. Ennoblements continued even after the estates had lost their political importance, with the last ennoblement of explorer Sven Hedin taking place in 1902; this practice was formally abolished with the adoption of the new Constitution January 1, 1975, while the status of the House of Nobility continued to be regulated in law until 2003. In Finland, this legal division existed until 1906, still drawing on the Swedish constitution of 1772. However, at the start of the 20th century most of the population did not belong to any Estate and had no political representation. A particularly large class were the rent farmers, who did not own the land they cultivated but had to work in the land-owner's farm to pay their rent (unlike Russia, there were no slaves or serfs.) Furthermore, the industrial workers living in the city were not represented by the four-estate system. The political system was reformed as a result of the Finnish general strike of 1905, with the last Diet instituting a new constitutional law to create the modern parliamentary system, ending the political privileges of the estates. 
The post-independence constitution of 1919 forbade ennoblement, and all tax privileges were abolished in 1920. The privileges of the estates were officially and finally abolished in 1995, although in legal practice, the privileges had long been unenforceable. As in Sweden, the nobility has not been officially abolished and records of nobility are still voluntarily maintained by the Finnish House of Nobility. In Finland, it is still illegal and punishable by jail time (up to one year) to defraud into marriage by declaring a false name or estate (Rikoslaki 18 luku § 1/Strafflagen 18 kap. § 1). Low Countries The Low Countries, which until the late sixteenth century consisted of several counties, prince bishoprics, duchies etc. in the area that is now modern Belgium, Luxembourg and the Netherlands, had no States General until 1464, when Duke Philip of Burgundy assembled the first States General in Bruges. Later in the 15th and 16th centuries, Brussels became the place where the States General assembled. On these occasions, deputies from the States of the various provinces (as the counties, prince-bishoprics and duchies were called) asked for more liberties. For this reason, the States General were not assembled very often. As a consequence of the Union of Utrecht in 1579 and the events that followed afterwards, the States General declared that they no longer obeyed King Philip II of Spain, who was also overlord of the Netherlands. After the reconquest of the southern Netherlands (roughly Belgium and Luxemburg), the States General of the Dutch Republic first assembled permanently in Middelburg, and in The Hague from 1585 onward. Without a king to rule the country, the States General became the sovereign power. It was the level of government where all things were dealt with that were of concern to all the seven provinces that became part of the Republic of the United Netherlands. During that time, the States General were formed by representatives of the States (i.e. provincial parliaments) of the seven provinces. In each States (a plurale tantum) sat representatives of the nobility and the cities (the clergy were no longer represented; in Friesland the peasants were indirectly represented by the Grietmannen). In the Southern Netherlands, the last meetings of the States General loyal to the Habsburgs took place in the Estates General of 1600 and the Estates General of 1632. As a government, the States General of the Dutch Republic were abolished in 1795. A new parliament was created, called Nationale Vergadering (National Assembly). It no longer consisted of representatives of the States, let alone the Estates: all men were considered equal under the 1798 Constitution. Eventually, the Netherlands became part of the French Empire under Napoleon (1810: La Hollande est reunie à l'Empire). After regaining independence in November 1813, the name "States General" was resurrected for a legislature constituted in 1814 and elected by the States-Provincial. In 1815, when the Netherlands were united with Belgium and Luxemburg, the States General were divided into two chambers: the First Chamber and the Second Chamber. The members of the First Chamber were appointed for life by the King, while the members of the Second Chamber were elected by the members of the States Provincial. 
The States General resided in The Hague and Brussels in alternate years until 1830, when, as a result of the Belgian Revolution, The Hague became once again the sole residence of the States General, Brussels instead hosting the newly founded Belgian Parliament. From 1848 on, the Dutch Constitution provides that members of the Second Chamber be elected by the people (at first only by a limited portion of the male population; universal male and female suffrage exists since 1919), while the members of the First Chamber are chosen by the members of the States Provincial. As a result, the Second Chamber became the most important. The First Chamber is also called Senate. This however, is not a term used in the Constitution. Occasionally, the First and Second Chamber meet in a Verenigde Vergadering (Joint Session), for instance on Prinsjesdag, the annual opening of the parliamentary year, and when a new king is inaugurated. Holy Roman Empire The Holy Roman Empire had the Imperial Diet (Reichstag). The clergy was represented by the independent prince-bishops, prince-archbishops and prince-abbots of the many monasteries. The nobility consisted of independent aristocratic rulers: secular prince-electors, kings, dukes, margraves, counts and others. Burghers consisted of representatives of the independent imperial cities. Many peoples whose territories within the Holy Roman Empire had been independent for centuries had no representatives in the Imperial Diet, and this included the Imperial Knights and independent villages. The power of the Imperial Diet was limited, despite efforts of centralization. Large realms of the nobility or clergy had estates of their own that could wield great power in local affairs. Power struggles between ruler and estates were comparable to similar events in the history of the British and French parliaments. The Swabian League, a significant regional power in its part of Germany during the 15th Century, also had its own kind of Estates, a governing Federal Council comprising three Colleges: those of Princes, Cities, and Knights. Russian Empire In the late Russian Empire, the estates were called sosloviyes. The four major estates were: nobility (dvoryanstvo), clergy, rural dwellers, and urban dwellers, with a more detailed stratification therein. The division in estates was of mixed nature: traditional, occupational, as well as formal: for example, voting in Duma was carried out by estates. Russian Empire Census recorded the reported estate of a person. Kingdom of Portugal In the Medieval Kingdom of Portugal, the "Cortes" was an assembly of representatives of the estates of the realm – the nobility, clergy and bourgeoisie. It was called and dismissed by the King of Portugal at will, at a place of his choosing. Cortes which brought all three estates together are sometimes distinguished as "Cortes-Gerais" (General Courts), in contrast to smaller assemblies which brought only one or two estates, to negotiate a specific point relevant only to them. Principality of Catalonia The Parliament of Catalonia was first established in 1283 as the Catalan Courts (Catalan: Corts Catalanes), according to American historian Thomas Bisson, and it has been considered by several historians as a model of medieval parliament. For instance, English historian of constitutionalism Charles Howard McIlwain wrote that the General Court of Catalonia, during the 14th century, had a more defined organization and met more regularly than the parliaments of England or France. 
The roots of the parliament institution in Catalonia are in the Peace and Truce Assemblies (assemblees de pau i treva) that started in the 11th century. The members of the Catalan Courts were organized in the Three Estates (Catalan: Tres Estats or Tres Braços): the "military estate" (braç militars) with representatives of the feudal nobility the "ecclesiastical estate" (braç eclesiàstic) with representatives of the religious hierarchy the "royal estate" (braç reial or braç popular) with representatives of the free municipalities under royal privilege The parliamentary institution was abolished in 1716, together with the rest of institutions of the Principality of Catalonia, after the War of the Spanish Succession. See also Churches Militant, Penitent, and Triumphant Communalism before 1800 Four occupations (Asian equivalents) Fourth Estate Fifth Estate Sphere sovereignty Trifunctional hypothesis Varna (Hinduism) What Is the Third Estate? Location-specific Prussian estates Estates of the Netherlands Antilles Estates of Brittany The Canterbury Tales (the division of society into three estates is one of the key themes) A Satire of the Three Estates General Honorary males Social class Caste Notes References Steven Kreis lecture on "The Origins of the French Revolution" Notes on France and the Old Regime Giles Constable. "The Orders of Society", chap. 3 of Three Studies in Medieval Religious and Social Thought. Cambridge–New York: Cambridge University Press, 1995, pp. 249–360. Bernhard Jussen, ed. Ordering Medieval Society: Perspectives on Intellectual and Practical Modes of Shaping Social Relations. Trans. by Pamela Selwyn. Philadelphia: University of Pennsylvania Press, 2001. Jackson J. Spielvogel, Western Civilization, West Publishing Co. Minneapolis, 1994 for the English-language version of the quote from Abbé Sieyès, quoted at . Abbé Sieyès : Qu'est-ce que le Tiers-Etat ? Tout. Qu'a-t-il été jusque-là dans l'ordre politique ? Rien. Que demande-t-il ? A être quelque chose. for French-language original of this quotation. Michael P. Fitzsimmons, The Night the Old Regime Ended: August 4, 1789 and the French Revolution, Pennsylvania State University Press, 2003. , quoted and paraphrased at H-France Reviews. Konstantin M. Langmaier: Felix Hemmerli und der Dialog über den Adel und den Bauern (De nobilitate et rusticitate dialogus). Seine Bedeutung für die Erforschung der Mentalität des Adels im 15. Jahrhundert, in: Zeitschrift für die Geschichte des Oberrheins 166, 2018 (PDF) Felix Hemmerli und der Dialog über den Adel und den Bauern External links Constitutional law Feudalism Government institutions Kingdom of France Medieval society Political history of the Ancien Régime Religion and politics Social divisions
0.768806
0.997418
0.766821
The Structure of Scientific Revolutions
The Structure of Scientific Revolutions is a book about the history of science by the philosopher Thomas S. Kuhn. Its publication was a landmark event in the history, philosophy, and sociology of science. Kuhn challenged the then prevailing view of progress in science, in which progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity and cumulative progress, referred to as periods of "normal science", were interrupted by periods of revolutionary science. The accumulation of "anomalies" precipitates revolutions in science and leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, alter the rules of the game and change the "map" directing new research. For example, Kuhn's analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its "paradigms" and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn's insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar in reaction to his work. Kuhn addressed concerns in the 1969 postscript to the second edition. For some commentators, The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn's introduction of an irrational element into the heart of its greatest achievements. History The Structure of Scientific Revolutions was first published as a monograph in the International Encyclopedia of Unified Science, then as a book by University of Chicago Press in 1962. In 1969, Kuhn added a postscript to the book in which he replied to critical responses to the first edition. A 50th Anniversary Edition (with an introductory essay by Ian Hacking) was published by the University of Chicago Press in April 2012. Kuhn dated the genesis of his book to 1947, when he was a graduate student at Harvard University and had been asked to teach a science class for humanities undergraduates with a focus on historical case studies. Kuhn later commented that until then, "I'd never read an old document in science." Aristotle's Physics was astonishingly unlike Isaac Newton's work in its concepts of matter and motion. Kuhn wrote: "as I was reading him, Aristotle appeared not only ignorant of mechanics, but a dreadfully bad physical scientist as well. About motion, in particular, his writings seemed to me full of egregious errors, both of logic and of observation." This was in apparent contradiction with the fact that Aristotle was a brilliant mind. While perusing Aristotle's Physics, Kuhn formed the view that in order to properly appreciate Aristotle's reasoning, one must be aware of the scientific conventions of the time. Kuhn concluded that Aristotle's concepts were not "bad Newton," just different. This insight was the foundation of The Structure of Scientific Revolutions. Central ideas regarding the process of scientific investigation and discovery had been anticipated by Ludwik Fleck in his 1935 monograph Genesis and Development of a Scientific Fact. 
Fleck had developed the first system of the sociology of scientific knowledge. He claimed that the exchange of ideas led to the establishment of a thought collective, which, when developed sufficiently, separated the field into esoteric (professional) and exoteric (laymen) circles. Kuhn wrote the foreword to the 1979 edition of Fleck's book, noting that he read it in 1950 and was reassured that someone "saw in the history of science what I myself was finding there."

Kuhn was not confident about how his book would be received. Harvard University had denied his tenure a few years prior. By the mid-1980s, however, his book had achieved blockbuster status. When Kuhn's book came out in the early 1960s, "structure" was an intellectually popular word in many fields in the humanities and social sciences, including linguistics and anthropology, appealing in its idea that complex phenomena could reveal or be studied through basic, simpler structures. Kuhn's book contributed to that idea.

One theory to which Kuhn replies directly is Karl Popper's "falsificationism," which stresses falsifiability as the most important criterion for distinguishing between that which is scientific and that which is unscientific. Kuhn also addresses verificationism, a philosophical movement that emerged in the 1920s among logical positivists. The verifiability principle claims that meaningful statements must be supported by empirical evidence or logical requirements.

Synopsis

Basic approach

Kuhn's approach to the history and philosophy of science focuses on conceptual issues like the practice of normal science, the influence of historical events, the emergence of scientific discoveries, the nature of scientific revolutions and progress through scientific revolutions. What sorts of intellectual options and strategies were available to people during a given period? What types of lexicons and terminology were known and employed during certain epochs? Stressing the importance of not attributing traditional thought to earlier investigators, Kuhn's book argues that the evolution of scientific theory does not emerge from the straightforward accumulation of facts, but rather from a set of changing intellectual circumstances and possibilities. Kuhn did not see scientific theory as proceeding linearly from an objective, unbiased accumulation of all available data, but rather as paradigm-driven.

Historical examples of chemistry

Kuhn explains his ideas using examples taken from the history of science. For instance, eighteenth-century scientists believed that homogeneous solutions were chemical compounds. Therefore, a combination of water and alcohol was generally classified as a compound. Nowadays it is considered to be a solution, but there was no reason then to suspect that it was not a compound. Water and alcohol would not separate spontaneously, nor would they separate completely upon distillation (they form an azeotrope). Water and alcohol can be combined in any proportion. Under this paradigm, scientists believed that chemical reactions (such as the combination of water and alcohol) did not necessarily occur in fixed proportion. This belief was ultimately overturned by Dalton's atomic theory, which asserted that atoms can only combine in simple, whole-number ratios. Under this new paradigm, any reaction which did not occur in fixed proportion could not be a chemical process. This type of world-view transition among the scientific community exemplifies Kuhn's paradigm shift.
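To make the contrast concrete, here is a minimal illustration in modern chemical notation (the notation and figures are standard chemistry, not taken from Kuhn's text). A genuine compound combines its constituents in a fixed ratio, whereas a solution such as water and alcohol can be mixed in any proportion:

\[
2\,\mathrm{H}_2 + \mathrm{O}_2 \;\rightarrow\; 2\,\mathrm{H}_2\mathrm{O}
\qquad \text{(compound: mass ratio of O to H fixed at roughly } 8:1\text{)}
\]
\[
x\,\mathrm{H}_2\mathrm{O} + (1-x)\,\mathrm{C}_2\mathrm{H}_5\mathrm{OH}
\qquad \text{(solution: any mole fraction } 0 < x < 1\text{)}
\]

Under Dalton's paradigm, the second case fails the whole-number-ratio test and is therefore reclassified: it cannot be a chemical compound, only a mixture.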
Copernican Revolution A famous example of a revolution in scientific thought is the Copernican Revolution. In Ptolemy's school of thought, cycles and epicycles (with some additional concepts) were used for modeling the movements of the planets in a cosmos that had a stationary Earth at its center. As accuracy of celestial observations increased, complexity of the Ptolemaic cyclical and epicyclical mechanisms had to increase to maintain the calculated planetary positions close to the observed positions. Copernicus proposed a cosmology in which the Sun was at the center and the Earth was one of the planets revolving around it. For modeling the planetary motions, Copernicus used the tools he was familiar with, namely the cycles and epicycles of the Ptolemaic toolbox. Yet Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, his model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility. Kuhn illustrates how a paradigm shift later became possible when Galileo Galilei introduced his new ideas concerning motion. Intuitively, when an object is set in motion, it soon comes to a halt. A well-made cart may travel a long distance before it stops, but unless something keeps pushing it, it will eventually stop moving. Aristotle had argued that this was presumably a fundamental property of nature: for the motion of an object to be sustained, it must continue to be pushed. Given the knowledge available at the time, this represented sensible, reasonable thinking. Galileo put forward a bold alternative conjecture: suppose, he said, that we always observe objects coming to a halt simply because some friction is always occurring. Galileo had no equipment with which to objectively confirm his conjecture, but he suggested that without any friction to slow down an object in motion, its inherent tendency is to maintain its speed without the application of any additional force. The Ptolemaic approach of using cycles and epicycles was becoming strained: there seemed to be no end to the mushrooming growth in complexity required to account for the observable phenomena. Johannes Kepler was the first person to abandon the tools of the Ptolemaic paradigm. He started to explore the possibility that the planet Mars might have an elliptical orbit rather than a circular one. Clearly, the angular velocity could not be constant, but it proved very difficult to find the formula describing the rate of change of the planet's angular velocity. After many years of calculations, Kepler arrived at what we now know as the law of equal areas. Galileo's conjecture was merely that – a conjecture. So was Kepler's cosmology. But each conjecture increased the credibility of the other, and together, they changed the prevailing perceptions of the scientific community. Later, Newton showed that Kepler's three laws could all be derived from a single theory of motion and planetary motion. Newton solidified and unified the paradigm shift that Galileo and Kepler had initiated. Coherence One of the aims of science is to find models that will account for as many observations as possible within a coherent framework. 
Together, Galileo's rethinking of the nature of motion and Keplerian cosmology represented a coherent framework that was capable of rivaling the Aristotelian/Ptolemaic framework. Once a paradigm shift has taken place, the textbooks are rewritten. Often the history of science too is rewritten, being presented as an inevitable process leading up to the current, established framework of thought. There is a prevalent belief that all hitherto-unexplained phenomena will in due course be accounted for in terms of this established framework. Kuhn states that scientists spend most (if not all) of their careers in a process of puzzle-solving. Their puzzle-solving is pursued with great tenacity, because the previous successes of the established paradigm tend to generate great confidence that the approach being taken guarantees that a solution to the puzzle exists, even though it may be very hard to find. Kuhn calls this process normal science. As a paradigm is stretched to its limits, anomalies – failures of the current paradigm to take into account observed phenomena – accumulate. Their significance is judged by the practitioners of the discipline. Some anomalies may be dismissed as errors in observation, others as merely requiring small adjustments to the current paradigm that will be clarified in due course. Some anomalies resolve themselves spontaneously, having increased the available depth of insight along the way. But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm until a credible alternative is available; to lose faith in the solvability of the problems would in effect mean ceasing to be a scientist. In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Kuhn calls revolutionary science, exploring alternatives to long-held, obvious-seeming assumptions. Occasionally this generates a rival to the established framework of thought. The new candidate paradigm will appear to be accompanied by numerous anomalies, partly because it is still so new and incomplete. The majority of the scientific community will oppose any conceptual change, and, Kuhn emphasizes, so they should. To fulfill its potential, a scientific community needs to contain both individuals who are bold and individuals who are conservative. There are many examples in the history of science in which confidence in the established frame of thought was eventually vindicated. Kuhn cites, as an example, that Alexis Clairaut, in 1750, was able to account accurately for the precession of the Moon's orbit using Newtonian theory, after sixty years of failed attempts. It is almost impossible to predict whether the anomalies in a candidate for a new paradigm will eventually be resolved. Those scientists who possess an exceptional ability to recognize a theory's potential will be the first whose preference is likely to shift in favour of the challenging paradigm. There typically follows a period in which there are adherents of both paradigms. In time, if the challenging paradigm is solidified and unified, it will replace the old paradigm, and a paradigm shift will have occurred. Phases Kuhn explains the process of scientific change as the result of various phases of paradigm change. Phase 1 – It exists only once and is the pre-paradigm phase, in which there is no consensus on any particular theory. 
This phase is characterized by several incompatible and incomplete theories. Consequently, most scientific inquiry takes the form of lengthy books, as there is no common body of facts that may be taken for granted. When the actors in the pre-paradigm community eventually gravitate to one of these conceptual frameworks and ultimately to a widespread consensus on the appropriate choice of methods, terminology and on the kinds of experiment that are likely to contribute to increased insights, the old schools of thought disappear. The new paradigm leads to a more rigid definition of the research field, and those who are reluctant or unable to adapt are isolated or have to join rival groups.

Phase 2 – Normal science begins, in which puzzles are solved within the context of the dominant paradigm. As long as there is consensus within the discipline, normal science continues. Over time, progress in normal science may reveal anomalies, facts that are difficult to explain within the context of the existing paradigm. While usually these anomalies are resolved, in some cases they may accumulate to the point where normal science becomes difficult and where weaknesses in the old paradigm are revealed.

Phase 3 – If the paradigm proves chronically unable to account for anomalies, the community enters a crisis period. Crises are often resolved within the context of normal science. However, after significant efforts of normal science within a paradigm fail, science may enter the next phase.

Phase 4 – Paradigm shift, or scientific revolution, is the phase in which the underlying assumptions of the field are reexamined and a new paradigm is established.

Phase 5 – Post-revolution, the new paradigm's dominance is established and so scientists return to normal science, solving puzzles within the new paradigm.

A science may go through these cycles repeatedly, though Kuhn notes that it is a good thing for science that such shifts do not occur often or easily.

Incommensurability

According to Kuhn, the scientific paradigms preceding and succeeding a paradigm shift are so different that their theories are incommensurable—the new paradigm cannot be proven or disproven by the rules of the old paradigm, and vice versa. (A later interpretation by Kuhn of "commensurable" versus "incommensurable" was as a distinction between "languages", namely, that statements in commensurable languages were translatable fully from one to the other, while in incommensurable languages, strict translation is not possible.) The paradigm shift does not merely involve the revision or transformation of an individual theory, it changes the way terminology is defined, how the scientists in that field view their subject, and, perhaps most significantly, what questions are regarded as valid, and what rules are used to determine the truth of a particular theory. The new theories were not, as the scientists had previously thought, just extensions of old theories, but were instead completely new world views. Such incommensurability exists not just before and after a paradigm shift, but also in the periods in which conflicting paradigms compete. It is simply not possible, according to Kuhn, to construct an impartial language that can be used to perform a neutral comparison between conflicting paradigms, because the very terms used are integral to the respective paradigms, and therefore have different connotations in each paradigm.
The advocates of mutually exclusive paradigms are in a difficult position: "Though each may hope to convert the other to his way of seeing science and its problems, neither may hope to prove his case. The competition between paradigms is not the sort of battle that can be resolved by proofs." Scientists subscribing to different paradigms end up talking past one another. Kuhn states that the probabilistic tools used by verificationists are inherently inadequate for the task of deciding between conflicting theories, since they belong to the very paradigms they seek to compare. Similarly, observations that are intended to falsify a statement will fall under one of the paradigms they are supposed to help compare, and will therefore also be inadequate for the task. According to Kuhn, the concept of falsifiability is unhelpful for understanding why and how science has developed as it has. In the practice of science, scientists will only consider the possibility that a theory has been falsified if an alternative theory is available that they judge credible. If there is not, scientists will continue to adhere to the established conceptual framework. If a paradigm shift has occurred, the textbooks will be rewritten to state that the previous theory has been falsified. Kuhn further developed his ideas regarding incommensurability in the 1980s and 1990s. In his unpublished manuscript The Plurality of Worlds, Kuhn introduces the theory of kind concepts: sets of interrelated concepts that are characteristic of a time period in a science and differ in structure from the modern analogous kind concepts. These different structures imply different "taxonomies" of things and processes, and this difference in taxonomies constitutes incommensurability. This theory is strongly naturalistic and draws on developmental psychology to "found a quasi-transcendental theory of experience and of reality." Exemplar Kuhn introduced the concept of an exemplar in a postscript to the second edition of The Structure of Scientific Revolutions (1970). He noted that he was substituting the term "exemplars" for "paradigm", meaning the problems and solutions that students of a subject learn from the beginning of their education. For example, physicists might have as exemplars the inclined plane, Kepler's laws of planetary motion, or instruments like the calorimeter. According to Kuhn, scientific practice alternates between periods of normal science and revolutionary science. During periods of normalcy, scientists tend to subscribe to a large body of interconnecting knowledge, methods, and assumptions which make up the reigning paradigm (see paradigm shift). Normal science presents a series of problems that are solved as scientists explore their field. The solutions to some of these problems become well known and are the exemplars of the field. Those who study a scientific discipline are expected to know its exemplars. There is no fixed set of exemplars, but for a physicist today it would probably include the harmonic oscillator from mechanics and the hydrogen atom from quantum mechanics. Kuhn on scientific progress The first edition of The Structure of Scientific Revolutions ended with a chapter titled "Progress through Revolutions", in which Kuhn spelled out his views on the nature of scientific progress. 
Since he considered problem solving (or "puzzle solving") to be a central element of science, Kuhn held that, for a new candidate paradigm to be accepted by a scientific community, it must seem to resolve some outstanding and generally recognized problem, and it must promise to preserve a large part of the concrete problem-solving ability that science had already accumulated.

In the second edition, Kuhn added a postscript in which he elaborated his ideas on the nature of scientific progress. He described a thought experiment involving an observer who has the opportunity to inspect an assortment of theories, each corresponding to a single stage in a succession of theories. What if the observer is presented with these theories without any explicit indication of their chronological order? Kuhn anticipates that it will be possible to reconstruct their chronology on the basis of the theories' scope and content, because the more recent a theory is, the better it will be as an instrument for solving the kinds of puzzle that scientists aim to solve. Kuhn remarked: "That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress."

Influence and reception

The Structure of Scientific Revolutions has been credited with producing the kind of "paradigm shift" Kuhn discussed. Since the book's publication, over one million copies have been sold, including translations into sixteen different languages. In 1987, it was reported to be the twentieth-century book most frequently cited in the period 1976–1983 in the arts and the humanities.

Philosophy

The first extensive review of The Structure of Scientific Revolutions was authored by Dudley Shapere, a philosopher who interpreted Kuhn's work as a continuation of the anti-positivist sentiment of other philosophers of science, including Paul Feyerabend and Norwood Russell Hanson. Shapere noted the book's influence on the philosophical landscape of the time, calling it "a sustained attack on the prevailing image of scientific change as a linear process of ever-increasing knowledge". According to the philosopher Michael Ruse, Kuhn discredited the ahistorical and prescriptive approach to the philosophy of science of Ernest Nagel's The Structure of Science (1961). Kuhn's book sparked a historicist "revolt against positivism" (the so-called "historical turn in philosophy of science" which looked to the history of science as a source of data for developing a philosophy of science), although this may not have been Kuhn's intention; in fact, he had already approached the prominent positivist Rudolf Carnap about having his work published in the International Encyclopedia of Unified Science. The philosopher Robert C. Solomon noted that Kuhn's views have often been suggested to have an affinity to those of Georg Wilhelm Friedrich Hegel. Kuhn's view of scientific knowledge, as expounded in The Structure of Scientific Revolutions, has been compared to the views of the philosopher Michel Foucault.

Sociology

The first field to claim descent from Kuhn's ideas was the sociology of scientific knowledge. Sociologists working within this new field, including Harry Collins and Steven Shapin, used Kuhn's emphasis on the role of non-evidential community factors in scientific development to argue against logical empiricism, which discouraged inquiry into the social aspects of scientific communities. These sociologists expanded upon Kuhn's ideas, arguing that scientific judgment is determined by social factors, such as professional interests and political ideologies. Barry Barnes detailed the connection between the sociology of scientific knowledge and Kuhn in his book T. S. Kuhn and Social Science.
In particular, Kuhn's ideas regarding science occurring within an established framework informed Barnes's own ideas regarding finitism, a theory wherein meaning is continuously changed (even during periods of normal science) by its usage within the social framework. The Structure of Scientific Revolutions elicited a number of reactions from the broader sociological community. Following the book's publication, some sociologists expressed the belief that the field of sociology had not yet developed a unifying paradigm, and should therefore strive towards homogenization. Others argued that the field was in the midst of normal science, and speculated that a new revolution would soon emerge. Some sociologists, including John Urry, doubted that Kuhn's theory, which addressed the development of natural science, was necessarily relevant to sociological development.

Economics

Developments in the field of economics are often expressed and legitimized in Kuhnian terms. For instance, neoclassical economists have claimed "to be at the second stage [normal science], and to have been there for a very long time – since Adam Smith, according to some accounts (Hollander, 1987), or Jevons according to others (Hutchison, 1978)". In the 1970s, post-Keynesian economists denied the coherence of the neoclassical paradigm, claiming that their own paradigm would ultimately become dominant. While perhaps less explicit, Kuhn's influence remains apparent in recent economics. For instance, the abstract of Olivier Blanchard's paper "The State of Macro" (2008) opens by describing how, after a period in which macroeconomics resembled a battlefield of competing schools, a largely shared vision of fluctuations and of methodology had emerged.

Political science

In 1974, The Structure of Scientific Revolutions was ranked as the second most frequently used book in political science courses focused on scope and methods. In particular, Kuhn's theory has been used by political scientists to critique behavioralism, which claims that accurate political statements must be both testable and falsifiable. The book also proved popular with political scientists embroiled in debates about whether a set of formulations put forth by a political scientist constituted a theory, or something else. The changes that occur in politics, society and business are often expressed in Kuhnian terms, however poor their parallel with the practice of science may seem to scientists and historians of science. The terms "paradigm" and "paradigm shift" have become such notorious clichés and buzzwords that they are sometimes viewed as effectively devoid of content.

Criticisms

The Structure of Scientific Revolutions was soon criticized by Kuhn's colleagues in the history and philosophy of science. In 1965, a special symposium on the book was held at an International Colloquium on the Philosophy of Science that took place at Bedford College, London, and was chaired by Karl Popper. The symposium led to the publication of the symposium's presentations plus other essays, most of them critical, which eventually appeared in the influential volume Criticism and the Growth of Knowledge (1970). Kuhn expressed the opinion that his critics' readings of his book were so inconsistent with his own understanding of it that he was "tempted to posit the existence of two Thomas Kuhns," one the author of his book, the other the individual who had been criticized in the symposium by Professors Popper, Feyerabend, Lakatos, Toulmin and Watkins. A number of the included essays question the existence of normal science. In his essay, Feyerabend suggests that Kuhn's conception of normal science fits organized crime as well as it does science.
Popper expresses distaste for the entire premise of Kuhn's book, writing, "the idea of turning for enlightenment concerning the aims of science, and its possible progress, to sociology or to psychology (or ... to the history of science) is surprising and disappointing."

Concept of paradigm

Stephen Toulmin defined paradigm as "the set of common beliefs and agreements shared between scientists about how problems should be understood and addressed". In his 1972 work, Human Understanding, he argued that a more realistic picture of science than that presented in The Structure of Scientific Revolutions would admit the fact that revisions in science take place much more frequently, and are much less dramatic, than can be explained by the model of revolution/normal science. In Toulmin's view, such revisions occur quite often during periods of what Kuhn would call "normal science". For Kuhn to explain such revisions in terms of the non-paradigmatic puzzle solutions of normal science, he would need to delineate what is perhaps an implausibly sharp distinction between paradigmatic and non-paradigmatic science.

Incommensurability of paradigms

In a series of texts published in the early 1970s, Carl R. Kordig asserted a position somewhere between that of Kuhn and the older philosophy of science. His criticism of the Kuhnian position was that the incommensurability thesis was too radical, and that this made it impossible to explain the confrontation of scientific theories that actually occurs. According to Kordig, it is in fact possible to admit the existence of revolutions and paradigm shifts in science while still recognizing that theories belonging to different paradigms can be compared and confronted on the plane of observation. Those who accept the incommensurability thesis do not do so because they admit the discontinuity of paradigms, but because they attribute a radical change in meanings to such shifts. Kordig maintains that there is a common observational plane. For example, when Kepler and Tycho Brahe are trying to explain the relative variation of the distance of the sun from the horizon at sunrise, both see the same thing (the same configuration is focused on the retina of each individual). This is just one example of the fact that "rival scientific theories share some observations, and therefore some meanings". Kordig suggests that with this approach, he is not reintroducing the distinction between observations and theory in which the former is assigned a privileged and neutral status, but that it is possible to affirm more simply the fact that, even if no sharp distinction exists between theory and observations, this does not imply that there are no comprehensible differences at the two extremes of this polarity. At a secondary level, for Kordig there is a common plane of inter-paradigmatic standards or shared norms that permit the effective confrontation of rival theories.

In 1973, Hartry Field published an article that also sharply criticized Kuhn's idea of incommensurability. In particular, he took issue with a passage in which Kuhn discusses how the meanings of scientific terms shift across theories. Field takes this idea of incommensurability between the same terms in different theories one step further. Instead of attempting to identify a persistence of the reference of terms in different theories, Field's analysis emphasizes the indeterminacy of reference within individual theories. Field takes the example of the term "mass", and asks what exactly "mass" means in modern post-relativistic physics. He finds that there are at least two different definitions:

Relativistic mass: the mass of a particle is equal to the total energy of the particle divided by the speed of light squared. Since the total energy of a particle in relation to one system of reference differs from the total energy in relation to other systems of reference, while the speed of light remains constant in all systems, it follows that the mass of a particle has different values in different systems of reference.

"Real" mass: the mass of a particle is equal to the non-kinetic energy of a particle divided by the speed of light squared. Since non-kinetic energy is the same in all systems of reference, and the same is true of the speed of light, it follows that the mass of a particle has the same value in all systems of reference.
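Stated in modern notation (a sketch using standard symbols, not drawn from Field's article), the two definitions amount to:

\[
m_{\mathrm{rel}} = \frac{E_{\mathrm{total}}}{c^{2}},
\qquad
m_{\mathrm{real}} = \frac{E_{\mathrm{total}} - E_{\mathrm{kinetic}}}{c^{2}} = \frac{E_{\mathrm{rest}}}{c^{2}}.
\]

Because the total energy is frame-dependent while the rest energy is not, the first quantity varies between systems of reference and the second does not, which is exactly the contrast Field exploits.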
Projecting this distinction backwards in time onto Newtonian dynamics, we can formulate the following two hypotheses:

HR: the term "mass" in Newtonian theory denotes relativistic mass.

Hp: the term "mass" in Newtonian theory denotes "real" mass.

According to Field, it is impossible to decide which of these two affirmations is true. Prior to the theory of relativity, the term "mass" was referentially indeterminate. But this does not mean that the term "mass" did not have a different meaning than it now has. The problem is not one of meaning but of reference. The reference of such terms as mass is only partially determined: we do not really know how Newton intended his use of this term to be applied. As a consequence, neither of the two terms fully denotes (refers). It follows that it is improper to maintain that a term has changed its reference during a scientific revolution; it is more appropriate to describe terms such as "mass" as "having undergone a denotational refinement".

In 1974, Donald Davidson objected that the concept of incommensurable scientific paradigms competing with each other is logically inconsistent. In his article, Davidson goes well beyond the semantic version of the incommensurability thesis: to make sense of the idea of a language independent of translation requires a distinction between conceptual schemes and the content organized by such schemes. But, Davidson argues, no coherent sense can be made of the idea of a conceptual scheme, and therefore no sense may be attached to the idea of an untranslatable language.

Incommensurability and perception

The close connection between the interpretationalist hypothesis and a holistic conception of beliefs is at the root of the notion of the dependence of perception on theory, a central concept in The Structure of Scientific Revolutions. Kuhn maintained that the perception of the world depends on how the percipient conceives the world: two scientists who witness the same phenomenon and are steeped in two radically different theories will see two different things. According to this view, our interpretation of the world determines what we see. Jerry Fodor attempts to establish that this theoretical paradigm is fallacious and misleading by demonstrating the impenetrability of perception to the background knowledge of subjects. The strongest case can be based on evidence from experimental cognitive psychology, namely the persistence of perceptual illusions. Knowing that the lines in the Müller-Lyer illusion are equal does not prevent one from continuing to see one line as being longer than the other. This impenetrability of the information elaborated by the mental modules limits the scope of interpretationalism.
In epistemology, for example, the criticism of what Fodor calls the interpretationalist hypothesis accounts for the common-sense intuition (on which naïve physics is based) of the independence of reality from the conceptual categories of the experimenter. If the processes of elaboration of the mental modules are in fact independent of the background theories, then it is possible to maintain the realist view that two scientists who embrace two radically diverse theories see the world exactly in the same manner even if they interpret it differently. The point is that it is necessary to distinguish between observations and the perceptual fixation of beliefs. While it is beyond doubt that the second process involves the holistic relationship between beliefs, the first is largely independent of the background beliefs of individuals. Other critics, such as Israel Scheffler, Hilary Putnam and Saul Kripke, have focused on the Fregean distinction between sense and reference in order to defend scientific realism. Scheffler contends that Kuhn confuses the meanings of terms such as "mass" with their referents. While their meanings may very well differ, their referents (the objects or entities to which they correspond in the external world) remain fixed.

Subsequent commentary by Kuhn

In 1995 Kuhn argued that the Darwinian metaphor in the book should have been taken more seriously than it had been.

Awards and honors

1998 Modern Library 100 Best Nonfiction: The Board's List (69)
1999 National Review 100 Best Nonfiction Books of the Century (25)
2015 Mark Zuckerberg book club selection for March

See also

Epistemological rupture
Groupthink
Scientific Revolution

Further reading

Wray, K. Brad, ed. (2024). Kuhn's The Structure of Scientific Revolutions at 60. Cambridge University Press.

External links

Article on Thomas Kuhn by Alexander Bird
Text of chapter 9 and a postscript at Marxists.org
"Thomas Kuhn, 73; Devised Science Paradigm", obituary by Lawrence Van Gelder, New York Times, 19 June 1996 (archived 7 February 2012).
0.769443
0.996582
0.766814
Division of labour
The division of labour is the separation of the tasks in any economic system or organisation so that participants may specialise (specialisation). Individuals, organisations, and nations are endowed with or acquire specialised capabilities, and either form combinations or trade to take advantage of the capabilities of others in addition to their own. Specialised capabilities may include equipment or natural resources as well as skills. Training and combinations of equipment and other assets acting together are often important. For example, an individual may specialise by acquiring tools and the skills to use them effectively, just as an organisation may specialise by acquiring specialised equipment and hiring or training skilled operators. The division of labour is the motive for trade and the source of economic interdependence.

An increasing division of labour is associated with the growth of total output and trade, the rise of capitalism, and the increasing complexity of industrialised processes. The concept and implementation of division of labour have been observed in ancient Sumerian (Mesopotamian) culture, where assignment of jobs in some cities coincided with an increase in trade and economic interdependence. Division of labour generally also increases both producer and individual worker productivity.

After the Neolithic Revolution, pastoralism and agriculture led to more reliable and abundant food supplies, which increased the population and led to specialisation of labour, including new classes of artisans, warriors, and the development of elites. This specialisation was furthered by the process of industrialisation and Industrial Revolution-era factories. Accordingly, many classical economists as well as some mechanical engineers, such as Charles Babbage, were proponents of division of labour. Also, having workers perform single or limited tasks eliminated the long training period required to train craftsmen, who were replaced with less-paid but more productive unskilled workers.

Pre-modern theories

Plato

In Plato's Republic, the origin of the state lies in the natural inequality of humanity, which is embodied in the division of labour. Silvermintz (2010) noted that "Historians of economic thought credit Plato, primarily on account of arguments advanced in his Republic, as an early proponent of the division of labour." Notwithstanding this, Silvermintz argues that "While Plato recognises both the economic and political benefits of the division of labour, he ultimately critiques this form of economic arrangement insofar as it hinders the individual from ordering his own soul by cultivating acquisitive motives over prudence and reason."

Xenophon

Xenophon, in the 4th century BC, makes a passing reference to division of labour in his Cyropaedia (a.k.a. Education of Cyrus).

Augustine of Hippo

A simile used by Augustine of Hippo shows that the division of labour was practised and understood in late Imperial Rome. In a brief passage of his The City of God, Augustine seems to be aware of the role of different social layers in the production of goods, like household (familiae), corporations (collegia) and the state.

Medieval Muslim scholars

The division of labour was discussed by multiple medieval Persian scholars. They considered the division of labour between members of a household, between members of society and between nations. For Nasir al-Din al-Tusi and al-Ghazali, the division of labour was necessary and useful.
The similarity of the examples provided by these scholars with those provided by Adam Smith (such as al-Ghazali's needle factory and Tusi's claim that exchange, and by extension the division of labour, are the consequences of the human reasoning capability and that no animals have been observed to exchange one bone for another) led some scholars to conjecture that Smith was influenced by the medieval Persian scholarship.

Modern theories

William Petty

Sir William Petty was the first modern writer to take note of the division of labour, showing its existence and usefulness in Dutch shipyards. Classically, the workers in a shipyard would build ships as units, finishing one before starting another. But the Dutch had it organised with several teams each doing the same tasks for successive ships. People with a particular task to do must have discovered new methods that were only later observed and justified by writers on political economy. Petty also applied the principle to his survey of Ireland. His breakthrough was to divide up the work so that large parts of it could be done by people with no extensive training.

Bernard de Mandeville

Bernard de Mandeville discussed the matter in the second volume of The Fable of the Bees (1714), which elaborates many matters raised by the original poem about a 'Grumbling Hive'.

David Hume

David Hume discusses the benefits of the division of labour ("the partition of employments") in A Treatise of Human Nature.

Henri-Louis Duhamel du Monceau

In his introduction to The Art of the Pin-Maker (Art de l'Épinglier, 1761), Henri-Louis Duhamel du Monceau writes about the "division of this work". By "division of this work," du Monceau is referring to the subdivisions of the text describing the various trades involved in the pin making activity; this can also be described as a division of labour.

Adam Smith

In the first sentence of An Inquiry into the Nature and Causes of the Wealth of Nations (1776), Adam Smith foresaw the essence of industrialism by determining that division of labour represents a substantial increase in productivity. Like du Monceau, his example was the making of pins. Unlike Plato, Smith famously argued that the difference between a street porter and a philosopher was as much a consequence of the division of labour as its cause. Therefore, while for Plato the level of specialisation determined by the division of labour was externally determined, for Smith it was the dynamic engine of economic progress. However, in a further chapter of the same book, Smith criticised the division of labour, saying that it makes man "as stupid and ignorant as it is possible for a human creature to become" and that it can lead to "the almost entire corruption and degeneracy of the great body of the people.…unless the government takes some pains to prevent it." The contradiction has led to some debate over Smith's opinion of the division of labour. Alexis de Tocqueville agreed with Smith: "Nothing tends to materialize man, and to deprive his work of the faintest trace of mind, more than extreme division of labor." Adam Ferguson shared similar views to Smith, though he was generally more negative.

The specialisation and concentration of the workers on their single subtasks often leads to greater skill and greater productivity on their particular subtasks than would be achieved by the same number of workers each carrying out the original broad task, in part due to increased quality of production, but more importantly because of increased efficiency of production, leading to a higher nominal output of units produced per time unit.
Smith uses the example of the production capability of an individual pin maker compared to a manufacturing business that employed ten men:

One man draws out the wire; another straights it; a third cuts it; a fourth points it; a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands, though in others the same man will sometimes perform two or three of them. I have seen a small manufactory of this kind, where ten men only were employed, and where some of them consequently performed two or three distinct operations. But though they were very poor, and therefore but indifferently accommodated with the necessary machinery, they could, when they exerted themselves, make among them about twelve pounds of pins in a day. There are in a pound upwards of four thousand pins of a middling size. Those ten persons, therefore, could make among them upwards of forty-eight thousand pins in a day. Each person, therefore, making a tenth part of forty-eight thousand pins, might be considered as making four thousand eight hundred pins in a day. But if they had all wrought separately and independently, and without any of them having been educated to this peculiar business, they certainly could not each of them have made twenty, perhaps not one pin in a day.

Smith saw the importance of matching skills with equipment—usually in the context of an organisation. For example, pin makers were organised with one making the head, another the body, each using different equipment. Similarly, he emphasised that a large number of skills, used in cooperation and with suitable equipment, were required to build a ship. In the modern economic discussion, the term human capital would be used. Smith's insight suggests that the huge increases in productivity obtainable from technology or technological progress are possible because human and physical capital are matched, usually in an organisation. See also a short discussion of Adam Smith's theory in the context of business processes. Babbage wrote a seminal work, "On the Economy of Machinery and Manufactures", analysing perhaps for the first time the division of labour in factories.

Immanuel Kant

In the Groundwork of the Metaphysics of Morals (1785), Immanuel Kant notes the value of the division of labour:

All crafts, trades and arts have profited from the division of labour; for when each worker sticks to one particular kind of work that needs to be handled differently from all the others, he can do it better and more easily than when one person does everything. Where work is not thus differentiated and divided, where everyone is a jack-of-all-trades, the crafts remain at an utterly primitive level.

Karl Marx

Marx argued that increasing the specialisation may also lead to workers with poorer overall skills and a lack of enthusiasm for their work. He described the process as alienation: workers become more and more specialised and work becomes repetitive, eventually leading to complete alienation from the process of production. The worker then becomes "depressed spiritually and physically to the condition of a machine." Additionally, Marx argued that the division of labour creates less-skilled workers.
As the work becomes more specialised, less training is needed for each specific job, and the workforce, overall, is less skilled than if one worker did one job entirely. Among Marx's theoretical contributions is his sharp distinction between the economic and the social division of labour. That is, some forms of labour co-operation are purely due to "technical necessity", but others are a result of a "social control" function related to a class and status hierarchy. If these two divisions are conflated, it might appear as though the existing division of labour is technically inevitable and immutable, rather than (in good part) socially constructed and influenced by power relationships. He also argues that in a communist society, the division of labour is transcended, meaning that balanced human development occurs where people fully express their nature in the variety of creative work that they do. Henry David Thoreau and Ralph Waldo Emerson Henry David Thoreau criticised the division of labour in Walden (1854), on the basis that it removes people from a sense of connectedness with society and with the world at large, including nature. He claimed that the average man in a civilised society is less wealthy, in practice than one in "savage" society. The answer he gave was that self-sufficiency was enough to cover one's basic needs. Thoreau's friend and mentor, Ralph Waldo Emerson, criticised the division of labour in his "The American Scholar" speech: a widely informed, holistic citizenry is vital for the spiritual and physical health of the country. Émile Durkheim In his seminal work, The Division of Labor in Society, Émile Durkheim observes that the division of labour appears in all societies and positively correlates with societal advancement because it increases as a society progresses. Durkheim arrived at the same conclusion regarding the positive effects of the division of labour as his theoretical predecessor, Adam Smith. In The Wealth of Nations, Smith observes the division of labour results in "a proportionable increase of the productive powers of labour." While they shared this belief, Durkheim believed the division of labour applied to all "biological organisms generally," while Smith believed this law applied "only to human societies." This difference may result from the influence of Charles Darwin's On the Origin of Species on Durkheim's writings. For example, Durkheim observed an apparent relationship between "the functional specialisation of the parts of an organism" and "the extent of that organism's evolutionary development," which he believed "extended the scope of the division of labour so as to make its origins contemporaneous with the origins of life itself…implying that its conditions must be found in the essential properties of all organised matter." Since Durkheim's division of labour applied to all organisms, he considered it a "natural law" and worked to determine whether it should be embraced or resisted by first analysing its functions. Durkheim hypothesised that the division of labour fosters social solidarity, yielding "a wholly moral phenomenon" that ensures "mutual relationships" among individuals. As social solidarity cannot be directly quantified, Durkheim indirectly studies solidarity by "classify[ing] the different types of law to find...the different types of social solidarity which correspond to it." 
Durkheim categorises criminal laws and their respective punishments as promoting mechanical solidarity, a sense of unity resulting from individuals engaging in similar work who hold shared backgrounds, traditions, and values; and civil laws as promoting organic solidarity, a society in which individuals engage in different kinds of work that benefit society and other individuals. Durkheim believes that organic solidarity prevails in more advanced societies, while mechanical solidarity typifies less developed societies. He explains that in societies with more mechanical solidarity, the diversity and division of labour is much less, so individuals have a similar worldview. Similarly, Durkheim opines that in societies with more organic solidarity, the diversity of occupations is greater, and individuals depend on each other more, resulting in greater benefits to society as a whole. Durkheim's work enabled social science to progress more efficiently "in…the understanding of human social behavior."

Ludwig von Mises

Marx's theories, including his negative claims regarding the division of labour, have been criticised by the Austrian economists, notably Ludwig von Mises. The primary argument is that the economic gains accruing from the division of labour far outweigh the costs, thus developing on the thesis that division of labour leads to cost efficiencies. It is argued that it is fully possible to achieve balanced human development within capitalism, and alienation is downplayed as mere romantic fiction. According to Mises, the idea has led to the concept of mechanisation, in which a specific task is performed by a mechanical device instead of an individual labourer. This method of production is significantly more effective in both yield and cost-effectiveness, and utilises the division of labour to the fullest extent possible. Mises saw the very idea of a task being performed by a specialised mechanical device as being the greatest achievement of division of labour.

Friedrich A. Hayek

In "The Use of Knowledge in Society", Friedrich A. Hayek argues that the knowledge needed to coordinate a specialised economy is dispersed among many individuals and is never given to any single mind in its totality, so that the coordination of the division of labour must rely on decentralised decision-making and the price system.

Globalisation and global division of labour

The issue reaches its broadest scope in the controversies about globalisation, which is often interpreted as a euphemism for the expansion of international trade based on comparative advantage. This would mean that countries specialise in the work they can do at the lowest relative cost measured in terms of the opportunity cost of not using resources for other work, compared to the opportunity costs experienced by other countries. Critics, however, allege that international specialisation cannot be explained sufficiently in terms of "the work nations do best", rather that this specialisation is guided more by commercial criteria, which favour some countries over others. The OECD offered policy advice on these developments in June 2005.

Few studies have taken place regarding the global division of labour. Information can be drawn from the ILO and national statistical offices. In one study, Deon Filmer estimated that 2.474 billion people participated in the global non-domestic labour force in the mid-1990s. Of these, around 15%, or 379 million people, worked in industry; a third, or 800 million, worked in services; and over 40%, or 1,074 million, in agriculture. The majority of workers in industry and services were wage and salary earners—58 per cent of the industrial workforce and 65 per cent of the services workforce. But a large portion was self-employed or involved in family labour.
Filmer suggests the total of employees worldwide in the 1990s was about 880 million, compared with around a billion working on their own account on the land (mainly peasants), and some 480 million working on their own account in industry and services. The 2007 ILO Global Employment Trends Report indicated that services have surpassed agriculture for the first time in human history:In 2006 the service sector's share of global employment overtook agriculture for the first time, increasing from 39.5 to 40 per cent. Agriculture decreased from 39.7 per cent to 38.7 per cent. The industry sector accounted for 21.3 per cent of total employment. Contemporary theories In the modern world, those specialists most preoccupied in their work with theorising about the division of labour are those involved in management and organisation. In general, in capitalist economies, such things are not decided consciously. Different people try different things, and that which is most effective cost-wise (produces the most and best output with the least input) will generally be adopted. Often, techniques that work in one place or time do not work as well in another. Styles of division of labour Two styles of management that are seen in modern organisations are control and commitment: Control management, the style of the past, is based on the principles of job specialisation and the division of labour. This is the assembly-line style of job specialisation, where employees are given a very narrow set of tasks or one specific task. Commitment division of labour, the style of the future, is oriented on including the employee and building a level of internal commitment towards accomplishing tasks. Tasks include more responsibility and are coordinated based on expertise rather than a formal position. Job specialisation is advantageous in developing employee expertise in a field and boosting organisational production. However, disadvantages of job specialisation included limited employee skill, dependence on entire department fluency, and employee discontent with repetitive tasks. Labour hierarchy It is widely accepted among economists and social theorists that the division of labour is, to a great extent, inevitable within capitalist societies, simply because no one can do all tasks at once. Labour hierarchy is a very common feature of the modern capitalist workplace structure, and the way these hierarchies are structured can be influenced by a variety of different factors, including: Size: as organisations increase in size, there is a correlation in the rise of the division of labour. Cost: cost limits small organisations from dividing their labour responsibilities. Development of new technology: technological developments have led to a decrease in the amount of job specialisation in organisations as new technology makes it easier for fewer employees to accomplish a variety of tasks and still enhance production. New technology has also been helpful in the flow of information between departments helping to reduce the feeling of department isolation. It is often argued that the most equitable principle in allocating people within hierarchies is that of true (or proven) competency or ability. This concept of meritocracy could be read as an explanation or as a justification of why a division of labour is the way it is. 
This claim, however, is often disputed by various sources, particularly: Marxists claim hierarchy is created to support the power structures in capitalist societies which maintain the capitalist class as the owner of the labour of workers, in order to exploit it. Anarchists often add to this analysis by defending that the presence of coercive hierarchy in any form is contrary to the values of liberty and equality. Anti-imperialists see the globalised labour hierarchy between first world and third world countries necessitated by companies (through unequal exchange) that create a labour aristocracy by exploiting the poverty of workers in the developing world, where wages are much lower. These increased profits enable these companies to pay higher wages and taxes in the developed world (which fund welfare in first world countries), thus creating a working class satisfied with their standard of living and not inclined to revolution. This concept is further explored in dependency theory, notably by Samir Amin and Zak Cope. Limitations Adam Smith famously said in The Wealth of Nations that the division of labour is limited by the extent of the market. This is because it is by the exchange that each person can be specialised in their work and yet still have access to a wide range of goods and services. Hence, reductions in barriers to exchange lead to increases in the division of labour and so help to drive economic growth. Limitations to the division of labour have also been related to coordination and transportation costs. There can be motivational advantages to a reduced division of labour (which has been termed ‘job enlargement’ and 'job enrichment'). Jobs that are too specialised in a narrow range of tasks are said to result in demotivation due to boredom and alienation. Hence, a Taylorist approach to work design contributed to worsened industrial relations. There are also limitations to the division of labour (and the division of work) that result from workflow variations and uncertainties. These help to explain issues in modern work organisation, such as task consolidations in business process re-engineering and the use of multi-skilled work teams. For instance, one stage of a production process may temporarily work at a slower pace, forcing other stages to slow down. One answer to this is to make some portion of resources mobile between stages so that those resources must be capable of undertaking a wider range of tasks. Another is to consolidate tasks so that they are undertaken one after another by the same workers and other resources. Stocks between stages can also help to reduce the problem to some extent but are costly and can hamper quality control. Modern flexible manufacturing systems require both flexible machines and flexible workers. In project-based work, the coordination of resources is a difficult issue for the project manager as project schedules and resulting resource bookings are based on estimates of task durations and so are subject to subsequent revisions. Again, consolidating tasks so that they are undertaken consecutively by the same resources and having resources available that can be called on at short-notice from other tasks can help to reduce such problems, though at the cost of reduced specialisation. There are also advantages in a reduced division of labour where knowledge would otherwise have to be transferred between stages. 
For example, having a single person deal with a customer query means that only that one person has to be familiar with the customer's details. It is also likely to result in the query being handled faster due to the elimination of delays in passing the query between different people. Gendered division of labour The clearest exposition of the principles of sexual division of labour across the full range of human societies can be summarised by a large number of logically complementary implicational constraints of the following form: if women of childbearing age in a given community tend to do X (e.g., preparing soil for planting), they will also do Y (e.g., the planting); while for men the logical reversal in this example would be that if men plant, they will prepare the soil. White, Brudner, and Burton's (1977) "Entailment Theory and Method: A Cross-Cultural Analysis of the Sexual Division of Labor", using statistical entailment analysis, shows that tasks more frequently chosen by women in these order relations are those more convenient in relation to child rearing. This type of finding has been replicated in a variety of studies, including those on modern industrial economies. These entailments do not restrict how much work for any given task could be done by men (e.g., in cooking) or by women (e.g., in clearing forests), but are only least-effort or role-consistent tendencies. To the extent that women clear forests for agriculture, for example, they tend to do the entire agricultural sequence of tasks on those clearings. In theory, these types of constraints could be removed by provisions of child care, but ethnographic examples are lacking. Industrial organisational psychology Job satisfaction has been shown to improve as an employee is assigned a specific, specialised task. Students who have received PhDs in a chosen field later report increased satisfaction compared to their previous jobs. This can be attributed to their high levels of specialisation. The more training a specialised position requires, the higher the level of job satisfaction tends to be, although many highly specialised jobs are monotonous and periodically produce high rates of burnout. Division of work In contrast to the division of labour, a division of work refers to the division of a large task, contract, or project into smaller tasks—each with a separate schedule within the overall project schedule. Division of labour, instead, refers to the allocation of tasks to individuals or organisations according to the skills and/or equipment those people or organisations possess. Often division of labour and division of work are both part of the economic activity within an industrial nation or organisation. Disaggregated work A job divided into elemental parts is sometimes called "disaggregated work". Workers specialising in particular parts of the job are called professionals. Workers doing a portion of non-recurring work may be called contractors, freelancers, or temporary workers. Modern communication technologies, particularly the Internet, gave rise to the sharing economy, which is orchestrated by online marketplaces for various kinds of disaggregated work.
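The implicational constraints described under "Gendered division of labour" above lend themselves to a simple computational illustration. The sketch below is hypothetical and not drawn from White, Brudner, and Burton's data: it only shows the form of an entailment check of the kind their statistical analysis performs, using invented societies and task codes.

```python
# Hypothetical cross-cultural codes: for each society, which sex predominantly
# performs each task ("F" = mostly women, "M" = mostly men).
sample = {
    "society_1": {"soil_preparation": "F", "planting": "F"},
    "society_2": {"soil_preparation": "M", "planting": "F"},
    "society_3": {"soil_preparation": "F", "planting": "F"},
    "society_4": {"soil_preparation": "M", "planting": "M"},
}

def entailment(sample, antecedent, consequent, sex="F"):
    """Test the implication 'if <sex> does antecedent, <sex> also does consequent'.
    Returns the number of societies where the antecedent holds and the exceptions."""
    holds, exceptions = 0, []
    for society, tasks in sample.items():
        if tasks.get(antecedent) == sex:
            holds += 1
            if tasks.get(consequent) != sex:
                exceptions.append(society)
    return holds, exceptions

holds, exceptions = entailment(sample, "soil_preparation", "planting", sex="F")
print(f"'women prepare soil -> women plant': antecedent in {holds} societies, "
      f"exceptions: {exceptions or 'none'}")
```

In actual entailment analysis the implications are evaluated statistically over a large ethnographic sample; the sketch merely makes the logical form of the claim concrete.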
See also Asset poverty Complex society Economic sector Economies of scale Family economy Fordism Identity performance Industrialisation Kyriarchy Mechanisation New international division of labour Newly industrialised country Precariat Precarious work Productive and unproductive labour Price system Role suction Surplus product Temporary work Urbanisation Winner and loser culture References Further reading Becker, Gary S. 1991. "Division of Labor in Households and Families." Ch. 2 in A Treatise on the Family. Harvard University Press. —— 1985. "Human Capital, Effort, and the Sexual Division of Labor." Journal of Labor Economics 3(1.2):S33–S58. Braverman, Harry. 1974. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press. Coontz, Stephanie, and Peta Henderson. Women's Work, Men's Property: The Origins of Gender and Class. Durkheim, Émile. 1893. The Division of Labour in Society. Emerson, Ralph Waldo. "The American Scholar." Filmer, Deon. "Estimating the World at Work" (a background report). Florida, Richard. 2002. The Rise of the Creative Class. —— The Flight of the Creative Class. Froebel, F., J. Heinrichs, and O. Kreye. The New International Division of Labour. Cambridge, UK: Cambridge University Press. Gintis, Herbert, Samuel Bowles, Robert T. Boyd, and Ernst Fehr. Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life. Goodin, Robert E., James Mahmud Rice, Antti Parpo, and Lina Eriksson. 2008. "Household Regimes Matter." Pp. 197–257 in Discretionary Time: A New Measure of Freedom. Cambridge, UK: Cambridge University Press. ISBN 9780521709514. Gorz, André. The Division of Labour: The Labour Process and Class Struggle in Modern Capitalism. Groenewegen, Peter. 1987. "division of labour." Pp. 901–07 in The New Palgrave: A Dictionary of Economics 1. Heartfield, James. 2001. "The Economy of Time." Cultural Trends 43/44:155–59. Ollman, Bertell. Sexual and Social Revolution. Rattansi, Ali. Marx and the Division of Labour. Reisman, George. [1990] 1998. Capitalism: A Treatise on Economics. Laguna Hills, CA: TJS Books. Solow, Robert M., and Jean-Philippe Touffut, eds. 2010. The Shape of the Division of Labour: Nations, Industries and Households. Cheltenham, UK: Edward Elgar. Contributors: Bina Agarwal, Martin Baily, Jean-Louis Beffa, Richard N. Cooper, Jan Fagerberg, Elhanan Helpman, Shelly Lundberg, Valentina Meliciani, and Peter Nunnenkamp. Rothbard, Murray. 19 March 2018. "Freedom, Inequality, Primitivism and the Division of Labor." Mises Institute. Retrieved 2 July 2020. von Mises, Ludwig. "Human Society: The Division of Labor." Pp. 157–58 in Human Action: A Treatise on Economics. —— "Human Society: The Ricardian Law of Association." Pp. 158–60 in Human Action: A Treatise on Economics. Stigler, George J. 1951. "The Division of Labor is Limited by the Extent of the Market." Journal of Political Economy 59(3):185–93. World Development Report 1995. Washington, DC: World Bank. 1996. External links Summary of Smith's example of pin-making Conference: "The New International Division of Labour". Speakers: Bina Agarwal, Martin Baily, Jean-Louis Beffa, Richard N. Cooper, Jan Fagerberg, Elhanan Helpman, Shelly Lundberg, Valentina Meliciani, Peter Nunnenkamp. Recorded in 2009. Economic anthropology Industrial history Labor history Marxism Production and manufacturing Production economics Industry (economics)
Gradualism
Gradualism, from the Latin gradus ("step"), is a hypothesis, a theory or a tenet assuming that change comes about gradually or that variation is gradual in nature and happens over time as opposed to in large steps. Uniformitarianism, incrementalism, and reformism are similar concepts. Gradualism can also refer to desired, controlled change in society, institutions, or policies. For example, social democrats and democratic socialists see the socialist society as achieved through gradualism. Geology and biology In the natural sciences, gradualism is the theory which holds that profound change is the cumulative product of slow but continuous processes, often contrasted with catastrophism. The theory was proposed in 1795 by James Hutton, a Scottish geologist, and was later incorporated into Charles Lyell's theory of uniformitarianism. Tenets from both theories were applied to biology and formed the basis of early evolutionary theory. Charles Darwin was influenced by Lyell's Principles of Geology, which explained both uniformitarian methodology and theory. Using uniformitarianism, which states that one cannot make an appeal to any force or phenomenon which cannot presently be observed (see catastrophism), Darwin theorized that the evolutionary process must occur gradually, not in saltations, since saltations are not presently observed, and extreme deviations from the usual phenotypic variation would be more likely to be selected against. Gradualism is often confused with the concept of phyletic gradualism, a term coined by Stephen Jay Gould and Niles Eldredge to contrast with their model of punctuated equilibrium, which is gradualist itself but argues that most evolution is marked by long periods of evolutionary stability (called stasis) that are punctuated by rare instances of branching evolution. Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution. While the traditional model of palaeontology, the phylogenetic model, states that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes do not happen over a gradual period but in localized, rare, rapid events of branching speciation. Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to evolve rapidly from one equilibrium to another, and that a lineage may show rapid transitions between long-stable states. Politics and society In politics, gradualism is the hypothesis that social change can be achieved in small, discrete increments rather than in abrupt strokes such as revolutions or uprisings. Gradualism is one of the defining features of political liberalism and reformism.
Machiavellian politics pushes politicians to espouse gradualism. Gradualism in social change implemented through reformist means is a moral principle to which the Fabian Society is committed. In a more general way, reformism is the assumption that gradual changes through and within existing institutions can ultimately change a society's fundamental economic system and political structures, and that an accumulation of reforms can lead to the emergence of an entirely different economic system and form of society from present-day capitalism. That hypothesis of social change grew out of opposition to revolutionary socialism, which contends that revolution is necessary for fundamental structural changes to occur. In socialist politics and within the socialist movement, the concept of gradualism is frequently distinguished from reformism, with the former insisting that short-term goals need to be formulated and implemented in such a way that they inevitably lead into long-term goals. It is most commonly associated with the libertarian socialist concept of dual power and is seen as a middle way between reformism and revolutionism. Martin Luther King Jr. was opposed to the idea of gradualism as a method of eliminating segregation. The United States government wanted to try to integrate African-Americans and European-Americans slowly into the same society, but many believed it was a way for the government to put off actually doing anything about racial segregation. Conspiracy theories In the terminology of NWO-related speculations, gradualism refers to the gradual implementation of a totalitarian world government. Linguistics and language change In linguistics, language change is seen as gradual, the product of chain reactions and subject to cyclic drift. The view that creole languages are the product of catastrophism is heavily disputed. Morality Christianity Buddhism, Theravada and Yoga Gradualism is the approach of certain schools of Buddhism and other Eastern philosophies (e.g. Theravada or Yoga), according to which enlightenment can be achieved step by step, through an arduous practice. The opposite approach, that insight is attained all at once, is called subitism. The debate on the issue was very important to the history of the development of Zen, which rejected gradualism, and to the establishment of the opposite approach within Tibetan Buddhism, after the Debate of Samye. The debate continued in other schools of Indian and Chinese philosophy. Philosophy Contradictorial gradualism is the paraconsistent treatment of fuzziness developed by Lorenzo Peña which regards true contradictions as situations wherein a state of affairs enjoys only partial existence. See also Evolution Uniformitarianism Incrementalism Normalization (sociology) Reformism Catastrophism Saltation Punctuated equilibrium Accelerationism Boiling frog References Geology theories Rate of evolution Liberalism Social democracy Democratic socialism Historical linguistics Social theories
Chronicle
A chronicle (from Greek chroniká, from chrónos, "time") is a historical account of events arranged in chronological order, as in a timeline. Typically, equal weight is given to historically important events and local events, the purpose being the recording of events that occurred, seen from the perspective of the chronicler. A chronicle which traces world history is a universal chronicle. This is in contrast to a narrative or history, in which an author chooses events to interpret and analyze and excludes those the author does not consider important or relevant. The information sources for chronicles vary. Some are written from the chronicler's direct knowledge, others from witnesses or participants in events, still others are accounts passed down from generation to generation by oral tradition. Some used written material, such as charters, letters, and earlier chronicles. Still others are tales of unknown origin that have mythical status. Copyists also altered chronicles in the course of copying them, making corrections or updating or continuing a chronicle with information not available to the original chronicler. Determining the reliability of particular chronicles is important to historians. Many newspapers and other periodical literature have adopted "chronicle" as part of their name. Subgroups Scholars categorize the genre of chronicle into two subgroups: live chronicles, and dead chronicles. A dead chronicle is one where the author assembles a list of events up to the time of their writing, but does not record further events as they occur. A live chronicle is where one or more authors add to a chronicle in a regular fashion, recording contemporary events shortly after they occur. Because of the immediacy of the information, historians tend to value live chronicles, such as annals, over dead ones. The term often refers to a book written by a chronicler in the Middle Ages describing historical events in a country, or the lives of a nobleman or a clergyman, although it is also applied to a record of public events. The earliest medieval chronicle to combine both retrospective (dead) and contemporary (live) entries is the Chronicle of Ireland, which spans the years 431 to 911. Chronicles are the predecessors of modern "time lines" rather than analytical histories. They represent accounts, in prose or verse, of local or distant events over a considerable period of time, both the lifetime of the individual chronicler and often those of several subsequent continuators. If the chronicles deal with events year by year, they are often called annals. Unlike the modern historian, most chroniclers tended to take their information as they found it, and made little attempt to separate fact from legend. The point of view of most chroniclers is highly localised, to the extent that many anonymous chroniclers can be sited in individual abbeys. It is impossible to say how many chronicles exist, as the many ambiguities in the definition of the genre make it impossible to draw clear distinctions of what should or should not be included. However, the Encyclopedia of the Medieval Chronicle lists some 2,500 items written between 300 and 1500 AD. Citation of entries Entries in chronicles are often cited using the abbreviation s.a., meaning sub anno (under the year), according to the year under which they are listed. For example, "ASC MS A, s.a. 855" means the entry for the year 855 in manuscript A of the Anglo-Saxon Chronicle.
The same event may be recorded under a different year in another manuscript of the chronicle, and may be cited for example as "ASC MS D, s.a. 857". English chronicles The most important English chronicles are the Anglo-Saxon Chronicle, started under the patronage of King Alfred in the 9th century and continued until the 12th century, and the Chronicles of England, Scotland and Ireland (1577–87) by Raphael Holinshed and other writers; the latter documents were important sources of materials for Elizabethan drama. Later 16th century Scottish chronicles, written after the Reformation, shape history according to Catholic or Protestant viewpoints. Cronista Cronista is a term for a historical chronicler, a role that held historical significance in the European Middle Ages. Until the European Enlightenment, the occupation was largely equivalent to that of a historian, chronologically describing events of note in a given country or region. As such, it was often an official governmental position rather than an independent practice. The appointment of the official chronicler often favored individuals who had distinguished themselves by their efforts to study, investigate and disseminate population-related issues. The position was granted on a local level based on the mutual agreements of a city council in plenary meetings. Often, the occupation was honorary, unpaid, and held for life. In modern usage, the term usually refers to a type of journalist who writes chronicles as a form of journalism or non-professional historical documentation. Cronista in the Middle Ages Before the development of modern journalism and the systematization of chronicles as a journalistic genre, cronistas were tasked with narrating chronological events considered worthy of remembrance that were recorded year by year. Unlike writers who created epic poems regarding living figures, cronistas recorded historical events in the lives of individuals in an ostensibly truthful and reality-oriented way. Even from the time of early Christian historiography, cronistas were clearly expected to place human history in the context of a linear progression, from the creation of man to the second coming of Christ, as prophesied in biblical texts.
Lists of chronicles Babylonian Chronicles (loosely-defined set of 25 clay tablets) Burmese chronicles Cambodian Royal Chronicles (loosely-defined collection) List of collections of Crusader sources (most of them chronicles) List of Danish chronicles List of English chronicles Muslim chronicles for Indian history Chronicles of Nepal List of Rus' chronicles Serbian chronicles Alphabetical list of notable chronicles History of Alam Aray Abbasi – Safavid dynasty Alamgirnama – Mughal Empire Alexandrian World Chronicle - Greek history of the world until 392 AD Altan Tobchi - Mongol Empire Anglo-Saxon Chronicle – England Annales Bertiniani – West Francia Annales Cambriae – Wales Annales Posonienses – Kingdom of Hungary Annales seu cronicae incliti Regni Poloniae – Poland Annals of Inisfallen – Ireland Annals of Lough Cé – Ireland Annals of the Four Masters – Ireland Annals of Spring and Autumn – China Annals of Thutmose III – Ancient Egypt The Annals of the Choson Dynasty – Korea Babylonian Chronicles – Mesopotamia Anonymous Bulgarian Chronicle – Bulgaria Barnwell Chronicle - England Bodhi Vamsa – Sri Lanka Books of Chronicles attributed to Ezra – Israel Buranji – Ahoms, Assam, India Bychowiec Chronicle Lithuania Cāmadevivaṃsa – Northern Thailand Culavamsa – Sri Lanka (Chronica Polonorum): see Cheitharol Kumbaba – Manipur, India Chronica Gentis Scotorum Chronica Hungarorum – History of Hungary Chronica seu originale regum et principum Poloniae – Poland Chronicle of 754 - Spain Chronicle (Crònica) by Ramon Muntaner – 13th/14th-century Crown of Aragon. Third and longest of the Grand Catalan Chronicles. Chronicle of Finland (Chronicon Finlandiae) by Johannes Messenius – Finland Chronicle of Fredegar - France Chronicle of the Slavs – Europe Chronicle of Greater Poland – Poland Chronicle of Jean de Venette – France Chronicle of the Bishops of England (De Gestis Pontificum Anglorum) by William of Malmesbury Chronicle of the Kings of Alba - Scotland Chronicle of the Kings of England (De Gestis Regum Anglorum) by William of Malmesbury Chronicles of Mann - Isle of Man Chronicon of Eusebius Chronicon Scotorum – Ireland Chronicon of Thietmar of Merseburg Chronicon Paschale - 7th century Greek chronicle of the world Chronicon Pictum – History of Hungary Chronographia – 11th century History of the Eastern Roman Empire (Byzantium) by Michael Psellos Comentarios Reales de los Incas Conversion of Kartli – Georgia Cronaca - Chronicle of Cyprus from the 4th up to the 15th century by Cypriot chronicler Leontios Machairas Cronaca fiorentina – Chronicle of Florence up to the end of the 14th Century by Baldassarre Bonaiuti Cronicae et gesta ducum sive principum Polonorum – Poland Croyland Chronicle – England Dawn-Breakers (Nabil's Narrative) – Baháʼí Faith and Middle East Dioclean Priest's Chronicle – Europe Dipavamsa – Sri Lanka Divan of the Abkhazian Kings – Georgia Epic of Sundiata - West Africa Epitome rerum Hungarorum – History of Hungary Eric's Chronicle – Sweden Eusebius Chronicle – Mediterranean and Middle East Fragmentary Annals of Ireland – Ireland Froissart's Chronicles – France and Western Europe Galician-Volhynian Chronicle – Ukraine Georgian Chronicles – Georgia Gesta Hungarorum – History of Hungary Gesta Hunnorum et Hungarorum – History of Hungary Gesta Normannorum Ducum – Normandy Grandes Chroniques de France – France General Estoria by Alfonso X – c.
1275-1284 Castile, Spain. Henry of Livona Chronicle – Eastern Europe Historia Ecclesiastica – Norman England Historia Scholastica by Petrus Comestor - 12th century France The Historie and Chronicles of Scotland, Robert Lindsay of Pitscottie História da Província Santa Cruz a que vulgarmente chamamos Brasil – Brazil History of the Prophets and Kings – Middle East and Mediterranean Hustyn Chronicle – Eastern Europe Jami' al-tawarikh by Rashid-al-Din Hamadani - Universal history Jans der Enikel – Europe and Mediterranean Jerome's Chronicle – Mediterranean and Middle East Jinakalamali – Northern Thailand Joannis de Czarnkow chronicon Polonorum – Poland Kaiserchronik – Central and southern Europe, Germany Kano Chronicle – Nigeria Khulasat-ut-Tawarikh by Sujan Rai - History of India Khwaday-Namag - History of Persia Kilwa Chronicle - East Africa Kojiki - Japan Lethrense Chronicle – Denmark Livonian Chronicle of Henry - Livonia Livonian Rhymed Chronicle - Livonia Libre dels Feyts – Book of the Deeds by James I of Aragon, first of the Grand Catalan Chronicles Madala Panji – Chronicle of the Jagannath Temple in Puri, India, related to the History of Odisha Mahavamsa – Sri Lanka Maronite Chronicle – The Levant, anonymous annalistic chronicle in the Syriac language completed shortly after 664. Manx Chronicle – Isle of Man Nabonidus Chronicle – Mesopotamia Nihon Shoki - Japan Novgorod First Chronicle - Russia Nuova Cronica – Florence Nuremberg Chronicle Old Tibetan Chronicle - History of Tibet Parian Chronicle - Ancient Greece Paschale Chronicle – Mediterranean Pictish Chronicle - Scotland Primary Chronicle – Eastern Europe Puranas – India Rajatarangini – Kashmir Roit and Quheil of Tyme – Scotland, Adam Abell Roskildense Chronicle – Denmark Royal Frankish Annals – Frankish Empire Scotichronicon – by the Scottish historian Walter Bower Shahnama-yi-Al-i Osman by Fethullah Arifi Çelebi – Ottoman empire (1300 ac – the end of Sultan Suleyman I's reign) which is the fifth volume of it Süleymanname Skibby Chronicle – Danish Latin chronicle from the 1530s Swiss illustrated chronicles – Switzerland Timbuktu Chronicles – Mali Zizhi Tongjian – China Rhymed chronicles Rhymed or poetic chronicles, as opposed to prosaic chronicles, include: Rhymed Chronicle of Armenia Minor ("Chronicle of L'Aquila"), both in prose and verse form Brabantsche Yeesten ( 1315–1351) by Jan van Boendale (continued by an anonymous author) Cornicke van Brabant (1415) by Hennen van Merchtenen Cronijck van Brabant ( 1435–1460), anonymous, until 1430 by Gottfried Hagen Chronicle of Dalimil Erik's Chronicle Rhymed Chronicle of Flanders, part of the . It is unique as all other surviving Dutch-language chronicles of Flanders were written in prose. Die olde Freesche cronike (1474), anonymous history of Friesland until 1248 Rhymed Chronicle of Holland by Melis Stoke Rhymed Chronicle of Kastl (Kastler Reimchronik), notorious 17th-century forgery pretending to be written in the 12th century Livonian Rhymed Chronicle Rhymed Chronicle of Mecklenburg by Ernest of Kirchberg Chronique métrique de Philippe le Bel or Chronique rimée (1316) by Geoffrey of Paris Chronique rimée ( 1250) by Philippe Mouskes New Prussian Chronicle by Wigand of Marburg Roman de Brut by Wace Spieghel Historiael by Jacob van Maerlant Rhymed Chronicle of Utrecht ( 1378) Rhyming Chronicle of Worringen See also Books of Chronicles Medieval Chronicle Society References External links Medieval literature Works about history
Cultural anthropology
Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The term sociocultural anthropology includes both cultural and social anthropology traditions. Anthropologists have pointed out that through culture, people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances). Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist spending an extended period of time at the research location), interviews, and surveys. History Modern anthropology emerged in the 19th century alongside developments in the Western world. With these developments came a renewed interest in humankind, such as its origins, unity, and plurality. It is, however, in the 20th century that cultural anthropology shifts to having a more pluralistic view of cultures and societies. The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the mind of not only Freud, but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others". The first generation of cultural anthropologists were interested in the relative status of various humans, some of whom had modern advanced technologies, while others lacked anything but face-to-face communication techniques and still lived a Paleolithic lifestyle. Theoretical foundations The concept of culture One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture. According to Kay Milton, former director of anthropology research at Queens University Belfast, culture can be general or specific. This means culture can be something applied to all human beings or it can be specific to a certain group of people such as African American culture or Irish American culture. Specific cultures are structured systems which means they are organized very specifically and adding or taking away any element from that system may disrupt it. The critique of evolutionism Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices. In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused". 
Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (See also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized. 20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion often were given different meanings and function from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to the understanding of man living in a global world and how it was caused by the action of extra-European nations, so highlighting the role of Ethics in modern anthropology. Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which they live or lived. Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, the number of examples of people skipping stages, such as going from hunter-gatherers to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved. Cultural relativism Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students. Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed. 
Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism. Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies. Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography. This method advocates living with people of another culture for an extended period of time to learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation." Theoretical approaches Actor–network theory Cultural materialism Culture theory Feminist anthropology Functionalism Symbolic and interpretive anthropology Political economy in anthropology Practice theory Structuralism Post-structuralism Systems theory in anthropology Comparison with social anthropology The rubric cultural anthropology is generally applied to ethnographic works that are holistic in approach, are oriented to the ways in which culture affects individual experience or aim to provide a rounded view of the knowledge, customs, and institutions of a people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion, give analytical priority to the organizational bases of social life, and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry. Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France. Foundational thinkers Lewis Henry Morgan Lewis Henry Morgan (1818–1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. 
His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from savagery, to barbarism, to civilization. Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale. Franz Boas, founder of the modern discipline Franz Boas (1858–1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature. Influenced by the German tradition, Boas argued that the world was full of distinct cultures, rather than societies whose evolution could be measured by the extent of "civilization" they had. He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible. In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian Anthropology, dividing the discipline in the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaic anthropology (e.g. archaeology). Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture. Kroeber, Mead, and Benedict Boas used his positions at Columbia University and the American Museum of Natural History (AMNH) to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir, and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages. The publication of Alfred Kroeber's textbook Anthropology (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the 'Culture and Personality' studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up. Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. 
Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH. Wolf, Sahlins, Mintz, and political economy In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche—an approach popularized by Marvin Harris. Economic anthropology as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton challenged standard neoclassical economics to take account of cultural and social factors and employed Marxian analysis into anthropological study. In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis. In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance. Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan. Geertz, Schneider, and interpretive anthropology Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz was to state: Geertz's interpretive method involved what he called "thick description". The cultural symbols of rituals, political and economic action, and of kinship, are "read" by the anthropologist as if they are a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture, into the "experience-distant" theoretical concepts of the anthropologist. 
These interpretations must then be reflected back to their originators, and their adequacy as a translation fine-tuned in a repeated way, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic, but a cultural relationship established on very different terms in different societies. Prominent British symbolic anthropologists include Victor Turner and Mary Douglas. The post-modern turn In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on the ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies. Socio-cultural anthropology subfields Anthropology of art Cognitive anthropology Anthropology of development Disability anthropology Ecological anthropology Economic anthropology Feminist anthropology and anthropology of gender and sexuality Ethnohistory and historical anthropology Kinship and family Legal anthropology Multimodal anthropology Media anthropology Medical anthropology Political anthropology Political economy in anthropology Psychological anthropology Public anthropology Anthropology of religion Cyborg anthropology Transpersonal anthropology Urban anthropology Visual anthropology Methods Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others—usually missionaries, traders, explorers, or colonial officials—earning them the moniker of "arm-chair anthropologists". Participant observation Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town.
There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior. Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and their subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and it is more likely that accurate and complete information is being shared with the anthropologist. Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, they will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding. The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions they are trying to answer. 
In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision. One common criticism of participant observation is its lack of objectivity. Because each anthropologist has their own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what they will eventually write about a culture, because each researcher is influenced by their own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead. Participant observation has also raised ethical questions, since an anthropologist is in control of what they report about a culture. In terms of representation, an anthropologist has greater power than their subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence. Ethnography In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group. Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research. Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski and A.R. 
Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements. In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics). American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors and have an equal interest in what people do and in what people say. Cross-cultural comparison One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world. Comparison across cultures includes the industrialized (or de-industrialized) West. The cultures in the more traditional standard cross-cultural sample are small-scale societies. Multi-sited ethnography Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus.
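As an illustration of the kind of cross-cultural comparison described above, the following sketch tallies a claimed "universal" against a coded sample of societies. It is a minimal, hypothetical example: the society names and marriage codes are invented for illustration and are not taken from the eHRAF databases or the standard cross-cultural sample.

```python
# Hypothetical coded sample: the predominant marriage system recorded for each society.
sample = {
    "society_A": "monogamy",
    "society_B": "polygyny",
    "society_C": "polygyny",
    "society_D": "monogamy",
    "society_E": "polyandry",
}

def test_universal(sample, claimed_trait):
    """Count how many societies share the claimed trait and list the exceptions
    that count against the universality claim."""
    exceptions = [name for name, trait in sample.items() if trait != claimed_trait]
    return len(sample) - len(exceptions), exceptions

matches, exceptions = test_universal(sample, "monogamy")
print(f"monogamy in {matches} of {len(sample)} societies; exceptions: {exceptions}")
```

Even a handful of well-documented exceptions is enough to undermine a strict universality claim, which is why large comparative databases such as eHRAF matter for this style of argument.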
A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities. Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing", such as a particular commodity, as it is transported through the networks of global capitalism. Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft. Sociocultural anthropologists have increasingly turned their investigative eye onto "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack-den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees.
Topics
Kinship and family
Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another and how those relationships operate within and define social organization. Research in kinship studies often crosses over into different anthropological subfields including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by the linguistic anthropologist Patrick McConvell. Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man is married to two or more women. The Nayar of India and Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships.
It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit. There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this case, a child would have multiple biological mothers if it is born of one woman and then breastfed by another. A child would have multiple biological fathers if the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen.
Late twentieth-century shifts in interest
In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context". The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and individual experiences of those within it. This shift was furthered by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the attention that had traditionally been paid to largely "foreign", non-Western communities. Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations. At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often substituted for a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival.
These anthropological findings, according to Third World feminism, cannot see gender, racial, or class differences as separate entities, and instead must acknowledge that they interact together to produce unique individual experiences.
Rise of reproductive anthropology
Kinship studies have also experienced a rise in interest in reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of 'kinning', namely, kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged that there can be the possibility for three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system. There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF, specifically, there have been many questions of embryonic value and the status of life, particularly as it relates to the manufacturing of stem cells, testing, and research. Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies.
Critiques of kinship studies
Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, at its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book A Critique of the Study of Kinship that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them".
However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish linguist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts and can only be Westernized when conflated with English concepts such as "parent" and "sibling". A more recent critique of kinship studies concerns its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In Critical Kinship Studies, social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices.
Institutional anthropology
The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit business, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government. The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems. The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed. In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it.
Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did. Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging.
External links
Official website of Human Relations Area Files (HRAF), based at Yale University
A Basic Guide to Cross-Cultural Research, from HRAF
eHRAF World Cultures
eHRAF Archaeology
Classical realism (international relations)
Classical realism is an international relations theory from the realist school of thought. Realism makes the following assumptions: states are the main actors in the international relations system, there is no supranational international authority, states act in their own self-interest, and states want power for self-preservation. Classical realism differs from other forms of realism in that it places specific emphasis on human nature and domestic politics as the key factors in explaining state behavior and the causes of inter-state conflict. Classical realist theory adopts a pessimistic view of human nature and argues that humans are not inherently benevolent but are instead self-interested and act out of fear or aggression. Furthermore, it emphasizes that this human nature is reflected by states in international politics due to international anarchy. Classical realism first arose in its modern form during the interwar period (1918–1939) as the academic field of international relations began to grow during this era. Classical realism during the inter-war period developed as a response to the prominence of idealist and utopian theories in international relations at the time. Liberal scholars at the time attributed conflict to poor social conditions and political systems, whilst prominent policymakers focused on establishing a respected body of international law and institutions to manage the international system. These ideas were critiqued by realists during the 1930s. After World War II, classical realism became more popular in academic and foreign policy settings. E. H. Carr, George F. Kennan, Hans Morgenthau, Raymond Aron, and Robert Gilpin are central contributors to classical realism. During the 1960s and 70s classical realist theories declined in popularity and became less prominent as structural realist (neorealist) theorists argued against using human nature as a basis of analysis and instead proposed that explaining inter-state conflict through the anarchic structure of the international system was more empirical. In contrast to neorealism, classical realism argues that the structure of the international system (e.g. anarchy) shapes the kinds of behaviors that states can engage in but does not determine state behavior. In contrast to neorealism, classical realists do not hold that states' main goal is survival. State behavior is ultimately uncertain and contingent.
Theoretical origins
Classical realist writers have drawn from the ideas of earlier political thinkers, most notably Niccolò Machiavelli, Thomas Hobbes and Thucydides. These political theorists are not considered to be a part of the modern classical realism school of thought, but their writings are considered important to the development of the theory. These thinkers are sometimes invoked to demonstrate the "timelessness" of realist thought; scholars have disputed to what extent these thinkers adhered to realist views.
Thucydides
Thucydides was an ancient Athenian historian (c. 460–400 BC). Thucydides' works contain significant parallels with the writings of classical realists. In the Melian Dialogue, Thucydides critiques moralistic arguments made by states by arguing that it is instead self-interest and state power which motivate states and that idealistic arguments disguise this. His writings have been a significant topic for debate in the international relations field.
Scholarly interest in Thucydides peaked during the Cold War as International Relations scholars made comparisons between the bipolarity of the United States and the Soviet Union and his account of the conflict between Athens and Sparta. Rusten describes Thucydides' influence on international relations as "after the Second World War, Thucydides was read by many American opinion-makers (and by those academics who taught them) as a prototypical cold war policy analyst."
Niccolò Machiavelli
Niccolò Machiavelli (1469–1527) was a political theorist and diplomat in the Republic of Florence. His work diverged from the traditions of political theory during his time. In his text The Prince, he advocated a separation of morals and politics at a time when political theory was heavily influenced by religious ideals. Machiavelli also argued that people should view things as they are, not how they should be, and justified the use of power as a means of achieving an end. Machiavelli's writings have been prominent in western political science, and this has extended to the international relations field, where his writings have been the source of liberal and realist debate.
Thomas Hobbes
Thomas Hobbes (1588–1679) was an English political philosopher. Hobbes' major focus was not on international relations, but he influenced classical realist theory through his descriptions of human nature, his theories of the state and anarchy, and his focus on politics as a contest for power. Hobbes' theory of the "international state of nature" stems from his concept that a world without a government leads to anarchy. This expands upon Hobbes' concept of the "state of nature," which is a hypothetical scenario about how people lived before societies were formed and the role of societies in placing restrictions upon natural rights or freedoms to create order and potential peace. Due to the lack of an international society, the international system is understood to be permanently anarchic. Michael Smith describes the significance of this theory to realism as "[Hobbes'] state of nature remains the defining feature of realist thought. His notion of the international state of nature as a state of war is shared by virtually everyone calling himself a realist."
Assumptions and theories
As many of the 20th century figures associated with classical realism were strongly influenced by historians and/or sought to influence policymakers, works in classical realism tended to point to a multiplicity of causes for a wide range of outcomes, as well as to cross analytical levels of analysis.
Human nature
Classical realist theory explains international relations through assumptions about human nature. The theory is pessimistic about human behaviour and emphasizes that individuals are primarily motivated by self-interest and not higher moral or ethical aspirations. The behavior of states is theorized to be dictated by basic primal emotions; for example, Thomas Hobbes described fear or aggression as fundamental motivations. Human nature is not seen to be changeable but only controllable when placed within societal boundaries. Classical realism takes a pessimistic view of human nature, but the exact form this takes is debated, as some classical realists focus on self-interest and a desire for survival as the primary aspects of human nature, whilst others believe in humans being inherently cruel, egoistic and savage. Classical realists believe that their pessimistic vision of human nature is reflected in politics and international relations.
Hans Morgenthau in his book Politics Among Nations states that "politics is governed by objective laws that have their roots in human nature." The theory emphasizes that international relations are shaped by the tendencies of human nature, since it is not changeable but only controllable by a higher power, such as the state implementing order. Because the international system is anarchic, meaning that there is no central power within it, states are unrestrained by any overarching order and are free to express their human nature as a result.
Understanding of the state
Classical realist theory views the state as the most significant unit of analysis and understands it to be more ontologically significant than the structure of the international system. Classical realist theory attributes significant agency to state actors and believes that as states change so does the international system. This contrasts with neo-realist theory, which argues that the structure of the international system is ontologically superior and views states as unitary, meaning they are seen as rational actors objectively pursuing their national interest. Classical realists do not view states as unitary and recognise that they are shaped by state-society relationships as well as international norms; due to this conception of the state they do not regard state actions as inherently rational pursuits of the national interest. When analyzing the international system, classical realists differentiate between revisionist states and status quo states. This means that they attempt to understand which states are striving to create a new international order, how this affects international security, and how it translates into acts of aggression or causes of war. This contrasts with neo-realist theory, which has a unitary view of states and therefore does not account for the role of revisionism in explaining state aggression in the international system.
State pursuit of power
Classical realists explain state conflict and the pursuit of power by suggesting they are the result of human nature. It is theorized that within human nature there is a lust for power which drives states to accumulate it where possible. States are not just motivated to pursue power for the sake of security and survival, but may also be motivated by fear, honor, and glory, or may pursue power for its own sake. States are understood to be a reflection of human nature, and the anarchic international system is not considered to be the root cause of the pursuit of power but instead a facilitating factor. In explaining states' pursuit of power, classical realism is distinct in that later theories place less emphasis on assumptions about human nature and instead focus on the structure of the international system. Neorealist scholars argue that states seek security and explain the pursuit of power as a means of creating security, which contrasts with classical realist theory. Modern international relations scholars have noted that classical realists debated the extent to which the pursuit of power is an inherent biological drive as opposed to power being a method of self-preservation.
Balance of power
The balance of power is a key analytical tool used by realist theory. There are two key aspects to the balance of power in classical realism. Firstly, a balance of power is understood to be an unintentional result of great-power competition, which arises when multiple states' constant pursuit of power to dominate others leads to balance.
Secondly, the balance of power is also understood as the efforts of states to create an equilibrium through the use of ideational or material forces such as alliances. Realists view a balance of power as desirable because it prevents any single state from dominating the others and therefore provides security, since states are less likely to engage in a conflict or war that they cannot win. Realists also theorise that the balance of power leads to the "security dilemma." The security dilemma is the scenario in which one state increases its power in order to defend itself and create security, but this prompts other states to increase their power, leading to a spiralling effect where both sides are drawn into continually increasing their defence capabilities despite not desiring conflict. Classical realists often emphasize the inevitability of this process because of their pessimistic understanding of human nature as egoistic, which leads states to constantly desire power. This contrasts with neo-realists, who emphasise that the security dilemma is not inevitable but instead often a self-fulfilling prophecy.
Hans Morgenthau's "Six Principles of Political Realism"
The second edition of Hans Morgenthau's book Politics Among Nations features the section "The Six Principles of Political Realism." The significance of Hans Morgenthau to international relations and classical realism was described by Thompson in 1959 as "much of the literature in international politics is a dialogue, explicit or not, between Morgenthau and his critics." Morgenthau's principles of political realism (paraphrased) include the following: International politics is governed by the laws derived from human nature. Realism analyses power, and power allows the pursuit of national interest, meaning that the national interest is defined as power. Realism acknowledges the moral significance of political action but recognises the necessity for immorality in successful politics. Political realism does not identify the morals of a particular nation with universal morals.
Key debates
Idealism and realism
During the 1920s and 1930s the "first great debate" in international relations between realists and idealists occurred. Some modern historians, however, dispute the claim and instead suggest that this oversimplifies a wider-ranging series of discussions. In the interwar period liberalism was the dominant paradigm in international relations theory, but this was contested by classical realist theorists. The publication of E. H. Carr's The Twenty Years' Crisis is seen to be central to the arguments of classical realism during this time period. Carr argued against Utopian and Idealist views on international relations as well as the merit and success of the League of Nations. Following World War II and the failure of the international system to prevent the war, many saw this as a victory for realist theory.
Neorealism and classical realism
During the 1960s and 1970s the "second great debate" of international relations occurred. Following the behavioral revolution, scholars began to place a new emphasis on creating a more empirical methodology for analyzing international relations. Neorealist scholars criticized how classical realist scholars had created methodologies which lacked the standards of proof to be considered scientific theories. Classical realists had emphasized human nature as the primary form of explaining the international system; neo-realists emphasized the international structure instead.
Kenneth Waltz's Theory of International Politics was a critical text in this debate as it argued that international anarchy was a core element of international politics. After this era classical realist doctrines became less prominent in favor of neo-realism.
Atlanticism
Atlanticism, also known as Transatlanticism, is the ideology which advocates a close alliance between nations in Northern America (the United States and Canada) and in Europe on political, economic, and defense issues. The purpose is to maintain or increase the security and prosperity of the participating countries and protect liberal democracy and the progressive values of an open society that unite them under multiculturalism. The term derives from the North Atlantic Ocean, which is bordered by North America and Europe. The term can be used in a more specific way to refer to support for North Atlantic military alliances against the Soviet Union, or in a more expansive way to imply broader cooperation, perceived deeply shared values, a merging of diplomatic cultures (Weisbrode, Kenneth, The Atlanticists, Nortia Press, 2017), as well as a sense of community and some degree of integration between North America and Europe. In practice, the philosophy of Atlanticism encourages active North American, particularly American, engagement in Europe and close cooperation between states on both sides of the ocean. Atlanticism manifested itself most strongly during the Second World War and in its aftermath, the Cold War, through the establishment of various Euro-Atlantic institutions, most importantly NATO and the Marshall Plan. Atlanticism varies in strength from region to region and from country to country based on a variety of historical and cultural factors. It is often considered to be particularly strong in Eastern Europe, Central Europe, Ireland and the United Kingdom (linked to the Special Relationship). Politically, it has tended to be associated most heavily and enthusiastically but not exclusively with classical liberals or the political right in Europe. Atlanticism often implies an affinity for U.S. political or social culture, or affinity for Europe in North America, as well as the historical bonds between the two continents. There is some tension between Atlanticism and continentalism on both sides of the Atlantic, with some people emphasising increased regional cooperation or integration over trans-Atlantic cooperation. The relationship between Atlanticism and North American or European integrations is complex, and they are not seen in direct opposition to one another by many commentators. Internationalism is the foreign policy belief combining both Atlanticism and continentalism.
History
Prior to the World Wars, western European countries were generally preoccupied with continental concerns and creating colonial empires in Africa and Asia, and not relations with North America. Likewise, the United States was busy with domestic issues and interventions in Latin America, but had little interest in European affairs, and Canada, despite gaining self-governing dominion status through Confederation in 1867, had yet to exercise full foreign policy independence as a part of the British Empire. Following World War I, New York lawyer Paul D. Cravath was a noted leader in establishing Atlanticism in the United States. Cravath had become devoted to international affairs during the war, and was later a co-founder and director of the Council on Foreign Relations.
In the aftermath of World War I, while the US Senate was discussing whether or not to ratify the Treaty of Versailles (it ultimately did not), some Congressional Republicans expressed their support for a legally binding US alliance with Britain and France as an alternative to the League of Nations's and especially Article X's open-ended commitments; however, US President Woodrow Wilson never seriously explored their offer, instead preferring to focus on his (ultimately unsuccessful) fight to secure US entry into the League of Nations. The experience of having American and Canadian troops fighting with British, French, and other Europeans in Europe during the World Wars fundamentally changed this situation. Though the U.S. (and to some extent Canada) adopted a more isolationist position between the wars, by the time of the Normandy landings the Allies were well integrated on all policies. The Atlantic Charter of 1941, declared by U.S. President Franklin D. Roosevelt and British Prime Minister Winston Churchill, established the goals of the Allies for the post-war world, and was later adopted by all the Western allies. Following the Second World War, the Western European countries were anxious to convince the U.S. to remain engaged in European affairs to deter any possible aggression by the Soviet Union. This led to the 1949 North Atlantic Treaty, which established the North Atlantic Treaty Organization, the main institutional consequence of Atlanticism, which binds all members to defend the others, and led to the long-term garrisoning of American and Canadian troops in Western Europe. After the end of the Cold War, the relationship between the United States and Europe changed fundamentally, and made the sides less interested in each other. Without the threat of the Soviet Union dominating Europe, the continent became much less of a military priority for the U.S., and likewise, Europe no longer felt as much need for military protection from the U.S. As a result, the relationship lost much of its strategic importance. However, the new democracies of the former Warsaw Pact, and parts of the fractured Yugoslavia, took a different view, eagerly embracing Atlanticism as a bulwark against their continued fear of Russia, the now-separate great power that had formed the core of the Soviet Union. Atlanticism has undergone significant changes in the 21st century in light of terrorism and the Iraq War, the net effect being a renewed questioning of the idea itself and a new insight that the security of the respective countries may require alliance action outside the North Atlantic territory. After the September 11, 2001, attacks, NATO for the first time invoked Article 5, which states that any attack on a member state will be considered an attack against the entire group of members. Planes of NATO's multi-national AWACS unit patrolled the U.S. skies, and European countries deployed personnel and equipment. However, the Iraq War caused fissures within NATO, and the sharp difference of opinion between the U.S.-led backers of the invasion and opponents strained the alliance. Some commentators, such as Robert Kagan and Ivo Daalder, questioned whether Europe and the United States had diverged to such a degree that their alliance was no longer relevant. Later, in 2018, Kagan said that "we actually need the United States to be working actively to support and strengthen Europe".
The importance of NATO was reaffirmed during Barack Obama's administration, though some called him relatively non-Atlanticist compared to his predecessors. As part of the Obama Doctrine, Washington supported multilateralism with allies in Europe. Obama also enforced sanctions on Russia with European (and Pacific) allies after Russia's 2014 invasion of Ukraine and annexation of Crimea. After his presidency, Obama also stressed the Atlantic alliance's importance during the Trump administration, indirectly opposing Trump in the matter. During the Trump years, tensions rose within NATO, as a result of democratic backsliding in Hungary and Turkey, and Trump's comments against NATO members and the alliance. Robert Kagan echoed common criticisms that Trump undermined the alliance. Despite this, NATO gained two new member countries (Montenegro and North Macedonia) during that time. The importance of NATO in Europe increased due to the continuing threat of the Russian military and intelligence apparatus, the uncertainty of Russian actions in former Soviet Union countries, and various threats in the Middle East. German-Russian economic relations became an issue in the Atlantic relationship due to Nord Stream 2, among other disagreements such as trade disputes between the United States and the European Union. As the Biden administration began, top officials of the European Union expressed optimism about the Atlantic relationship. Following the February 2022 Russian invasion of Ukraine, journalists noted that the Russian aggression led to a united political response from the European Union, making the defensive relevance of the Atlantic alliance more widely known, and increasing the popularity of NATO accession in countries like Sweden and Finland. Finland joined NATO on 4 April 2023 and Sweden on 7 March 2024.
Ideology
Atlanticism is a belief in the necessity of cooperation between North America and Europe. The term can imply a belief that the bilateral relationship between Europe and the United States is important above all others, including intra-European cooperation, especially when it comes to security issues. The term can also be used "as a shorthand for the transatlantic security architecture." Supranational integration of the North Atlantic area had emerged as a focus of thinking among intellectuals on both sides of the Atlantic already in the late 19th century. Although it was not known as Atlanticism at the time (the term was coined in 1950), these thinkers developed an approach coupling soft and hard power which would to some extent integrate the two sides of the Atlantic. The idea of an attractive "nucleus" union was the greatest soft power element; the empirical fact of the hegemonic global strength such a union would hold was the hard power element. This approach was eventually implemented to a certain degree in the form of NATO, the G7 grouping and other Atlanticist institutions. In the long debate between Atlanticism and its critics in the 20th century, the main argument was whether deep and formal Atlantic integration would serve to attract those still outside to seek to join, as Atlanticists argued, or alienate the rest of the world and drive them into opposing alliances. The Atlanticist perspective that shaped relations between the United States and the Western European countries after the end of World War Two was informed by political expediency and a strong civilizational bond.
Realists, neutralists, pacifists, nationalists, and internationalists tended to believe it would do the latter, citing the Warsaw Pact as the proof of their views and treating it as the inevitable realpolitik counterpart of NATO. Broadly speaking, Atlanticism is particularly strong in the United Kingdom (linked to the Special Relationship) and eastern and central Europe (i.e. the area between Germany and Russia). There are numerous reasons for its strength in Eastern Europe: primarily the role of the United States in bringing political freedom there after the First World War (Wilson's Fourteen Points), the major role of the U.S. during the Cold War (culminating in the geopolitical defeat of the Soviet empire and its withdrawal from the region), its relative enthusiasm for bringing the countries of the region into Atlanticist institutions such as NATO, and a suspicion of the intentions of the major Western European powers. Some commentators see countries such as Poland and the United Kingdom among those who generally hold strong Atlanticist views, while seeing countries such as Germany and France tending to promote continental views and a strong European Union. In the early 21st century, Atlanticism has tended to be slightly stronger on the political right in Europe (although many variations do exist from country to country), but on the political center-left in the United States. The partisan division should not be overstated, but it exists and has grown since the end of the Cold War. While trans-Atlantic trade and political ties have remained mostly strong throughout the Cold War and beyond, the larger trend has been continentalist economic integration, with the European Economic Area and the North American Free Trade Agreement notably dividing the Atlantic region into two rival trade blocs. However, many political actors and commentators do not see the two processes as being necessarily opposed to one another; in fact, some commentators believe regional integration can reinforce Atlanticism. Article 2 of the North Atlantic Treaty, added by Canada, also attempted to bind the nations together on economic and political fronts.
Institutions
The North Atlantic Council is the premier governmental forum for discussion and decision-making in an Atlanticist context. Other organizations that can be considered Atlanticist in origin include:
NATO
Organisation for Economic Co-operation and Development (OECD)
G-6/7/8
North Atlantic Cooperation Council (NACC)
Euro-Atlantic Partnership Council (EAPC)
The German Marshall Fund of the United States (GMF)
European Horizons
The Atlantic Council
The World Bank and International Monetary Fund are also considered Atlanticist. Under a tacit agreement, the former is led by an American and the latter by a European.
Prominent Atlanticists
Well-known Atlanticists include former U.S. Presidents Franklin D. Roosevelt, Harry Truman, and Ronald Reagan; U.K. Prime Ministers Winston Churchill, Margaret Thatcher, Tony Blair, and Gordon Brown; former U.S. Secretary of State Dean Acheson; former Assistant Secretary of War and perennial presidential advisor John J. McCloy; former U.S. National Security Advisor Zbigniew Brzezinski; former NATO Secretaries-General Javier Solana and Joseph Luns; and Council on Foreign Relations co-founder Paul D. Cravath.
See also
Transatlantic relations
United States–European Union relations
Canada–European Union relations
Special Relationship
Western World
North Atlantic Treaty Organization (NATO)
Mid-Atlantic English
Transatlantic Free Trade Area (TAFTA)
Eurasianism
German Marshall Fund, an Atlanticist think tank
Atlantik-Brücke, a German-American non-profit association and Atlanticist think tank
Atlantic Council, an Atlanticist think tank
Organization for Security and Cooperation in Europe
Bilderberg Group
British-American Project
Pacificism
Columbian exchange
Feudalism in England
Feudalism as practiced in the Kingdom of England during the medieval period was a state of human society that organized political and military leadership and force around a stratified formal structure based on land tenure. As a military defence and socio-economic paradigm designed to direct the wealth of the land to the king while it levied military troops to his causes, feudal society was ordered around relationships derived from the holding of land. Such landholdings are termed fiefdoms, fiefs, or fees.
Origins
The word "feudalism" was not a medieval term, but an invention of sixteenth-century French and English lawyers to describe certain traditional obligations between members of the warrior aristocracy. Not until 1748 did it become a popular and widely used word, thanks to Montesquieu's De L'Esprit des Lois ("The Spirit of the Laws"). The coined word feudal derives from an ancient Gothic source faihu, signifying simply "property", which in its most basic sense was "cattle", and is a cognate of the classical Latin word pecus, which means "cattle", "money" and "power". European feudalism had its roots in the Roman manorial system of the colonus (in which workers were compensated with protection while living on large estates) and in the 8th century CE Kingdom of the Franks, where a king gave out land for life (benefice) to reward loyal nobles and receive service in return.
Anglo-Saxon feudal structures
Following the end of Roman rule in Britain, feudalism emerged in the subsequent Anglo-Saxon period, though not in as comprehensive or uniform a manner as in the later Norman era. Anglo-Saxon kings, both during the Heptarchy period and in the united English kingdom after King Athelstan, often granted supporters and nobles lands in exchange for military service. These were often thegns, warriors who controlled lands and fought with kings at their call-up and behest. Similarly, ealdormen ruled counties or groups of counties and were likewise appointed by the king to render service when called upon. Various writs survive from Anglo-Saxon monarchs, where specific grants of land were given to nobility throughout England. Thegns often worked along with ealdormen and shire reeves to enforce law and order and collect taxes in given areas. This system was indigenous to the Anglo-Saxons, and greatly mimicked feudalism as practiced in Europe at the time. Armies used in various conflicts were drawn from such arrangements. The invasion of Scotland by King Athelstan in the 930s drew on the thegns he had established. The English army at the Battle of Hastings was raised in a similar way, and after the defeat by the Normans much of the standing native English nobility was wiped out. A primary difference between this form of feudalism, as practiced in Anglo-Saxon England vis-à-vis the Norman period, was that it was a more native form of ties between the king and his nobles. It drew heavily on longstanding Germanic practices, distinct in evolution from the Frankish models employed contemporaneously. By 1066, England was a steady patchwork of lands owned by thegns and ealdormen, though the Anglo-Saxon nobility would steadily lose their lands after the Norman Conquest. The Domesday Book often remarked on who owned lands prior to the Conquest, which often were native English lords or King Edward the Confessor himself.
Classic English feudalism
Feudalism took root in England with William of Normandy's conquest in 1066.
Over a century earlier, before the unification of England, the seven relatively small individual English kingdoms, known collectively as the Heptarchy, maintained an unsteady relationship of raids, ransoms, and truces with Vikings from Denmark and Normandy from around the seventh to tenth centuries. This fracture in the stability of the Heptarchy paved the way for the successful Norman Conquest, and England's new king, William I, initiated a system of land grants to his vassals, the powerful knights who fought alongside him, in order to have them maintain his new order throughout the kingdom. The feudal system of governance and economics thrived in England throughout the high medieval period, a time in which the wealthy prospered while the poor labored on the land with relatively little hope of economic autonomy or representative government. In the later medieval period, feudalism began to diminish in England with the eventual centralization of government that began around the first quarter of the fourteenth century, and it remained in decline until its eventual abolition in England with the Tenures Abolition Act 1660. By then, a deeply embedded socio-economic class disparity had laid the foundation for the rise of capitalism to take the place of feudalism as the British Empire grew. Under the English feudal system, the person of the king (asserting his allodial right) was the only absolute "owner" of land. All nobles, knights and other tenants, termed vassals, merely "held" land from the king, who was thus at the top of the "feudal pyramid". When feudal land grants were of indefinite or indeterminate duration, such grants were deemed freehold, while fixed term and non-hereditable grants were deemed non-freehold. However, even freehold fiefs were not unconditionally heritable—before inheriting, the heir had to pay a suitable feudal relief. Beneath the king in the feudal pyramid was a tenant-in-chief (generally a baron or knight) who, as the king's vassal, held and drew profit from a piece of the king's land. At the next tier of feudalism, holding land from the vassal was a mesne tenant (generally a knight, sometimes a baron, including tenants-in-chief in their capacity as holders of other fiefs) who in turn held parcels of land when sub-enfeoffed by the tenant-in-chief. Below the mesne tenant, further mesne tenants could hold from each other in series, creating a thriving, if complicated, feudal pyramid.
Fall of English feudalism
English feudalism first began to fall during the Anarchy, in which there were two factions: the supporters of Empress Matilda and of Stephen of Blois. Matilda was the daughter of Henry I of England, who died in 1135; Henry's only legitimate son had drowned earlier when the White Ship sank in 1120, leaving Matilda as his designated heir and first in line to the English throne. However, many of the barons who had promised to allow Matilda to rule broke their promise and supported Stephen (Henry's nephew) as king. This divide led to a civil war, which allowed many peasants to gain political power as there was no undisputed ruler. Eventually there was a compromise, although England still felt the repercussions of the civil war for years afterwards. The next important event was on 15 June 1215, when King John of England was forced to put his seal to the Magna Carta by pressure from his rebellious barons. This event began the collapse of the King's power within the feudal system.
In the mid-13th century, Simon de Montfort and several barons formed a crucial part of English politics with the creation of what became the House of Commons alongside what became the House of Lords. The final major event in the fall of English feudalism is considered to be the Black Death, which killed many of the peasants. The reduced supply of labour led to it commanding a higher price. This led to the Statute of Labourers 1351, which prohibited the payment of wages above their pre-plague levels. This caused peasants to work more and be paid less for their work. This led to peasant uprisings demanding higher wages, such as the Peasants' Revolt in 1381. The king, Richard II, met the peasants' delegation and promised that their demands would be met. Instead of keeping his word, however, the king tortured the leaders of the peasant rebellions. The peasants could no longer be forced to accept the servile conditions that they had held under the feudal system.
Vassalage
Before a lord (or king) could grant land (a fief) to a tenant, he had to make that person a vassal. This was done at a formal and symbolic ceremony called a commendation ceremony, composed of the two-part act of homage and oath of fealty. During homage, the lord and vassal entered a contract in which the vassal promised to fight for the lord at his command, whilst the lord agreed to protect the vassal from external forces, a valuable right in a society without police and with only a rudimentary justice system. The contract, once entered, could not be broken lightly. It was often sworn on a relic like a saint's bone or on a copy of the Gospel, and the gravity of the commendation was accentuated by the clasping of the vassal's hands between the lord's as the oath was spoken. A ceremonial kiss often sealed the contract, though the kiss was less significant than the ritual of homage and the swearing of fealty. The word fealty derives from the Latin fidelitas and denotes the fidelity owed by a vassal to his feudal lord. Fealty also refers to an oath that more explicitly reinforces the commitments of the vassal made during homage. Once the commendation was complete, the lord and vassal were in a feudal relationship with agreed-upon mutual obligations to one another. The vassal's principal obligation to the lord was the performance of military service. Using whatever equipment the vassal could obtain by virtue of the revenues from the fief, the vassal was responsible for answering calls to military service on behalf of the lord. The equipment required and the duration of the service were usually agreed upon between the parties in detail in advance. For example, a vassal such as a baron with a wealthy fiefdom lived well off the revenues of his lands and was able (and required) to provide a correspondingly impressive number of knights when called upon. Considering that each knight needed to attend his service with horses, armor, weapons, and even food and provisions to keep himself, his animals, and his attendants for the demanded period of time, a baron's service to the king could be costly in the extreme. This security of military help was the primary reason the lord entered into the feudal relationship, but the vassal had another obligation to his lord, namely attendance at his court, whether manorial, baronial or at the king's court itself in the form of parliament. This involved the vassal providing "counsel", so that if the lord faced a major decision he would summon all his vassals and hold a council.
On the manorial level this might be a fairly mundane matter of agricultural policy, but the duty also included service as a juror when the lord handed down sentences for criminal offenses, up to and including cases of capital punishment. Concerning the king's feudal court, the prototype of parliament, such deliberation could include the question of declaring war. Depending on the period and on the location of the court, baronial, or manorial estate, feudal customs and practices varied. See examples of feudalism.
Varieties of feudal tenure
Under the feudal system several different forms of land tenure existed, each effectively a contract with differing rights and duties attached thereto. The main varieties are as follows:
Military tenure
Freehold (indeterminate & hereditable):
by barony (per baroniam). Such tenure constituted the holder a feudal baron, and was the highest degree of tenure. It imposed duties of military service. In time barons were differentiated between greater and lesser barons, with only greater barons being guaranteed attendance at parliament. All such holders were necessarily tenants-in-chief.
by knight-service. This was a tenure ranking below barony, and was likewise for military service, though of a lesser extent. It could be held in capite from the king or as a mesne tenancy from a tenant-in-chief.
by castle-guard. This was a form of military service which involved guarding a nearby castle for a specified number of days per year.
by scutage, where the military service obligations had been commuted, or replaced, by money payments. It was common during the decline of the feudal era and symbolic of the change from tenure by personal service to tenure by money payment.
Non-military tenure
Freehold (indeterminate & hereditable):
by serjeanty. Such tenure was in return for acting as a servant to the king, in a non-military capacity. Service in a ceremonial form is termed "grand serjeanty" whilst that of a more functional or menial nature is termed "petty serjeanty".
by frankalmoinage, generally a tenure restricted to clerics.
Non-freehold (fixed-term & non-hereditable):
by copyhold, where the duties and rights were tailored to the requirements of the lord of the manor and a copy of the terms agreed was entered on the roll of the manorial court as a record of such non-standard terms.
by socage. This was the lowest form of tenure, involving payment in produce or in money.
Sources
Encyclopædia Britannica, 9th ed., vol. 9, pp. 119–123, "Feudalism"
Further reading
Barlow, F. (1988) The Feudal Kingdom of England 1042–1216. 4th edition, London.
Round, J. Horace (1909) Feudal England. London.
Molyneux-Child, J.W. (1987) The Evolution of the English Manorial System. Lewes: The Book Guild.
External links
"Feudalism", by Thomas D. Crage. Encyclopædia Britannica Online.
"Feudalism?", by Paul Halsall. Internet Medieval Sourcebook.
Political history
Political history is the narrative and survey of political events, ideas, movements, organs of government, voters, parties and leaders. It is closely related to other fields of history, including diplomatic history, constitutional history, social history, people's history, and public history. Political history studies the organization and operation of power in large societies. From approximately the 1960s onwards, the rise of competing subdisciplines, particularly social history and cultural history, led to a decline in the prominence of "traditional" political history, which tended to focus on the activities of political elites. In the two decades from 1975 to 1995, the proportion of professors of history in American universities identifying with social history rose from 31% to 41%, and the proportion of political historians fell from 40% to 30%. Political world history The political history of the world examines the history of politics and government on a global scale, including international relations. Aspects of political history The first "scientific" political history was written by Leopold von Ranke in Germany in the 19th century. His methodologies profoundly affected the way historians critically examine sources; see historiography for a more complete analysis of the methodology of various approaches to history. An important aspect of political history is the study of ideology as a force for historical change. One author asserts that "political history as a whole cannot exist without the study of ideological differences and their implications." Studies of political history typically centre around a single nation and its political change and development. Some historians identify the growing trend towards narrow specialization in political history during recent decades: "while a college professor in the 1940s sought to identify himself as a "historian", by the 1950s "American historian" was the designation." From the 1970s onwards, new movements challenged traditional approaches to political history. The development of social history shifted the emphasis away from the study of leaders and national decisions, and towards the role of ordinary people, especially outsiders and minorities. Younger scholars shifted to different issues, usually focused on race, class and gender, with little room for elites. After 1990 social history itself began to fade, replaced with postmodern and cultural approaches that rejected grand narrative. United States: The new political history Traditional political history focused on major leaders and had long played a dominant role beyond academic historians in the United States. These studies accounted for about 25% of the scholarly books and articles written by American historians before 1950, and about 33% into the 1960s, followed by diplomacy. The arrival in the 1960s and 1970s of a new interest in social history led to the emergence of the "new political history" which saw young scholars put much more emphasis on the voters' behavior and motivation, rather than just the politicians. It relied heavily on quantitative methods to integrate social themes, especially regarding ethnicity and religion. The new social science approach was a harbinger of the fading away of interest in Great Men. The eclipse of traditional political approaches during the 1970s was a major shock, though diplomatic history fell even further. It was upstaged by social history, with a race/class/gender model. 
The share of political articles submitted to the Journal of American History fell by more than half, from 33% to 15%. Patterson argued that contemporary events, especially the Vietnam War and Watergate, alienated younger scholars from the study of politicians and their deeds. Political history never disappeared, but it never recovered its dominance among scholars, despite its sustained high popularity among the reading public. Some political historians made fun of their own predicament, as when William Leuchtenburg wrote, "the status of the political historians within the profession has sunk to somewhere between that of a faith healer and a chiropractor. Political historians were all right in a way, but you might not want to bring one home to meet the family." Others were more analytical, as when Hugh Davis Graham observed: The ranks of traditional political historians are depleted, their assumptions and methods discredited, along with the Great White Man whose careers they chronicled. Britain Readman (2009) discusses the historiography of British political history in the 20th century. He describes how British political scholarship mostly ignored 20th century history because of the temporal proximity of the recent past, the unavailability of primary sources, and the potential for bias. The article explores how transitions in scholarship have allowed for greater interest in 20th century history among scholars, including less reliance on archival sources, methodological changes in historiography, and the flourishing of new forms of history such as oral history. Germany In the course of the 1960s, however, some German historians (notably Hans-Ulrich Wehler and his cohort) began to rebel against the traditional emphasis on the primacy of foreign policy, instead suggesting a "Primacy of Domestic Politics" (Primat der Innenpolitik), in which the insecurities of (in this case German) domestic policy drove the creation of foreign policy. This led to a considerable body of work interpreting the domestic policies of various states and the ways this influenced their conduct of foreign policy. France The French Annales School had already put an emphasis on the role of geography and economics on history, and of the importance of broad, slow cycles rather than the constant apparent movement of the "history of events" of high politics. It downplayed politics and diplomacy. The most important work of the Annales school, Fernand Braudel's The Mediterranean and the Mediterranean World in the Age of Philip II, contains a traditional Rankean diplomatic history of Philip II's Mediterranean policy, but only as the third and shortest section of a work largely focusing on the broad cycles of history in the longue durée ("long term"). The Annales school was broadly influential, leading to a turning away from political history towards an emphasis on broader trends of economic and environmental change. Social history In the 1960s and 1970s, an increasing emphasis on giving a voice to the voiceless and writing the history of the underclasses, whether by using the quantitative statistical methods of social history or the more postmodern assessments of cultural history, also undermined the centrality of politics to the historical discipline. Leff noted how social historians "disdained political history as elitist, shallow, altogether passe, and irrelevant to the drama of everyday lives."
History of political regimes and institutions MaxRange data is a project that defines and shows in detail the political status and development of institutional regimes of all states in the world from 1789. MaxRange also describes the background, development, external sources and major causes behind all political changes. MaxRange is a dataset defining the level of democracy and institutional structure (regime type) on a 100-graded scale where every value represents a unique regime type. Values are sorted from 1 to 100 based on level of democracy and political accountability. MaxRange defines the value (regime type) for every state and every month from 1789 to 2015, and is continually updated. MaxRange was created and developed by Max Range, and is now associated with Halmstad University, Sweden.
References
Further reading
Callaghan, John, et al., eds. Interpreting the Labour Party: Approaches to Labour Politics and History (2003) online; also online free; British
Craig, David M. "'High Politics' and the 'New Political History'". Historical Journal (2010): 453–475; British online
Elton, G. R. The practice of history (1968), British.
French, John D. "Women in Postrevolutionary Mexico: The Emergence of a New Feminist Political History", Latin American Politics and Society (2008) 50#2, pp. 175–184.
Huret, Romain. "All in the Family Again? Political Historians and the Challenge of Social History", Journal of Policy History, 21 (no. 3, 2009), 239–63.
Kowol, Kit. "Renaissance on the Right? New Directions in the History of the Post-War Conservative Party". Twentieth Century British History 27#2 (2016): 290–304. online
Pasquino, Gianfranco. "Political History in Italy", Journal of Policy History, July 2009, Vol. 21, Issue 3, pp. 282–297; discusses political historians such as Silvio Lanaro, Aurelio Lepre, and Nicola Tranfaglia, and studies of Fascism, the Italian Communist party, the role of the Christian Democrats in Italian society, and the development of the Italian parliamentary Republic. excerpt
Ranger, Terence. "Nationalist historiography, patriotic history and the history of the nation: the struggle over the past in Zimbabwe". Journal of Southern African Studies 30.2 (2004): 215–234.
Readman, Paul. "The State of Twentieth-Century British Political History", Journal of Policy History, July 2009, Vol. 21, Issue 3, pp. 219–238
Smith, Anthony D. The nation in history: historiographical debates about ethnicity and nationalism (UP of New England, 2000)
Sreedharan, E. A manual of historical research methodology (Trivandrum: Centre for South Indian Studies, 2007).
Sreedharan, E. A textbook of historiography: 500 BC to AD 2000 (New Delhi: Orient Longman, 2004).
In the USA
Bogue, Allan G. "United States: The 'new' political history." Journal of Contemporary History (1968) 3#1 pp. 5–27. in JSTOR
Brinkley, Alan. "The Challenges and Rewards of Textbook Writing: An Interview with Alan Brinkley". Journal of American History 91#4 (2005): 1391–97 online; focus on political history
Gillon, Steven M. "The future of political history". Journal of Policy History 9.2 (1997): 240–255, in USA.
Graham, Hugh Davis. "The stunted career of policy history: a critique and an agenda". Public Historian 15.2 (1993): 15–37; policy history is a closely related topic online.
Jacobs, Meg, William J. Novak, and Julian Zelizer, eds. The democratic experiment: New directions in American political history (Princeton UP, 2009).
"Historiography of American Political History" in Jack Greene, ed., Encyclopedia of American Political History (Scribner's, 1984), vol 1. pp 1–25 online Larson, John Lauritz, and Michael A. Morrison, eds. Whither the Early Republic: A Forum on the Future of the Field (U of Pennsylvania Press, 2012). Leuchtenburg, William E. "The Pertinence of Political History: Reflections on the Significance of the State in America", Journal of American History 73, (1986), 585–600. Newman, Richard. "Bringing Politics Back in... to Abolition." Reviews in American History 45.1 (2017): 57–64. Silbey, Joel H. "The State and Practice of American Political History at the Millennium: The Nineteenth Century as a Test Case". Journal of Policy History 11.1 (1999): 1–30. Swirski, Peter (2011). American Utopia and Social Engineering in Literature, Social Thought, and Political History. New York, Routledge. External links Conference: Rethinking Modern British Studies, 2015, numerous papers and reports on the historiography of British politics. Abstracts of 2015 papers scholarly journal Diplomatic History Documents of Diplomatic History Fletcher School at Tufts International Relations Resources A New Nation Votes: American Elections Returns 1787–1825 French Website of the Comité d'histoire parlementaire et politique (Parliamentary and Political History Committee) and Parlement(s), Revue d'histoire politique, published three times a year. It contains a lot of information about French political history, including about 900 references of scholarly political history studies and a bibliography of parliamentary history. Fields of history
Historical thinking
Historical thinking is a set of critical literacy skills for evaluating and analyzing primary source documents to construct a meaningful account of the past. Sometimes called historical reasoning skills, historical thinking skills are frequently described in contrast to historical content knowledge such as names, dates, and places. This dichotomous presentation is often misinterpreted as a claim for the superiority of one form of knowing over the other. The distinction is generally made to underscore the importance of developing thinking skills that can be applied when individuals encounter any historical content. History educators have varying perspectives about the extent to which they should emphasize facts about the past, moral lessons, connections to current events, or historical thinking skills, and different beliefs about what historical thinking involves. U.S. Academic Standards and Disciplinary Frameworks In the United States, the National Center for History in the Schools at the University of California, Los Angeles has developed history standards that include benchmarks for both content in U.S. and world history and historical thinking skills in kindergarten through grade 4 and grades 5-12. In both of these age ranges, the Center defines historical thinking in five parts: Chronological Thinking; Historical Comprehension; Historical Analysis and Interpretation; Historical Research Capabilities; and Historical Issues-Analysis and Decision-Making. As part of the national assessment effort called "The Nation's Report Card," the United States Department of Education also developed benchmarks for student achievement in U.S. history. Their rubric divides history learning into three basic dimensions: major historical themes, chronological periods, and ways of knowing and thinking about history. The third dimension is further divided into two parts: historical knowledge and perspective, and historical analysis and interpretation. History Textbooks History textbooks draw much attention from history educators and educational researchers. The use of textbooks is nearly universal in history, government, and other social studies courses at the primary and secondary levels in the U.S.; however, the role of textbooks in these courses remains controversial. Arguments against reliance on textbooks have ranged from ideological to pragmatic. While textbooks are often presented as the objective truth, they include constructed versions of a selected period in the past. The construction and adoption of textbooks can be political, with groups fighting over the version of history they think should be presented to schoolchildren. For example, Texas history textbooks did not include slavery as a central cause of the Civil War until 2018, even though historians have long understood slavery to be a core cause of the war. Historical thinking has been suggested as a way to avoid presenting only one narrative as the truth. In response to the controversy over Texas textbooks, University of Northern Colorado history department chair Fritz Fischer said that "many of these problems could be solved if the school board prioritized making primary documents available to students, rather than deciding on which version of events ought to be taught."
Still, other critics believe that using textbooks undermines the process of learning history by sacrificing thinking skills for content—that textbooks allow teachers to cover vast amounts of names, dates, and places while encouraging students simply to memorize instead of question or analyze historical events. For example, Sam Wineburg argues, "Traditional history instruction constitutes a form of information, not a form of knowledge. Students might master an agreed-upon narrative, but they lacked any way of evaluating it, of deciding whether it, or any other narrative, was compelling or true" (41). Despite their drawbacks, most textbook critics concede that textbooks are necessary tools in history education. Proponents of textbook-based curricula point out that history teachers require resources to support the broad scope of topics covered in the typical history classroom. Well-designed textbooks can provide a foundation on which enterprising educators can build other classroom activities that develop historical thinking. Teaching Models Models for historical thinking have been developed to better prepare educators to facilitate historical thinking literacies in students. Benchmarks for Historical Thinking Peter Seixas, a professor at the University of British Columbia and creator of The Historical Thinking Project, outlines six distinct but closely interrelated historical thinking concepts that constitute historical thinking literacy in students. The concepts focus on developing the skills necessary for students to create an account of the past using primary source documents and narratives, or what Seixas terms "traces" and "accounts." Although these benchmarks provide a model for developing historical literacies, Seixas states that these concepts can only be applied alongside substantial learning about the past. Establishing Historical Significance is the ability to identify which events, issues, and trends are historically significant and how they connect to one another. Historical significance varies over time and between groups; thus, the criteria for determining what to study vary (e.g. Canadians will study Canadian history due to national connections). Using Primary Sources as Evidence involves locating, choosing, understanding, and providing context for the past using primary sources of evidence. When teaching with primary sources, it is important to understand the differences between first, second, and third order sources, which are ordered by the originality of material and their proximity to the source of origin. First order sources, or primary sources, are contemporaneous records of events as they were first described or as they happened, without interpretive commentary (e.g., reports, speeches, letters). Meanwhile, second order or secondary sources are those that corroborate, restate, explain, or interpret first order documents (e.g., commentaries, reviews, analyses). Finally, third order or tertiary sources list or catalog other sources (e.g., encyclopedias). While second and third order sources are read for information, first order sources require students to draw inferences and make comparisons about the purposes, values, and world views of the authors to construct a narrative argument about history or assess what questions cannot be answered. Identifying Continuity is the ability to understand how issues change or stay the same over time and to identify the change as progress or decline.
Placing historical events in chronological order is a way of identifying continuity, and the ability to group events into identifiable periods helps to better understand their interconnection. Analyzing Cause and Consequence requires students to recognize how humans can cause change that impacts present-day social, political and natural (e.g., geographic) issues. Seixas and Peck note that this benchmark requires understanding causes and/or circumstances, which include "...long-term ideologies, institutions, and conditions, and short-term motivations, actions and events" that lead to particular consequences in history and affect the present. Taking a Historical Perspective is understanding the different social, cultural, intellectual, and emotional perspectives that formed the experiences and actions of people from the past. Understanding the Moral Dimension of History involves learning about moral issues today by examining the past. This is an important step in historical literacy because it requires reserving present-day moral judgments to understand actions from the past without approving of those actions. SCIM-C Strategy Created by David Hicks, Peter E. Doolittle & E. Thomas Ewing, the SCIM-C strategy of historical thinking focuses on developing self-regulating practices when engaging with primary sources. The SCIM-C strategy focuses on the development of historical questions to be answered when analyzing primary sources. This strategy provides a scaffold for students as they build the more complex investigation and analysis practices identified in the "capstone stage". The capstone stage in the SCIM-C model relies on students having analyzed a number of historical documents and having built some historical knowledge about the time, event, or issue being studied. Summarizing is the process of finding information using a primary source. This information can include the type of source (e.g. text, photograph), creator, subject, date it was created, and the opinion or perspective of the author. Contextualizing is the skill of identifying when and in what context the primary source was created. By placing the primary source in context, the source can more easily be treated as a historical document separate from contemporary morals, ethics, and values. Inferring is the ability to use the information gathered during the summarizing and contextualizing of a source to develop a greater understanding of the sub-text of a primary source. This stage relies on the ability to ask questions requiring inference about what is not stated directly in the source. Monitoring (the capstone stage) involves identifying initial assumptions that may have been a part of the historical question asked. This stage requires an analysis of the original question and whether the historical information found has answered that question or whether more questions need to be considered. Corroborating is the final stage, which can occur only once several historical documents have been analyzed. This stage involves comparing evidence from a number of sources. This comparison includes looking for similarities and differences in perspectives, gaps in the information, and contradictions.
Resources
Kobrin, David. Beyond the Textbook: Teaching History Using Primary Sources. Portsmouth, NH: Heinemann, 1996.
Lesh, Bruce. "Why Won't You Just Tell Us the Answer?" Teaching Historical Thinking in Grades 7-12. Portsmouth: Stenhouse, 2011.
Loewen, James. Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong. New York: Touchstone, 1995.
National Center for Education Statistics. National Assessment of Educational Progress: Nation's Report Card. 2003 (last accessed 29 June 2004).
National Center for History in the Schools. National Standards for History. 1996 (last accessed 14 February 2011).
Stearns, P., Seixas, P., Wineburg, S. (Eds.). Knowing, Teaching and Learning History: National and International Perspectives. New York: NYU Press, 2000.
Wineburg, Sam. Historical Thinking and Other Unnatural Acts. Philadelphia, PA: Temple University Press, 2001.
Wineburg, Sam, Martin, Daisy, Monte-Sano, Chauncey. Reading like a Historian: Teaching Literacy in Middle and High School Classrooms. New York: Teachers College Press, 2012.
National History Education Clearinghouse
Political socialization
Political socialization is the process by which individuals internalize and develop their political values, ideas, attitudes, and perceptions via the agents of socialization. It occurs through processes of socialization that can be structured as primary and secondary socialization. Primary socialization agents include the family, whereas secondary socialization refers to agents outside the family. Agents such as family, education, media, and peers have the greatest influence in establishing the varying political lenses that frame one's perception of political values, ideas, and attitudes. These perceptions, in turn, shape and define individuals' sense of who they are and how they should behave in the political and economic institutions in which they live. This learning process shapes perceptions that influence which norms, behaviors, values, opinions, morals, and priorities will ultimately shape their political ideology: it is a "study of the developmental processes by which people of all ages and adolescents acquire political cognition, attitudes, and behaviors." These agents expose individuals to varying degrees of influence, inducting them into the political culture and shaping their orientations towards political objects. Throughout a lifetime, these experiences influence one's political identity and shape one's political outlook. Agents of socialization Agents of socialization, sometimes called institutions, work together to influence and shape people's political norms and values. In the case of political socialization, the most significant agents include, but are not limited to, families, media, education, and peers. Other agents include religion, the state, and community. These agents shape an individual's understanding of politics by exposing them to political ideas, values, and behaviors. Family Across the decades, the literature has heavily emphasized that the family is the most influential agent, with the transmission of attitudes from parent to child being the most prominent channel of socialization. This claim has found especially strong support for the transmission of voting behaviour, partisanship and religious attitudes. Especially in contexts of high politicization and homogeneity in political views, transmission is argued to be higher. The literature examines how aspects of family structure and dynamics change the influence on the offspring's values as a function of the distribution of their parents' attitudes. Families perpetuate values that support political authorities and can heavily contribute to children's initial political ideological views, or party affiliations. The literature suggests that the intergenerational transmission of political attitudes shows a strong lineage running through parents and siblings. Families have an effect on "political knowledge, identification, efficacy, and participation", depending on variables such as "family demographics, life cycle, parenting style, parental level of political cynicism", interest and politicization, and "frequency of political discussions", as well as the saliency of the issues that are being discussed. Parents: The earliest literature on the influence of parents suggests that the varying ways parents raise their children become a significant catalyst in shaping their political attitudes and behaviour.
However, the initial view of parent-youth political socialisation studies, that partisan views are simply inherited from parents by their children, has been challenged by various studies arguing that while the family still plays an important role in the political orientation of offspring, its intensity is reduced over time once other influential aspects are taken into account. The most frequently found intergenerational transmissions between parents and their children concern partisanship and religious beliefs. What has been found is that especially salient and frequently discussed topics are more likely to be transmitted, and stronger transmission occurs in highly politicised households with homogenous views on political issues. It is further argued that the different methods of raising a child result in the child establishing formative values about all aspects of one's social life, such as religion and cultural traditions. Parenting style in particular has been argued to be influential, with nurturant parenting resulting in more liberal views and strict parenting promoting conservatism. This is argued to be due to a state-family heuristic, which assumes that complex phenomena such as politics and the state-society relationship are understood through what we know from parent-child dynamics. This in turn suggests that approaching social institutions in this context is vital, as they exert a primary influence compared with economic or more formal organizations. The literature also suggests that the transmission of prejudice from parents is more influential on political attitudes than economic and social stratification. Ultimately, the literature has found the study of the transmission of attitudes from parent to child significant because parents are generally more conservative than their offspring, commonly associated with traditional attitudes and a push towards continuity. Siblings: The composition of the household in terms of the sex of the siblings has been argued to influence boys in becoming more or less conservative. What has been suggested is that for boys, having sisters influences them to develop more conservative views on gender roles and partisanship. This has been argued to be caused by a different approach to child raising and the division of household chores, which varies in families depending on whether there are sons and daughters or only sons. Therefore, if brothers do have sisters, they are less likely to be exposed to traditionally feminized household chores, and so they may adopt this view of gender roles and in turn also hold more traditional views in politics. Furthermore, the literature suggests that children and adolescents are far more successfully socialized into cohort-centric attitudes when they have siblings close in age. Thus, this cohort-centric overlap of multiple generations within the family, parents and siblings, supports the idea that siblings who share the core agents of familial socialization develop coherent political attitudes. Attitudes linked to transmission via siblings include social trust and civic engagement. Media Mass media is not only a source of political information; it is an influence on political values and beliefs. The culmination of information gained from entertainment becomes the values and standards by which people judge. Most people choose what media they are exposed to based on their already existing values, and they use information from the media to reaffirm what they already believe.
Media, from news coverage and late-night programs to social media apps, present varying political stances that are often associated with increasing political participation. However, the literature suggests that media coverage increasingly motivates users to delve into politics, as media outlets lean toward the stories that will get them more views and engagement. This suggests that outlets with political and financial motives foster more partisan polarization when doing so increases viewership. Segments that reinforce viewers' existing beliefs draw more viewership, and individuals are more likely to rewatch or pay for content that confirms what they already think. Reinforced media segments thus become confirmatory evidence that continuously polarizes political information. This has become the perfect environment to enhance partisan polarization among voters through national outlets that reinforce extremist positions. These extremist positions have consistently found their way into party platforms, moving both parties towards supporting more extreme values and increasing mass partisan polarization. Ultimately, however, the common core of information, and the interpretation the media applies to it, leads to shared knowledge and basic values throughout a given population. Most media entertainment and information does not vary much throughout the country, and it is consumed by all types of audiences. Although there are still disagreements and different political beliefs and party affiliations, generally there are not huge ideological disparities among the population because the media helps create a broad consensus on basic US democratic principles. Overall, the increase in the media market's demand for viewership has encouraged more polarized political discourse, and with advancing technologies, our dependency on the Internet and the media's vulnerability will only continue to increase, making it ever more vital to address the threat misinformation poses to the integrity of democracy. Print Media: Print media is the oldest media form of political socialization, encompassing books, poems, and newspapers. Until the invention of radio around 1900, print media was the primary way individuals received information that shaped their political attitudes and beliefs. Studies show two-thirds of newspaper readers do not know their newspaper's position on specific issues, and most media stories are quickly forgotten. Older people read more newspapers than younger people, and people from the ages of twelve to seventeen (although they consume the most media) consume the least amount of news. Broadcast Media: Media influence in political socialization continues with both fictional and factual media sources. Adults have increased exposure to news and political information embedded in entertainment; fictional entertainment (mostly television) is the most common source of political information. The most common forms of broadcast media are television and radio, which increase attention to politics as people become more informed, with this information and these beliefs shaping political attitudes. Studies on public opinion of the Bush administration's energy policies show that the public pays more attention to issues that receive a lot of media coverage and forms collective opinions about these issues. This demonstrates that the mass media's attention to an issue affects public opinion.
Moreover, extensive exposure to television has led to "mainstreaming," aligning people's perception of political life and society with television's portrayal of it. Digital Media: Digital media, such as YouTube, Vimeo, and Twitch, accounted for viewership rates of 27.9 billion hours in 2020. Examples of digital media include content created, distributed, and viewed on a given platform from a digital electronic device. Digital media in turn increases political polarization, as recommendation algorithms confine users to echo chambers of content they agree with and enjoy. Social Media: The role of social media in political socialization, from scrolling on TikTok to checking the trending page on Twitter, has become increasingly powerful in presenting news and varying political perspectives. This puts political socialization in the palm of one's hand: the constant production of new content is a new variable, still in its infancy, shaping how people establish their political beliefs. Social media has significantly increased the number of users who share relevant political information, increasing exposure to political discourse. These platforms have created the perfect environment for individuals to be presented with, and to reinforce, their beliefs through the advancing programming of echo chambers. Algorithms are increasingly confining users in a cycle of related content. This gives media outlets increasing power to present biased news as supporting evidence on a partisan issue, reinforcing confirmatory information and potentially false realities shaped by misinformation. Exposure to political accountability through the media also motivates increasing polarization, as it exposes where potential candidates stand on a given issue. Education Over the many years students spend in primary and secondary school, they are taught vital political principles such as voting, elected representatives, individual rights, personal responsibility, and the political history of their state and country. There is also evidence that education is a significant factor in establishing political attitudes during the crucial period of adolescence, with three central themes examining how civic courses, teachers, and peer groups often provide alternative perspectives to their parents' political attitudes. This identifies civic influences in an educational setting as vital in establishing generational political differences during adolescent socialization. Other literature has found that involvement in high school activities provides adolescents with direct experience of political and civic engagement and fosters activist orientations in their attitudes alongside increased political discourse. Religion Religious beliefs and practices influence political opinions, priorities, and political participation. The theological and moral perspectives offered by religious institutions shape judgment regarding political attitudes and, ultimately, translate to direct influence on political matters such as "the redistribution of wealth, equality, tolerance for deviance, individual freedom, the severity of criminal punishment, policies relating to family structure, gender roles, abortion, anti-gay rhetoric, and the value of human life." The State State governments can control mass media to "inform, misinform, or disinform the press and thus the public", a strategy referred to as propaganda.
The ability to control agents of socialization, such as the media, allows the state to serve a political, economic, or personal agenda that benefits the state. Community Community mobilization brings significant experiences of political socialization that can influence one's political attitudes around a collective community goal. An example is California's Proposition 187, which specifically targeted illegal immigrants, including in Los Angeles County. Given the severity of a policy targeting a specific community, this prompted a mass mobilization of the Latino and immigrant community, creating a voting bloc opposed to Prop 187. Harvey Milk was a significant political figure who mobilized the queer community during the 1978 California election, when support was growing for Prop 6, a measure that would have mandated firing any queer teacher or employee in any California public school. As the measure threatened the queer community, which had grown through the migration of gay, lesbian, and transgender individuals to the San Francisco area in particular, Milk was able to mobilize the queer community and gain enough momentum to vote down Prop 6 successfully. This could have been a pivotal introduction to political participation for those in these areas, motivating many to continue voting in future elections. In many cases, the experience of community mobilization is the first introduction to political policies and political participation, starting a political journey connected with one's home. Region Geographical location also plays a role in one's political media socialization. For example, news outlets on the East Coast tend to cover international affairs in Europe and the Middle East the most, while West Coast news outlets are more likely to cover Asian affairs; this demonstrates that community region affects patterns in political socialization. Region is also significant for specific political attitudes. An individual living near the Pakistan-India border will likely have strong political attitudes toward Pakistan-India tensions. The socialization provided by parents, cousins, grandparents, peers, and education all plays a significant role in teaching youth about the relationship with the neighbouring state. Someone who immigrated from Cuba to the United States, in turn, may be inclined by political socialization toward conservative attitudes in the United States because of the move away from a leftist government in Cuba. Life Stages of Political Socialization Childhood Political socialization begins in childhood. It has been found that family is the first main influence, and subsequent social circles are often chosen based on these initial views. Therefore, views are assumed to persist. Research suggests that family and the educational environment are the most influential factors in socializing children. In terms of family, in early childhood, indirect exposure, such as parenting styles or sibling composition, may be more relevant than direct political discussions. However, recent literature suggests that increasing influence is coming from mass media such as digital and social media. On average, both young children and teenagers in the United States spend more time a week consuming television and digital media than they spend in school. Young children consume an average of thirty-one hours a week, while teenagers consume forty-eight hours of media a week.
Given that childhood is when a human is most impressionable, the influence of agents of socialization is significant: children's brains are "prime for learning", and children are thus more likely to take messages about political attitudes and the world at face value. Adolescence With media influence carrying into adolescence, high school students attribute the information that forms their opinions and attitudes about race, war, economics, and patriotism to mass media much more than to their friends, family, or teachers. Other literature suggests that political identification and political participation often stem from values and attitudes attained during one's adolescence. The literature argues that pre-adult socialization in both childhood and adolescence carries a long-standing and stable catalyzing influence from political events. The political socialization produced by these catalytic events establishes predispositions that are often felt as mass, collective political socialization. The literature also suggests that adult political participation shows a longitudinal influence from adolescent forces of political socialization, with three models assessing the effects of parental influence in the context of socioeconomic status, political activity, and civic orientation. A decade later, this literature observed a longitudinal impact of socialization that stems from adolescent political socialization. It found that parents' socioeconomic status and high school activities have the greatest impact, although the family is an important and stable agent of socialisation, especially if views and stances are consistent, stable, clear, salient and frequently communicated, which favours higher transmission rates. Primary carriers of pre-adult political socialization play a crucial role in later political participation, with the parent political participation model contributing to an understanding of political activity. This literature suggests that political socialization during the adolescent period significantly influences political participation and voting behavior. It is argued that views persist when the initial socialisation in the family has been strong, and hence views have been strongly formed by the time adolescents enter adulthood. Adulthood While political socialization is a lifelong process, after adolescence people's basic values generally do not change. Most people choose what content they are exposed to based on their already existing values, and they use information from a favorable source to simply reaffirm what they already believe. Internal outcomes during adulthood develop far more significantly if these beliefs remain constant over time; in particular, if an attitude is present for the remainder of one's adolescence, the odds of that belief remaining consistent during adulthood are high. See also Agency (sociology), Cohort effect, Groupthink, Political culture, Political cognition, Public opinion
East Asia
East Asia is a geographical and cultural region of Asia including the countries of China, Japan, Mongolia, North Korea, South Korea, and Taiwan. Additionally, Hong Kong and Macau are the two special administrative regions of China. The economies of China, Japan, South Korea, and Taiwan are among the world's largest and most prosperous. East Asia borders North Asia to the north, Southeast Asia to the south, South Asia to the southwest, and Central Asia to the west. To its east is the Pacific Ocean. East Asia has long been a crossroads of civilizations, as the region's prominence has facilitated the transmission of ideas, cultural exchanges, commercial trade, scientific and technological cooperation, and migration; its position and proximity to both the Pacific Ocean and the continental Asian landmass make it strategically significant for international maritime trade and transportation. The contemporary economic, technological, political, and social integration of East Asia, coupled with its rich history of diversity, division, and divergent development, has contributed to its enduring complexity, scientific and technological advancement, cultural richness, economic prosperity, and geopolitical significance on the world stage. The region has been home to various influential empires, kingdoms, and dynasties throughout history, each leaving its mark and transforming a geopolitical landscape that has ranged from distinct dynastic kingdoms to colonial possessions to independent modern nation-states. East Asia, especially Chinese civilization, is regarded as one of the earliest cradles of civilization. Other ancient civilizations in East Asia that still exist as independent countries in the present day include the Japanese, Korean, and Mongolian civilizations. Various other civilizations existed as independent polities in East Asia in the past but have since been absorbed into neighbouring civilizations in the present day, such as Tibet, Manchuria, and Ryukyu (Okinawa), among many others. Taiwan has a relatively young history in the region after the prehistoric era; originally, it was a major site of Austronesian civilization prior to colonization by European colonial powers and China from the 17th century onward. For thousands of years, China was the leading civilization in the region, exerting influence on its neighbours. Historically, societies in East Asia have fallen within the Chinese sphere of influence, and East Asian vocabularies and scripts are often derived from Classical Chinese and Chinese script. The Chinese calendar serves as the root from which many other East Asian calendars are derived. Major religions in East Asia include Buddhism (mostly Mahayana), Confucianism and Neo-Confucianism, Taoism, ancestral worship, and Chinese folk religion in Mainland China, Hong Kong, Macau and Taiwan, Shinto in Japan, and Christianity and Musok in Korea. Tengerism and Tibetan Buddhism are prevalent among Mongols and Tibetans, while other religions such as Shamanism are widespread among the indigenous populations of northeastern China such as the Manchus. The major languages in East Asia include Mandarin Chinese, Japanese, and Korean. The major ethnic groups of East Asia include the Han in China and Taiwan, Yamato in Japan, Koreans in North and South Korea, and Mongols in Mongolia.
There are 76 officially recognized minority or indigenous ethnic groups in East Asia: 55 native to mainland China (including Hui, Manchus, Chinese Mongols, Tibetans, Uyghurs, and Zhuang in the frontier regions), 16 native to the island of Taiwan (collectively known as Taiwanese indigenous peoples), one native to the major Japanese island of Hokkaido (the Ainu) and four native to Mongolia (Turkic peoples). The Ryukyuan people are an unrecognized ethnic group indigenous to the Ryukyu Islands in southern Japan, which stretch from Kyushu to Taiwan. There are also several unrecognized indigenous ethnic groups in mainland China and Taiwan. East Asians make up about 33% of the population of continental Asia and 20% of the global population. The region is home to major world metropolises such as Beijing-Tianjin, Busan-Daegu-Ulsan-Changwon, Guangzhou, Hong Kong, Osaka-Kyoto-Kobe, Seoul, Shanghai, Shenzhen, Taipei, and Tokyo. Although the coastal and riparian areas of the region form one of the world's most populated places, the population in Mongolia and western China, both landlocked areas, is very sparsely distributed, with Mongolia having the lowest population density of any sovereign state. The overall population density of the region is about three times the world average. History Ancient era China was the first region settled in East Asia and was undoubtedly the core of East Asian civilization, from which other parts of East Asia developed. The various other regions in East Asia were selective in the Chinese influences they adopted into their local customs. Historian Ping-ti Ho referred to China as the cradle of Eastern civilization, in parallel with the cradle of Middle Eastern civilization along the Fertile Crescent encompassing Mesopotamia and Ancient Egypt, as well as the cradle of Western civilization encompassing Ancient Greece. Chinese civilization emerged early and prefigured other East Asian civilizations. Throughout history, imperial China would exert cultural, economic, technological, and political influence on its neighbours. Succeeding Chinese dynasties exerted enormous cultural, economic, political and military influence across East Asia for over two millennia. The Chinese tributary system, resting on Imperial China's economic and cultural influence over the region, shaped much of East Asia's history throughout this period. Imperial China's cultural preeminence not only made it East Asia's first literate nation, it also supplied Japan and Korea with Chinese loanwords and linguistic influences rooted in their writing systems. Under Emperor Wu of Han, the Han dynasty made China the regional powerhouse in East Asia, projecting much of its imperial influence onto its neighbours. Han China hosted the largest unified population in East Asia and was the most literate, urbanised, and economically developed society, as well as the most technologically and culturally advanced civilization, in the region at the time. Cultural and religious interaction between the Chinese and other regional East Asian dynasties and kingdoms occurred. China's impact and influence on Korea began with the Han dynasty's northeastern expansion in 108 BC, when the Han Chinese conquered the northern part of the Korean peninsula and established a province called Lelang.
Chinese influences were transmitted and soon took root in Korea through the inclusion of the Chinese writing system, monetary system, rice culture, philosophical schools of thought, and Confucian political institutions. Jomon society in ancient Japan incorporated wet-rice cultivation and metallurgy through its contact with Korea. Starting in the fourth century AD, Japan adopted Chinese characters, which remain integral to the Japanese writing system. Utilizing the Chinese writing system allowed the Japanese to conduct their daily activities, maintain historical records and give form to various ideas, thoughts, and philosophies. Medieval era The establishment of the medieval Tang dynasty rekindled the impetus of Chinese expansionism across the geopolitical confines of East Asia. Similar to its Han predecessor, Tang China reasserted itself as the center of East Asian geopolitical influence during the early medieval period, spearheading and marking another golden age in Chinese history. During the Tang dynasty, China exerted its greatest influence on East Asia as various aspects of Chinese culture spread to Japan and Korea. In addition, Tang China also managed to maintain control over northern Vietnam and northern Korea. As full-fledged medieval East Asian states were established, Korea by the fourth century AD and Japan by the seventh century AD, Japan and Korea actively began to incorporate Chinese influences such as Confucianism, the use of Chinese characters, architecture, state institutions, political philosophies, religion, urban planning, and various scientific and technological methods into their culture and society through direct contacts with Tang China and succeeding Chinese dynasties. Drawing inspiration from the Tang political system, Prince Naka no Oe launched the Taika Reform in 645 AD, radically transforming Japan's political bureaucracy into a more centralised bureaucratic empire. The Japanese also adopted Mahayana Buddhism and Chinese-style architecture, and the imperial court's rituals and ceremonies, including its orchestral music and state dances, showed Tang influences. Written Chinese gained prestige, and aspects of Tang culture such as poetry, calligraphy, and landscape painting became widespread. During the Nara period, Japan began to aggressively import Chinese culture and styles of government, which included Confucian protocol that served as a foundation for Japanese culture as well as political and social philosophy. The Japanese also created laws adapted from the Chinese legal system that were used to govern, and adopted the kimono, which was inspired by Chinese hanfu during the eighth century. Modern era For many centuries, most notably from the 7th to the 14th centuries, China stood as East Asia's most advanced civilization and foremost military and economic power, and the transmission of advanced Chinese cultural practices and ways of thinking greatly shaped the region up until the nineteenth century. As East Asia's connections with Europe and the Western world strengthened during the late nineteenth century, China's power began to decline.
By the mid-nineteenth century, the weakening Qing dynasty had become fraught with political corruption and stagnation, and was incapable of rejuvenating itself as a world power, in contrast to the industrializing imperial European colonial powers and a rapidly modernizing Japan. The United States Commodore Matthew C. Perry would open Japan to Western influence, and the country would expand in earnest after the 1860s. Around the same time, the Meiji Restoration in Japan sparked rapid societal transformation from an isolated feudal state into East Asia's first industrialised nation. The modern and militarily powerful Japan galvanised its position in the Orient as East Asia's greatest power, with a global mission poised to lead the entire world. By the early 1900s, the Empire of Japan had succeeded in asserting itself as East Asia's most dominant geopolitical force. With its newly found international status, Japan would begin to challenge the European colonial powers and took on a more active role within the East Asian geopolitical order and world affairs at large. Flexing its nascent political and military might, Japan soundly defeated the stagnant Qing dynasty during the First Sino-Japanese War and defeated Russia in the Russo-Japanese War in 1905, the first major military victory in the modern era of an East Asian power over a European one. Its hegemony was the heart of an empire that would include Taiwan and Korea. During World War II, Japanese expansionism and imperialist aspirations through the Greater East Asia Co-Prosperity Sphere would incorporate Korea, Taiwan, much of eastern China and Manchuria, Hong Kong, and Southeast Asia under its control, establishing Japan as a maritime colonial power in East Asia. Contemporary era After a century of exploitation by European and Japanese colonialists, post-colonial East Asia saw the defeat and occupation of Japan by the victorious Allies as well as the division of China and Korea during the Cold War. The Korean peninsula became independent but was then divided into two rival states, while Taiwan became the main territory of the de facto state of the Republic of China after the latter lost Mainland China to the People's Republic of China in the Chinese Civil War. During the latter half of the twentieth century, the region would see the post-war economic miracle of Japan, which ushered in three decades of unprecedented growth before an economic slowdown during the 1990s; nonetheless, Japan remains a global economic power. East Asia would also see the economic rise of Hong Kong, South Korea, and Taiwan, in addition to the respective handovers of Hong Kong and Macau near the end of the twentieth century. The onset of the 21st century in East Asia brought the integration of Mainland China into the global economy through its entry into the World Trade Organization, while also enhancing its emerging international status as a potential world power and its aim of restoring its historically established significance and enduring prominence in the world economy. As of at least 2022, the region is more peaceful, integrated, wealthy, and stable than at any time in the previous 150 years. Definitions In common usage, the term "East Asia" typically refers to a region including Greater China, Japan, Korea and Mongolia.
China, Japan, and Korea represent the three core countries and civilizations of traditional East Asia, as they once shared a common written language and culture, as well as Confucian philosophical tenets and the Confucian societal value system once instituted by Imperial China. Other usages define China, Hong Kong, Macau, Japan, North Korea, South Korea and Taiwan as the countries that constitute East Asia, based on their geographic proximity as well as historical and modern cultural and economic ties, with Japan and Korea in particular having retained strong cultural influences that originated from China. Some scholars include Vietnam as part of East Asia as it has been considered part of the greater Chinese cultural sphere. Though Confucianism continues to play an important role in Vietnamese culture, Chinese characters are no longer used in its written language and many scholarly organizations classify Vietnam as a Southeast Asian country. Mongolia is geographically north of Mainland China, yet Confucianism and the Chinese writing system and culture had limited impact on Mongolian society. Thus, Mongolia is sometimes grouped with Central Asian countries such as Turkmenistan, Kyrgyzstan, and Kazakhstan. Xinjiang and Tibet are sometimes seen as part of Central Asia (see also Greater Central Asia). Broader and looser definitions by international agencies and organisations such as the World Bank refer to East Asia as the "three major Northeast Asian economies, i.e. mainland China, Japan, and South Korea", as well as Mongolia, North Korea, the Russian Far East, and Siberia. The Council on Foreign Relations includes the Russian Far East, Mongolia, and Nepal. The World Bank also acknowledges the roles of the Chinese special administrative regions of Hong Kong and Macau, as well as Taiwan, a country with limited recognition. The Economic Research Institute for Northeast Asia defines the region as "China, Japan, the Koreas, Nepal, Mongolia, and eastern regions of the Russian Federation". The UNSD definition of East Asia is based on statistical convenience, but others commonly use the same grouping of Mainland China, Hong Kong, Macau, Mongolia, North Korea, South Korea, Taiwan, and Japan. Certain Japanese islands are associated with Oceania due to non-continental geology, distance from mainland Asia, or biogeographical similarities with Micronesia. Some groups, such as the World Health Organization, categorize China, Japan and Korea with Australia and the rest of Oceania. The World Health Organization labels this region the "Western Pacific", and East Asia is not used in its concept of major world regions. Its definition of this region further includes Mongolia and the adjacent area of Cambodia, as well as the countries of the South East Asia Archipelago (excluding East Timor and Indonesia).

Alternative definitions
In the context of business and economics, "East Asia" is sometimes used to refer to the geographical area covering the ten Southeast Asian countries in ASEAN, Greater China, Japan, and Korea. However, in this context, the term "Far East" is used by Europeans to cover the ASEAN countries and the countries in East Asia. On rare occasions, the term is also taken to include India and other South Asian countries that are not situated within the bounds of the Asia-Pacific, although the term Indo-Pacific is more commonly used for such a definition.
Observers preferring a broader definition of "East Asia" often use the term Northeast Asia to refer to China, the Korean Peninsula, and Japan, with the region of Southeast Asia covering the ten ASEAN countries. This usage, which is seen in economic and diplomatic discussions, is at odds with the historical meanings of both "East Asia" and "Northeast Asia". The Council on Foreign Relations of the United States defines Northeast Asia as Japan and Korea.

Climate
East Asia is home to many climatic zones. It also has unique weather patterns such as the East Asian rainy season and the East Asian Monsoon.

Climate change
Like the rest of the world, East Asia has been getting warmer due to climate change, and there has been a measurable increase in the frequency and severity of heatwaves. The region is also expected to see the intensification of its monsoon, leading to more flooding. China has notably embarked on the sponge cities program, in which cities are designed to increase the area of urban green spaces and permeable paving in order to help deal with flash floods caused by greater precipitation extremes. Under high-warming scenarios, "critical health thresholds" for heat stress will at times be breached during the 21st century in areas like the North China Plain. China, Japan and the Republic of Korea are expected to see some of the largest economic losses caused by sea level rise. The city of Guangzhou is projected to experience the single largest annual economic losses from sea level rise in the world, potentially reaching US$254 million by 2050. Under the highest climate change scenario and in the absence of adaptation, cumulative economic losses caused by sea level rise in Guangzhou would exceed US$1 trillion by 2100. Shanghai is also expected to experience annual losses of around 1% of the local GDP in the absence of adaptation. The Yangtze River basin is a sensitive and biodiverse ecosystem, yet around 20% of its species may be lost over the course of the century under lower warming scenarios, and around 43% under higher ones.

Economy
Territorial and regional data
China, North Korea, South Korea and Taiwan are all unrecognised by at least one other East Asian state because of severe ongoing political tensions in the region, specifically the division of Korea and the political status of Taiwan.

Etymology

Demographics
Ethnic groups

Culture
Overview
The culture of East Asia has been deeply influenced by China, the civilization that had the most dominant influence in the region throughout the ages and that ultimately laid the foundation for East Asian civilization. The vast knowledge and ingenuity of Chinese civilization and the classics of Chinese literature and culture were seen as the foundations for a civilized life in East Asia. Imperial China served as the vehicle through which Confucian ethical philosophy, the Chinese calendar system, political and legal systems, architectural styles, diet, terminology, institutions, religious beliefs, imperial examinations that emphasised a knowledge of Chinese classics, political philosophy, and cultural value systems, as well as a historically shared common writing system, were adopted, as reflected in the histories of Japan and Korea. The Imperial Chinese tributary system was the bedrock of the network of trade and foreign relations between China and its East Asian tributaries, which helped to shape much of East Asian affairs during the ancient and medieval eras.
Through the tributary system, the various dynasties of Imperial China facilitated frequent economic and cultural exchange that influenced the cultures of Japan and Korea and drew them into a Chinese international order. The Imperial Chinese tributary system shaped much of East Asia's foreign policy and trade for over two millennia due to Imperial China's economic and cultural dominance over the region, and thus played a huge role in the history of East Asia in particular. China's cultural influence on East Asia has been compared to the historical influence of Greco-Roman civilization on classical Western civilisation.

Religion

Festivals
Japan switched festival dates to the Gregorian calendar after the Meiji Restoration.

Collaboration
East Asian Youth Games
The East Asian Youth Games, formerly the East Asian Games, is a multi-sport event organized by the East Asian Games Association (EAGA) and held every four years since 2019 among athletes from East Asian countries and territories of the Olympic Council of Asia (OCA), as well as the Pacific island of Guam, which is a member of the Oceania National Olympic Committees. It is one of five Regional Games of the OCA. The others are the Central Asian Games, the Southeast Asian Games (SEA Games), the South Asian Games and the West Asian Games.

Free trade agreements

Military alliances

Major cities

See also
East Asia–United States relations
East Asian Community
China–Japan–South Korea trilateral summit
East Asia Summit
East Asian studies
Haecceity
Haecceity (from the Latin haecceitas, which translates as "thisness") is a term from medieval scholastic philosophy, first coined by followers of Duns Scotus to denote a concept that he seems to have originated: the irreducible determination of a thing that makes it this particular thing. Haecceity is a person's or object's thisness, the individualising difference between the concept "a man" and the concept "Socrates" (i.e., a specific person). In modern philosophy of physics, it is sometimes referred to as primitive thisness.

Etymology
Haecceity is a Latin neologism formed as an abstract noun derived from the demonstrative pronoun "haec(ce)", meaning "this (very)" (feminine singular) or "these (very)" (feminine or neuter plural). It is apparently formed on the model of another (much older) neologism, viz. "qui(d)ditas" ("whatness"), which is a calque of Aristotle's Greek to ti esti (τὸ τί ἐστι) or "the what (it) is."

Haecceity vs. quiddity
Haecceity may be defined in some dictionaries as simply the "essence" of a thing, or as a simple synonym for quiddity or hypokeimenon. However, in proper philosophical usage these terms have not only distinct but opposite meanings. Whereas haecceity refers to aspects of a thing that make it a particular thing, quiddity refers to the universal qualities of a thing, its "whatness", or the aspects of a thing it may share with other things and by which it may form part of a genus of things.

Haecceity in scholasticism
Following the distinction drawn by Duns Scotus, in Scotism and the scholastic usage in general "haecceity" properly means the irreducible individuating differentia which together with the specific essence (i.e. quiddity) constitutes the individual (or the individual essence), in analogy to the way the specific differentia combined with the genus (or generic essence) constitutes the species (or specific essence). Haecceity differs, however, from the specific differentia by not having any conceptually specifiable content: it does not add any further specification to the whatness of a thing but merely determines it to be a particular unrepeatable instance of the kind specified by the quiddity. This is connected with Aristotle's notion that an individual cannot be defined. Individuals are more perfect than the specific essence and thus have not only a higher degree of unity but also a greater degree of truth and goodness. God multiplied individuals to communicate to them His goodness and beatitude.

Haecceity in anglophone philosophy
In analytical philosophy, the meaning of "haecceity" shifted somewhat. Charles Sanders Peirce used the term as a non-descriptive reference to an individual. Alvin Plantinga and other analytical philosophers used "haecceity" in the sense of "individual essence". The "haecceity" of analytical philosophers thus comprises not only the individuating differentia (the scholastic haecceity) but the entire essential determination of an individual (i.e., including that which the scholastics would call its quiddity).

Haecceity in sociology and continental philosophy
Harold Garfinkel, the founder of ethnomethodology, used the term "haecceity" to emphasize the unavoidable and irremediable indexical character of any expression, behavior or situation. For Garfinkel, indexicality was not a problem. He treated the haecceities and contingencies of social practices as a resource for making sense together.
In contrast to theoretical generalizations, Garfinkel introduced "haecceities" in "Parson's Plenum" (1988) to indicate the importance of the infinite contingencies in both situations and practices for the local accomplishment of social order. According to Garfinkel, members display and produce the social order they refer to within the setting that they contribute to. The study of practical action and situations in their "haecceities", aimed at disclosing the ordinary, ongoing social order that is constructed by the members' practices, is the work of ethnomethodology. Garfinkel described ethnomethodological studies as investigations of such "haecceities".
Gilles Deleuze uses the term in a different way to denote entities that exist on the plane of immanence. The usage was likely chosen in line with his esoteric concept of difference and individuation, and his critique of object-centered metaphysics. Michael Lynch (1991) described the ontological production of objects in the natural sciences as "assemblages of haecceities", thereby offering an alternate reading of Deleuze and Guattari's (1980) discussion of "memories of haecceity" in the light of Garfinkel's treatment of "haecceity".

Other uses
Gerard Manley Hopkins drew on Scotus, whom he described as "of reality the rarest-veined unraveller", to construct his poetic theory of inscape. James Joyce made similar use of the concept of haecceitas to develop his idea of the secular epiphany. James Wood refers extensively to haecceitas (as "thisness") in developing an argument about conspicuous detail in aesthetic literary criticism.

See also
Entitativity
Formal distinction
Haecceitism
Hypostasis
Identity of indiscernibles
Irreducibility
Objective precision
Tathātā (cf. Sanskrit tathata, "thus-ness")
Open individualism
Ostensive definition
Personal identity
Principle of individuation
Quiddity
Rigid designation
Scotism
Scotistic realism
Ship of Theseus
Sine qua non
Type-token distinction
Vertiginous question

Further reading
E. Gilson, The Philosophy of the Middle Ages (1955)
A. Heuser, The Shaping Vision of Gerard Manley Hopkins (OUP 1955)
E. Longpre, La Philosophie du B. Duns Scotus (Paris 1924)
Gilles Deleuze and Félix Guattari. 1980. A Thousand Plateaus. Trans. Brian Massumi. London and New York: Continuum, 2004. Vol. 2 of Capitalism and Schizophrenia. 2 vols. 1972–1980. Trans. of Mille Plateaux. Paris: Les Editions de Minuit.
Gilles Deleuze and Félix Guattari. 1991/1994. What is Philosophy? Trans. Hugh Tomlinson and Gregory Burchell. New York: Columbia University Press, 1994.
Harold Garfinkel, "Evidence for Locally Produced, Naturally Accountable Phenomena of Order, Logic, Meaning, Method, etc., in and as of the Essentially Unavoidable and Irremediable Haecceity of Immortal Ordinary Society", Sociological Theory, Spring 1988, 6(1): 103–109.

External links
Singularity
Stanford Encyclopedia of Philosophy article "Medieval Theories of Haecceity"
Misanthropy
Misanthropy is the general hatred, dislike, or distrust of the human species, human behavior, or human nature. A misanthrope or misanthropist is someone who holds such views or feelings. Misanthropy involves a negative evaluative attitude toward humanity that is based on humankind's flaws. Misanthropes hold that these flaws characterize all or at least the greater majority of human beings. They claim that there is no easy way to rectify them short of a complete transformation of the dominant way of life. Various types of misanthropy are distinguished in the academic literature based on what attitude is involved, at whom it is directed, and how it is expressed. Either emotions or theoretical judgments can serve as the foundation of the attitude. It can be directed toward all humans without exception or exclude a few idealized people. In this regard, some misanthropes condemn themselves while others consider themselves superior to everyone else. Misanthropy is sometimes associated with a destructive outlook aiming to hurt other people or an attempt to flee society. Other types of misanthropic stances include activism by trying to improve humanity, quietism in the form of resignation, and humor mocking the absurdity of the human condition. The negative misanthropic outlook is based on different types of human flaws. Moral flaws and unethical decisions are often seen as the foundational factor. They include cruelty, selfishness, injustice, greed, and indifference to the suffering of others. They may result in harm to humans and animals, such as genocides and factory farming of livestock. Other flaws include intellectual flaws, like dogmatism and cognitive biases, as well as aesthetic flaws concerning ugliness and lack of sensitivity to beauty. Many debates in the academic literature discuss whether misanthropy is a valid viewpoint and what its implications are. Proponents of misanthropy usually point to human flaws and the harm they have caused as a sufficient reason for condemning humanity. Critics have responded to this line of thought by claiming that severe flaws concern only a few extreme cases, like mentally ill perpetrators, but not humanity at large. Another objection is based on the claim that humans also have virtues besides their flaws and that a balanced evaluation might be overall positive. A further criticism rejects misanthropy because of its association with hatred, which may lead to violence, and because it may make people friendless and unhappy. Defenders of misanthropy have responded by claiming that this applies only to some forms of misanthropy but not to misanthropy in general. A related issue concerns the question of the psychological and social factors that cause people to become misanthropes. They include socio-economic inequality, living under an authoritarian regime, and undergoing personal disappointments in life. Misanthropy is relevant in various disciplines. It has been discussed and exemplified by philosophers throughout history, like Heraclitus, Diogenes, Thomas Hobbes, Jean-Jacques Rousseau, Arthur Schopenhauer, and Friedrich Nietzsche. Misanthropic outlooks form part of some religious teachings discussing the deep flaws of human beings, like the Christian doctrine of original sin. Misanthropic perspectives and characters are also found in literature and popular culture. They include William Shakespeare's portrayal of Timon of Athens, Molière's play The Misanthrope, and Gulliver's Travels by Jonathan Swift. 
Misanthropy is closely related to but not identical to philosophical pessimism. Some misanthropes promote antinatalism, the view that humans should abstain from procreation.

Definition
Misanthropy is traditionally defined as hatred or dislike of humankind. The word originated in the 17th century and has its roots in the Greek words μῖσος mīsos 'hatred' and ἄνθρωπος ānthropos 'man, human'. In contemporary philosophy, the term is usually understood in a wider sense as a negative evaluation of humanity as a whole based on humanity's vices and flaws. This negative evaluation can express itself in various forms, hatred being only one of them. In this sense, misanthropy has a cognitive component based on a negative assessment of humanity and is not just a blind rejection. Misanthropy is usually contrasted with philanthropy, which refers to the love of humankind and is linked to efforts to increase human well-being, for example, through good will, charitable aid, and donations. Both terms have a range of meanings and do not necessarily contradict each other. In this regard, the same person may be a misanthrope in one sense and a philanthrope in another sense. One central aspect of all forms of misanthropy is that their target is not local but ubiquitous. This means that the negative attitude is not just directed at some individual persons or groups but at humanity as a whole. In this regard, misanthropy is different from other forms of negative discriminatory attitudes directed at a particular group of people. This distinguishes it from the intolerance exemplified by misogynists, misandrists, and racists, who hold a negative attitude toward women, men, or certain races. According to literature theorist Andrew Gibson, misanthropy does not need to be universal in the sense that a person literally dislikes every human being. Instead, it depends on the person's horizon. For instance, a villager who loathes every other villager without exception is a misanthrope if their horizon is limited to only this village. Both misanthropes and their critics agree that negative features and failings are not equally distributed, i.e. that the vices and bad traits are exemplified much more strongly in some than in others. But for misanthropy, the negative assessment of humanity is not based on a few extreme and outstanding cases: it is a condemnation of humanity as a whole that is not just directed at exceptionally bad individuals but includes regular people as well. Because of this focus on the ordinary, it is sometimes held that these flaws are obvious and trivial but that people may ignore them due to intellectual flaws. Some see the flaws as part of human nature as such. Others also base their view on non-essential flaws, i.e. on what humanity has come to be. This includes flaws seen as symptoms of modern civilization in general. Nevertheless, both groups agree that the relevant flaws are "entrenched". This means that there is no easy way, or perhaps no way at all, to rectify them; nothing short of a complete transformation of the dominant way of life would be required, if that is possible at all.

Types
Various types of misanthropy are distinguished in the academic literature. They are based on what attitude is involved, how it is expressed, and whether the misanthropes include themselves in their negative assessment. The differences between them often matter for assessing the arguments for and against misanthropy. An early categorization suggested by Immanuel Kant distinguishes between positive and negative misanthropes.
Positive misanthropes are active enemies of humanity. They wish harm to other people and undertake attempts to hurt them in one form or another. Negative misanthropy, by contrast, is a form of peaceful anthropophobia that leads people to isolate themselves. They may wish others well despite seeing serious flaws in them and prefer to not involve themselves in the social context of humanity. Kant associates negative misanthropy with moral disappointment due to previous negative experiences with others. Another distinction focuses on whether the misanthropic condemnation of humanity is only directed at other people or at everyone including oneself. In this regard, self-inclusive misanthropes are consistent in their attitude by including themselves in their negative assessment. This type is contrasted with self-aggrandizing misanthropes, who either implicitly or explicitly exclude themselves from the general condemnation and see themselves instead as superior to everyone else. In this regard, it may be accompanied by an exaggerated sense of self-worth and self-importance. According to literature theorist Joseph Harris, the self-aggrandizing type is more common. He states that this outlook seems to undermine its own position by constituting a form of hypocrisy. A closely related categorization developed by Irving Babbitt distinguishes misanthropes based on whether they allow exceptions in their negative assessment. In this regard, misanthropes of the naked intellect regard humanity as a whole as hopeless. Tender misanthropes exclude a few idealized people from their negative evaluation. Babbitt cites Rousseau and his fondness for natural uncivilized man as an example of tender misanthropy and contrasts it with Jonathan Swift's thorough dismissal of all of humanity. A further way to categorize forms of misanthropy is in relation to the type of attitude involved toward humanity. In this regard, philosopher Toby Svoboda distinguishes the attitudes of dislike, hate, contempt, and judgment. A misanthrope based on dislike harbors a distaste in the form of negative feelings toward other people. Misanthropy focusing on hatred involves an intense form of dislike. It includes the additional component of wishing ill upon others and at times trying to realize this wish. In the case of contempt, the attitude is not based on feelings and emotions but on a more theoretical outlook. It leads misanthropes to see other people as worthless and look down on them while excluding themselves from this assessment. If the misanthropic attitude has its foundation in judgment, it is also theoretical but does not distinguish between self and others. It is the view that humanity is in general bad without implying that the misanthrope is in any way better than the rest. According to Svoboda, only misanthropy based on judgment constitutes a serious philosophical position. He holds that misanthropy focusing on contempt is biased against other people while misanthropy in the form of dislike and hate is difficult to assess since these emotional attitudes often do not respond to objective evidence. Misanthropic forms of life Misanthropy is usually not restricted to a theoretical opinion but involves an evaluative attitude that calls for a practical response. It can express itself in different forms of life. They come with different dominant emotions and practical consequences for how to lead one's life. 
These responses to misanthropy are sometimes presented through simplified archetypes that may be too crude to accurately capture the mental life of any single person. Instead, they aim to portray common attitudes among groups of misanthropes. The two responses most commonly linked to misanthropy involve either destruction or fleeing from society. The destructive misanthrope is said to be driven by a hatred of humankind and aims at tearing it down, with violence if necessary. For the fugitive misanthrope, fear is the dominant emotion and leads the misanthrope to seek a secluded place in order to avoid the corrupting contact with civilization and humanity as much as possible. The contemporary misanthropic literature has also identified further less-known types of misanthropic lifestyles. The activist misanthrope is driven by hope despite their negative appraisal of humanity. This hope is a form of meliorism based on the idea that it is possible and feasible for humanity to transform itself and the activist tries to realize this ideal. A weaker version of this approach is to try to improve the world incrementally to avoid some of the worst outcomes without the hope of fully solving the basic problem. Activist misanthropes differ from quietist misanthropes, who take a pessimistic approach toward what the person can do for bringing about a transformation or significant improvements. In contrast to the more drastic reactions of the other responses mentioned, they resign themselves to quiet acceptance and small-scale avoidance. A further approach is focused on humor based on mockery and ridicule at the absurdity of the human condition. An example is that humans hurt each other and risk future self-destruction for trivial concerns like a marginal increase in profit. This way, humor can act both as a mirror to portray the terrible truth of the situation and as its palliative at the same time. Forms of human flaws A core aspect of misanthropy is that its negative attitude toward humanity is based on human flaws. Various misanthropes have provided extensive lists of flaws, including cruelty, greed, selfishness, wastefulness, dogmatism, self-deception, and insensitivity to beauty. These flaws can be categorized in many ways. It is often held that moral flaws constitute the most serious case. Other flaws discussed in the contemporary literature include intellectual flaws, aesthetic flaws, and spiritual flaws. Moral flaws are usually understood as tendencies to violate moral norms or as mistaken attitudes toward what is the good. They include cruelty, indifference to the suffering of others, selfishness, moral laziness, cowardice, injustice, greed, and ingratitude. The harm done because of these flaws can be divided into three categories: harm done directly to humans, harm done directly to other animals, and harm done indirectly to both humans and other animals by harming the environment. Examples of these categories include the Holocaust, factory farming of livestock, and pollution causing climate change. In this regard, it is not just relevant that human beings cause these forms of harm but also that they are morally responsible for them. This is based on the idea that they can understand the consequences of their actions and could act differently. However, they decide not to, for example, because they ignore the long-term well-being of others in order to get short-term personal benefits. Intellectual flaws concern cognitive capacities. 
They can be defined as what leads to false beliefs, what obstructs knowledge, or what violates the demands of rationality. They include intellectual vices, like arrogance, wishful thinking, and dogmatism. Further examples are stupidity, gullibility, and cognitive biases, like the confirmation bias, the self-serving bias, the hindsight bias, and the anchoring bias. Intellectual flaws can work in tandem with all kinds of vices: they may deceive someone about having a vice. This prevents the affected person from addressing it and improving themselves, for instance, by being mindless and failing to recognize it. They also include forms of self-deceit, wilful ignorance, and being in denial about something. Similar considerations have prompted some traditions to see intellectual failings, like ignorance, as the root of all evil. Aesthetic flaws are usually not given the same importance as moral and intellectual flaws, but they also carry some weight for misanthropic considerations. These flaws relate to beauty and ugliness. They concern ugly aspects of human life itself, like defecation and aging. Other examples are ugliness caused by human activities, like pollution and litter, and inappropriate attitudes toward aesthetic aspects, like being insensitive to beauty. Causes Various psychological and social factors have been identified in the academic literature as possible causes of misanthropic sentiments. The individual factors by themselves may not be able to fully explain misanthropy but can show instead how it becomes more likely. For example, disappointments and disillusionments in life can cause a person to adopt a misanthropic outlook. In this regard, the more idealistic and optimistic the person initially was, the stronger this reversal and the following negative outlook tend to be. This type of psychological explanation is found as early as Plato's Phaedo. In it, Socrates considers a person who trusts and admires someone without knowing them sufficiently well. He argues that misanthropy may arise if it is discovered later that the admired person has serious flaws. In this case, the initial attitude is reversed and universalized to apply to all others, leading to general distrust and contempt toward other humans. Socrates argues that this becomes more likely if the admired person is a close friend and if it happens more than once. This form of misanthropy may be accompanied by a feeling of moral superiority in which the misanthrope considers themselves to be better than everyone else. Other types of negative personal experiences in life may have a similar effect. Andrew Gibson uses this line of thought to explain why some philosophers became misanthropes. He uses the example of Thomas Hobbes to explain how a politically unstable environment and the frequent wars can foster a misanthropic attitude. Regarding Arthur Schopenhauer, he states that being forced to flee one's home at an early age and never finding a place to call home afterward can have a similar effect. Another psychological factor concerns negative attitudes toward the human body, especially in the form of general revulsion from sexuality. Besides the psychological causes, some wider social circumstances may also play a role. Generally speaking, the more negative the circumstances are, the more likely misanthropy becomes. For instance, according to political scientist Eric M. Uslaner, socio-economic inequality in the form of unfair distribution of wealth increases the tendency to adopt a misanthropic perspective. 
This has to do with the fact that inequality tends to undermine trust in the government and others. Uslaner suggests that it may be possible to overcome or reduce this source of misanthropy by implementing policies that build trust and promote a more equal distribution of wealth. The political regime is another relevant factor. This specifically concerns authoritarian regimes using all means available to repress their population and stay in power. For example, it has been argued that the severe forms of repression of the Ancien Régime in the late 17th century made it more likely for people to adopt a misanthropic outlook because their freedom was denied. Democracy may have the opposite effect since it allows more personal freedom due to its more optimistic outlook on human nature. Empirical studies often use questions related to trust in other people to measure misanthropy. This concerns specifically whether the person believes that others would be fair and helpful. In an empirical study on misanthropy in American society, Tom W. Smith concludes that factors responsible for an increased misanthropic outlook are low socioeconomic status, being from racial and ethnic minorities, and having experienced recent negative events in one's life. In regard to religion, misanthropy is higher for people who do not attend church and for fundamentalists. Some factors seem to play no significant role, like gender, having undergone a divorce, and never having been married. Another study by Morris Rosenberg finds that misanthropy is linked to certain political outlooks. They include being skeptical about free speech and a tendency to support authoritarian policies. This concerns, for example, tendencies to suppress political and religious liberties. Arguments Various discussions in the academic literature concern the question of whether misanthropy is an accurate assessment of humanity and what the consequences of adopting it are. Many proponents of misanthropy focus on human flaws together with examples of when they exercise their negative influences. They argue that these flaws are so severe that misanthropy is an appropriate response. Special importance in this regard is usually given to moral faults. This is based on the idea that humans do not merely cause a great deal of suffering and destruction but are also morally responsible for them. The reason is that they are intelligent enough to understand the consequences of their actions and could potentially make balanced long-term decisions instead of focusing on personal short-term gains. Proponents of misanthropy sometimes focus on extreme individual manifestations of human flaws, like mass killings ordered by dictators. Others emphasize that the problem is not limited to a few cases, for example, that many ordinary people are complicit in their manifestation by supporting the political leaders committing them. A closely related argument is to claim that the underlying flaws are there in everyone, even if they reach their most extreme manifestation only in a few. Another approach is to focus not on the grand extreme cases but on the ordinary small-scale manifestations of human flaws in everyday life, such as lying, cheating, breaking promises, and being ungrateful. Some arguments for misanthropy focus not only on general tendencies but on actual damage caused by humans in the past. This concerns, for instance, damages done to the ecosystem, like ecological catastrophes resulting in mass extinctions. 
Criticism Various theorists have criticized misanthropy. Some opponents acknowledge that there are extreme individual manifestations of human flaws, like mentally ill perpetrators, but claim that these cases do not reflect humanity at large and cannot justify the misanthropic attitude. For instance, while there are cases of extreme human brutality, like the mass killings committed by dictators and their forces, listing such cases is not sufficient for condemning humanity at large. Some critics of misanthropy acknowledge that humans have various flaws but state that they present just one side of humanity while evaluative attitudes should take all sides into account. This line of thought is based on the idea that humans possess equally important virtues that make up for their shortcomings. For example, accounts that focus only on the great wars, cruelties, and tragedies in human history ignore its positive achievements in the sciences, arts, and humanities. Another explanation given by critics is that the negative assessment should not be directed at humanity but at some social forces. These forces can include capitalism, racism, religious fundamentalism, or imperialism. Supporters of this argument would adopt an opposition to one of these social forces rather than a misanthropic opposition to humanity. Some objections to misanthropy are based not on whether this attitude appropriately reflects the negative value of humanity but on the costs of accepting such a position. The costs can affect both the individual misanthrope and the society at large. This is especially relevant if misanthropy is linked to hatred, which may turn easily into violence against social institutions and other humans and may result in harm. Misanthropy may also deprive the person of most pleasures by making them miserable and friendless. Another form of criticism focuses more on the theoretical level and claims that misanthropy is an inconsistent and self-contradictory position. An example of this inconsistency is the misanthrope's tendency to denounce the social world while still being engaged in it and being unable to fully leave it behind. This criticism applies specifically to misanthropes who exclude themselves from the negative evaluation and look down on others with contempt from an arrogant position of inflated ego but it may not apply to all types of misanthropy. A closely related objection is based on the claim that misanthropy is an unnatural attitude and should therefore be seen as an aberration or a pathological case. In various disciplines History of philosophy Misanthropy has been discussed and exemplified by philosophers throughout history. One of the earliest cases was the pre-Socratic philosopher Heraclitus. He is often characterized as a solitary person who is not fond of social interactions with others. A central factor to his negative outlook on human beings was their lack of comprehension of the true nature of reality. This concerns especially cases in which they remain in a state of ignorance despite having received a thorough explanation of the issue in question. Another early discussion is found in Plato's Phaedo, where misanthropy is characterized as the result of frustrated expectations and excessively naïve optimism. Various reflections on misanthropy are also found in the cynic school of philosophy. There it is argued, for instance, that humans keep on reproducing and multiplying the evils they are attempting to flee. 
An example given by the first-century philosopher Dio Chrysostom is that humans move to cities to defend themselves against outsiders but this process thwarts their initial goal by leading to even more violence due to high crime rates within the city. Diogenes is a well-known cynic misanthrope. He saw other people as hypocritical and superficial. He openly rejected all kinds of societal norms and values, often provoking others by consciously breaking conventions and behaving rudely. Thomas Hobbes is an example of misanthropy in early modern philosophy. His negative outlook on humanity is reflected in many of his works. For him, humans are egoistic and violent: they act according to their self-interest and are willing to pursue their goals at the expense of others. In their natural state, this leads to a never-ending war in which "every man to every man ... is an enemy". He saw the establishment of an authoritative state characterized by the strict enforcement of laws to maintain order as the only way to tame the violent human nature and avoid perpetual war. A further type of misanthropy is found in Jean-Jacques Rousseau. He idealizes the harmony and simplicity found in nature and contrasts them with the confusion and disorder found in humanity, especially in the form of society and institutions. For instance, he claims that "Man is born free; and everywhere he is in chains". This negative outlook was also reflected in his lifestyle: he lived solitary and preferred to be with plants rather than humans. Arthur Schopenhauer is often mentioned as a prime example of misanthropy. According to him, everything in the world, including humans and their activities, is an expression of one underlying will. This will is blind, which causes it to continuously engage in futile struggles. On the level of human life, this "presents itself as a continual deception" since it is driven by pointless desires. They are mostly egoistic and often result in injustice and suffering to others. Once they are satisfied, they only give rise to new pointless desires and more suffering. In this regard, Schopenhauer dismisses most things that are typically considered precious or meaningful in human life, like romantic love, individuality, and liberty. He holds that the best response to the human condition is a form of asceticism by denying the expression of the will. This is only found in rare humans and "the dull majority of men" does not live up to this ideal. Friedrich Nietzsche, who was strongly influenced by Schopenhauer, is also often cited as an example of misanthropy. He saw man as a decadent and "sick animal" that shows no progress over other animals. He even expressed a negative attitude toward apes since they are more similar to human beings than other animals, for example, with regard to cruelty. For Nietzsche, a noteworthy flaw of human beings is their tendency to create and enforce systems of moral rules that favor weak people and suppress true greatness. He held that the human being is something to be overcome and used the term Übermensch to describe an ideal individual who has transcended traditional moral and societal norms. Religion Some misanthropic views are also found in religious teachings. In Christianity, for instance, this is linked to the sinful nature of humans and the widespread manifestation of sin in everyday life. Common forms of sin are discussed in terms of the seven deadly sins. 
Examples are an excessive sense of self-importance in the form of pride and strong sexual cravings constituting lust. They also include the tendency to follow greed for material possessions as well as being envious of the possessions of others. According to the doctrine of original sin, this flawed condition is found in every human being, since the doctrine states that human nature is already tainted by sin from birth, inherited from Adam and Eve's rebellion against God's authority. John Calvin's theology of total depravity has been described by some theologians as misanthropic. Misanthropic perspectives can also be discerned in various Buddhist teachings. For example, the Buddha had a negative outlook on the widespread flaws of human beings, including lust, hatred, delusion, sorrow, and despair. These flaws are identified with some form of craving or attachment (taṇhā) and cause suffering (dukkha). Buddhists hold that it is possible to overcome these failings in the process of achieving Buddhahood or enlightenment due to an innate Buddha nature. However, this is seen as a rare achievement within a single lifetime in some of the Indian traditions, whereas East Asian Mahayana traditions such as Chan, Zen, and Pure Land practice hold that sudden enlightenment within one lifetime is achievable. Indian Buddhist doctrine, by contrast, regards most human beings as carrying these deep flaws with them throughout their lives and into the next through the law of karma. However, there are also many religious teachings opposed to misanthropy, such as the emphasis on kindness and helping others. In Christianity, this is found in the concept of agape, which involves selfless and unconditional love in the form of compassion and a willingness to help others. Buddhists see the practice of loving kindness (metta) as a central aspect that implies a positive intention of compassion and the expression of kindness toward all sentient beings.

Literature and popular culture
Many examples of misanthropy are also found in literature and popular culture. Timon of Athens by William Shakespeare is a famous portrayal of the life of the Ancient Greek Timon, who is widely known for his extreme misanthropic attitude. Shakespeare depicts him as a wealthy and generous gentleman. However, he becomes disillusioned with his ungrateful friends and humanity at large. This way, his initial philanthropy turns into an unrestrained hatred of humanity, which prompts him to leave society in order to live in a forest. Molière's play The Misanthrope is another famous example. Its protagonist, Alceste, has a low opinion of the people around him. He tends to focus on their flaws and openly criticizes them for their superficiality, insincerity, and hypocrisy. He rejects most social conventions and thereby often offends others, for example, by refusing to engage in social niceties like polite small talk. The author Jonathan Swift had a reputation for being misanthropic. In some statements, he openly declares that he hates and detests "that animal called man". Misanthropy is also found in many of his works. An example is Gulliver's Travels, which tells the adventures of the protagonist Gulliver, who journeys to various places, like an island inhabited by tiny people and a land ruled by intelligent horses. Through these experiences of the contrast between humans and other species, he comes to see more and more the deep flaws of humanity, leading him to develop a revulsion toward other human beings.
Ebenezer Scrooge from Charles Dickens's A Christmas Carol is an often-cited example of misanthropy. He is described as a cold-hearted, solitary miser who detests Christmas. He is greedy, selfish, and has no regard for the well-being of others. Other writers associated with misanthropy include Gustave Flaubert and Philip Larkin. The Joker from the DC Universe is an example of misanthropy in popular culture. He is one of the main antagonists of Batman and acts as an agent of chaos. He believes that people are selfish, cruel, irrational, and hypocritical. He is usually portrayed as a sociopath with a twisted sense of humor who uses violent means to expose and bring down organized society. Related concepts Philosophical pessimism Misanthropy is closely related but not identical to philosophical pessimism. Philosophical pessimism is the view that life is not worth living or that the world is a bad place, for example, because it is meaningless and full of suffering. This view is exemplified by Arthur Schopenhauer and Philipp Mainländer. Philosophical pessimism is often accompanied by misanthropy if the proponent holds that humanity is also bad and partially responsible for the negative value of the world. However, the two views do not require each other and can be held separately. A non-misanthropic pessimist may hold, for instance, that humans are just victims of a terrible world but not to blame for it. Eco-misanthropists, by contrast, may claim that the world and its nature are valuable but that humanity exerts a negative and destructive influence. Antinatalism and human extinction Antinatalism is the view that coming into existence is bad and that humans have a duty to abstain from procreation. A central argument for antinatalism is called the misanthropic argument. It sees the deep flaws of humans and their tendency to cause harm as a reason for avoiding the creation of more humans. These harms include wars, genocides, factory farming, and damages done to the environment. This argument contrasts with philanthropic arguments, which focus on the future suffering of the human about to come into existence. They argue that the only way to avoid their future suffering is to prevent them from being born. The Voluntary Human Extinction Movement and the Church of Euthanasia are well-known examples of social movements in favor of antinatalism and human extinction. Antinatalism is commonly endorsed by misanthropic thinkers but there are also many other ways that could lead to the extinction of the human species. This field is still relatively speculative but various suggestions have been made about threats to the long-term survival of the human species, like nuclear wars, self-replicating nanorobots, or super-pathogens. Such cases are usually seen as terrible scenarios and dangerous threats but misanthropes may instead interpret them as reasons for hope because the abhorrent age of humanity in history may soon come to an end. A similar sentiment is expressed by Bertrand Russell. He states in relation to the existence of human life on earth and its misdeeds that they are "a passing nightmare; in time the earth will become again incapable of supporting life, and peace will return." Human exceptionalism and deep ecology Human exceptionalism is the claim that human beings have unique importance and are exceptional compared to all other species. It is often based on the claim that they stand out because of their special capacities, like intelligence, rationality, and autonomy. 
In religious contexts, it is frequently explained in relation to a unique role that God foresaw for them or that they were created in God's image. Human exceptionalism is usually combined with the claim that human well-being matters more than the well-being of other species. This line of thought can be used to draw various ethical conclusions. One is the claim that humans have the right to rule the planet and impose their will on other species. Another is that inflicting harm on other species may be morally acceptable if it is done with the purpose of promoting human well-being and excellence. Generally speaking, the position of human exceptionalism is at odds with misanthropy in relation to the value of humanity. But this is not necessarily the case and it may be possible to hold both positions at the same time. One way to do this is to claim that humanity is exceptional because of a few rare individuals but that the average person is bad. Another approach is to hold that human beings are exceptional in a negative sense: given their destructive and harmful history, they are much worse than any other species. Theorists in the field of deep ecology are also often critical of human exceptionalism and tend to favor a misanthropic perspective. Deep ecology is a philosophical and social movement that stresses the inherent value of nature and advocates a radical change in human behavior toward nature. Various theorists have criticized deep ecology based on the claim that it is misanthropic in privileging other species over humans. For example, the deep ecology movement Earth First! faced severe criticism when it praised the AIDS epidemic in Africa as a solution to the problem of human overpopulation in its newsletter.

See also
Asociality – lack of motivation to engage in social interaction
Antihumanism – rejection of humanism
Antisocial personality disorder
Cosmicism
Emotional isolation
Hatred (video game)
Nihilism
Social alienation
Stone Age
The Stone Age was a broad prehistoric period during which stone was widely used to make stone tools with an edge, a point, or a percussion surface. The period lasted for roughly 3.4 million years and ended between 4000 BC and 2000 BC, with the advent of metalworking. It therefore represents nearly 99.3% of human history. Though some simple metalworking of malleable metals, particularly the use of gold and copper for purposes of ornamentation, was known in the Stone Age, it is the melting and smelting of copper that marks the end of the Stone Age. In Western Asia, this occurred by about 3000 BC, when bronze became widespread. The term Bronze Age is used to describe the period that followed the Stone Age, as well as to describe cultures that had developed techniques and technologies for working copper alloys (bronze: originally copper and arsenic, later copper and tin) into tools, supplanting stone in many uses. Stone Age artifacts that have been discovered include tools used by modern humans, by their predecessor species in the genus Homo, and possibly by the earlier partly contemporaneous genera Australopithecus and Paranthropus. Bone tools that were used during this period have been discovered as well, but these are rarely preserved in the archaeological record. The Stone Age is further subdivided by the types of stone tools in use. The Stone Age is the first period in the three-age system frequently used in archaeology to divide the timeline of human technological prehistory into functional periods, with the next two being the Bronze Age and the Iron Age, respectively. The Stone Age is also commonly divided into three distinct periods: the earliest and most primitive being the Paleolithic era; a transitional period with finer tools known as the Mesolithic era; and the final stage known as the Neolithic era. Neolithic peoples were the first to transition away from hunter-gatherer societies into the settled lifestyle of inhabiting towns and villages as agriculture became widespread. In the chronology of prehistory, the Neolithic era usually overlaps with the Chalcolithic ("Copper") era preceding the Bronze Age.

Historical significance
The Stone Age is contemporaneous with the evolution of the genus Homo, with the possible exception of the early Stone Age, when species prior to Homo may have manufactured tools. According to the age and location of the current evidence, the cradle of the genus is the East African Rift System, especially toward the north in Ethiopia, where it is bordered by grasslands. The closest relative among the other living primates, the genus Pan, represents a branch that continued on in the deep forest, where the primates evolved. The rift served as a conduit for movement into southern Africa and also north down the Nile into North Africa and through the continuation of the rift in the Levant to the vast grasslands of Asia. Starting from about 4 million years ago (mya), a single biome established itself from South Africa through the rift, North Africa, and across Asia to modern China. This has recently been called a transcontinental "savannahstan". Starting in the grasslands of the rift, Homo erectus, the predecessor of modern humans, found an ecological niche as a tool-maker and developed a dependence on it, becoming a "tool-equipped savanna dweller".
Stone Age in archaeology Beginning of the Stone Age The oldest indirect evidence found of stone tool use is fossilised animal bones with tool marks; these are 3.4 million years old and were found in the Lower Awash Valley in Ethiopia. Archaeological discoveries in Kenya in 2015, identifying what may be the oldest evidence of hominin use of tools known to date, have indicated that Kenyanthropus platyops (a 3.2 to 3.5-million-year-old Pliocene hominin fossil discovered in Lake Turkana, Kenya, in 1999) may have been the earliest tool-users known. The oldest stone tools were excavated from the site of Lomekwi 3 in West Turkana, northwestern Kenya, and date to 3.3 million years old. Prior to the discovery of these "Lomekwian" tools, the oldest known stone tools had been found at several sites at Gona, Ethiopia, on sediments of the paleo-Awash River, which serve to date them. All the tools come from the Busidama Formation, which lies above a disconformity, or missing layer, which would have been from 2.9 to 2.7 mya. The oldest sites discovered to contain tools are dated to 2.6–2.55 mya. One of the most striking circumstances about these sites is that they are from the Late Pliocene; prior to their discovery, tools were thought to have evolved only in the Pleistocene. The species that made the Pliocene tools remains unknown. Fragments of Australopithecus garhi, Australopithecus aethiopicus, and Homo, possibly Homo habilis, have been found in sites near the age of the Gona tools. In July 2018, scientists reported the discovery in China of the oldest known stone tools outside Africa, estimated at 2.12 million years old. End of the Stone Age Innovation in the technique of smelting ore is regarded as the end of the Stone Age and the beginning of the Bronze Age. The first highly significant metal manufactured was bronze, an alloy of copper and tin or arsenic, each of which was smelted separately. The transition from the Stone Age to the Bronze Age was a period during which modern people could smelt copper, but did not yet manufacture bronze, a time known as the Copper Age (or more technically the Chalcolithic or Eneolithic, both meaning 'copper–stone'). The Chalcolithic by convention is the initial period of the Bronze Age. The Bronze Age was followed by the Iron Age. The transition out of the Stone Age occurred between 6000 and 2500 BC for much of humanity living in North Africa and Eurasia. The first evidence of human metallurgy dates to between the 6th and 5th millennia BC in the archaeological sites of the Vinča culture, including Majdanpek, Jarmovac, Pločnik, and Rudna Glava in modern-day Serbia. Ötzi the Iceman, a mummy from about 3300 BC, carried with him a copper axe and a flint knife. In some regions, such as Sub-Saharan Africa, the Stone Age was followed directly by the Iron Age. The Middle East and Southeast Asian regions progressed past Stone Age technology around 6000 BC. Europe and the rest of Asia became post-Stone Age societies by about 4000 BC. The proto-Inca cultures of South America continued at a Stone Age level until around 2000 BC, when gold, copper, and silver made their entrance. The peoples of the Americas notably did not develop a widespread practice of smelting bronze or iron after the Stone Age period, although the technology existed. Stone tool manufacture continued even after the Stone Age ended in a given area. 
In Europe and North America, millstones were in use until well into the 20th century, and still are in many parts of the world. Concept of the Stone Age The terms "Stone Age", "Bronze Age", and "Iron Age" are not intended to suggest that advancements and time periods in prehistory are only measured by the type of tool material, rather than, for example, social organization, food sources exploited, adaptation to climate, adoption of agriculture, cooking, settlement, and religion. Like pottery, the typology of the stone tools combined with the relative sequence of the types in various regions provide a chronological framework for the evolution of humanity and society. They serve as diagnostics of date, rather than characterizing the people or the society. Lithic analysis is a major and specialised form of archaeological investigation. It involves the measurement of stone tools to determine their typology, function and technologies involved. It includes the scientific study of the lithic reduction of the raw materials and methods used to make the prehistoric artifacts that are discovered. Much of this study takes place in the laboratory in the presence of various specialists. In experimental archaeology, researchers attempt to create replica tools, to understand how they were made. Flintknappers are craftsmen who use sharp tools to reduce flintstone to flint tool. In addition to lithic analysis, field prehistorians use a wide range of techniques derived from multiple fields. The work of archaeologists in determining the paleocontext and relative sequence of the layers is supplemented by the efforts of geologic specialists in identifying layers of rock developed or deposited over geologic time; of paleontological specialists in identifying bones and animals; of palynologists in discovering and identifying pollen, spores and plant species; of physicists and chemists in laboratories determining ages of materials by carbon-14, potassium-argon and other methods. The study of the Stone Age has never been limited to stone tools and archaeology, even though they are important forms of evidence. The chief focus of study has always been on the society and the living people who belonged to it. Useful as it has been, the concept of the Stone Age has its limitations. The date range of this period is ambiguous, disputed, and variable, depending upon the region in question. While it is possible to speak of a general 'Stone Age' period for the whole of humanity, some groups never developed metal-smelting technology, and so remained in the so-called 'Stone Age' until they encountered technologically developed cultures. The term was innovated to describe the archaeological cultures of Europe. It may not always be the best in relation to regions such as some parts of the Indies and Oceania, where farmers or hunter-gatherers used stone for tools until European colonisation began. Archaeologists of the late 19th and early 20th centuries CE, who adapted the three-age system to their ideas, hoped to combine cultural anthropology and archaeology in such a way that a specific contemporaneous tribe could be used to illustrate the way of life and beliefs of the people exercising a particular Stone-Age technology. As a description of people living today, the term Stone Age is controversial. 
The Association of Social Anthropologists discourages this use, asserting:To describe any living group as 'primitive' or 'Stone Age' inevitably implies that they are living representatives of some earlier stage of human development that the majority of humankind has left behind. Three-stage system In the 1920s, South African archaeologists organizing the stone tool collections of that country observed that they did not fit the newly detailed Three-Age System. In the words of J. Desmond Clark: It was early realized that the threefold division of culture into Stone, Bronze and Iron Ages adopted in the nineteenth century for Europe had no validity in Africa outside the Nile valley. Consequently, they proposed a new system for Africa, the Three-stage System. Clark regarded the Three-age System as valid for North Africa; in sub-Saharan Africa, the Three-stage System was best. In practice, the failure of African archaeologists either to keep this distinction in mind, or to explain which one they mean, contributes to the considerable equivocation already present in the literature. There are in effect two Stone Ages, one part of the Three-age and the other constituting the Three-stage. They refer to one and the same artifacts and the same technologies, but vary by locality and time. The three-stage system was proposed in 1929 by Astley John Hilary Goodwin, a professional archaeologist, and Clarence van Riet Lowe, a civil engineer and amateur archaeologist, in an article titled "Stone Age Cultures of South Africa" in the journal Annals of the South African Museum. By then, the dates of the Early Stone Age, or Paleolithic, and Late Stone Age, or Neolithic (neo = new), were fairly solid and were regarded by Goodwin as absolute. He therefore proposed a relative chronology of periods with floating dates, to be called the Earlier and Later Stone Age. The Middle Stone Age would not change its name, but it would not mean Mesolithic. The duo thus reinvented the Stone Age. In Sub-Saharan Africa, however, iron-working technologies were either invented independently or came across the Sahara from the north (see iron metallurgy in Africa). The Neolithic was characterized primarily by herding societies rather than large agricultural societies, and although there was copper metallurgy in Africa as well as bronze smelting, archaeologists do not currently recognize a separate Copper Age or Bronze Age. Moreover, the technologies included in those 'stages', as Goodwin called them, were not exactly the same. Since then, the original relative terms have become identified with the technologies of the Paleolithic and Mesolithic, so that they are no longer relative. Moreover, there has been a tendency to drop the comparative degree in favor of the positive: resulting in two sets of Early, Middle and Late Stone Ages of quite different content and chronologies. By voluntary agreement, archaeologists respect the decisions of the Pan-African Congress on Prehistory, which meets every four years to resolve the archaeological business brought before it. Delegates are actually international; the organization takes its name from the topic. Louis Leakey hosted the first one in Nairobi in 1947. It adopted Goodwin and Lowe's 3-stage system at that time, the stages to be called Early, Middle and Later. 
Problem of the transitions The problem of the transitions in archaeology is a branch of the general philosophic continuity problem, which examines how discrete objects of any sort that are contiguous in any way can be presumed to have a relationship of any sort. In archaeology, the relationship is one of causality. If Period B can be presumed to descend from Period A, there must be a boundary between A and B, the A–B boundary. The problem is in the nature of this boundary. If there is no distinct boundary, then the population of A suddenly stopped using the customs characteristic of A and suddenly started using those of B, an unlikely scenario in the process of evolution. More realistically, a distinct border period, the A/B transition, existed, in which the customs of A were gradually dropped and those of B acquired. If transitions do not exist, then there is no proof of any continuity between A and B. The Stone Age of Europe is characteristically in deficit of known transitions. The 19th and early 20th-century innovators of the modern three-age system recognized the problem of the initial transition, the "gap" between the Paleolithic and the Neolithic. Louis Leakey provided something of an answer by proving that man evolved in Africa. The Stone Age must have begun there to be carried repeatedly to Europe by migrant populations. The different phases of the Stone Age thus could appear there without transitions. The burden on African archaeologists became all the greater, because now they must find the missing transitions in Africa. The problem is difficult and ongoing. After its adoption by the First Pan African Congress in 1947, the Three-Stage Chronology was amended by the Third Congress in 1955 to include a First Intermediate Period between Early and Middle, to encompass the Fauresmith and Sangoan technologies, and the Second Intermediate Period between Middle and Later, to encompass the Magosian technology and others. The chronologic basis for the definition was entirely relative. With the arrival of scientific means of finding an absolute chronology, the two intermediates turned out to be will-of-the-wisps. They were in fact Middle and Lower Paleolithic. Fauresmith is now considered to be a facies of Acheulean, while Sangoan is a facies of Lupemban. Magosian is "an artificial mix of two different periods". Once seriously questioned, the intermediates did not wait for the next Pan African Congress two years hence, but were officially rejected in 1965 (again on an advisory basis) by Burg Wartenstein Conference #29, Systematic Investigation of the African Later Tertiary and Quaternary, a conference in anthropology held by the Wenner-Gren Foundation, at Burg Wartenstein Castle, which it then owned in Austria, attended by the same scholars that attended the Pan African Congress, including Louis Leakey and Mary Leakey, who was delivering a pilot presentation of her typological analysis of Early Stone Age tools, to be included in her 1971 contribution to Olduvai Gorge, "Excavations in Beds I and II, 1960–1963." However, although the intermediate periods were gone, the search for the transitions continued. Chronology In 1859 Jens Jacob Worsaae first proposed a division of the Stone Age into older and younger parts based on his work with Danish kitchen middens that began in 1851. In the subsequent decades this simple distinction developed into the archaeological periods of today. 
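The relative sequences described here are anchored to calendar ages largely by the radiometric methods mentioned earlier, such as carbon-14 and potassium-argon dating. As a rough illustration of the decay arithmetic behind such dates (a simplified parent-fraction form; the half-lives and sample values below are standard textbook figures used as assumptions, not data from this article):

```python
import math

def radiometric_age(fraction_remaining: float, half_life_years: float) -> float:
    """Age in years, from N(t) = N0 * 0.5**(t / half_life), solved for t."""
    return half_life_years * math.log(1.0 / fraction_remaining, 2)

# Carbon-14 (half-life ~5,730 years): a sample retaining 25% of its C-14
# is two half-lives, i.e. roughly 11,460 years, old.
print(round(radiometric_age(0.25, 5_730)))       # 11460
# Potassium-40 (half-life ~1.25 billion years) underlies K-Ar dating and
# reaches the million-year ages relevant to the early Stone Age.
print(round(radiometric_age(0.999, 1.25e9)))     # ~1,800,000
```

Real laboratories add calibration curves, contamination checks and, for K-Ar, measurement of the accumulated argon rather than the remaining potassium, so this sketch only shows the principle by which tool-bearing layers are tied to absolute dates.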
The major subdivisions of the Three-age Stone Age cross two epoch boundaries on the geologic time scale: The geologic Pliocene–Pleistocene boundary (highly glaciated climate) The Paleolithic period of archaeology The geologic Pleistocene–Holocene boundary (modern climate) Mesolithic or Epipaleolithic period of archaeology Neolithic period of archaeology The succession of these phases varies enormously from one region (and culture) to another. Three-age chronology The Paleolithic or Palaeolithic (from Greek: παλαιός, palaios, "old"; and λίθος, lithos, "stone" lit. "old stone", coined by archaeologist John Lubbock and published in 1865) is the earliest division of the Stone Age. It covers the greatest portion of humanity's time (roughly 99% of "human technological history", where "human" and "humanity" are interpreted to mean the genus Homo), extending from 2.5 or 2.6 million years ago, with the first documented use of stone tools by hominins such as Homo habilis, to the end of the Pleistocene around 10,000 BC. The Paleolithic era ended with the Mesolithic, or in areas with an early neolithisation, the Epipaleolithic. Lower Paleolithic At sites dating from the Lower Paleolithic Period (about 2,500,000 to 200,000 years ago), simple pebble tools have been found in association with the remains of what may have been the earliest human ancestors. A somewhat more sophisticated Lower Paleolithic tradition, known as the Chopper chopping tool industry, is widely distributed in the Eastern Hemisphere. This tradition is thought to have been the work of the hominin species named Homo erectus. Although no such fossil tools have yet been found, it is believed that H. erectus probably made tools of wood and bone as well as stone. About 700,000 years ago, a new Lower Paleolithic tool, the hand axe, appeared. The earliest European hand axes are assigned to the Abbevillian industry, which developed in northern France in the valley of the Somme River; a later, more refined hand-axe tradition is seen in the Acheulian industry, evidence of which has been found in Europe, Africa, the Middle East, and Asia. Some of the earliest known hand axes were found at Olduvai Gorge (Tanzania) in association with remains of H. erectus. Alongside the hand-axe tradition, there developed a distinct and very different stone-tool industry, based on flakes of stone: special tools were made from worked (carefully shaped) flakes of flint. In Europe, the Clactonian industry is one example of a flake tradition. The early flake industries probably contributed to the development of the Middle Paleolithic flake tools of the Mousterian industry, which is associated with the remains of Neanderthal man. Oldowan in Africa The earliest documented stone tools have been found in eastern Africa, manufacturers unknown, at the 3.3 million-year-old site of Lomekwi 3 in Kenya. Better known are the later tools belonging to an industry known as Oldowan, after the type site of Olduvai Gorge in Tanzania. The tools were formed by knocking pieces off a river pebble, or stones like it, with a hammerstone to obtain large and small pieces with one or more sharp edges. The original stone is called a core; the resultant pieces, flakes. Typically, but not necessarily, small pieces are detached from a larger piece, in which case the larger piece may be called the core and the smaller pieces the flakes. The prevalent usage, however, is to call all the results flakes, which can be confusing. A split in half is called bipolar flaking. 
Consequently, the method is often called "core-and-flake". More recently, the tradition has been called "small flake" since the flakes were small compared to subsequent Acheulean tools. Another naming scheme is "Pebble Core Technology (PBC)": Various refinements in the shape have been called choppers, discoids, polyhedrons, subspheroid, etc. To date no reasons for the variants have been ascertained: However, they would not have been manufactured for no purpose: The whole point of their utility is that each is a "sharp-edged rock" in locations where nature has not provided any. There is additional evidence that Oldowan, or Mode 1, tools were used in "percussion technology"; that is, they were designed to be gripped at the blunt end and strike something with the edge, from which use they were given the name of choppers. Modern science has been able to detect mammalian blood cells on Mode 1 tools at Sterkfontein, Member 5 East, in South Africa. As the blood must have come from a fresh kill, the tool users are likely to have done the killing and used the tools for butchering. Plant residues bonded to the silicon of some tools confirm the use to chop plants. Although the exact species authoring the tools remains unknown, Mode 1 tools in Africa were manufactured and used predominantly by Homo habilis. They cannot be said to have developed these tools or to have contributed the tradition to technology. They continued a tradition of yet unknown origin. As chimpanzees sometimes naturally use percussion to extract or prepare food in the wild, and may use either unmodified stones or stones that they have split, creating an Oldowan tool, the tradition may well be far older than its current record. Towards the end of Oldowan in Africa the new species Homo erectus appeared over the range of Homo habilis. The earliest "unambiguous" evidence is a whole cranium, KNM-ER 3733 (a find identifier) from Koobi Fora in Kenya, dated to 1.78 mya. An early skull fragment, KNM-ER 2598, dated to 1.9 mya, is considered a good candidate also. Transitions in paleoanthropology are always hard to find, if not impossible, but based on the "long-legged" limb morphology shared by H. habilis and H. rudolfensis in East Africa, an evolution from one of those two has been suggested. The most immediate cause of the new adjustments appears to have been an increasing aridity in the region and consequent contraction of parkland savanna, interspersed with trees and groves, in favor of open grassland, dated 1.8–1.7 mya. During that transitional period the percentage of grazers among the fossil species increased from around 15–25% to 45%, dispersing the food supply and requiring a facility among the hunters to travel longer distances comfortably, which H. erectus obviously had. The ultimate proof is the "dispersal" of H. erectus "across much of Africa and Asia, substantially before the development of the Mode 2 technology and use of fire". H. erectus carried Mode 1 tools over Eurasia. According to the current evidence (which may change at any time) Mode 1 tools are documented from about 2.6 mya to about 1.5 mya in Africa, and to 0.5 mya outside of it. The genus Homo is known from H. habilis and H. rudolfensis from 2.3 to 2.0 mya, with the latest habilis being an upper jaw from Koobi Fora, Kenya, from 1.4 mya. H. erectus is dated 1.8–0.6 mya. 
According to this chronology Mode 1 was inherited by Homo from unknown Hominans, probably Australopithecus and Paranthropus, who must have continued on with Mode 1 and then with Mode 2 until their extinction no later than 1.1 mya. Meanwhile, living contemporaneously in the same regions, H. habilis inherited the tools around 2.3 mya. At about 1.9 mya H. erectus came on stage and lived contemporaneously with the others. Mode 1 was now being shared by a number of Hominans over the same ranges, presumably subsisting in different niches, but the archaeology is not precise enough to say which. Oldowan out of Africa Tools of the Oldowan tradition first came to archaeological attention in Europe, where, being intrusive and not well defined, compared to the Acheulean, they were puzzling to archaeologists. The mystery would be elucidated by African archaeology at Olduvai, but meanwhile, in the early 20th century, the term "Pre-Acheulean" came into use in climatology. C. E. P. Brooks, a British climatologist working in the United States, used the term to describe a "chalky boulder clay" underlying a layer of gravel at Hoxne, central England, where Acheulean tools had been found. Whether any tools would be found in it, and of what type, was not known; Hugo Obermaier, a contemporary German archaeologist working in Spain, expressed the same uncertainty. This uncertainty was clarified by the subsequent excavations at Olduvai; nevertheless, the term is still in use for pre-Acheulean contexts, mainly across Eurasia, that are yet unspecified or uncertain but with the understanding that they are or will turn out to be pebble-tool. There are ample associations of Mode 2 with H. erectus in Eurasia. H. erectus – Mode 1 associations are scantier, but they do exist, especially in the Far East. One strong piece of evidence prevents the conclusion that only H. erectus reached Eurasia: at Yiron, Israel, Mode 1 tools have been found dating to 2.4 mya, about 0.5 my earlier than the known H. erectus finds. If the date is correct, either another Hominan preceded H. erectus out of Africa or the earliest H. erectus has yet to be found. After the initial appearance at Gona in Ethiopia at 2.7 mya, pebble tools date from 2.0 mya at Sterkfontein, Member 5, South Africa, and from 1.8 mya at El Kherba, Algeria, North Africa. The manufacturers had already left pebble tools at Yiron, Israel, at 2.4 mya, Riwat, Pakistan, at 2.0 mya, and Renzidong, South China, at over 2 mya. The identification of a fossil skull at Mojokerta, Pernung Peninsula on Java, dated to 1.8 mya, as H. erectus suggests either that the African finds are not the earliest, or that, in fact, erectus did not originate in Africa after all but on the plains of Asia. The outcome of the issue awaits more substantial evidence. Erectus was found also at Dmanisi, Georgia, from 1.75 mya in association with pebble tools. Pebble tools are found latest of all in Europe, first in southern Europe and then in the north. They begin in the open areas of Italy and Spain, the earliest dated to 1.6 mya at Pirro Nord, Italy. The mountains of Italy are rising at a rapid rate in the framework of geologic time; at 1.6 mya they were lower and covered with grassland (as much of the highlands still are). Europe was otherwise mountainous and covered over with dense forest, a formidable terrain for warm-weather savanna dwellers. Similarly there is no evidence that the Mediterranean was passable at Gibraltar or anywhere else to H. erectus or earlier hominins. 
They might have reached Italy and Spain along the coasts. In northern Europe, pebble tools are found earliest at Happisburgh, United Kingdom, from 0.8 mya. The last traces are from Kent's Cavern, dated 0.5 mya. By that time H. erectus is regarded as having been extinct; however, a more modern version apparently had evolved, Homo heidelbergensis, who must have inherited the tools. He also explains the last of the Acheulean in Germany at 0.4 mya. In the late 19th and early 20th centuries, archaeologists worked on the assumption that a succession of hominins and cultures prevailed, that one replaced another. Today the presence of multiple hominins living contemporaneously near each other for long periods is accepted as proven true; moreover, by the time the previously assumed "earliest" culture arrived in northern Europe, the rest of Africa and Eurasia had progressed to the Middle and Upper Palaeolithic, so that across the earth all three were for a time contemporaneous. In any given region there was a progression from Oldowan to Acheulean, Lower to Upper, no doubt. Acheulean in Africa The end of Oldowan in Africa was brought on by the appearance of Acheulean, or Mode 2, stone tools. The earliest known instances are in the 1.7–1.6 mya layer at Kokiselei, West Turkana, Kenya. At Sterkfontein, South Africa, they are in Member 5 West, 1.7–1.4 mya. The 1.7 is a fairly certain, fairly standard date. Mode 2 is often found in association with H. erectus. It makes sense that the most advanced tools should have been innovated by the most advanced hominin; consequently, they are typically given credit for the innovation. A Mode 2 tool is a biface consisting of two concave surfaces intersecting to form a cutting edge all the way around, except in the case of tools intended to feature a point. More work and planning go into the manufacture of a Mode 2 tool. The manufacturer hits a slab off a larger rock to use as a blank. Then large flakes are struck off the blank and worked into bifaces by hard-hammer percussion on an anvil stone. Finally the edge is retouched: small flakes are hit off with a bone or wood soft hammer to sharpen or resharpen it. The core can be either the blank or another flake. Blanks are ported for manufacturing supply in places where nature has provided no suitable stone. Although most Mode 2 tools are easily distinguished from Mode 1, there is a close similarity of some Oldowan and some Acheulean, which can lead to confusion. Some Oldowan tools are more carefully prepared to form a more regular edge. One distinguishing criterion is the size of the flakes. In contrast to the Oldowan "small flake" tradition, Acheulean is "large flake": "The primary technological distinction remaining between Oldowan and the Acheulean is the preference for large flakes (>10 cm) as blanks for making large cutting tools (handaxes and cleavers) in the Acheulean." "Large Cutting Tool" (LCT) has become part of the standard terminology as well. In North Africa, the presence of Mode 2 remains a mystery, as the oldest finds are from Thomas Quarry in Morocco at 0.9 mya. Archaeological attention, however, shifts to the Jordan Rift Valley, an extension of the East African Rift Valley (the east bank of the Jordan is slowly sliding northward as East Africa is thrust away from Africa). Evidence of use of the Nile Valley is in deficit, but Hominans could easily have reached the palaeo-Jordan River from Ethiopia along the shores of the Red Sea, one side or the other. 
A crossing would not have been necessary, but it is more likely there than over a theoretical but unproven land bridge through either Gibraltar or Sicily. Meanwhile, Acheulean went on in Africa past the 1.0 mya mark and also past the extinction of H. erectus there. The last Acheulean in East Africa is at Olorgesailie, Kenya, dated to about 0.9 mya. Its owner was still H. erectus, but in South Africa, Acheulean at Elandsfontein, 1.0–0.6 mya, is associated with Saldanha man, classified as H. heidelbergensis, a more advanced, but not yet modern, descendant most likely of H. erectus. The Thomas Quarry Hominans in Morocco similarly are most likely Homo rhodesiensis, in the same evolutionary status as H. heidelbergensis. Acheulean out of Africa Mode 2 is first known out of Africa at 'Ubeidiya, Israel, a site now on the Jordan River, then frequented over the long term (hundreds of thousands of years) by Homo on the shore of a variable-level palaeo-lake, long since vanished. The geology was created by successive "transgression and regression" of the lake resulting in four cycles of layers. The tools are located in the first two, Cycles Li (Limnic Inferior) and Fi (Fluviatile Inferior), but mostly in Fi. The cycles represent different ecologies and therefore different cross-sections of fauna, which makes it possible to date them. They appear to be the same faunal assemblages as the Ferenta Faunal Unit in Italy, known from excavations at Selvella and Pieterfitta, dated to 1.6–1.2 mya. At 'Ubeidiya the marks on the bones of the animal species found there indicate that the manufacturers of the tools butchered the kills of large predators, an activity that has been termed "scavenging". There are no living floors, nor did they process bones to obtain the marrow. These activities cannot be understood therefore as the only or even the typical economic activity of Hominans. Their interests were selective: they were primarily harvesting the meat of Cervids, which is estimated to have been available without spoiling for up to four days after the kill. The majority of the animals at the site were of "Palaearctic biogeographic origin". However, these overlapped in range with a 30–60% component of "African biogeographic origin". The biome was Mediterranean, not savanna. The animals were not passing through; there was simply an overlap of normal ranges. Of the Hominans, H. erectus left several cranial fragments. Teeth of undetermined species may have been H. ergaster. The tools are classified as "Lower Acheulean" and "Developed Oldowan". The latter is a disputed classification created by Mary Leakey to describe an Acheulean-like tradition in Bed II at Olduvai. It is dated 1.53–1.27 mya. The date of the tools therefore probably does not exceed 1.5 mya; 1.4 is often given as a date. This chronology, which is definitely later than in Kenya, supports the "out of Africa" hypothesis for Acheulean, if not for the Hominans. From Southwest Asia, as the Levant is now called, the Acheulean extended itself more slowly eastward, arriving at Isampur, India, about 1.2 mya. It does not appear in China and Korea until after 1 mya and not at all in Indonesia. There is a discernible boundary marking the furthest extent of the Acheulean eastward before 1 mya, called the Movius Line, after its proposer, Hallam L. Movius. On the east side of the line the small flake tradition continues, but the tools are additionally worked Mode 1, with flaking down the sides. 
At Athirampakkam near Chennai in Tamil Nadu, the Acheulean began as early as 1.51 mya, earlier than in North India and Europe. The cause of the Movius Line remains speculative, whether it represents a real change in technology or a limitation of archeology, but after 1 mya evidence not available to Movius indicates the prevalence of Acheulean. For example, the Acheulean site at Bose, China, is dated to 0.803 ± 0.003 mya (803 ± 3 ka). The authors of this chronologically later East Asian Acheulean remain unknown, as does whether it evolved in the region or was brought in. There is no named boundary line between Mode 1 and Mode 2 on the west; nevertheless, Mode 2 is as late in Europe as it is in the Far East. The earliest comes from a rock shelter at Estrecho de Quípar in Spain, dated to greater than 0.9 mya. Teeth from an undetermined Hominan were found there also. The last Mode 2 in Southern Europe is from a deposit at Fontana Ranuccio near Anagni in Italy dated to 0.45 mya, which is generally linked to Homo cepranensis, a "late variant of H. erectus", a fragment of whose skull was found at Ceprano nearby, dated 0.46 mya. Middle Paleolithic This period is best known as the era during which the Neanderthals lived in Europe and the Near East (c. 300,000–28,000 years ago). Their technology is mainly the Mousterian, but Neanderthal physical characteristics have been found also in ambiguous association with the more recent Châtelperronian archeological culture in Western Europe and several local industries like the Szeletian in Eastern Europe/Eurasia. There is no evidence for Neanderthals in Africa, Australia or the Americas. Neanderthals nursed their elderly and practised ritual burial, indicating an organised society. The earliest evidence (Mungo Man) of settlement in Australia dates to around 40,000 years ago, when modern humans likely crossed from Asia by island-hopping. Evidence for symbolic behavior such as body ornamentation and burial is ambiguous for the Middle Paleolithic and still subject to debate. The Bhimbetka rock shelters exhibit the earliest traces of human life in India, some of which are approximately 30,000 years old. Upper Paleolithic Lasting from roughly 50,000 to 10,000 years ago in Europe, the Upper Paleolithic ends with the end of the Pleistocene and the onset of the Holocene era (the end of the Last Glacial Period). Modern humans spread out further across the Earth during the period known as the Upper Paleolithic. The Upper Paleolithic is marked by a relatively rapid succession of often complex stone artifact technologies and a large increase in the creation of art and personal ornaments. Over this period a succession of industries evolved: the Châtelperronian (c. 38–30 kya), the Aurignacian (40–28 kya), the Gravettian (28–22 kya), the Solutrean (22–17 kya), and the Magdalenian (18–10 kya). All of these industries except the Châtelperronian are associated with anatomically modern humans. Authorship of the Châtelperronian is still the subject of much debate. Most scholars date the arrival of humans in Australia at 40,000 to 50,000 years ago, with a possible range of up to 125,000 years ago. The earliest anatomically modern human remains found in Australia (and outside of Africa) are those of Mungo Man; they have been dated at 42,000 years old. The Americas were colonised via the Bering land bridge, which was exposed during this period by lower sea levels. These people are called the Paleo-Indians, and the earliest accepted dates are those of the Clovis culture sites, some 13,500 years ago. 
Globally, societies were hunter-gatherers but evidence of regional identities begins to appear in the wide variety of stone tool types being developed to suit very different environments. Epipaleolithic/Mesolithic The period starting from the end of the last ice age, 10,000 years ago, to around 6,000 years ago was characterized by rising sea levels and a need to adapt to a changing environment and find new food sources. The development of Mode 5 (microlith) tools began in response to these changes. They were derived from the previous Paleolithic tools, hence the term Epipaleolithic, or were intermediate between the Paleolithic and the Neolithic, hence the term Mesolithic (Middle Stone Age), used for parts of Eurasia, but not outside it. The choice of a word depends on exact circumstances and the inclination of the archaeologists excavating the site. Microliths were used in the manufacture of more efficient composite tools, resulting in an intensification of hunting and fishing and, with increasing social activity, the development of more complex settlements, such as Lepenski Vir. Domestication of the dog as a hunting companion probably dates to this period. The earliest known battle occurred during the Mesolithic period at a site in Egypt known as Cemetery 117. Neolithic The Neolithic, or New Stone Age, was approximately characterized by the adoption of agriculture. The shift from food gathering to food producing, in itself one of the most revolutionary changes in human history, was accompanied by the so-called Neolithic Revolution: the development of pottery, polished stone tools, and construction of more complex, larger settlements such as Göbekli Tepe and Çatalhöyük. Some of these features began in certain localities even earlier, in the transitional Mesolithic. The first Neolithic cultures started around 7000 BC in the Fertile Crescent and spread concentrically to other areas of the world; however, the Near East was probably not the only nucleus of agriculture, the cultivation of maize in Meso-America and of rice in the Far East being others. Due to the increased need to harvest and process plants, ground stone and polished stone artifacts became much more widespread, including tools for grinding, cutting, and chopping. Skara Brae, located in Orkney, Scotland, is one of Europe's best examples of a Neolithic village. The community contains stone beds, shelves and even an indoor toilet linked to a stream. The first large-scale constructions were built, including settlement towers and walls, e.g., Jericho (Tell es-Sultan), and ceremonial sites, e.g. Stonehenge. The Ġgantija temples of Gozo in the Maltese archipelago are the oldest surviving free-standing structures in the world, erected c. 3600–2500 BC. The earliest evidence for established trade exists in the Neolithic, with newly settled people importing exotic goods over distances of many hundreds of miles. These facts show that there were sufficient resources and co-operation to enable large groups to work on these projects. To what extent this was a basis for the development of elites and social hierarchies is a matter of ongoing debate. Although some late Neolithic societies formed complex stratified chiefdoms similar to Polynesian societies such as the Ancient Hawaiians, judging from the societies of modern peoples at an equivalent technological level, most Neolithic societies were relatively simple and egalitarian. 
A comparison of art in the two ages leads some theorists to conclude that Neolithic cultures were noticeably more hierarchical than the Paleolithic cultures that preceded them. African chronology Early Stone Age (ESA) The Early Stone Age in Africa is not to be identified with "Old Stone Age", a translation of Paleolithic, or with Paleolithic, or with the "Earlier Stone Age" that originally meant what became the Paleolithic and Mesolithic. In the initial decades of its definition by the Pan-African Congress of Prehistory, it was parallel in Africa to the Upper and Middle Paleolithic. However, since then Radiocarbon dating has shown that the Middle Stone Age is in fact contemporaneous with the Middle Paleolithic. The Early Stone Age therefore is contemporaneous with the Lower Paleolithic and happens to include the same main technologies, Oldowan and Acheulean, which produced Mode 1 and Mode 2 stone tools respectively. A distinct regional term is warranted, however, by the location and chronology of the sites and the exact typology. Middle Stone Age (MSA) The Middle Stone Age was a period of African prehistory between Early Stone Age and Late Stone Age. It began around 300,000 years ago and ended around 50,000 years ago. It is considered as an equivalent of European Middle Paleolithic. It is associated with anatomically modern or almost modern Homo sapiens. Early physical evidence comes from Omo and Herto, both in Ethiopia and dated respectively at c. 195 ka and at c. 160 ka. Later Stone Age (LSA) The Later Stone Age (LSA, sometimes also called the Late Stone Age) refers to a period in African prehistory. Its beginnings are roughly contemporaneous with the European Upper Paleolithic. It lasts until historical times and this includes cultures corresponding to Mesolithic and Neolithic in other regions. Material culture Tools Stone tools were made from a variety of stones. For example, flint and chert were shaped (or chipped) for use as cutting tools and weapons, while basalt and sandstone were used for ground stone tools, such as quern-stones. Wood, bone, shell, antler (deer) and other materials were widely used, as well. During the most recent part of the period, sediments (such as clay) were used to make pottery. Agriculture was developed and certain animals were domesticated as well. Some species of non-primates are able to use stone tools, such as the sea otter, which breaks abalone shells with them. Primates can both use and manufacture stone tools. This combination of abilities is more marked in apes and humans, but only humans, or more generally hominins, depend on tool use for survival. The key anatomical and behavioral features required for tool manufacture, which are possessed only by hominins, are the larger thumb and the ability to hold by means of an assortment of grips. Food and drink Food sources of the Palaeolithic hunter-gatherers were wild plants and animals harvested from the environment. They liked animal organ meats, including the livers, kidneys and brains. Large seeded legumes were part of the human diet long before the agricultural revolution, as is evident from archaeobotanical finds from the Mousterian layers of Kebara Cave, in Israel. Moreover, recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago in the Upper Paleolithic. Near the end of the Wisconsin glaciation, 15,000 to 9,000 years ago, mass extinction of Megafauna such as the woolly mammoth occurred in Asia, Europe, North America and Australia. 
This was the first Holocene extinction event. It possibly forced modification in the dietary habits of the humans of that age and with the emergence of agricultural practices, plant-based foods also became a regular part of the diet. A number of factors have been suggested for the extinction: certainly over-hunting, but also deforestation and climate change. The net effect was to fragment the vast ranges required by the large animals and extinguish them piecemeal in each fragment. Shelter and habitat Around 2 million years ago, Homo habilis is believed to have constructed the first man-made structure in East Africa, consisting of simple arrangements of stones to hold branches of trees in position. A similar stone circular arrangement believed to be around 380,000 years old was discovered at Terra Amata, near Nice, France. (Concerns about the dating have been raised: see Terra Amata.) Several human habitats dating back to the Stone Age have been discovered around the globe, including: A tent-like structure inside a cave near the Grotte du Lazaret, Nice, France. A structure with a roof supported with timber, discovered in Dolní Věstonice, the Czech Republic, dates to around 23,000 BC. The walls were made of packed clay blocks and stones. Many huts made of mammoth bones have been found in East-Central Europe and Siberia. The people who made these huts were expert mammoth hunters. Examples have been found along the Dniepr river valley of Ukraine, including near Chernihiv, in Moravia, Czech Republic and in southern Poland. An animal hide tent dated to around 15000 to 10000 BC, in the Magdalenian, was discovered at Plateau Parain, France. Art Prehistoric art is visible in the artifacts. Prehistoric music is inferred from found instruments, while parietal art can be found on rocks of any kind. The latter are petroglyphs and rock paintings. The art may or may not have had a religious function. Petroglyphs Petroglyphs appeared in the Neolithic. A Petroglyph is an intaglio abstract or symbolic image engraved on natural stone by various methods, usually by prehistoric peoples. They were a dominant form of pre-writing symbols. Petroglyphs have been discovered in different parts of the world, including Australia (Sydney rock engravings), Asia (Bhimbetka, India), North America (Death Valley National Park), South America (Cumbe Mayo, Peru), and Europe (Finnmark, Norway). Rock paintings In paleolithic times, mostly animals were painted, in theory ones that were used as food or represented strength, such as the rhinoceros or large cats (as in the Chauvet Cave). Signs such as dots were sometimes drawn. Rare human representations include handprints and half-human/half-animal figures. The Cave of Chauvet in the Ardèche department, France, contains the most important cave paintings of the paleolithic era, dating from about 36,000 BC. The Altamira cave paintings in Spain were done 14,000 to 12,000 BC and show, among others, bisons. The hall of bulls in Lascaux, Dordogne, France, dates from about 15,000 to 10,000 BC. The meaning of many of these paintings remains unknown. They may have been used for seasonal rituals. The animals are accompanied by signs that suggest a possible magic use. Arrow-like symbols in Lascaux are sometimes interpreted as calendar or almanac use, but the evidence remains interpretative. Some scenes of the Mesolithic, however, can be typed and therefore, judging from their various modifications, are fairly clear. One of these is the battle scene between organized bands of archers. 
For example, "the marching warriors", a rock painting at Cingle de la Mola, Castellón in Spain, dated to about 7,000–4,000 BC, depicts about 50 bowmen in two groups marching or running in step toward each other, each man carrying a bow in one hand and a fistful of arrows in the other. A file of five men leads one band, one of whom is a figure with a "high crowned hat". In other scenes elsewhere, the men wear head-dresses and knee ornaments but otherwise fight nude. Some scenes depict the dead and wounded, bristling with arrows. One is reminded of Ötzi the Iceman, a Copper Age mummy revealed by an Alpine melting glacier, who collapsed from loss of blood due to an arrow wound in the back. Stone Age rituals and beliefs Modern studies and the in-depth analysis of finds dating from the Stone Age indicate certain rituals and beliefs of the people in those prehistoric times. It is now believed that activities of the Stone Age humans went beyond the immediate requirements of procuring food, body coverings, and shelters. Specific rites relating to death and burial were practiced, though certainly differing in style and execution between cultures. Megalithic tombs, multichambered, and dolmens, single-chambered, were graves with a huge stone slab stacked over other similarly large stone slabs; they have been discovered all across Europe and Asia and were built in the Neolithic and the Bronze Age. Modern popular culture The image of the caveman is commonly associated with the Stone Age. For example, a 2003 documentary series showing the evolution of humans through the Stone Age was called Walking with Cavemen, but only the last programme showed humans living in caves. While the idea that human beings and dinosaurs coexisted is sometimes portrayed in popular culture in cartoons, films and computer games, such as The Flintstones, One Million Years B.C. and Chuck Rock, the notion of hominids and non-avian dinosaurs co-existing is not supported by any scientific evidence. Other depictions of the Stone Age include the best-selling Earth's Children series of books by Jean M. Auel, which are set in the Paleolithic and are loosely based on archaeological and anthropological findings. The 1981 film Quest for Fire by Jean-Jacques Annaud tells the story of a group of early homo sapiens searching for their lost fire. A 21st-century series, Chronicles of Ancient Darkness by Michelle Paver tells of two New Stone Age children fighting to fulfil a prophecy and save their clan. See also List of Stone Age art Prehistoric warfare Timeline of prehistory Notes References Further reading External links The stone age in North America Vol. 1 of 2, Warren K. Moorehead 1910, Boston: Houghton Mifflin company The Stone Age Robert A. Giusepi, 2000. History World International Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Verstehen
Verstehen, in the context of German philosophy and social sciences in general, has been used since the late 19th century – in English as in German – with the particular sense of the "interpretive or participatory" examination of social phenomena. The term is closely associated with the work of the German sociologist Max Weber, whose antipositivism established an alternative to prior sociological positivism and economic determinism, rooted in the analysis of social action. In anthropology, Verstehen has come to mean a systematic interpretive process in which an outside observer of a culture attempts to relate to it and understand others. Verstehen is now seen as a concept and a method central to a rejection of positivist social science (although Weber appeared to think that the two could be united). Verstehen refers to understanding the meaning of action from the actor's point of view. It is entering into the shoes of the other, and adopting this research stance requires treating the actor as a subject, rather than an object of one's observations. It also implies that, unlike objects in the natural world, human actors are not simply the product of the pulls and pushes of external forces. Individuals are seen to create the world by organizing their own understanding of it and giving it meaning. To do research on actors without taking into account the meanings they attribute to their actions or environment is to treat them like objects. Meaning Interpretive sociology is the study of society that concentrates on the meanings people associate with their social world. Interpretive sociology strives to show that reality is constructed by people themselves in their daily lives. Verstehen roughly translates to "meaningful understanding" or "putting yourself in the shoes of others to see things from their perspective." Interpretive sociology differs from positivist sociology in three ways: It deals with the meaning attached to action, unlike positivist sociology which focuses on behavior; It sees reality as being constructed by people, unlike positivist sociology which sees an objective reality "out there;" and It relies on qualitative data, unlike positivist sociology which tends to make use of quantitative data. Dilthey and hermeneutics Verstehen was introduced into philosophy and the human sciences by the German historist philosopher Johann Gustav Droysen. Droysen first made a distinction between nature and history in terms of the categories of space and time. The method of the natural sciences is explanation, while that of history is understanding. The concept of Verstehen was later used by the German philosopher Wilhelm Dilthey to describe the first-person participatory perspective that agents have on their individual experience as well as their culture, history, and society. In this sense, it is developed in the context of the theory and practice of interpretation (as understood in the context of hermeneutics) and contrasted with the external, objectivating third-person perspective of explanation, in which human agency, subjectivity, and its products are analyzed as effects of impersonal natural forces in the natural sciences and of social structures in sociology. 
Twentieth-century philosophers such as Martin Heidegger and Hans-Georg Gadamer were critical of what they considered to be the romantic and subjective character of Verstehen in Dilthey, although both Dilthey and the early Heidegger were interested in the "facticity" and "life-context" of understanding, and sought to universalize it as the way humans exist through language on the basis of ontology. Verstehen also played a role in Edmund Husserl and Alfred Schutz's analysis of the "lifeworld." Jürgen Habermas and Karl-Otto Apel further transformed the concept of Verstehen, reformulating it on the basis of a transcendental-pragmatic philosophy of language and the theory of communicative action. Weber and the social sciences Max Weber and Georg Simmel introduced interpretive understanding into sociology, where it has come to mean a systematic interpretive process in which an outside observer of a culture (such as an anthropologist or sociologist) relates to an indigenous people or sub-cultural group on their own terms and from their own point of view, rather than interpreting them in terms of the observer's own culture. Verstehen can mean either an empathic or a participatory understanding of social phenomena. In anthropological terms this is sometimes described as cultural relativism, especially by those that have a tendency to argue toward universal ideals. In sociology it is an aspect of the comparative-historical approach, where the context of a society like twelfth-century "France" can potentially be better understood by the sociologist than it could have been by people living in a village in Burgundy. It relates to how people in life give meaning to the social world around them and how the social scientist accesses and evaluates this "first-person perspective." This concept has been both expanded and criticized by later social scientists. Proponents laud this concept as the only means by which researchers from one culture can examine and explain behaviors in another. While the exercise of Verstehen has been more popular among social scientists in Europe, such as Habermas, Verstehen was introduced into the practice of sociology in the United States by Talcott Parsons, an American sociologist influenced by Max Weber. Parsons used his structural functionalism to incorporate this concept into his 1937 work, The Structure of Social Action. Weber, more explicitly than Marx, placed value on understanding the meaning of the key elements of social action, grasped not just through intuition or sympathy with the individual but as the product of "systematic and rigorous research". The goal is to identify human actions and to interpret them as observable events, in the belief that this provides a good explanation not only for individual actions but also for group interactions. The meaning attached to an action needs to take account of constraints and limitations and to analyze the motivation for the action. Weber believed that this gives the sociologist an advantage over a natural scientist, because "We can accomplish something which is never attainable in the natural sciences, namely the subjective understanding of the action of the component individuals." 
Criticism Critics of the social scientific concept of Verstehen, such as Mikhail Bakhtin and Dean MacCannell, counter that it is simply impossible for a person born of one culture to ever completely understand another culture, and that it is arrogant and conceited to attempt to interpret the significance of one culture's symbols through the terms of another (supposedly superior) culture. In response it can be argued that, just as in physical science all knowledge is only asymptotic to a full explanation, a high degree of cross-cultural understanding is still very valuable. The opposite of Verstehen would seem to be ignorance of all but that which is immediately observable, meaning that we would not be able to understand any time and place but our own. A certain level of interpretive understanding is necessary for our own cultural setting, however, and it can easily be argued that even the full participant in a culture does not fully understand it in every regard. Critics also believe that it is the sociologist's job not just to observe people and what people do but also to share in their world of meaning and come to appreciate why they act as they do. Subjective thoughts and feelings, regarded as bias in the natural sciences, are an important aspect to be controlled for while doing sociological research. See also Antinaturalism (sociology) Emic and etic Humanistic sociology Humanistic coefficient Nomothetic and idiographic Reflexivity (social theory) References External links Philosophy of social science Phenomenology Hermeneutics German philosophy Critical theory German words and phrases Max Weber Social concepts 1850s neologisms Wilhelm Dilthey
Glaciology
Glaciology is the scientific study of glaciers, or, more generally, ice and natural phenomena that involve ice. Glaciology is an interdisciplinary Earth science that integrates geophysics, geology, physical geography, geomorphology, climatology, meteorology, hydrology, biology, and ecology. The impact of glaciers on people includes the fields of human geography and anthropology. The discoveries of water ice on the Moon, Mars, Europa and Pluto add an extraterrestrial component to the field, which is referred to as "astroglaciology". Overview A glacier is an extended mass of ice formed from snow falling and accumulating over a long period of time; glaciers move very slowly, either descending from high mountains, as in valley glaciers, or moving outward from centers of accumulation, as in continental glaciers. Areas of study within glaciology include glacial history and the reconstruction of past glaciation. A glaciologist is a person who studies glaciers. A glacial geologist studies glacial deposits and glacial erosive features on the landscape. Glaciology and glacial geology are key areas of polar research. Types Glaciers can be identified by their geometry and their relationship to the surrounding topography. There are two general categories of glaciation which glaciologists distinguish: alpine glaciation, accumulations or "rivers of ice" confined to valleys; and continental glaciation, unrestricted accumulations which once covered much of the northern continents. Alpine – ice flows down the valleys of mountainous areas and forms a tongue of ice moving towards the plains below. Alpine glaciers tend to make topography more rugged by adding to and improving the scale of existing features. Various features include large ravines called cirques, and arêtes, which are ridges where the rims of two cirques meet. Continental – an ice sheet, found today only in high latitudes (Greenland/Antarctica), thousands of square kilometers in area and thousands of meters thick. These tend to smooth out the landscapes. Zones of glaciers Accumulation zone – where the formation of ice is faster than its removal. Ablation (or wastage) zone – where the sum of melting, calving, and evaporation (sublimation) is greater than the amount of snow added each year. Glacier equilibrium line and ELA The glacier equilibrium line is the line separating the glacial accumulation area above from the ablation area below. The equilibrium line altitude (ELA) and its change over the years are a key indicator of the health of a glacier; long-term monitoring of the ELA may be used as an indication of climate change. Movement When a glacier experiences an accumulation input by precipitation (snow or refreezing rain) that exceeds the output by ablation, the glacier shows a positive glacier mass balance and will advance. Conversely, if the loss of volume (from evaporation, sublimation, melting, and calving) exceeds the accumulation, the glacier shows a negative glacier mass balance and will melt back. During times in which the volume input to the glacier by precipitation is equivalent to the ice volume lost from calving, evaporation, and melting, the glacier has a steady-state condition.
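The advance/retreat bookkeeping just described can be written down directly. A minimal sketch, assuming annual accumulation and ablation totals expressed in metres of water equivalent; the function name and the figures in the example are hypothetical, not taken from the source:

```python
def annual_mass_balance(accumulation: float, ablation: float) -> str:
    """Net specific balance (m w.e.) and the tendency it implies."""
    balance = accumulation - ablation
    if balance > 0:
        return f"{balance:+.2f} m w.e. (positive balance): glacier tends to advance"
    if balance < 0:
        return f"{balance:+.2f} m w.e. (negative balance): glacier tends to melt back"
    return "0.00 m w.e.: steady-state condition"

# Hypothetical annual totals, for illustration only:
print(annual_mass_balance(accumulation=2.1, ablation=1.6))  # advance
print(annual_mass_balance(accumulation=1.2, ablation=1.9))  # melt back
print(annual_mass_balance(accumulation=1.5, ablation=1.5))  # steady state
```

The same subtraction, tracked per altitude band, is what fixes the equilibrium line: the ELA is the elevation at which the net balance is zero.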
Surging is caused mainly by a long-lasting accumulation period on subpolar glaciers that are frozen to the ground in the accumulation area. When the stress due to the additional volume in the accumulation area increases, the pressure melting point of the ice at its base may be reached, the basal glacier ice will melt, and the glacier will surge on a film of meltwater. Rate of movement The movement of glaciers is usually slow; velocities vary from a few centimeters to a few meters per day. The rate of movement depends upon the factors listed below: Temperature of the ice. A polar glacier has cold ice with temperatures well below the freezing point from its surface to its base and is frozen to its bed. A temperate glacier is at the melting point temperature throughout the year, from its surface to its base, which allows the glacier to slide on a thin layer of meltwater. Most glaciers in alpine regions are temperate glaciers. Gradient of the slope. Thickness of the glacier. Subglacial water dynamics. A rough velocity estimate relating ice temperature, slope, and thickness is sketched at the end of this article, after the glossary. Glacial terminology Ablation Wastage of the glacier through sublimation, ice melting and iceberg calving. Ablation zone Area of a glacier in which the annual loss of ice through ablation exceeds the annual gain from precipitation. Arête An acute ridge of rock where two cirques meet. Bergschrund Crevasse formed near the head of a glacier, where the mass of ice has rotated, sheared and torn itself apart in the manner of a geological fault. Cirque, Corrie or cwm Bowl-shaped depression excavated by the source of a glacier. Creep Adjustment to stress at a molecular level. Flow Movement (of ice) in a constant direction. Fracture Brittle failure (breaking of ice) under the stress raised when movement is too rapid to be accommodated by creep. It happens, for example, when the central part of a glacier moves faster than the edges. Glacial landform Collective name for the morphologic structures in/on/under/around a glacier. Moraine Accumulated debris that has been carried by a glacier and deposited at its sides (lateral moraine) or at its foot (terminal moraine). Névé Area at the top of a glacier (often a cirque) where snow accumulates and feeds the glacier. Nunatak/Rognon/Glacial Island Visible peak of a mountain otherwise covered by a glacier. Horn Spire of rock, also known as a pyramidal peak, formed by the headward erosion of three or more cirques around a single mountain. It is an extreme case of an arête. Plucking/Quarrying Where the adhesion of the ice to the rock is stronger than the cohesion of the rock, part of the rock leaves with the flowing ice. Tarn A post-glacial lake in a cirque. Tunnel valley The tunnel formed by hydraulic erosion of ice and rock below an ice sheet margin; the tunnel valley is what remains of this channel in the underlying rock after the ice sheet has melted. Glacial deposits Stratified Outwash sand/gravel From the front of glaciers, found on a plain. Kettles When a block of stagnant ice leaves a depression or pit. Eskers Steep-sided ridges of gravel/sand, possibly caused by streams running under stagnant ice. Kames Stratified drift built up into low, steep hills. Varves Alternating thin sedimentary beds (coarse and fine) of a proglacial lake; summer conditions deposit more and coarser material, those of the winter less and finer. Unstratified Till Unsorted material (glacial flour to boulders) deposited by receding/advancing glaciers, forming moraines and drumlins.
Moraines – (terminal) material deposited at the end of the glacier; (ground) material deposited as the glacier melts; (lateral) material deposited along the sides. Drumlins Smooth elongated hills composed of till. Ribbed moraines Large subglacial elongated hills transverse to former ice flow. See also Continental Glaciation Ice cap International Glaciological Society International Association of Cryospheric Sciences Irish Sea Glacier List of glaciers Cryosphere Further reading Benn, Douglas I. and David J. A. Evans. Glaciers and Glaciation. London: Arnold, 1998. Greve, Ralf and Heinz Blatter. Dynamics of Ice Sheets and Glaciers. Berlin etc.: Springer, 2009. Hambrey, Michael and Jürg Alean. Glaciers. 2nd ed. Cambridge and New York: Cambridge University Press, 2004. Hooke, Roger LeB. Principles of Glacier Mechanics. 2nd ed. Cambridge and New York: Cambridge University Press, 2005. Paterson, W. Stanley B. The Physics of Glaciers. 3rd ed. Oxford etc.: Pergamon Press, 1994. van der Veen, Cornelis J. Fundamentals of Glacier Dynamics. Rotterdam: A. A. Balkema, 1999. van der Veen, Cornelis J. Fundamentals of Glacier Dynamics. 2nd ed. Boca Raton, FL: CRC Press, 2013. External links International Glaciological Society (IGS) International Association of Cryospheric Sciences (IACS) Snow, Ice, and Permafrost Group, University of Alaska Fairbanks Arctic and Alpine Research Group, University of Alberta Glaciers online World Data Centre for Glaciology, Cambridge, UK National Snow and Ice Data Center, Boulder, Colorado Global Land Ice Measurements from Space (GLIMS) Glacial structures – photo atlas North Cascade Glacier Climate Project Centre for Glaciology, University of Wales Caltech Glaciology Group Glaciology Group, University of Copenhagen Institute of Low Temperature Science, Sapporo National Institute of Polar Research, Tokyo Glaciology Group, University of Washington Glaciology Laboratory, Universidad de Chile-Centro de Estudios Científicos, Valdivia Russian Geographical Society (Moscow Centre) – Glaciology Commission Institute of Meteorology and Geophysics, Univ. of Innsbruck, Austria.
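Returning to the rate-of-movement factors listed earlier (ice temperature, slope gradient, and thickness): a standard textbook way to tie them together is the shallow-ice deformation relation derived from Glen's flow law, u_s = (2A / (n + 1)) (ρ g sin α)^n H^(n+1) with n = 3. This relation and the parameter values below are conventional glaciological assumptions used for illustration, not figures stated in this article, and the sketch ignores basal sliding and subglacial water dynamics.

```python
# Illustrative sketch using the shallow-ice deformation relation from Glen's
# flow law; parameter values are conventional textbook figures, chosen for
# illustration only.
import math

RHO_ICE = 917.0          # ice density, kg m^-3
G = 9.81                 # gravitational acceleration, m s^-2
N = 3                    # Glen's flow-law exponent
SECONDS_PER_DAY = 86_400.0

def deformation_velocity(thickness_m: float, slope_deg: float,
                         rate_factor_a: float = 2.4e-24) -> float:
    """Surface velocity (m/day) from internal deformation alone (no basal sliding).
    The rate factor A (Pa^-3 s^-1) encodes ice temperature: temperate ice deforms
    much faster than cold polar ice, which is one reason temperate glaciers tend
    to move more quickly."""
    tau_b = RHO_ICE * G * thickness_m * math.sin(math.radians(slope_deg))   # basal shear stress, Pa
    u_surface = (2.0 * rate_factor_a / (N + 1)) * tau_b ** N * thickness_m  # m s^-1
    return u_surface * SECONDS_PER_DAY

# A 200 m thick temperate glacier on a 3 degree slope: roughly 2 cm per day,
# consistent with the "few centimeters to a few meters per day" range above.
print(round(deformation_velocity(thickness_m=200.0, slope_deg=3.0), 3))
# Colder ice (smaller rate factor, here roughly that of ice at -10 °C) moves
# far more slowly for the same geometry.
print(round(deformation_velocity(thickness_m=200.0, slope_deg=3.0, rate_factor_a=3.5e-25), 4))
```

Thicker ice and steeper slopes raise the basal shear stress and therefore the deformation speed strongly (for n = 3 the velocity scales with the fourth power of thickness at a fixed slope), which is why the thickness and slope factors listed above matter so much.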
Political ideologies in the United States
American political ideologies conventionally align with the left–right political spectrum, with most Americans identifying as conservative, liberal, or moderate. Contemporary American conservatism includes social conservatism and fiscal conservatism. The former ideology developed as a response to communism and the civil rights movement, while the latter developed as a response to the New Deal. Contemporary American liberalism includes social liberalism and progressivism, developing during the Progressive Era and the Great Depression. Besides conservatism and liberalism, the United States has a notable libertarian movement, developing during the mid-20th century as a revival of classical liberalism. Historical political movements in the United States have been shaped by ideologies as varied as republicanism, populism, separatism, fascism, socialism, monarchism, and nationalism. Political ideology in the United States began with the country's formation during the American Revolution, when republicanism challenged the preexisting monarchism that had defined the colonial governments. After the formation of an independent federal government, republicanism split into new ideologies, including classical republicanism, Jeffersonian democracy, and Jacksonian democracy. In the years preceding the American Civil War, abolitionism and secessionism became prominent. Progressivism developed at the beginning of the 20th century, evolving into modern liberalism over the following decades, while modern conservatism developed in response. The Cold War popularized anti-communism and neoconservatism among conservatives, while the civil rights movement popularized support for racial justice among liberals. Populist movements grew in the early-21st century, including Progressivism and Trumpism. Americans of different demographic groups are likely to hold different political beliefs. Men, white Americans, the elderly, Christians, and people without college degrees are more likely to be conservative, while women, African Americans, young adults, non-Christians, and people with college degrees are more likely to be liberal. Conservatism and liberalism in the United States are different from conservatism and liberalism in other parts of the world, and ideology in the United States is defined by individualism rather than collectivism. History Early republicanism Political ideology in the United States first developed during the American Revolution as a dispute between monarchism and republicanism. Republican ideas developed gradually over the 18th century and challenged monarchism directly through the Declaration of Independence in 1776. The monarchists, known as Loyalists, advocated that the Thirteen Colonies retain their colonial status under the monarchy of Great Britain, while the republicans, known as Patriots, advocated independence from Great Britain and the establishment of a liberal government based on popular sovereignty with no king and no inherited aristocracy. Instead, republicans advocated an elite based on achievement, and that elite had a duty to provide leadership. Patriot victory made republicanism into the foundational ideology of the United States. Advocates of republicanism at the time emphasized the importance of Enlightenment values (such as civic virtue and benevolence) to republican ideology and their vision of society involved a select group of elites that represented the people and served in government. 
The Constitution of the United States was ratified in 1789 to establish republicanism as the governmental system of the United States, introducing traditions such as separation of powers and federalism to the country. Early American republicanism was the first major liberal ideology in the United States, and it became the foundation for both modern conservatism and modern liberalism. As the federal government evolved in the 1790s, the classical republican ideals of civic virtue and aristocracy were challenged by more liberal ideas of democracy and self-interest. The Federalist Party was founded by Alexander Hamilton to support political candidates that advocated classical republicanism, stronger federal government, and the American School of economics, while the Democratic-Republican Party was founded by Thomas Jefferson to support political candidates that advocated the agrarian and anti-federalist ideals of Jeffersonian democracy. The Federalists saw most of their support in New England, with the other states supporting the Democratic-Republicans. The influence of Federalists declined during the 1800s, and Jeffersonian democracy came to be the only major ideology during the Era of Good Feelings. The Democratic-Republican Party fractured in the 1820s as a result of the political rivalry between John Quincy Adams and Andrew Jackson. Jackson established his ideology of Jacksonian democracy, and the Democratic Party was created to support Jackson. Much like Jefferson, Jackson supported popular democracy, rule by the people over elites, and minimal government intervention in the economy. However, the Democratic Party was not a direct successor to the Democratic-Republican Party, and they differed in other areas. Unlike Jefferson, Jackson's Democrats advocated political patronage and a stronger executive branch. The National Republican Party was created to oppose Jackson, advocating government intervention in the economy and opposing unrestrained individualism. Anti-Masonry also saw prominence at this time, and the National Republican Party merged with the Anti-Masonic Party in 1833 to form the Whig Party. The Whig Party and the Democratic Party became the two major parties. The Whigs advocated for the American System, which consisted of protectionism through tariffs, a national bank, and internal improvements. Slavery and the Civil War Slavery had been present in North America since colonial times, but it did not become a major political issue in the United States until the 1830s. National political ideology was not as influential during this period, with sectional politics between the northern and southern states driving political activity. All of the northern states had abolished slavery by 1805, but it was still widely practiced in the southern states until the Civil War (1861–1865). Abolitionism had been present in the United States since the country's foundation, but this period of sectionalism brought it into the mainstream, and by the 1840s slavery had become the nation's primary political issue. The Republican Party was formed after the collapse of the Whig Party in the 1850s to reflect the political ideologies of the northern states. It immediately replaced the Whig Party as a major political party, supporting social mobility, egalitarianism, and limitations on slavery. 
The two major political factions of the Republican Party were the Radical Republicans, who supported total abolition of slavery and strong action against the secessionist states, and the moderates, who supported concessions to the southern states. At the same time, some nationalist Americans advocated expansionism and manifest destiny, seeking to acquire additional territory. Many of these individuals wished for new territory in order to create additional slave states. Secessionism became prominent in South Carolina during the Nullification crisis in 1832. Secessionists opposed the protectionist tariffs of 1828 and 1832, threatening to secede if the federal government attempted to enforce them. Secession and military conflict were averted by a compromise tariff in 1833. The secessionist movement in South Carolina grew more popular in the 1850s as the issue of slavery became more contentious. In 1861, fearing that the federal government would restrict or abolish slavery, South Carolina was the first of 11 states to secede from the United States and form the Confederate States of America, prompting the Civil War. Democrats in the northern states were split between the War Democrats that supported military action to prevent secession and the Copperheads that opposed military action. During the Reconstruction era from 1865 to 1877, politics focused on resolving the issues of the Civil War. The ratification of the Thirteenth Amendment abolished slavery in the United States, and ideologies based on the issue of slavery were made irrelevant. The Radical Republicans supported liberal reforms during Reconstruction to advance the rights of African Americans, including suffrage and education for freedmen. White supremacy was a major ideology in the southern states, and restrictions on the rights of African Americans saw widespread support in the region, often enforced through both political and violent means. The conservative Bourbon Democrats were prominent in the south during this period, supporting fiscal conservatism or classical liberalism, and setting the foundation for the period of conservative Democratic control in the region known as the Solid South. The Gilded Age The Gilded Age took place between the 1870s and 1900. During this time, the Republican Party fractured on the issue of the spoils system in the federal government. Senator Roscoe Conkling led the conservative Stalwarts, who supported the traditional political machine and wished to retain the spoils system. Those that opposed Conkling, especially supporters of Senator James G. Blaine, made up the liberal Half-Breeds, who supported civil service reform to abolish the spoils system. The Stalwarts primarily resided in the three states most influenced by machine politics: New York, Pennsylvania, and Illinois. They were also prevalent among southern Republicans, though the Solid South was overwhelmingly Democratic. The Democratic Party continued to be divided by sectional politics during the Gilded Age. Ideologies based on monetary issues produced conflict within both major parties. Silverites opposed the nation's de facto gold standard and supported a return to bimetallism. Small government ideals were still prominent at this time, with neither major party seeking to expand the government. By the 1870s, both major political parties supported industrialization, and in response, supporters of populist agrarianism established the People's Party in 1892. The Panic of 1893 accelerated these disputes, causing a major party realignment.
The People's Party was absorbed by the Democratic Party, and the conservative Bourbon Democrats lost influence. Populism, agrarianism, and bimetallism became the dominant ideologies in the Democratic Party, led by William Jennings Bryan. Other major ideological groups during the Gilded Age include the Mugwumps, the Greenbacks, and the Prohibitionists. The Mugwumps were a loosely formed collection of anti-corruption conservatives that left the Republican Party. The Greenbacks were the largest of a series of labor related movements that advocated an increased money supply, increased government regulation, and an income tax. The Prohibitionists were a single-issue group that advocated prohibition of alcohol. The Progressive Era In the 1890s and 1900s, progressivism developed as a major political ideology in the United States. Progressives opposed the effects of industrialization in the United States, supporting major governmental and societal reform to counteract them. These reforms were inspired by the moral ethos of evangelicalism and the development of the social sciences. Progressives sought to end corruption, increase public participation in government, and expand government with the goal of improving society. The progressive movement resulted in the rejection of laissez-faire capitalism in the United States and the foundation of welfare capitalism. Progressives came from multiple political traditions and developed many new political ideas. Progressives typically supported direct democracy and oversaw several reforms that gave more voting power to the citizens. These reforms included the implementation of primary elections to choose party candidates and the direct election of senators through the ratification of the Seventeenth Amendment. Regarding social issues, progressives typically believed that the government was best fit to make decisions about behavior through social control. The most prominent example of this was the prohibition on alcohol in the 1920s. Progressives also advocated for compulsory sterilization of those deemed "unfit". Progressives in the early-20th century raised first-wave feminism and women's suffrage into the mainstream, guaranteeing universal suffrage to all women through the ratification of the Nineteenth Amendment. The Democrats during the Progressive Era moved away from the conservative, small government ideology under which they had operated in the late-19th century. The Democratic Party at this time did not advocate a single ideological system but was composed of several competing populist factions that opposed the Republican Party. The Democrats adopted a reformed view of democracy in which political candidates sought support directly rather than through intermediaries such as political machines. Many progressive reforms became popular within the Democratic Party to increase direct democracy and give citizens more power over government operations, and they also adopted the idea of the Living Constitution during this period. During the presidency of Woodrow Wilson, Wilsonianism was developed as a liberal internationalist foreign relations ideology. Republicans during the Progressive Era were divided between a conservative faction and a progressive faction. Theodore Roosevelt split from the Republican Party in 1912, and his supporters formed the short-lived Progressive Party. This party advocated a strong collectivist government and a large number of social and political reforms. Far-left ideologies also saw brief popularity during this time. 
The Socialist Party of America was led by Eugene V. Debs and advocated for collective ownership of many industries. The anarchist movement in the United States was responsible for several terrorist attacks during the 1910s. The Red Scare, a strong backlash to these leftist movements, formed in 1919. The New Deal coalition During the Great Depression, small government conservatism became less popular, and Franklin D. Roosevelt formed the New Deal coalition. The Democratic Party at this time expanded on the reformist beliefs of progressivism, establishing social liberalism and welfare capitalism as the predominant liberal ideology in the United States. Supporters of Roosevelt's liberalism advocated financial reform, increased government regulation, and social welfare programs, encapsulated in the New Deal. Conservative Republicans and southern conservative Democrats formed the conservative coalition during Roosevelt's second term. Following the presidencies of Roosevelt and Truman, however, the Democratic Party moved away from populism in the 1950s. American liberalism also shifted its perspective on poverty during this time, emphasizing it as a long term social issue rather than a crisis that could be fixed with a sufficient response. The Republican Party's progressive wing had dissipated in the build up to the Great Depression. The party instead began to advocate for small business, equal opportunity, and individualism. These ideas became the foundation of modern fiscal conservatism that would define the Republican Party through the 20th century. The foundations of modern social conservatism were also developed by the Republican Party of the 1920s and 1930s, with Herbert Hoover emphasizing politics as a means to protect the American family and American morality. Rather than strengthening of government to do good as advocated by progressivism, conservative Republicans sought to restrict the government to prevent harm. The Republican Party came out strongly against the New Deal programs of the 1930s, arguing that "big government" threatened to become tyrannical. American entry into World War II was debated between isolationists and interventionists from the onset of conflict in the European theatre in 1939 until the attack on Pearl Harbor in 1941. During the war, an ideology of self-sacrifice was promoted and adopted by the American people, including both military service and home front activities such as rationing. After the end of the war, interventionism persisted through programs such as the Marshall Plan. Fascism briefly saw popularity in the 1930s, though it was no longer relevant after World War II. Cold War and the Civil Rights Era The Cold War began in 1947, causing a shift in foreign policy. Americanism developed as its own distinct conservative ideology that rejected foreign ideas and communism in particular. The United States as a whole supported liberal democracy and capitalism in contrast with Marxism–Leninism which was supported by the Soviet Union. Anti-communism was prevalent in the United States during the Cold War, while American communist organizations typically operated in secret and often conducted espionage in collaboration with the Soviet Union. Among conservatives, this anti-communism overlapped with anti-liberalism as McCarthyism, in which all political opponents of conservatives were accused of communist sympathies. Neoconservatism also developed within the conservative movement, made up of former Democrats that were disillusioned with the party's liberalism. 
The Vietnam War took place during the Cold War, causing a significant anti-war movement within the contemporary counterculture. Both the anti-war movement and the war itself were unpopular with the public. Libertarianism developed as a minor ideology in the 1960s, and the Libertarian Party was founded in 1971 after the gold standard was abolished by President Nixon. In the 1960s, national politics focused heavily on the civil rights movement, and the New Deal coalition ended as support for civil rights and racial justice became major aspects of liberalism in the United States. Civil rights legislation such as the Civil Rights Act of 1964 alienated the conservative Southern Democrats. White supremacy was widespread in the southern United States, with third-party white supremacist candidates winning in southern states in the 1948 and 1968 presidential elections. Political ideology evolved significantly in the African American community during the civil rights movement as the community developed its own political voice. The two most prominent civil rights ideologies were the liberal ideology of racial integration through political demonstration championed by Martin Luther King Jr. and the separatist ideology of Black nationalism championed by Malcolm X. Other civil rights ideologies included liberal ideas of incentivizing integration through private action, socialist ideas of forgoing race issues in favor of class issues, and Black conservative ideas of personal responsibility for African Americans. Conservatives opposed government intervention designed to increase employment for African Americans and opposed extending civil rights protections, believing that these policies would hurt African Americans economically and would make the United States a liberal welfare state. Reagan Era Though conservatives opposed welfare spending during the New Deal era, this opposition did not become a core tenet of American conservatism until the 1970s. Southern conservatives were united under the Republican Party at this time through the Southern strategy. Conservatism had been seen as a dying ideology following the defeat of Barry Goldwater in the 1964 presidential election, but the Reagan administration in the 1980s returned American conservatism to the political mainstream. The Reagan coalition brought together segregationists, businessmen, conservatives, neoconservatives, libertarians, and the religious new right, including Christian fundamentalists, Evangelicals, Catholics, and Jews. They rejected the leftward shift of the country in the previous decades, instead advocating laissez-faire economics and traditional values while opposing communism and the civil rights movement. Social conservatism became a prominent ideology in politics during the Reagan Era, fueled by opposition to abortion and the Equal Rights Amendment. The Tuesday Group was founded in 1995 to represent the moderate wing of the Republican Party in Congress. Liberals in the 1970s and 1980s expanded their focus on inclusivity and minority rights. In the 1990s, support for conservative policies resulted in Third Way politics becoming popular in the Democratic Party, led by the New Democrats. This ideology consisted of support for free trade, free markets, and reduction of government spending. The left-wing Congressional Progressive Caucus, the centrist and conservative Blue Dog Coalition, and the Third Way New Democrat Coalition formed in the 1990s to represent different factions of the Democratic Party in Congress.
21st century After the September 11 attacks, neoconservatism became a dominant force in the conservative movement, and conservatives supported the Bush Doctrine, a foreign policy principle that encouraged foreign military involvement as the Bush administration pursued the war on terror. The peace movement subsequently resurged in the United States in response to the War in Afghanistan and the Iraq War. In the years following the September 11 attacks, a distinct form of patriotism developed based on American values, democracy promotion, and nationality derived from principle. Following the end of the Cold War, the focus of American anti-communism shifted to China as it became a world power. The early-21st century saw the emergence or reemergence of several social issues as subjects of political debate. Liberals increasingly expressed support for LGBT rights (including same-sex marriage) while conservatives predominantly expressed opposition to LGBT rights. Among young single women, the percentage of them identifying as liberal increased from about 15 percent in the early 1980s to 32 percent in the 2020s. The past decade has seen single young men move slightly to the right and single young women move significantly to the left, meaning that the ideological divide between the sexes is widening. Illegal immigration became more prominent as a political issue, with liberals advocating pluralism and conservatives advocating nativism. The COVID-19 pandemic in 2020 became a political issue in which liberals supported COVID-19 lockdowns and the use of face masks while conservatives opposed such measures and considered the pandemic a non-issue. The 2010s were marked by increasing polarization and populism among candidates and voters. The Tea Party movement formed as a libertarian, right-wing populist and conservative response to the election of Barack Obama in 2008. Members of the movement advocated for smaller government, lower taxes, and decreased government spending. This populism in turn led to Trumpism following the election of Donald Trump in 2016. Right-wing populism during this period focused on protectionist fiscal conservatism as well as cultural issues surrounding immigration and identity politics. Trumpism also incorporated an opposition to democratic norms and an acceptance of political conspiracy theories as mainstream ideas. Left-wing populism became more influential during the 2010s, beginning with the Occupy movement in 2011. Left-wing populist ideologies popularized in the 2010s include social democracy and democratic socialism due to the popularity of politicians such as Bernie Sanders. Prominent ideologies Political ideology in the United States is usually described with the left–right spectrum. Liberalism is the predominant left-leaning ideology and conservatism is the predominant right-leaning ideology. Those who hold beliefs between liberalism and conservatism or a mix of beliefs on this scale are called moderates. Within this system, there are different ways to divide these ideologies even further and determine one's ideology. Ideological positions can be divided into social issues and economic issues, and the positions a person holds on social or economic policy might be different than their position on the political spectrum. The United States has a de facto two-party system. The political parties are flexible and have undergone several ideological shifts over time. 
Since the mid-20th century, the Democratic Party has typically supported liberal policies and the Republican Party has typically supported conservative policies. Third parties play a minor role in American politics, and members of third parties rarely hold office at the federal level. Instead, ideas with popular support are often adopted by one of the two major parties. Conservatism Modern conservatism in the United States traces its origins to the small government principles of the Republican Party in the 1920s, and it developed through opposition to communism, the New Deal coalition and the civil rights movement in the mid-20th century. The rise of the Reagan coalition led to the election of Ronald Reagan in 1980, establishing conservatism as a major ideology in the United States. This coalition advocated laissez-faire economics, social conservatism, and anti-communism, with support from libertarians, northern businessmen, southern segregationists, and the Christian right. In the early 21st century, right-wing populism and neo-nationalism gained considerable influence among the conservative movement. Right-wing populism became the predominant conservative faction in response to the increasing liberalization of society, beginning with the Tea Party movement of 2009 and continuing with the presidency of Donald Trump. There are several different schools of thought within American conservatism. Social conservatives and the Christian right advocate traditional values, decentralization, and religious law, fearing that the United States is undergoing moral decline. Fiscal conservatives (or classical liberals) advocate small government, tax cuts, and lower government spending. Americans that identify as conservative will typically support most or all of these ideas to some extent, arguing that small government and traditional values are closely linked. American right-wing populists advocate tax cuts, protectionism, and opposition to immigration, framing politics as a battle against "elites" from above and "subversives" from below. Conservatism in the United States does not advocate a unified foreign policy ideology, though common tenets include support for American hegemony, promotion of free markets abroad, and combat readiness. Realism was prominent in conservative foreign policy during the mid-20th century, advocating cautious advances in influence through diplomacy to advance American interests. Support for realism fell among conservatives during the Reagan administration in favor of American exceptionalism and more aggressive anti-communism. Neoconservatives form an interventionist wing of the conservative movement, advocating peace through strength and the use of force to promote democracy and combat threats abroad. Other conservative ideologies support isolationism and limited involvement in foreign affairs. As of 2021, over one-third of the American public self-identifies as conservative. The Republican Party represents conservatives in the United States, with 74% of Republicans identifying as conservative, compared to only 12% of Democrats. As of 2022, Republican leaning voters are more likely than Democrats to prioritize the issues of immigration, the budget deficit, and strengthening the military. A Pew Research study in 2015 found that the most reliable Republican demographics were Mormons and Evangelicals, particularly white Americans in each group. 
Liberalism Modern liberalism in the United States originates from the reforms advocated by the progressive movement of the early 20th century. Franklin D. Roosevelt implemented the New Deal in response to the Great Depression, and the New Deal programs defined social liberalism in the United States, establishing it as a major ideology. In the 1960s, it expanded to include support for the civil rights movement. Following the rise of the Reagan coalition in the 1980s and the shift toward conservatism in the United States, American liberals adopted Third Way liberalism. A movement of left-wing populism emerged within liberalism following the Great Recession and Occupy Wall Street. Liberalism in the United States is founded on support for strong civil liberties, cultural liberalism, and cultural pluralism. Liberal social beliefs include support for more government intervention to fight poverty and other social issues through programs such as welfare and a social safety net, as well as opposition to government intervention in moral and social behavior. Liberal economic beliefs include support for a mixed economy that uses a capitalist system maintained with economic interventionism and regulation, as well as opposition to both laissez-faire capitalism and socialism as means to distribute economic resources. Keynesian economics commonly factor into liberal economic policy. Those that identify as liberal will typically support liberal economic policies as a means to support liberal social policies. Liberals within the modern progressive movement support greater redistribution of wealth, increases to the federal minimum wage, a mandatory single-payer healthcare system, and environmental justice. Liberal internationalism is a key component of American foreign policy, supporting increased involvement in the affairs of other countries to promote liberalism and seek liberal peace. This ideology was first developed in the United States as Wilsonianism during World War I, replacing the expansionism of the Roosevelt Corollary. Liberal internationalism has been the dominant foreign policy ideology of the United States since the 1950s. Realism grew in popularity among liberals in the early-21st century in response to the interventionist neoconservatism of the Bush administration. Progressive Americans support pacifism and antihegemonism in foreign policy. As of 2021, about one quarter of the American public self-identifies as liberal, making it the smallest of the mainstream ideological groups. The Democratic Party represents liberals in the United States, with 50% of Democrats identifying as liberal, compared to only 4% of Republicans. As of 2022, Democratic leaning voters are more likely than Republicans to prioritize the issues of the COVID-19 pandemic, climate change, race, and poverty. A Pew Research study in 2015 found that the most reliable Democratic demographics were African Americans, atheists, and Asian Americans. Moderates Moderates prioritize compromise and pragmatism, and moderate politics vary depending on the political circumstances of the era. During the American Revolution, moderates generally supported the ideas of the revolutionary Patriots, but they were concerned about the potential consequences of open revolution. During the Civil War, southern moderates opposed secession, while northern moderates advocated a more gradual response to slavery than the total abolitionism and enforcement of civil rights proposed by Radical Republicans. 
During Reconstruction, moderate Republicans sought to increase support for civil rights in the South instead of implementing them through force. In the 1950s, Dwight D. Eisenhower operated under his policy of "Modern Republicanism" that promoted moderate politics in response to the New Deal coalition and the Conservative coalition. Moderates identify as neither liberal nor conservative, holding a mix of beliefs that does not necessarily correspond to either group. They typically believe that issues are too complex for simple partisan solutions to work and that the two major political parties are too ideological. Some policy stances have strong support from moderates, including background checks on gun purchases and investing in renewable energy. Beyond a resistance to the terms liberal and conservative, there is little that unites moderates ideologically, and moderates can hold a variety of political positions. As of 2021, over one-third of the American public self-identifies as moderate. Self-identified moderates make up about one-third of the Democratic Party, about one-fifth of the Republican Party, and about half of independents. Minor ideologies Fascism Fascism never achieved success in American politics. There were, however, prominent American supporters of fascism in the 1930s, including Henry Ford. Charles Coughlin, at one point the second most popular radio host in the United States, openly advocated fascist ideals during his program. A minority of Americans at the time were also sympathetic to fascism because of its antisemitism, its anti-communism, and what was perceived as its economic success. Antisemitism in the United States was common at the time, and many antisemitic groups openly expressed these views. The Friends of New Germany and its successor the German American Bund represented the largest Nazi organizations in the United States, which is estimated to have had 25,000 members. Present day Nazism is called Neo-Nazism. Many factors have been proposed that cause someone to radicalize and adopt Nazism, including a traumatic past, a search for meaning through extremism, and a propensity to violence or aggression. A 2017 poll found that 9% of Americans believe neo-Nazi beliefs are "acceptable". The Federal Bureau of Investigation recognized neo-Nazis as a major domestic terror threat in 2020. The words "fascist" and "Nazi" are sometimes used erroneously as epithets to describe political figures and ideologies, but these uses of the terms are generally disputed by academics that study the subject. The mid-2010s saw the brief rise in the alt-right movement, which was characterized by its sympathy to fascist ideologies and opposition to multiracial liberal democracy. Libertarianism Developed in the mid-20th century as a revival of classical liberalism, libertarianism in the United States (dominantly right-libertarianism) is founded on the ideas of severely limited government, with supporters of libertarianism advocating fiscal conservatism and reduction of social programs, social liberalism, and isolationist foreign policy. Libertarians make up a notable minority group in American politics, with about 11% of Americans saying that the term describes them well as of 2014. Men were twice as likely to identify with the term as women, and Democrats were half as likely to identify with the term as Republicans or independents. As of 2013, 68% of libertarians were men, 94% of libertarians were white, and 62% of libertarians were under the age of 50. 
Religiously, 50% of libertarians were Protestant, 27% were religiously unaffiliated, and 11% were Catholic. Libertarianism is promoted by the Libertarian Party, the largest minor party in the United States. Libertarians in the United States typically vote for the Republican Party, with only a small portion voting for the Democratic Party or the Libertarian Party. Some libertarians have begun voting for the Democratic Party since 2020 in response to the rise of right-wing populism in the Republican Party. Some major think tanks in the United States operate from a libertarian perspective, including the Cato Institute and the Reason Foundation. Monarchism Before the American Revolution, the Thirteen Colonies were ruled by the Crown of Great Britain. The Founding Fathers of the United States largely rejected monarchism in favor of republicanism, and the Revolutionary War was fought to free the colonies from monarchy. About one-fifth of Americans during the revolution were part of the loyalist faction that wished to remain a monarchy under the British crown, and after the United States became an independent country, thousands of loyalists emigrated to Britain or to other colonies. Following the revolution, some individuals supported the continuation of monarchism in the United States. Most notably, Alexander Hamilton proposed an elective monarchy as the American system of government, favoring a strong executive with lifetime rule. Other supporters of monarchism at the time include the military officers that advocated in the Newburgh letter that George Washington become a monarch and the alleged Prussian scheme that sought to put the United States under the rule of Prince Henry of Prussia. No major monarchist movements have emerged since the 18th century. The Constantian Society advocated monarchy in the late 20th century, but it did not see mainstream success. Elements of monarchism still exist, however, in the function of the United States presidency. The office had many of its functions based on those of the British monarch, including its status as a unitary executive, its capacity over foreign affairs, and powers such as the presidential veto. Dark Enlightenment, a fringe neoreactionary movement that emerged in the late 2000s whose adherents include Curtis Yarvin, Peter Thiel, and Steve Bannon, largely rejects democracy and advocates for the implementation of either an absolute monarchy or a style of governance similar to a monarchy. Separatism Many separatist movements have advocated secession from the United States, though none have achieved major support since the American Civil War. The most significant separatist movement was secessionism in the southern United States in the 1860s. Politicians from the southern states declared independence and established the Confederate States of America, an unrecognized government led by Jefferson Davis, resulting in the American Civil War. Following the Civil War, the states were reincorporated into the union, and the Supreme Court ruled that unilateral secession was unconstitutional in Texas v. White. The Republic of Texas was created when it seceded from Mexico before ultimately joining the United States. Since the admission of Texas as a state, various Texas secession movements have developed. A common misconception purports that Texas reserved the right to secede when it was admitted, but no such legal provision exists. 
Since at least the 1970s, various groups within the Pacific Northwest have advocated for the region to separate and form its own nation, largely based on the strong cultural, environmental, and demographic similarities the various states in the region share. Notable examples include the Cascadia independence movement, which advocates for Oregon, Washington, and the Canadian province of British Columbia to separate based on their economic, environmental, and cultural ties, and the Northwest Territorial Imperative, in which white supremacists advocated creating an ethnostate in the region due to its largely white demographics and isolated geography. The status of Puerto Rico in the United States has long been debated, with independence being considered as an alternative to statehood. Other notable separatist groups in the United States include Ka Lahui Hawaii and the Alaskan Independence Party, both of which have had membership in the tens of thousands. Other notable proposals for secession have been suggested in the past. The Kentucky Resolution by Thomas Jefferson threatened secession in response to the Alien and Sedition Acts in 1798. The Nullification crisis represented another threat of secession in 1832. In the 21st century, political polarization has resulted in higher support for a division of the United States. As of 2021, two-thirds of Republicans in Southern states support a renewed Confederacy. Some extremist groups support racial separatism, which advocates separatism on the basis of race or ethnicity instead of geography. White separatism and Black separatism advocate the creation of ethnostates along racial lines. Socialism Socialists advocate the abolition of private property and social hierarchy in favor of collective ownership of the means of production. The Socialist Party of America was founded in 1901, and it saw moderate success as a third party, electing two members to Congress and running Eugene V. Debs as a notable third-party candidate in the 1912 and 1920 presidential elections. At the same time, anarchism gained a following in the United States and became the motivating ideology behind a wave of left-wing terrorism, including several bombings and the assassination of William McKinley. Following the Russian Revolution, socialism was negatively received by Americans, and strong social backlash to socialism resulted in the Red Scare. Anti-socialism and anti-communism began to play a larger role in American politics during the Cold War. The New Left briefly existed as a socialist movement in the 1960s and 1970s. In the 21st century, perceptions of socialism have improved in the United States, especially among young Americans. , the Democratic Socialists of America is the largest socialist group in the United States, reporting over 92,000 members and having elected members to Congress under the Democratic Party. This group advocates democratic socialism, including the nationalization of major industries and the transfer of other industries from private ownership to workers' ownership. The words "socialist" and "communist" are sometimes used erroneously as epithets to describe political figures and ideologies. Many politicians, political groups, and policies in the United States have been referred to as socialist despite supporting welfare capitalism with government programs and regulations. 
When polled, a significant portion of Americans were unable to accurately identify what socialism was, believing it to refer to government spending, welfare programs, equal rights, or liberalism, and 23 percent had no opinion. Demographics of ideological groups Men in the United States tend to be slightly more conservative than women. As of 2021, 41% of men identified as conservative, compared to 32% of women. Voter turnout tends to be slightly higher among women than among men. A gender gap has been found to exist in voting patterns, with women more likely to vote for the Democratic Party since the 1970s. Military intervention and the death penalty are significantly more popular among men than women, while gun control and social welfare programs are significantly more popular among women than men. Men that identify with hypermasculinity and women that identify with hyperfemininity have been found to lean more conservative than those that do not. Race is correlated with partisanship in the United States. White Americans are more likely to support Republican candidates. The majority of African Americans have been Democrats since 1936, and they continue to be seen as a reliable voting bloc for the Democratic Party, with as many as 82% of African Americans identifying as Democrats in 2000. Black political candidates are generally perceived as more liberal than white candidates. Asian Americans do not have a shared national and political identity, and as such are not considered a distinct voting bloc, though they have increasingly supported the Democratic Party in the 21st century. Native Americans slightly favor the Democratic Party, though Native American tribes are often separated from American society and do not participate heavily in national politics. Younger Americans tend to lean liberal, while older Americans tend to lean conservative. As of 2021, 23% of Americans aged 18 to 29 are conservative, compared to 45% of Americans aged 65 and up. Likewise, 34% of Americans aged 18 to 29 are liberal compared to 21% aged 65 and up. Americans' political ideologies generally do not change much as they grow older, but ideological shifts in one's life are more likely to move to the right than to the left. Younger voters and older voters typically consider the same factors when voting. After reaching their mid-60s, correct voting sharply declines among voters, with a majority of elderly voters in their 80s and 90s casting votes that contradict their stated beliefs. This is attributed to decreasing cognitive capabilities as well as a reduced ability to access up-to-date information due to slower manual dexterity and difficulty using technology. As of 2014, Christians make up 85% of conservatives and 52% of liberals, non-Christian faiths make up 3% of conservatives and 10% of liberals, and the religiously unaffiliated make up 11% of conservatives and 36% of liberals. A majority of Mormons and Evangelical Protestants and a plurality of Catholics in the United States identify as conservative, while a plurality of Buddhists, Hindus, Jews, and the irreligious identify as liberal. Identifying with a religious tradition has been found to reduce political participation, but participation in church activities has been found to increase political participation. Religious Americans that believe in a God who intervenes in human affairs are less likely to participate in politics. Political beliefs and religious beliefs in the United States are closely intertwined, with both affecting the other.
Highly educated Americans are more likely to be liberal. In 2015, 44% of Americans with college degrees identified as liberal, while 29% identified as conservative. Americans without college experience were about equally likely to identify as liberal or conservative, with roughly half identifying as having mixed political values. This divide primarily exists between educated and uneducated white voters, and it marks a reversal of previous trends where college-educated whites were more conservative. Several reasons for this phenomenon have been proposed, including college graduates spending more time in liberal cities, a prioritization of science over traditional authority, college students being exposed to new ideas, and conservative distrust of higher education. Income is not a major factor in political ideology. In 2021, each income group had a nearly identical distribution of ideologies, matching the general population. Comparison to global politics While liberal and conservative are the primary ideological descriptors in the United States, they do not necessarily correlate to usage of the terms in other countries. In the United States, liberalism refers specifically to social liberalism and cultural liberalism, and it leans farther to the left than liberalism in other countries. Conservatism is derived from the traditions of a society, so American conservatism reflects the ideas of classical liberalism and Christian belief that were dominant in the early history of the United States. The American conception of freedom is distinct on the world stage, with freedom often recognized as limitations on state power rather than obligations of the state. The right to property is given high priority, and taxes are particularly unpopular. Activism and personal participation in politics are encouraged, and civic engagement is considered a trait of good citizenship. Membership in civic organizations and participation in protests are common forms of civic engagement. Equal opportunity is typically more popular than equality of outcome. Historically, the ideology of the United States was based in constitutional republicanism. This came directly in opposition to the monarchism and aristocracy of European kingdoms and of Great Britain in particular. This political history of constitutional republicanism is closely related to that of South America. Both regions have a shared history of colonialism, revolutionary war, federalist republicanism, and presidential systems. Political traits that are sometimes considered distinct to the United States are also common in South America, including common ideological positions on religion, crime, economy, national identity, multiculturalism, and guns. Political ideology is one of the primary factors to which the Cold War is attributed, and it affects how the United States operates as a global superpower. American ideology is centered in liberal democracy and capitalism, and global politics in second half of the 20th century was defined by its opposition to the Marxism–Leninism of the Soviet Union and the Eastern Bloc. The United States has undertaken nation-building in several countries, directly influencing the political systems of the Philippines, Germany, Austria, Japan, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq. American politics is dominated by individualist ideology instead of the collectivist ideology that influences politics in some European countries. 
American citizens expect less influence and intervention by the government and are less likely to accept government intervention compared to citizens of European countries. Ideologies that advocate collective rights are not well received by American voters if they come at the cost of individual rights. Americans and Western Europeans have a similar conception of democracy and governance, prioritizing a free judiciary and fair elections at about equal levels. Americans and Western Europeans are also similarly progressive on issues such as LGBT rights and gender equality. Americans, however, place a higher priority on freedom of religion than Western Europeans do, and Americans are more likely to believe that individual success is within a person's control. Both social democracy and nativism have become more prominent in the 21st century United States, resembling their counterparts in many European countries. Democracy in both the United States and European countries is threatened by rising anti-establishment sentiment and the resulting extremism and polarization. The two-party system and Congressional gridlock make the United States more susceptible to polarization than countries with other systems, though this structure also prevents extremist parties from taking power. See also Anarchism in the United States Pacifism in the United States Factions in the Democratic Party (United States) Factions in the Republican Party (United States) Pew Research Center political typology Political culture of the United States Politics of the United States Red states and blue states Southernization References Bibliography Wood, Gordon S. "Classical Republicanism and the American Revolution." Chicago-Kent Law Review 66 (1990): 13+ online.
History of philosophy
The history of philosophy is the systematic study of the development of philosophical thought. It focuses on philosophy as rational inquiry based on argumentation, but some theorists also include myth, religious traditions, and proverbial lore. Western philosophy originated with an inquiry into the fundamental nature of the cosmos in Ancient Greece. Subsequent philosophical developments covered a wide range of topics including the nature of reality and the mind, how people should act, and how to arrive at knowledge. The medieval period was focused more on theology. The Renaissance period saw a renewed interest in Ancient Greek philosophy and the emergence of humanism. The modern period was characterized by an increased focus on how philosophical and scientific knowledge is created. Its new ideas were used during the Enlightenment period to challenge traditional authorities. Influential developments in the 19th and 20th centuries included German idealism, pragmatism, positivism, formal logic, linguistic analysis, phenomenology, existentialism, and postmodernism. Arabic–Persian philosophy was strongly influenced by Ancient Greek philosophers. It had its peak period during the Islamic Golden Age. One of its key topics was the relation between reason and revelation as two compatible ways of arriving at the truth. Avicenna developed a comprehensive philosophical system that synthesized Islamic faith and Greek philosophy. After the Islamic Golden Age, the influence of philosophical inquiry waned, partly due to Al-Ghazali's critique of philosophy. In the 17th century, Mulla Sadra developed a metaphysical system based on mysticism. Islamic modernism emerged in the 19th and 20th centuries as an attempt to reconcile traditional Islamic doctrines with modernity. Indian philosophy is characterized by its combined interest in the nature of reality, the ways of arriving at knowledge, and the spiritual question of how to reach enlightenment. Its roots are in the religious scriptures known as the Vedas. Subsequent Indian philosophy is often divided into orthodox schools, which are closely associated with the teachings of the Vedas, and heterodox schools, like Buddhism and Jainism. Influential schools based on them include the Hindu schools of Advaita Vedanta and Navya-Nyāya as well as the Buddhist schools of Madhyamaka and Yogācāra. In the modern period, the exchange between Indian and Western thought led various Indian philosophers to develop comprehensive systems. They aimed to unite and harmonize diverse philosophical and religious schools of thought. Central topics in Chinese philosophy were right social conduct, government, and self-cultivation. In early Chinese philosophy, Confucianism explored moral virtues and how they lead to harmony in society while Daoism focused on the relation between humans and nature. Later developments include the introduction and transformation of Buddhist teachings and the emergence of the schools of Xuanxue and Neo-Confucianism. The modern period in Chinese philosophy was characterized by its encounter with Western philosophy, specifically with Marxism. Other influential traditions in the history of philosophy were Japanese philosophy, Latin American philosophy, and African philosophy. Definition and related disciplines The history of philosophy is the field of inquiry that studies the historical development of philosophical thought. 
It aims to provide a systematic and chronological exposition of philosophical concepts and doctrines, as well as the philosophers who conceived them and the schools of thought to which they belong. It is not merely a collection of theories but attempts to show how these theories are interconnected. For example, some schools of thought build on earlier theories, while others reject them and offer alternative explanations. Purely mystical and religious traditions are often excluded from the history of philosophy if their claims are not based on rational inquiry and argumentation. However, some theorists treat the topic broadly, including the philosophical aspects of traditional worldviews, religious myths, and proverbial lore. The history of philosophy has both a historical and a philosophical component. The historical component is concerned with how philosophical thought has unfolded throughout the ages. It explores which philosophers held particular views and how they were influenced by their social and cultural contexts. The philosophical component, on the other hand, evaluates the studied theories for their truth and validity. It reflects on the arguments presented for these positions and assesses their hidden assumptions, making the philosophical heritage accessible to a contemporary audience while evaluating its continued relevance. Some historians of philosophy focus primarily on the historical component, viewing the history of philosophy as part of the broader discipline of intellectual history. Others emphasize the philosophical component, arguing that the history of philosophy transcends intellectual history because its interest is not exclusively historical. It is controversial to what extent the history of philosophy can be understood as a discipline distinct from philosophy itself. Some theorists contend that the history of philosophy is an integral part of philosophy. For example, Neo-Kantians like Wilhelm Windelband argue that philosophy is essentially historical and that it is not possible to understand a philosophical position without understanding how it emerged. Closely related to the history of philosophy is the historiography of philosophy, which examines the methods used by historians of philosophy. It is also interested in how dominant opinions in this field have changed over time. Different methods and approaches are used to study the history of philosophy. Some historians focus primarily on philosophical theories, emphasizing their claims and ongoing relevance rather than their historical evolution. Another approach sees the history of philosophy as an evolutionary process, assuming clear progress from one period to the next, with earlier theories being refined or replaced by more advanced later theories. Other historians seek to understand past philosophical theories as products of their time, focusing on the positions accepted by past philosophers and the reasons behind them, often without concern for their relevance today. These historians study how the historical context and the philosopher's biography influenced their philosophical outlook. Another important methodological feature is the use of periodization, which involves dividing the history of philosophy into distinct periods, each corresponding to one or several philosophical tendencies prevalent during that historical timeframe. Traditionally, the history of philosophy has focused primarily on Western philosophy. 
However, in a broader sense, it includes many non-Western traditions such as Arabic–Persian philosophy, Indian philosophy, and Chinese philosophy. Western Western philosophy refers to the philosophical traditions and ideas associated with the geographical region and cultural heritage of the Western world. It originated in Ancient Greece and subsequently expanded to the Roman Empire, later spreading to Western Europe and eventually reaching other regions, including North America, Latin America, and Australia. Spanning over 2,500 years, Western philosophy began in the 6th century BCE and continues to evolve today. Ancient Western philosophy originated in Ancient Greece in the 6th century BCE. This period is conventionally considered to have ended in 529 CE when the Platonic Academy and other philosophical schools in Athens were closed by order of the Byzantine Emperor Justinian I, who sought to suppress non-Christian teachings. Presocratic The first period of Ancient Greek philosophy is known as Presocratic philosophy, which lasted until about the mid-4th century BCE. Studying Presocratic philosophy can be challenging because many of the original texts have only survived in fragments and often have to be reconstructed based on quotations found in later works. A key innovation of Presocratic philosophy was its attempt to provide rational explanations for the cosmos as a whole. This approach contrasted with the prevailing Greek mythology, which offered theological interpretations—such as the myth of Uranus and Gaia—to emphasize the roles of gods and goddesses who continued to be worshipped even as Greek philosophy evolved. The Presocratic philosophers were among the first to challenge traditional Greek theology, seeking instead to provide empirical theories to explain how the world came into being and why it functions as it does. Thales (c. 624–545 BCE), often regarded as the first philosopher, sought to describe the cosmos in terms of a first principle, or arche. He identified water as this primal source of all things. Anaximander (c. 610–545 BCE) proposed a more abstract explanation, suggesting that the eternal substance responsible for the world's creation lies beyond human perception. He referred to this arche as the apeiron, meaning "the boundless". Heraclitus (c. 540–480 BCE) viewed the world as being in a state of constant flux, stating that one cannot step into the same river twice. He also emphasized the role of logos, which he saw as an underlying order governing both the inner self and the external world. In contrast, Parmenides (c. 515–450 BCE) argued that true reality is unchanging, eternal, and indivisible. His student Zeno of Elea (c. 490–430 BCE) formulated several paradoxes to support this idea, asserting that motion and change are illusions, as illustrated by his paradox of Achilles and the Tortoise. Another significant theory from this period was the atomism of Democritus (c. 460–370 BCE), who posited that reality is composed of indivisible particles called atoms. Other notable Presocratic philosophers include Anaximenes, Pythagoras, Xenophanes, Empedocles, Anaxagoras, Leucippus, and the sophists, such as Protagoras and Gorgias. Socrates, Plato, and Aristotle The philosophy of Socrates (469–399 BCE) and Plato (427–347 BCE) built on Presocratic philosophy but also introduced significant changes in focus and methodology. 
Socrates did not write anything himself, and his influence is largely due to the impact he made on his contemporaries, particularly through his approach to philosophical inquiry. This method, often conducted in the form of Socratic dialogues, begins with simple questions to explore a topic and critically reflect on underlying ideas and assumptions. Unlike the Presocratics, Socrates was less concerned with metaphysical theories and more focused on moral philosophy. Many of his dialogues explore the question of what it means to lead a good life by examining virtues such as justice, courage, and wisdom. Despite being regarded as a great teacher of ethics, Socrates did not advocate specific moral doctrines. Instead, he aimed to prompt his audience to think for themselves and recognize their own ignorance. Most of what is known about Socrates comes from the writings of his student Plato. Plato's works are presented in the form of dialogues between various philosophers, making it difficult to determine which ideas are Socrates' and which are Plato's own theories. Plato's theory of forms asserts that the true nature of reality is found in abstract and eternal forms or ideas, such as the forms of beauty, justice, and goodness. The physical and changeable world of the senses, according to Plato, is merely an imperfect copy of these forms. The theory of forms has had a lasting influence on subsequent views of metaphysics and epistemology. Plato is also considered a pioneer in the field of psychology. He divided the soul into three faculties: reason, spirit, and desire, each responsible for different mental phenomena and interacting in various ways. Plato also made contributions to ethics and political philosophy. Additionally, Plato founded the Academy, which is often considered the first institution of higher education. Aristotle (384–322 BCE), who began as a student at Plato's Academy, became a systematic philosopher whose teachings were transcribed into treatises on various subjects, including the philosophy of nature, metaphysics, logic, and ethics. Aristotle introduced many technical terms in these fields that are still used today. While he accepted Plato's distinction between form and matter, he rejected the idea that forms could exist independently of matter, arguing instead that forms and matter are interdependent. This debate became central to the problem of universals, which was discussed by many subsequent philosophers. In metaphysics, Aristotle presented a set of basic categories of being as a framework for classifying and analyzing different aspects of existence. He also introduced the concept of the four causes to explain why change and movement occur in nature. According to his teleological cause, for example, everything in nature has a purpose or goal toward which it moves. Aristotle's ethical theory emphasizes that leading a good life involves cultivating virtues to achieve eudaimonia, or human flourishing. In logic, Aristotle codified rules for correct inferences, laying the foundation for formal logic that would influence philosophy for centuries. Hellenistic and Roman After Aristotle, ancient philosophy saw the rise of broader philosophical movements, such as Epicureanism, Stoicism, and Skepticism, which are collectively known as the Hellenistic schools of thought. These movements primarily focused on fields like ethics, physics, logic, and epistemology. 
This period began with the death of Alexander the Great in 323 BCE and had its main influence until the end of the Roman Republic in 31 BCE. The Epicureans built upon and refined Democritus's idea that nature is composed of indivisible atoms. In ethics, they viewed pleasure as the highest good but rejected the notion that luxury and indulgence in sensory pleasures lead to long-term happiness. Instead, they advocated a nuanced form of hedonism, where a simple life characterized by tranquillity was the best way to achieve happiness. The Stoics rejected this hedonistic outlook, arguing that desires and aversions are obstacles to living in accordance with reason and virtue. To overcome these desires, they advocated self-mastery and an attitude of indifference. The skeptics focused on how judgments and opinions impact well-being. They argued that dogmatic beliefs lead to emotional disturbances and recommended that people suspend judgments on matters where certainty is unattainable. Some skeptics went further, claiming that this suspension of judgment should apply to all beliefs, suggesting that any form of knowledge is impossible. The school of Neoplatonism, which emerged in the later part of the ancient period, began in the 3rd century CE and reached its peak by the 6th century CE. Neoplatonism inherited many ideas from Plato and Aristotle, transforming them in creative ways. Its central doctrine posits a transcendent and ineffable entity responsible for all existence, referred to as "the One" or "the Good." From the One emerges the Intellect, which contemplates the One, and this, in turn, gives rise to the Soul, which generates the material world. Influential Neoplatonists include Plotinus (204–270 CE) and his student Porphyry (234–305 CE). Medieval The medieval period in Western philosophy began between 400 and 500 CE and ended between 1400 and 1500 CE. A key distinction between this period and earlier philosophical traditions was its emphasis on religious thought. The Christian Emperor Justinian ordered the closure of philosophical schools, such as Plato's Academy. As a result, intellectual activity became concentrated within the Church, and diverging from doctrinal orthodoxy was fraught with risks. Due to these developments, some scholars consider this era a "dark age" compared to what preceded and followed it. Central topics during this period included the problem of universals, the nature of God, proofs for the existence of God, and the relationship between reason and faith. The early medieval period was heavily influenced by Plato's philosophy, while Aristotelian ideas became dominant later. Augustine of Hippo (354–430 CE) was deeply influenced by Platonism and utilized this perspective to interpret and explain key concepts and problems within Christian doctrine. He embraced the Neoplatonist idea that God, or the ultimate source, is both good and incomprehensible. This led him to address the problem of evil—specifically, how evil could exist in a world created by a benevolent, all-knowing, and all-powerful God. Augustine's explanation centered on the concept of free will, asserting that God granted humans the ability to choose between good and evil, along with the responsibility for those choices. Augustine also made significant contributions in other areas, including arguments for the existence of God, his theory of time, and his just war theory. Boethius (477–524 CE) had a profound interest in Greek philosophy. 
He translated many of Aristotle's works and sought to integrate and reconcile them with Christian doctrine. Boethius addressed the problem of universals and developed a theory to harmonize Plato's and Aristotle's views. He proposed that universals exist in the mind without matter in one sense, but also exist within material objects in another sense. This idea influenced subsequent medieval debates on the problem of universals, inspiring nominalists to argue that universals exist only in the mind. Boethius also explored the problem of the trinity, addressing the Christian doctrine of how God can exist as three persons—Father, Son, and Holy Spirit—simultaneously. Scholasticism The later part of the medieval period was dominated by scholasticism, a philosophical method heavily influenced by Aristotelian philosophy and characterized by systematic and methodological inquiry. The intensified interest in Aristotle during this period was largely due to the Arabic–Persian tradition, which preserved, translated, and interpreted many of Aristotle's works that had been lost in the Western world. Anselm of Canterbury (1033–1109 CE) is often regarded as the father of scholasticism. He viewed reason and faith as complementary, each depending on the other for a fuller understanding. Anselm is best known for his ontological argument for the existence of God, where he defined God as the greatest conceivable being and argued that such a being must exist outside of the mind. He posited that if God existed only in the mind, He would not be the greatest conceivable being, since a being that exists in reality is greater than one that exists only in thought. Peter Abelard (1079–1142) similarly emphasized the harmony between reason and faith, asserting that both emerge from the same divine source and therefore cannot be in contradiction. Abelard was also known for his nominalism, which claimed that universals exist only as mental constructs. Thomas Aquinas (1224–1274 CE) is often considered the most influential medieval philosopher. Rooted in Aristotelianism, Aquinas developed a comprehensive system of scholastic philosophy that encompassed areas such as metaphysics, theology, ethics, and political theory. Many of his insights were compiled in his seminal work, the Summa Theologiae. A key goal in Aquinas's writings was to demonstrate how faith and reason work in harmony. He argued that reason supports and reinforces Christian tenets, but faith in God's revelation is still necessary since reason alone cannot comprehend all truths. This is particularly relevant to claims such as the eternality of the world and the intricate relationship between God and His creation. In metaphysics, Aquinas posited that every entity is characterized by two aspects: essence and existence. Understanding a thing involves grasping its essence, which can be done without perceiving whether it exists. However, in the case of God, Aquinas argued that His existence is identical to His essence, making God unique. In ethics, Aquinas held that moral principles are rooted in human nature. He believed that ethics is about pursuing what is good and that humans, as rational beings, have a natural inclination to pursue the Good. In natural theology, Aquinas's famous Five Ways are five arguments for the existence of God. Duns Scotus (1266–1308 CE) engaged critically with many of Aquinas's ideas. In metaphysics, Scotus rejected Aquinas's claim of a real distinction between essence and existence. 
Instead, he argued that this distinction is only formal, meaning essence and existence are two aspects of a thing that cannot be separated. Scotus further posited that each individual entity has a unique essence, known as haecceity, which distinguishes it from other entities of the same kind. William of Ockham (1285–1347 CE) is one of the last scholastic philosophers. He is known for formulating the methodological principle known as Ockham's Razor, which is used to choose between competing explanations of the same phenomenon. Ockham's Razor states that the simplest explanation, the one that assumes the existence of fewer entities, should be preferred. Ockham employed this principle to argue for nominalism and against realism about universals, contending that nominalism is the simpler explanation since it does not require the assumption of the independent existence of universals. Renaissance The Renaissance period began in the mid-14th century and lasted until the early 17th century. This cultural and intellectual movement originated in Italy and gradually spread to other regions of Western Europe. Key aspects of the Renaissance included a renewed interest in Ancient Greek philosophy and the emergence of humanism, as well as a shift toward scientific inquiry. This represented a significant departure from the medieval period, which had been primarily focused on religious and scholastic traditions. Another notable change was that intellectual activity was no longer as closely tied to the Church as before; most scholars of this period were not clerics. An important aspect of the resurgence of Ancient Greek philosophy during the Renaissance was a revived enthusiasm for the teachings of Plato. This Renaissance Platonism was still conducted within the framework of Christian theology and often aimed to demonstrate how Plato's philosophy was compatible with and could be applied to Christian doctrines. For example, Marsilio Ficino (1433–1499) argued that souls form a connection between the realm of Platonic forms and the sensory realm. According to Plato, love can be understood as a ladder leading to higher forms of understanding. Ficino interpreted this concept in an intellectual sense, viewing it as a way to relate to God through the love of knowledge. The revival of Ancient Greek philosophy during the Renaissance was not limited to Platonism; it also encompassed other schools of thought, such as Skepticism, Epicureanism, and Stoicism. This revival was closely associated with the rise of Renaissance humanism, a human-centered worldview that highly valued the academic disciplines studying human society and culture. This shift in perspective also involved seeing humans as genuine individuals. Although Renaissance humanism was not primarily a philosophical movement, it brought about many social and cultural changes that affected philosophical activity. These changes were also accompanied by an increased interest in political philosophy. Niccolò Machiavelli (1469–1527) argued that a key responsibility of rulers is to ensure stability and security. He believed they should govern effectively to benefit the state as a whole, even if harsh circumstances require the use of force and ruthless actions. In contrast, Thomas More (1478–1535) envisioned an ideal society characterized by communal ownership, egalitarianism, and devotion to public service. The Renaissance also witnessed various developments in the philosophy of nature and science, which helped lay the groundwork for the scientific revolution. 
One such development was the emphasis on empirical observation in scientific inquiry. Another was the idea that mathematical explanations should be employed to understand these observations. Francis Bacon (1561–1626 CE) is often seen as a transitional figure between the Renaissance and modernity. He sought to revolutionize logic and scientific inquiry with his work Novum Organum, which was intended to replace Aristotle's influential treatises on logic. Bacon's work discussed, for example, the role of inductive reasoning in empirical inquiry, which involves deriving general laws from numerous individual observations. Another key transitional figure was Galileo Galilei (1564–1642 CE), who played a crucial role in the Copernican Revolution by asserting that the Sun, rather than the Earth, is at the center of the Solar System. Early modern Early modern philosophy encompasses the 17th and 18th centuries. The philosophers of this period are traditionally divided into empiricists and rationalists. However, contemporary historians argue that this division is not a strict dichotomy but rather a matter of varying degrees. These schools share a common goal of establishing a clear, rigorous, and systematic method of inquiry. This philosophical emphasis on method mirrored the advances occurring simultaneously during the scientific revolution. Empiricism and rationalism differ concerning the type of method they advocate. Empiricism focuses on sensory experience as the foundation of knowledge. In contrast, rationalism emphasizes reason—particularly the principles of non-contradiction and sufficient reason—and the belief in innate knowledge. While the emphasis on method was already foreshadowed in Renaissance thought, it only came to full prominence during the early modern period. The second half of this period saw the emergence of the Enlightenment movement, which used these philosophical advances to challenge traditional authorities while promoting progress, individual freedom, and human rights. Empiricism Empiricism in the early modern period was mainly associated with British philosophy. John Locke (1632–1704) is often considered the father of empiricism. In his book An Essay Concerning Human Understanding, he rejected the notion of innate knowledge and argued that all knowledge is derived from experience. He asserted that the mind is a blank slate at birth, relying entirely on sensory experience to acquire ideas. Locke distinguished between primary qualities, which he believed are inherent in external objects and exist independently of any observer, and secondary qualities, which are the powers of objects to produce sensations in observers. George Berkeley (1685–1753) was strongly influenced by Locke but proposed a more radical form of empiricism. He developed a form of idealism, giving primacy to perceptions and ideas over material objects. Berkeley argued that objects only exist insofar as they are perceived by the mind, leading to the conclusion that there is no reality independent of perception. David Hume (1711–1776) also upheld the empiricist principle that knowledge is derived from sensory experience. However, he took this idea further by arguing that it is impossible to know with certainty that one event causes another. Hume's reasoning was that the connection between cause and effect is not directly perceivable. Instead, the mind observes consistent patterns between events and develops a habit of expecting certain outcomes based on prior experiences. 
The empiricism promoted by Hume and other philosophers had a significant impact on the development of the scientific method, particularly in its emphasis on observation, experimentation, and rigorous testing. Rationalism Another dominant school of thought in this period was rationalism. René Descartes (1596–1650) played a pivotal role in its development. He sought to establish absolutely certain knowledge and employed methodological doubt, questioning all his beliefs to find an indubitable foundation for knowledge. He discovered this foundation in the statement "I think, therefore I am." Descartes used various rationalist principles, particularly the focus on deductive reasoning, to build a comprehensive philosophical system upon this foundation. His philosophy is rooted in substance dualism, positing that the mind and body are distinct, independent entities that coexist. The rationalist philosophy of Baruch Spinoza (1632–1677) placed even greater emphasis on deductive reasoning. He developed and employed the so-called geometrical method to construct his philosophical system. This method begins with a small set of self-evident axioms and proceeds to derive a comprehensive philosophical system through deductive reasoning. Unlike Descartes, Spinoza arrived at a metaphysical monism, asserting that there is only one substance in the universe. Another influential rationalist was Gottfried Wilhelm Leibniz (1646–1716). His principle of sufficient reason posits that everything has a reason or explanation. Leibniz used this principle to develop his metaphysical system known as monadology. Enlightenment and other late modern philosophy The latter half of the modern period saw the emergence of the cultural and intellectual movement known as the Enlightenment. This movement drew on both empiricism and rationalism to challenge traditional authorities and promote the pursuit of knowledge. It advocated for individual freedom and held an optimistic view of progress and the potential for societal improvement. Immanuel Kant (1724–1804) was one of the central thinkers of the Enlightenment. He emphasized the role of reason in understanding the world and used it to critique dogmatism and blind obedience to authority. Kant sought to synthesize both empiricism and rationalism within a comprehensive philosophical system. His transcendental idealism explored how the mind, through its pre-established categories, shapes human experience of reality. In ethics, he developed a deontological moral system based on the categorical imperative, which defines universal moral duties. Other important Enlightenment philosophers included Voltaire (1694–1778), Montesquieu (1689–1755), and Jean-Jacques Rousseau (1712–1778). Political philosophy during this period was shaped by Thomas Hobbes's (1588–1679) work, particularly his book Leviathan. Hobbes had a pessimistic view of the natural state of humans, arguing that it involves a war of all against all. According to Hobbes, the purpose of civil society is to avoid this state of chaos. This is achieved through a social contract in which individuals cede some of their rights to a central and immensely powerful authority in exchange for protection from external threats. Jean-Jacques Rousseau also theorized political life using the concept of a social contract, but his political outlook differed significantly due to his more positive assessment of human nature. Rousseau's views led him to advocate for democracy. 
19th century The 19th century was a rich and diverse period in philosophy, during which the term "philosophy" acquired the distinctive meaning it holds today: a discipline distinct from the empirical sciences and mathematics. A rough division between two types of philosophical approaches in this period can be drawn. Some philosophers, like those associated with German and British idealism, sought to provide comprehensive and all-encompassing systems. In contrast, other thinkers, such as Bentham, Mill, and the American pragmatists, focused on more specific questions related to particular fields, such as ethics and epistemology. Among the most influential philosophical schools of this period was German idealism, a tradition inaugurated by Immanuel Kant, who argued that the conceptual activity of the subject is always partially constitutive of experience and knowledge. Subsequent German idealists critiqued what they saw as theoretical problems with Kant's dualisms and the contradictory status of the thing-in-itself. They sought a single unifying principle as the foundation of all reality. Johann Gottlieb Fichte (1762–1814) identified this principle as the activity of the subject or transcendental ego, which posits both itself and its opposite. Friedrich Wilhelm Joseph Schelling (1775–1854) rejected this focus on the ego, instead proposing a more abstract principle, referred to as the absolute or the world-soul, as the foundation of both consciousness and nature. The philosophy of Georg Wilhelm Friedrich Hegel (1770–1831) is often described as the culmination of this tradition. Hegel reconstructed a philosophical history in which the measure of progress is the actualization of freedom. He applied this not only to political life but also to philosophy, which he claimed aims for self-knowledge characterized by the identity of subject and object. His term for this is "the absolute" because such knowledge—achieved through art, religion, and philosophy—is entirely self-conditioned. Further influential currents of thought in this period included historicism and neo-Kantianism. Historicists such as Johann Gottfried Herder emphasized the validity and unique nature of historical knowledge of individual events, contrasting this with the universal knowledge of eternal truths. Neo-Kantianism was a diverse philosophical movement that revived and reinterpreted Kant's ideas. British idealism developed later in the 19th century and was strongly influenced by Hegel. For example, Francis Herbert Bradley (1846–1924) argued that reality is an all-inclusive totality of being, identified with absolute spirit. He is also famous for claiming that external relations do not exist. Karl Marx (1818–1883) was another philosopher inspired by Hegel's ideas. He applied them to the historical development of society based on class struggle. However, he rejected the idealistic outlook in favor of dialectical materialism, which posits that economics rather than spirit is the basic force behind historical development. Arthur Schopenhauer (1788–1860) proposed that the underlying principle of all reality is the will, which he saw as an irrational and blind force. Influenced by Indian philosophy, he developed a pessimistic outlook, concluding that the expressions of the will ultimately lead to suffering. He had a profound influence on Friedrich Nietzsche, who saw the will to power as a fundamental driving force in nature. 
Nietzsche used this concept to critique many religious and philosophical ideas, arguing that they were disguised attempts to wield power rather than expressions of pure spiritual achievement. In the field of ethics, Jeremy Bentham (1748–1832) developed the philosophy of utilitarianism. He argued that whether an action is right depends on its utility, i.e., on the pleasure and pain it produces. The goal of actions, according to Bentham, is to maximize happiness or to produce "the greatest good for the greatest number." His student John Stuart Mill (1806–1873) became one of the foremost proponents of utilitarianism, further refining the theory by asserting that what matters is not just the quantity of pleasure and pain, but also their quality. Toward the end of the 19th century, the philosophy of pragmatism emerged in the United States. Pragmatists evaluate philosophical ideas based on their usefulness and effectiveness in guiding action. Charles Sanders Peirce (1839–1914) is usually considered the founder of pragmatism. He held that the meaning of ideas and theories lies in their practical and observable consequences. For example, to say that an object is hard means that, on a practical level, it is difficult to break, pierce, or scratch. Peirce argued that a true belief is a stable belief that works, even if it must be revised in the future. His pragmatist philosophy gained wider popularity through his lifelong friend William James (1842–1910), who applied Peirce's ideas to psychology. James argued that the meaning of an idea consists of its experiential consequences and rejected the notion that experiences are isolated events, instead proposing the concept of a stream of consciousness. 20th century Philosophy in the 20th century is usually divided into two main traditions: analytic philosophy and continental philosophy. Analytic philosophy was dominant in English-speaking countries and emphasized clarity and precise language. It often employed tools like formal logic and linguistic analysis to examine traditional philosophical problems in fields such as metaphysics, epistemology, science, and ethics. Continental philosophy was more prominent in European countries, particularly in Germany and France. It is an umbrella term without a precisely established meaning and covers philosophical movements like phenomenology, hermeneutics, existentialism, deconstruction, critical theory, and psychoanalytic theory. Interest in academic philosophy increased rapidly in the 20th century, as evidenced by the growing number of philosophical publications and the increasing number of philosophers working at academic institutions. Another change during this period was the increased presence of female philosophers. However, despite this progress, women remained underrepresented in the field. Some schools of thought in 20th-century philosophy do not clearly fall into either analytic or continental traditions. Pragmatism evolved from its 19th-century roots through scholars like Richard Rorty (1931–2007) and Hilary Putnam (1926–2016). It was applied to new fields of inquiry, such as epistemology, politics, education, and the social sciences. The 20th century also saw the rise of feminism in philosophy, which studies and critiques traditional assumptions and power structures that disadvantage women. Prominent feminist philosophers include Simone de Beauvoir (1908–1986), Martha Nussbaum (1947–present), and Judith Butler (1956–present). 
Analytic George Edward Moore (1873–1958) was one of the founding figures of analytic philosophy. He emphasized the importance of common sense and used it to argue against radical forms of philosophical skepticism. Moore was particularly influential in the field of ethics, where he claimed that our actions should promote the good. He argued that the concept of "good" cannot be defined in terms of other concepts and that whether something is good can be known through intuition. Gottlob Frege (1848–1925) was another pioneer of the analytic tradition. His development of modern symbolic logic had a significant impact on subsequent philosophers, even outside the field of logic. Frege employed these advances in his attempt to prove that arithmetic can be reduced to logic, a thesis known as logicism. The logicist project of Bertrand Russell (1872–1970) was even more ambitious since it included not only arithmetic but also geometry and analysis. Although their attempts were very fruitful, they did not fully succeed, as additional axioms beyond those of logic are required. In the philosophy of language, Russell's theory of definite descriptions was influential. It explains how to make sense of paradoxical expressions like "the present King of France," which do not refer to any existing entity. Russell also developed the theory of logical atomism, which was further refined by his student Ludwig Wittgenstein (1889–1951). According to Wittgenstein's early philosophy, as presented in the Tractatus Logico-Philosophicus, the world is made up of a multitude of atomic facts. The world and language have the same logical structure, making it possible to represent these facts using propositions. Despite the influence of this theory, Wittgenstein came to reject it in his later philosophy. He argued instead that language consists of a variety of games, each with its own rules and conventions. According to this view, meaning is determined by usage and not by referring to facts. Logical positivism developed in parallel to these ideas and was strongly influenced by empiricism. It is primarily associated with the Vienna Circle and focused on logical analysis and empirical verification. One of its prominent members was Rudolf Carnap (1891–1970), who defended the verification principle. This principle claims that a statement is meaningless if it cannot be verified through sensory experience or the laws of logic. Carnap used this principle to reject the discipline of metaphysics in general. However, this principle was later criticized by Carnap's student Willard Van Orman Quine (1908–2000) as one of the dogmas of empiricism. A core idea of Quine's philosophy was naturalism, which he understood as the claim that the natural sciences provide the most reliable framework for understanding the world. He used this outlook to argue that mathematical entities have real existence because they are indispensable to science. Wittgenstein's later philosophy formed part of ordinary language philosophy, which analyzed everyday language to understand philosophical concepts and problems. The theory of speech acts by John Langshaw Austin (1911–1960) was an influential early contribution to this field. Other prominent figures in this tradition include Gilbert Ryle (1900–1976) and Sir Peter Frederick Strawson (1919–2006). The shift in emphasis on the role of language is known as the linguistic turn. 
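To make the kind of logical analysis described above concrete, Russell's treatment of "The present King of France is bald" can be written out in modern first-order notation. Using K(x) for "x is at present King of France" and B(x) for "x is bald" (predicate labels chosen here purely for illustration), the sentence is analyzed as
\exists x \, \bigl( K(x) \land \forall y \, ( K(y) \rightarrow y = x ) \land B(x) \bigr)
that is, there is at least one present King of France, there is at most one, and that individual is bald. On this analysis the apparent reference to a non-existent king disappears: because the first conjunct fails, the sentence comes out simply false rather than meaningless.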
Richard Mervyn Hare (1919–2002) and John Leslie Mackie (1917–1981) were influential ethical philosophers in the analytic tradition, while John Rawls (1921–2002) and Robert Nozick (1938–2002) made significant contributions to political philosophy. Continental Phenomenology was an important early movement in the tradition of continental philosophy. It aimed to provide an unprejudiced description of human experience from a subjective perspective, using this description as a method to analyze and evaluate philosophical problems across various fields such as epistemology, ontology, philosophy of mind, and ethics. The founder of phenomenology was Edmund Husserl (1859–1938), who emphasized the importance of suspending all antecedent beliefs to achieve a pure and unbiased description of experience as it unfolds. His student, Martin Heidegger (1889–1976), adopted this method into an approach he termed fundamental ontology. Heidegger explored how human pre-understanding of reality shapes the experience of and engagement with the world. He argued that pure description alone is insufficient for phenomenology and should be accompanied by interpretation to uncover and avoid possible misunderstandings. This line of thought was further developed by his student Hans-Georg Gadamer (1900–2002), who held that human pre-understanding is dynamic and evolves through the process of interpretation. Gadamer explained this process as a fusion of horizons, which involves an interplay between the interpreter's current horizon and the horizon of the object being interpreted. Another influential aspect of Heidegger's philosophy is his focus on how humans care about the world. He explored how this concern is related to phenomena such as anxiety and authenticity. These ideas influenced Jean-Paul Sartre (1905–1980), who developed the philosophy of existentialism. Existentialists hold that humans are fundamentally free and responsible for their choices. They also assert that life lacks a predetermined purpose, and the act of choosing one's path without such a guiding purpose can lead to anxiety. The idea that the universe is inherently meaningless was especially emphasized by absurdist thinkers like Albert Camus (1913–1960). Critical Theory emerged in the first half of the 20th century within the Frankfurt School of philosophy. It is a form of social philosophy that aims to provide a reflective assessment and critique of society and culture. Unlike traditional theory, its goal is not only to understand and explain but also to bring about practical change, particularly to emancipate people and liberate them from domination and oppression. Key themes of Critical Theory include power, inequality, social justice, and the role of ideology. Notable figures include Theodor Adorno (1903–1969), Max Horkheimer (1895–1973), and Herbert Marcuse (1898–1979). The second half of 20th-century continental philosophy was marked by a critical attitude toward many traditional philosophical concepts and assumptions, such as truth, objectivity, universal explanations, reason, and progress. This outlook is sometimes labeled postmodernism. Michel Foucault (1926–1984) examined the relationship between knowledge and power, arguing that knowledge is always shaped by power. Jacques Derrida (1930–2004) developed the philosophy of deconstruction, which aims to expose hidden contradictions within philosophical texts by subverting the oppositions they rely on, such as the opposition between presence and absence or between subject and object. 
Gilles Deleuze (1925–1995) drew on psychoanalytic theory to critique and reimagine traditional concepts like desire, subjectivity, identity, and knowledge. Arabic–Persian Arabic–Persian philosophy refers to the philosophical tradition associated with the intellectual and cultural heritage of Arabic- and Persian-speaking regions. This tradition is also commonly referred to as Islamic philosophy or philosophy in the Islamic world. The classical period of Arabic–Persian philosophy began in the early 9th century CE, roughly 200 years after the death of Muhammad. It continued until the late 12th century CE and was an integral part of the Islamic Golden Age. The early classical period, prior to the work of Avicenna, focused particularly on the translation and interpretation of Ancient Greek philosophy. The late classical period, following Avicenna, was shaped by the engagement with his comprehensive philosophical system. Arabic–Persian philosophy had a profound influence on Western philosophy. During the early medieval period, many of the Greek texts were unavailable in Western Europe. They became accessible in the later medieval period largely due to their preservation and transmission by the Arabic–Persian intellectual tradition. Kalam and early classical The early Arabic intellectual tradition before the classical period was characterized by various theological discussions, primarily focused on understanding the correct interpretation of Islamic revelation. Some historians view this as part of Arabic–Persian philosophy, while others draw a more narrow distinction between theology (kalam) and philosophy proper (falsafa). Theologians, who implicitly accepted the truth of revelation, restricted their inquiries to religious topics, such as proofs of the existence of God. Philosophers, on the other hand, investigated a broader range of topics, including those not directly covered by the scriptures. Early classical Arabic–Persian philosophy was strongly influenced by Ancient Greek philosophy, particularly the works of Aristotle, but also other philosophers such as Plato. This influence came through both translations and comprehensive commentaries. A key motivation for this process was to integrate and reconcile Greek philosophy with Islamic thought. Islamic philosophers emphasized the role of rational inquiry and examined how to harmonize reason and revelation. Al-Kindi (801–873) is often considered the first philosopher of this tradition, in contrast to the more theological works of his predecessors. He followed Aristotle in regarding metaphysics as the first philosophy and the highest science. From his theological perspective, metaphysics studies the essence and attributes of God. He drew on Plotinus's doctrine of the One to argue for the oneness and perfection of God. For Al-Kindi, God emanates the universe by "bringing being to be from non-being." In the field of psychology, he argued for a dualism that strictly distinguishes the immortal soul from the mortal body. Al-Kindi was a prolific author, producing around 270 treatises during his lifetime. Al-Farabi (c. 872–950), strongly influenced by Al-Kindi, accepted his emanationist theory of creation. Al-Farabi claimed that philosophy, rather than theology, is the best pathway to truth. His interest in logic earned him the title "the Second Master" after Aristotle. 
He concluded that logic is universal and forms the foundation of all language and thought, a view that contrasts with certain passages in the Quran that assign this role to Arabic grammar. In his political philosophy, Al-Farabi endorsed Plato's idea that a philosopher-king would be the best ruler. He discussed the virtues such a ruler should possess, the tasks they should undertake, and why this ideal is rarely realized. Al-Farabi also provided an influential classification of the different sciences and fields of inquiry. Later classical Avicenna (980–1037) drew on the philosophies of the Ancient Greeks and Al-Farabi to develop a comprehensive philosophical system aimed at providing a holistic and rational understanding of reality that encompasses science, religion, and mysticism. He regarded logic as the foundation of rational inquiry. In the field of metaphysics, Avicenna argued that substances can exist independently, while accidents always depend on something else to exist. For example, color is an accident that requires a body to manifest. Avicenna distinguished between two forms of existence: contingent existence and necessary existence. He posited that God has necessary existence, meaning that God's existence is inherent and not dependent on anything else. In contrast, everything else in the world is contingent, meaning that it was caused by God and depends on Him for its existence. In psychology, Avicenna viewed souls as substances that give life to beings. He categorized souls into different levels: plants possess the simplest form of souls, while the souls of animals and humans have additional faculties, such as the ability to move, sense, and think rationally. In ethics, Avicenna advocated for the pursuit of moral perfection, which can be achieved by adhering to the teachings of the Quran. His philosophical system profoundly influenced both Islamic and Western philosophy. Al-Ghazali (1058–1111) was highly critical of Avicenna's rationalist approach and his adoption of Greek philosophy. He was skeptical of reason's ability to arrive at a true understanding of reality, God, and religion. Al-Ghazali viewed the philosophy of other Islamic philosophers as problematic, describing it as an illness. In his influential work, The Incoherence of the Philosophers, he argued that many philosophical teachings were riddled with contradictions and incompatible with Islamic faith. However, Al-Ghazali did not completely reject philosophy; he acknowledged its value but believed it should be subordinate to a form of mystical intuition. This intuition, according to Al-Ghazali, relied on direct personal experience and spiritual insight, which he considered essential for attaining a deeper understanding of reality. Averroes (1126–1198) rejected Al-Ghazali's skeptical outlook and sought to demonstrate the harmony between the philosophical pursuit of knowledge and the spiritual dimensions of faith. Averroes' philosophy was heavily influenced by Aristotle, and he frequently criticized Avicenna for diverging too much from Aristotle's teachings. In the field of psychology, Averroes proposed that there is only one universal intellect shared by all humans. Although Averroes' work did not have a significant impact on subsequent Islamic scholarship, it had a considerable influence on European philosophy. Post-classical Averroes is often considered the last major philosopher of the classical era of Islamic philosophy. 
The traditional view holds that the post-classical period was marked by a decline on several levels. This decline is understood both in terms of the global influence of Islam and in the realm of scientific and philosophical inquiry within the Islamic world. Al-Ghazali's skepticism regarding the power of reason and the role of philosophy played a significant part in this development, leading to a shift in focus towards theology and religious doctrine. However, some contemporary scholars have questioned the extent of this so-called decline. They argue that it is better understood as a shift in philosophical interest rather than an outright decline. According to this view, philosophy did not disappear but was instead integrated into and continued within the framework of theology. Mulla Sadra (1571–1636) is often regarded as the most influential philosopher of the post-classical era. He was a prominent figure in the philosophical and mystical school known as illuminationism. Mulla Sadra saw philosophy as a spiritual practice aimed at fostering wisdom and transforming oneself into a sage. His metaphysical theory of existence was particularly influential. He rejected the traditional Aristotelian notion that reality is composed of static substances with fixed essences. Instead, he advocated a process philosophy that emphasized continuous change and novelty. According to this view, the creation of the world is not a singular event in the past but an ongoing process. Mulla Sadra synthesized monism and pluralism by claiming that there is a transcendent unity of being that encompasses all individual entities. He also defended panpsychism, arguing that all entities possess consciousness to varying degrees. The movement of Islamic modernism emerged in the 19th and 20th centuries in response to the cultural changes brought about by modernity and the increasing influence of Western thought. Islamic modernists aimed to reassess the role of traditional Islamic doctrines and practices in the modern world. They sought to reinterpret and adapt Islamic teachings to demonstrate how the core tenets of Islam are compatible with modern principles, particularly in areas such as democracy, human rights, science, and the response to colonialism. Indian Indian philosophy is the philosophical tradition that originated on the Indian subcontinent. It can be divided into three main periods: the ancient period, which lasted until the end of the 2nd century BCE, the classical and medieval period, which lasted until the end of the 18th century CE, and the modern period that followed. Indian philosophy is characterized by a deep interest in the nature of ultimate reality, often relating this topic to spirituality and asking questions about how to connect with the divine and reach a state of enlightenment. In this regard, Indian philosophers frequently served as gurus, guiding spiritual seekers. Indian philosophy is traditionally divided into orthodox and heterodox schools of thought, referred to as āstikas and nāstikas. The exact definitions of these terms are disputed. Orthodox schools typically accept the authority of the Vedas, the religious scriptures of Hinduism, and tend to acknowledge the existence of the self (Atman) and ultimate reality (Brahman). There are six orthodox schools: Nyāya, Vaiśeṣika, Sāṃkhya, Yoga, Mīmāṃsā, and Vedānta. The heterodox schools are defined negatively, as those that do not adhere to the orthodox views. The main heterodox schools are Buddhism and Jainism. 
Ancient The ancient period of Indian philosophy began around 900 BCE and lasted until 200 BCE. During this time, the Vedas were composed. These religious texts form the foundation of much of Indian philosophy, covering a wide range of topics, including hymns and rituals. Of particular philosophical interest are the Upanishads, which are late Vedic texts that discuss profound philosophical topics. Some scholars consider the Vedas as part of philosophy proper, while others view them as a form of proto-philosophy. This period also saw the emergence of non-Vedic movements, such as Buddhism and Jainism. The Upanishads introduce key concepts in Indian philosophy, such as Atman and Brahman. Atman refers to the self, regarded as the eternal soul that constitutes the essence of every conscious being. Brahman represents the ultimate reality and the highest principle governing the universe. The Upanishads explore the relationship between Atman and Brahman, with a key idea being that understanding their connection is a crucial step on the spiritual path toward liberation. Some Upanishads advocate an ascetic lifestyle, emphasizing withdrawal from the world to achieve self-realization. Others emphasize active engagement with the world, rooted in the belief that individuals have social duties to their families and communities. These duties are prescribed by the concept of dharma, which varies according to one's social class and stage of life. Another influential idea from this period is the concept of rebirth, where individual souls are caught in a cycle of reincarnation. According to this belief, a person's actions in previous lives determine their circumstances in future lives, a principle known as the law of karma. While the Vedas had a broad influence, not all Indian philosophical traditions originated from them. For example, the non-Vedic movements of Buddhism and Jainism emerged in the 6th century BCE. These movements agreed with certain Vedic teachings about the cycle of rebirth and the importance of seeking liberation but rejected many of the rituals and the social hierarchy described in the Vedas. Buddhism was founded by Gautama Siddhartha (563–483 BCE), who challenged the Vedic concept of Atman by arguing that there is no permanent, stable self. He taught that the belief in a permanent self leads to suffering and that liberation can be attained by realizing the absence of a permanent self. Jainism was founded by Mahavira (599–527 BCE). Jainism emphasizes respect for all forms of life, a principle expressed in its commitment to non-violence. This principle prohibits harming or killing any living being, whether in action or thought. Another central tenet of Jainism is the doctrine of non-absolutism, which posits that reality is complex and multifaceted, and thus cannot be fully captured by any single perspective or expressed adequately in language. The third pillar of Jainism is the practice of asceticism or non-attachment, which involves detaching oneself from worldly possessions and desires to avoid emotional entanglement with them. Classical and medieval The classical and medieval periods in Indian philosophy span roughly from 200 BCE to 1800 CE. Some scholars refer to this entire duration as the "classical period," while others divide it into two distinct periods: the classical period up until 1300 CE, and the medieval period afterward. During the first half of this era, the orthodox schools of Indian philosophy, known as the darsanas, developed. 
Their foundational scriptures usually take the form of sūtras, which are aphoristic or concise texts that explain key philosophical ideas. The latter half of this period was characterized by detailed commentaries on these sutras, aimed at providing comprehensive explanations and interpretations. Samkhya is the oldest of the darśanas. It is a dualistic philosophy that asserts that reality is composed of two fundamental principles: Purusha, or pure consciousness, and Prakriti, or matter. Samkhya teaches that Prakriti is characterized by three qualities known as gunas. Sattva represents calmness and harmony, Rajas corresponds to passion and activity, and Tamas involves ignorance and inertia. The Yoga school initially formed a part of Samkhya and later became an independent school. It is based on the Yoga Sutras of Patanjali and emphasizes the practice of physical postures and various forms of meditation. Nyaya and Vaisheshika are two other significant orthodox schools. In epistemology, Nyaya posits that there are four sources of knowledge: perception, inference, analogical reasoning, and testimony. Nyaya is particularly known for its theory of logic, which emphasizes that inference depends on prior perception and aims to generate new knowledge, such as understanding the cause of an observed phenomenon. Vaisheshika, on the other hand, is renowned for its atomistic metaphysics. Although Nyaya and Vaisheshika were originally distinct schools, they later became intertwined and were often treated as a single tradition. The schools of Vedānta and Mīmāṃsā focus primarily on interpreting the Vedic scriptures. Vedānta is concerned mainly with the Upanishads, discussing metaphysical theories and exploring the possibilities of knowledge and liberation. In contrast, Mīmāṃsā is more focused on the ritualistic practices outlined in the Vedas. Buddhist philosophy also flourished during this period, leading to the development of four main schools of Indian Buddhism: Sarvāstivāda, Sautrāntika, Madhyamaka, and Yogācāra. While these schools agree on the core teachings of Gautama Buddha, they differ on certain key points. The Sarvāstivāda school holds that "all exists," including past, present, and future entities. This view is rejected by the Sautrāntika school, which argues that only the present exists. The Madhyamaka school, founded by Nagarjuna (c. 150–250 CE), asserts that all phenomena are inherently empty, meaning that nothing possesses a permanent essence or independent existence. The Yogācāra school is traditionally interpreted as a form of idealism, arguing that the external world is an illusion created by the mind. The latter half of the classical period saw further developments in both the orthodox and heterodox schools of Indian philosophy, often through detailed commentaries on foundational sutras. The Vedanta school gained significant influence during this time, particularly with the rise of the Advaita Vedanta school under Adi Shankara (c. 700–750 CE). Shankara advocated for a radical form of monism, asserting that Atman and Brahman are identical, and that the apparent multiplicity of the universe is merely an illusion, or Maya. This view was modified by Ramanuja (1017–1137 CE), who developed the Vishishtadvaita Vedanta school. Ramanuja agreed that Brahman is the ultimate reality, but he argued that individual entities, such as qualities, persons, and objects, are also real as parts of the underlying unity of Brahman. 
He emphasized the importance of Bhakti, or devotion to the divine, as a spiritual path and was instrumental in popularizing the Bhakti movement, which continued until the 17th to 18th centuries. Another significant development in this period was the emergence of the Navya-Nyāya movement within the Nyaya school, which introduced a more sophisticated framework of logic with a particular focus on linguistic analysis. Modern The modern period in Indian philosophy began around 1800 CE, during a time of social and cultural changes, particularly due to the British rule and the introduction of English education. These changes had various effects on Indian philosophers. Whereas previously, philosophy was predominantly conducted in the language of Sanskrit, many philosophers of this period began to write in English. An example of this shift is the influential multi-volume work A History of Indian Philosophy by Surendranath Dasgupta (1887–1952). Philosophers during this period were influenced both by their own traditions and by new ideas from Western philosophy. During this period, various philosophers attempted to create comprehensive systems that would unite and harmonize the diverse philosophical and religious schools of thought in India. For example, Swami Vivekananda (1863–1902) emphasized the validity and universality of all religions. He used the principles of Advaita Vedanta to argue that different religious traditions are merely different paths leading to the same spiritual truth. According to Advaita Vedanta, there is only one ultimate reality, without any distinctions or divisions. This school of thought considers the diversity and multiplicity in the world as an illusion that obscures the underlying divine oneness. Vivekananda believed that different religions represent various ways of realizing this divine oneness. A similar project was pursued by Sri Aurobindo in his integral philosophy. His complex philosophical system seeks to demonstrate how different historical and philosophical movements are part of a global evolution of consciousness. Other contributions to modern Indian philosophy were made by spiritual teachers like Sri Ramakrishna, Ramana Maharshi, and Jiddu Krishnamurti. Chinese Chinese philosophy encompasses the philosophical thought associated with the intellectual and cultural heritage of China. Various periodizations of this tradition exist. One common periodization divides Chinese philosophy into four main eras: an early period before the Qin dynasty, a period up to the emergence of the Song dynasty, a period lasting until the end of the Qing dynasty, and a modern era that follows. The three main schools of Chinese philosophy are Confucianism, Daoism, and Buddhism. Other influential schools include Mohism and Legalism. In traditional Chinese thought, philosophy was not distinctly separated from religious thought and other types of inquiry. It was primarily concerned with ethics and societal matters, often placing less emphasis on metaphysics compared to other traditions. Philosophical practice in China tended to focus on practical wisdom, with philosophers often serving as sages or thoughtful advisors. Pre-Qin The first period in Chinese philosophy began in the 6th century BCE and lasted until the rise of the Qin dynasty in 221 BCE. The concept of Dao, often translated as "the Way," played a central role during this period, with different schools of thought interpreting it in various ways. 
Early Chinese philosophy was heavily influenced by the teachings of Confucius (551–479 BCE). Confucius emphasized that a good life is one that aligns with the Dao, which he understood primarily in terms of moral conduct and virtuous behavior. He argued for the importance of filial piety, the respect for one's elders, and advocated for universal altruism. In Confucian thought, the family is fundamental, with each member fulfilling their role to ensure the family's overall flourishing. Confucius extended this idea to society, viewing the state as a large family where harmony is essential. Laozi (6th century BCE) is traditionally regarded as the founder of Daoism. Like Confucius, he believed that living a good life involves being in harmony with the Dao. However, unlike Confucius, Laozi focused not only on society but also on the relationship between humans and nature. His concept of wu wei, often translated as "effortless action," was particularly influential. It refers to acting in a natural, spontaneous way that is in accordance with the Dao, which Laozi saw as an ideal state of being characterized by ease and spontaneity. The Daoist philosopher Zhuangzi (399–295 BCE) employed parables and allegories to express his ideas. To illustrate the concept of wu wei in daily life, he used the example of a butcher who, after years of practice, could cut an ox effortlessly, with his knife naturally following the optimal path without any conscious effort. Zhuangzi is also famous for his story of the butterfly dream, which explores the nature of subjective experience. In this story, Zhuangzi dreams of being a butterfly and, upon waking, questions whether he is a man who dreamt of being a butterfly or a butterfly dreaming of being a man. The school of Mohism was founded by Mozi (c. 470–391 BCE). Central to Mozi's philosophy is the concept of jian ai, which advocates for universal love or impartial caring. Based on this concept, he promoted an early form of consequentialism, arguing that political actions should be evaluated based on how they contribute to the welfare of the people. Qin to pre-Song dynasties The next period in Chinese philosophy began with the establishment of the Qin dynasty in 221 BCE and lasted until the rise of the Song dynasty in 960 CE. This period was influenced by Xuanxue philosophy, legalist philosophy, and the spread of Buddhism. Xuanxue, also known as Neo-Daoism, sought to synthesize Confucianism and Daoism while developing a metaphysical framework for these schools of thought. It posited that the Dao is the root of ultimate reality, leading to debates about whether this root should be understood as being or non-being. Philosophers such as He Yan (c. 195–249 CE) and Wang Bi (226–249 CE) argued that the Dao is a formless non-being that acts as the source of all things and phenomena. This view was contested by Pei Wei (267–300 CE), who claimed that non-being could not give rise to being; instead, he argued that being gives rise to itself. In the realm of ethics and politics, the school of Legalism became particularly influential. Legalists rejected the Mohist idea that politics should aim to promote general welfare. Instead, they argued that statecraft is about wielding power and establishing order. They also dismissed the Confucian emphasis on virtues and moral conduct as the foundation of a harmonious society. 
In contrast, Legalists believed that the best way to achieve order was through the establishment of strict laws and the enforcement of punishments for those who violated them. Buddhism, which arrived in China from India in the 1st century CE, initially focused on the translation of original Sanskrit texts into Chinese. Over time, however, new and distinctive forms of Chinese Buddhism emerged. For instance, Tiantai Buddhism, founded in the 6th century CE, introduced the doctrine of the Threefold Truth, which sought to reconcile two opposing views. The first truth, conventional realism, affirms the existence of ordinary things. The second truth posits that all phenomena are illusory or empty. The third truth attempts to reconcile these positions by claiming that the mundane world is both real and empty at the same time. This period also witnessed the rise of Chan Buddhism, which later gave rise to Zen Buddhism in Japan. In epistemology, Chan Buddhists advocated for a form of immediate acquaintance with reality, asserting that it transcends the distortions of linguistic distinctions and leads to direct knowledge of ultimate reality. Song to Qing dynasties and modern The next period in Chinese philosophy began with the emergence of the Song dynasty in 960 CE. Some scholars consider this period to end with the Opium Wars in 1840, while others extend it to the establishment of the Republic of China in 1912. During this era, neo-Confucianism became particularly influential. Unlike earlier forms of Confucianism, Neo-Confucianism placed greater emphasis on metaphysics, largely in response to similar developments in Daoism and Buddhism. It rejected the Daoist and Buddhist focus on non-being and emptiness, instead centering on the concept of li as the positive foundation of metaphysics. Li is understood as the rational principle that underlies being and governs all entities. It also forms the basis of human nature and is the source of virtues. Li is often contrasted with qi, which is seen as a material and vital force. The later part of the Qing dynasty and the subsequent modern period were marked by an encounter with Western philosophy, including the ideas of philosophers like Plato, Kant, and Mill, as well as movements like pragmatism. However, Marx's ideas of class struggle, socialism, and communism were particularly significant. His critique of capitalism and his vision of a classless society led to the development of Chinese Marxism. In this context, Mao Zedong (1893–1976) played a dual role as both a philosopher who expounded these ideas and a revolutionary leader committed to their practical implementation. Chinese Marxism diverged from classical Marxism in several ways. For instance, while classical Marxism assigns the proletariat the responsibility for both the rise of the capitalist economy and the subsequent socialist revolution, in Mao's Marxism, this role is assigned to the peasantry under the guidance of the Communist Party. Traditional Chinese thought also remained influential during the modern period. This is exemplified in the philosophy of Liang Shuming (1893–1988), who was influenced by Confucianism, Buddhism, and Western philosophy. Liang is often regarded as a founder of New Confucianism. He advocated for a balanced life characterized by harmony between humanity and nature as the path to true happiness. 
Liang criticized the modern European attitude for its excessive focus on exploiting nature to satisfy desires, and he viewed the Indian approach, with its focus on the divine and renunciation of desires, as an extreme in the opposite direction. Others Various philosophical traditions developed their own distinctive ideas. In some cases, these developments occurred independently, while in others, they were influenced by the major philosophical traditions. Japanese Japanese philosophy is characterized by its engagement with various traditions, including Chinese, Indian, and Western schools of thought. Ancient Japanese philosophy was shaped by Shinto, the indigenous religion of Japan, which included a form of animism that saw natural phenomena and objects as spirits, known as kami. The arrival of Confucianism and Buddhism in the 5th and 6th centuries CE transformed the intellectual landscape and led to various subsequent developments. Confucianism influenced political and social philosophy and was further developed into different strands of neo-Confucianism. Japanese Buddhist thought evolved particularly within the traditions of Pure Land Buddhism and Zen Buddhism. In the 19th and 20th centuries, interaction with Western thinkers had a major influence on Japanese philosophy, particularly through the schools of existentialism and phenomenology. This period saw the foundation of the Kyoto School, established by Kitaro Nishida (1870–1945). Nishida criticized Western philosophy, particularly Kantianism, for its reliance on the distinction between subject and object. He sought to overcome this dichotomy by developing the concept of basho, which is usually translated as "place" and may be understood as an experiential domain that transcends the subject-object distinction. Other influential members of the Kyoto School include Tanabe Hajime (1885–1962) and Nishitani Keiji (1900–1990). Latin American Philosophy in Latin America is often considered part of Western philosophy. However, in a more specific sense, it represents a distinct tradition with its own unique characteristics, despite strong Western influence. Philosophical ideas concerning the nature of reality and the role of humans within it can be found in the region's indigenous civilizations, such as the Aztecs, the Maya, and the Inca. These ideas developed independently of European influence. However, most discussions typically focus on the colonial and post-colonial periods, as very few texts from the pre-colonial period have survived. The colonial period was dominated by religious philosophy, particularly in the form of scholasticism. In the 18th and 19th centuries, the emphasis shifted to Enlightenment philosophy and the adoption of a scientific outlook, particularly through positivism. An influential current in the later part of the 20th century was the philosophy of liberation, which was inspired by Marxism and focused on themes such as political liberation, intellectual independence, and education. African In the broadest sense, African philosophy encompasses philosophical ideas that originated across the entire African continent. However, the term is often understood more narrowly to refer primarily to the philosophical traditions of Western and sub-Saharan Africa. The philosophical tradition in Africa draws from both ancient Egypt and scholarly texts from medieval Africa. 
While early African intellectual history primarily focused on folklore, wise sayings, and religious ideas, it also included philosophical concepts, such as the idea of Ubuntu. Ubuntu is usually translated as "humanity" or "humanness" and emphasizes the deep moral connections between people, advocating for kindness and compassion. African philosophy before the 20th century was primarily conducted and transmitted orally, by philosophers whose names have been lost to history. This changed in the 1920s with the emergence of systematic African philosophy. A significant movement during this period was excavationism, which sought to reconstruct traditional African worldviews, often with the goal of rediscovering a lost African identity. However, this approach was contested by Afro-deconstructionists, who questioned the existence of a singular African identity. Other influential strands and topics in modern African thought include ethnophilosophy, négritude, pan-Africanism, Marxism, postcolonialism, and critiques of Eurocentrism.
Pro-Europeanism
Pro-Europeanism, sometimes called European Unionism, is a political position that favours European integration and membership of the European Union (EU). The opposite of Pro-Europeanism is Euroscepticism. Political position Pro-Europeans are mostly classified as centrist (Renew Europe) in the context of European politics, including centre-right liberal conservatives (EPP Group) and centre-left social democrats (S&D and Greens/EFA). Pro-Europeanism is ideologically closely related to the European and Global liberal movement. Pro-EU political parties Pan-European level EU: Alliance of Liberals and Democrats for Europe Party, European Free Alliance, European Green Party, European People's Party, Party of European Socialists, Volt Europa Within the EU Austria: Austrian People's Party, Social Democratic Party of Austria, The Greens – The Green Alternative, NEOS – The New Austria, Volt Austria Belgium: Reformist Mouvement, Open Flemish Liberals and Democrats, Socialist Party, Vooruit, Christian Democratic and Flemish, Les Engagés, Ecolo, Green, Democratic Federalist Independent, Volt Belgium Bulgaria: We Continue the Change, Yes, Bulgaria!, Union of Democratic Forces, Citizens for European Development of Bulgaria, Democrats for a Strong Bulgaria, Bulgarian Socialist Party (factions), Volt Bulgaria, Republicans for Bulgaria, Stand Up.BG, United People's Party, Bulgaria for Citizens Movement, Movement 21, There Is Such a People Croatia: Croatian Democratic Union, Social Democratic Party of Croatia, Croatian Peasant Party, Croatian People's Party – Liberal Democrats, Civic Liberal Alliance, Istrian Democratic Assembly, People's Party – Reformists, Croatian Social Liberal Party, Centre Cyprus: Democratic Rally, Democratic Party, Movement for Social Democracy, Democratic Alignment, New Wave – The Other Cyprus, Volt Cyprus Czech Republic: Mayors and Independents, TOP 09, Czech Pirate Party, KDU-ČSL, Social Democracy, Green Party, Volt Czech Republic Denmark: Social Democrats, Venstre, Danish Social Liberal Party, Conservative People's Party, Moderates, Volt Denmark Estonia: Estonian Reform Party, Estonian Social Democratic Party, Estonia 200, Isamaa, Estonian Greens Finland: National Coalition Party, Social Democratic Party of Finland, Centre Party, Green League, Swedish People's Party of Finland France: Renaissance, Democratic Movement, The Republicans, Socialist Party, Public Place, Radical Party of the Left, Europe Ecology – The Greens, The New Democrats, Génération.s, Radical Party, Union of Democrats and Independents, New Deal, Agir, En Commun, Horizons, Territories of Progress, Progressive Federation, Centrist Alliance, The Centrists, Ecologist Party, Democratic European Force, Volt France Germany: Christian Democratic Union, Social Democratic Party of Germany, Alliance 90/The Greens, Free Democratic Party, Christian Social Union in Bavaria, Die PARTEI, Volt Germany Greece: New Democracy, Syriza, PASOK – Movement for Change, Union of Centrists, Movement of Democratic Socialists, Volt Greece Hungary: Democratic Coalition, Jobbik, Hungarian Socialist Party, Momentum Movement, Dialogue – The Greens' Party, LMP – Hungary's Green Party, Hungarian Liberal Party, New Start Ireland: Fine Gael, Fianna Fáil, Labour Party, Social Democrats, Green Party, Volt Ireland Italy: Democratic Party, Forza Italia, Italia Viva, Italian Left, More Europe, Volt Italy, Civic Commitment, Action, Italian Socialist Party, Social Democrats, Italian Republican Party, Solidary Democracy, Green Europe, Italian Radicals, 
Possible, Us of the Centre, Europeanists, Centrists for Europe, Moderates, Article One, European Republicans Movement, Forza Europa, Liberal Democratic Alliance for Italy, Alliance of the Centre (Italy), èViva, Sicilian Socialist Party, Team K Latvia: Unity, The Progressives, For Latvia's Development, Movement For!, The Conservatives Lithuania: Homeland Union, Social Democratic Party of Lithuania, Liberals' Movement, Union of Democrats "For Lithuania", Freedom Party, Lithuanian Green Party Luxembourg: Christian Social People's Party, Luxembourg Socialist Workers' Party, Democratic Party, The Greens, Volt Luxembourg Malta: Nationalist Party, Labour Party (factions), AD+PD, Volt Malta Netherlands: Democrats 66, People's Party for Freedom and Democracy, Labour Party, Christian Democratic Appeal, GroenLinks, Volt Netherlands Poland: Civic Platform, Poland 2050, Polish People's Party, New Left, .Modern, Left Together, Your Movement, The Greens, Polish Initiative, Union of European Democrats, Polish Socialist Party Portugal: Social Democratic Party, Socialist Party, Liberal Initiative, LIVRE, People–Animals–Nature Party, Volt Portugal Romania: Save Romania Union, National Liberal Party, People's Movement Party, Renewing Romania's European Project, PRO Romania, Green Party, NOW Party, Volt Romania Slovakia: Progressive Slovakia, Christian Democratic Movement, Slovakia, Democrats, For the People, Voice – Social Democracy, Hungarian Alliance, Volt Slovakia Slovenia: Freedom Movement, Social Democrats, New Slovenia, Democratic Party of Pensioners of Slovenia, Slovenian People's Party Spain: People's Party, Spanish Socialist Workers' Party, Citizens, Volt Spain Sweden: Swedish Social Democratic Party, Moderate Party, Centre Party, Liberals, Christian Democrats, Volt Sweden Outside the EU Albania: Democratic Party of Albania, Socialist Party of Albania, Freedom Party of Albania, Libra Party, Social Democratic Party of Albania, Republican Party of Albania, Unity for Human Rights Party, Alliance for Equality and European Justice, Volt Albania Armenia: Armenian Democratic Liberal Party, Armenian National Movement Party, Bright Armenia, Christian-Democratic Rebirth Party, Civil Contract, Conservative Party, European Party of Armenia, For The Republic Party, Free Democrats, Heritage, Liberal Democratic Union of Armenia, National Progress Party of Armenia, People's Party of Armenia, Union for National Self-Determination, Republic Party, Rule of Law, Sasna Tsrer Pan-Armenian Party, Social Democrat Hunchakian Party, Sovereign Armenia Party Belarus: Belarusian Christian Democracy, BPF Party, United Democratic Forces of Belarus, Party of Freedom and Progress, United Civic Party of Belarus, Belarusian Social Democratic Party (Assembly), Belarusian Social Democratic Assembly Bosnia and Herzegovina: Party of Democratic Action, Croatian Democratic Union of Bosnia and Herzegovina, Social Democratic Party of Bosnia and Herzegovina, Democratic Front, People and Justice, Party of Democratic Progress, Our Party, People's European Union, For New Generations, Union for a Better Future, Independent Bloc Georgia: United National Movement, For Georgia, Lelo, European Georgia, Girchi - More Freedom, Strategy Aghmashenebeli, Ahali, Republican Party of Georgia, Georgian Labour Party, For the People, Citizens, Droa, Free Democrats, For Justice, Tavisupleba, State for the People, National Democratic Party, Solidarity Alliance of Georgia Iceland: Social Democratic Alliance, Reform Party Kosovo: Alliance for the Future of Kosovo, 
Democratic League of Kosovo, Partia e Fortë Moldova: Party of Action and Solidarity, National Alternative Movement, Liberal Party, European Social Democratic Party, Liberal Democratic Party, European People's Party, Pro Moldova, People's Party of the Republic of Moldova Montenegro: Europe Now!, Democratic Party of Socialists of Montenegro, Social Democratic Party of Montenegro, DEMOS, United Reform Action, Democratic Montenegro, Socialist People's Party, Liberal Party, Social Democrats, Bosniak Party, Civis, We won't give up Montenegro North Macedonia: Social Democratic Union, BESA, New Social Democratic Party, Democratic Union for Integration, Alliance for Albanians, Liberal Democratic Party, VMRO-NP Norway: Conservative Party, Labour Party (factions), Liberal Party Russia: Yabloko, People's Freedom Party, Green Alternative, Russia of the Future, Democratic Party of Russia San Marino: Civic 10, Euro-Populars for San Marino, Future Republic, Party of Democrats, Party of Socialists and Democrats, Sammarineses for Freedom, Socialist Party, Union for the Republic Serbia: Democratic Party, Social Democratic Party, Liberal Democratic Party, Movement of Free Citizens, People's Party, New Party, Serbian Progressive Party, Social Democratic Party of Serbia, Party of Freedom and Justice, Together for Serbia, Serbia 21, Civic Democratic Forum, Party of Modern Serbia, Movement for Reversal Switzerland: Social Democratic Party of Switzerland (factions), Green Party of Switzerland (factions), Green Liberal Party of Switzerland, Volt Switzerland Turkey: Democracy and Progress Party, Democratic Left Party, Democrat Party, Future Party, Good Party, Homeland Party, Liberal Democratic Party, Peoples' Democratic Party, Peoples' Equality and Democracy Party, Republican People's Party Ukraine: Servant of the People, Fatherland, European Solidarity, Voice, Self Reliance, Ukrainian People's Party, Our Ukraine, European Party of Ukraine, People's Front, Ukrainian Democratic Alliance for Reform, Volt Ukraine United Kingdom: Liberal Democrats, Green Party of England and Wales, Scottish National Party (SNP), Social Democratic and Labour Party (SDLP), Scottish Greens, Women's Equality Party, Alliance Party of Northern Ireland, Green Party Northern Ireland, Plaid Cymru, Mebyon Kernow, Alliance EPP: European People's Party UK, Volt UK, Animal Welfare Party Pro-EU newspapers and magazines Note: Media outside of Europe may also be included. Denmark: Dagbladet Børsen, Politiken France: Le Figaro, Le Monde, Le Parisien Germany: Frankfurter Allgemeine Zeitung, Der Spiegel, Süddeutsche Zeitung, Der Tagesspiegel Hungary: Blikk Ireland: The Irish Times Italy: Corriere della Sera, la Repubblica, La Stampa Japan: Chunichi Shimbun, Mainichi Shimbun South Korea: The Hankyoreh Spain: El Confidencial, El País, El Mundo United Kingdom: Financial Times, The Independent, The Guardian, The New European, The Economist Multinational European partnerships Council of Europe: an international organisation whose stated aim is to uphold human rights, democracy, rule of law in Europe and to promote European culture. It has 46 member states. Organization for Security and Co-operation in Europe: the world's largest security-oriented intergovernmental organization, with 57 participating states mostly in Europe and the Northern Hemisphere. Paneuropean Union: the oldest European unification movement. 
See also Eastern Partnership Euromyth European Federalist Party European Union as an emerging superpower Europeanism Eurosphere Eurovoc Euronest Parliamentary Assembly Federal Europe Federalisation of the European Union Liberalism in Europe List of European federalist political parties Pan-European identity Pan-Europeanism Politics of Europe Potential enlargement of the European Union Pulse of Europe Initiative United States of Europe Volt Europa WhyEurope
History of the social sciences
The history of the social sciences has its origins in the common stock of Western philosophy and shares various precursors, but began most intentionally in the early 19th century with the positivist philosophy of science. Since the mid-20th century, the term "social science" has come to refer more generally, not just to sociology, but to all those disciplines which analyze society and culture, from anthropology to psychology to media studies. The idea that society may be studied in a standardized and objective manner, with scholarly rules and methodology, is comparatively recent. While philosophers such as Confucius had long since theorised on topics such as social roles, the scientific analysis of human society is peculiar to the intellectual break away from the Age of Enlightenment and toward the discourses of Modernity. The social sciences came forth from the moral philosophy of the time and were influenced by the Age of Revolutions, such as the Industrial Revolution and the French Revolution. The beginnings of the social sciences in the 18th century are reflected in the grand encyclopedia of Diderot, with articles from Rousseau and other pioneers. Around the start of the 20th century, Enlightenment philosophy was challenged in various quarters. After the long use of classical theories since the end of the scientific revolution, various fields substituted mathematical studies for experimental studies, examining equations to build a theoretical structure. The development of social science subfields became very quantitative in methodology. Conversely, the interdisciplinary and cross-disciplinary nature of scientific inquiry into human behavior and social and environmental factors affecting it made many of the natural sciences interested in some aspects of social science methodology. Examples of boundary blurring include emerging disciplines like social studies of medicine, biocultural anthropology, neuropsychology, and the history and sociology of science. Increasingly, quantitative and qualitative methods are being integrated in the study of human action and its implications and consequences. In the first half of the 20th century, statistics became a free-standing discipline of applied mathematics. Statistical methods were used confidently. In the contemporary period, there continues to be little movement toward consensus on what methodology might have the power and refinement to connect a proposed "grand theory" with the various midrange theories that, with considerable success, continue to provide usable frameworks for massive, growing data banks. See consilience. Timeframes Antiquity Plato's Republic is an influential treatise on political philosophy and the just life. Aristotle published several works on social organization, such as his Politics and Constitution of the Athenians. Islamic developments Significant contributions to the social sciences were made in Medieval Islamic civilization. Al-Biruni (973–1048) wrote detailed comparative studies on the anthropology of peoples, religions and cultures in the Middle East, Mediterranean and South Asia. Biruni has also been praised by several scholars for his Islamic anthropology. Ibn Khaldun (1332–1406) worked in areas of demography, historiography, the philosophy of history, sociology, and economics. He is best known for his Muqaddimah. Modern period Early modern Near the Renaissance, which began around the 14th century, Jean Buridan and Nicole Oresme wrote on money. In the 15th century St. Antoninus of Florence wrote of a comprehensive economic process. 
In the 16th century Leonard de Leys (Lessius), Juan de Lugo, and particularly Luis Molina wrote on economic topics. These writers focused on explaining property as something for "public good". Representative figures of the 17th century include David Hartley, Hugo Grotius, Thomas Hobbes, John Locke, and Samuel von Pufendorf. Thomas Hobbes argued that deductive reasoning from axioms created a scientific framework, and hence his Leviathan was a scientific description of a political commonwealth. In the 18th century, social science was called moral philosophy, as contrasted with natural philosophy and mathematics, and included the study of natural theology, natural ethics, natural jurisprudence, and policy ("police"), which included economics and finance ("revenue"). Pure philosophy, logic, literature, and history were outside these two categories. Adam Smith was a professor of moral philosophy, and he was taught by Francis Hutcheson. Figures of the time included François Quesnay, Jean-Jacques Rousseau, Giambattista Vico, William Godwin, Gabriel Bonnot de Mably, and André Morellet. The Encyclopédie of the time contained various works on the social sciences. Late modern This unity of science as descriptive remains, for example, in the time of Thomas Hobbes who argued that deductive reasoning from axioms created a scientific framework, and hence his Leviathan was a scientific description of a political commonwealth. What would happen within decades of his work was a revolution in what constituted "science", particularly the work of Isaac Newton in physics. Newton, by revolutionizing what was then called "natural philosophy", changed the basic framework by which individuals understood what was "scientific". While he was merely the archetype of an accelerating trend, the important distinction is that for Newton, the mathematical flowed from a presumed reality independent of the observer, and working by its own rules. For philosophers of the same period, mathematical expression of philosophical ideals was taken to be symbolic of natural human relationships as well: the same laws moved physical and spiritual reality. For examples see Blaise Pascal, Gottfried Leibniz and Johannes Kepler, each of whom took mathematical examples as models for human behavior directly. In Pascal's case, the famous wager; for Leibniz, the invention of binary computation; and for Kepler, the intervention of angels to guide the planets. In the realm of other disciplines, this created a pressure to express ideas in the form of mathematical relationships. Such relationships, called "Laws" after the usage of the time (see philosophy of science), became the model which other disciplines would emulate. 19th century The term "social science" was coined in French by Mirabeau in 1767, before becoming a distinct conceptual field in the nineteenth century. Auguste Comte (1797–1857) argued that ideas pass through three rising stages: theological, metaphysical, and scientific. He defined the difference as the first being rooted in assumption, the second in critical thinking, and the third in positive observation. This framework, still rejected by many, encapsulates the thinking which was to push economic study from being a descriptive to a mathematically based discipline. Karl Marx was one of the first writers to claim that his methods of research represented a scientific view of history in this model. With the late 19th century, attempts to apply equations to statements about human behavior became increasingly common. 
Among the first were the "Laws" of philology, which attempted to map the change over time of sounds in a language. Sociology was established by Comte in 1838. He had earlier used the term "social physics", but that had subsequently been appropriated by others, most notably the Belgian statistician Adolphe Quetelet. Comte endeavoured to unify history, psychology and economics through the scientific understanding of the social realm. Writing shortly after the malaise of the French Revolution, he proposed that social ills could be remedied through sociological positivism, an epistemological approach outlined in The Course in Positive Philosophy [1830–1842] and A General View of Positivism (1844). Comte believed a positivist stage would mark the final era, after conjectural theological and metaphysical phases, in the progression of human understanding. It was with the work of Charles Darwin that the descriptive version of social theory received another shock. Biology had, seemingly, resisted mathematical study, and yet the theory of natural selection and the implied idea of genetic inheritance—later found to have been enunciated by Gregor Mendel, seemed to point in the direction of a scientific biology based, like physics, chemistry, astronomy, and Earth science on mathematical relationships. The first thinkers to attempt to combine inquiry of the type they saw in Darwin with exploration of human relationships, which, evolutionary theory implied, would be based on selective forces, were Freud in Austria and William James in the United States. Freud's theory of the functioning of the mind, and James' work on experimental psychology would have enormous impact on those that followed. Freud, in particular, created a framework which would appeal not only to those studying psychology, but artists and writers as well. Though Comte is generally regarded as the "Father of Sociology", the discipline was formally established by another French thinker, Émile Durkheim (1858–1917), who developed positivism in greater detail. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method. In 1896, he established the journal L'Année Sociologique. Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates among Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. It also marked a major contribution to the concept of structural functionalism. Today, Durkheim, Marx and Max Weber are typically cited as the three principal architects of social science in the science of society sense of the term. "Social science", however, has since become an umbrella term to describe all those disciplines, outside of physical science and art, which analyse human societies. 20th century In the first half of the 20th century, statistics became a free-standing discipline of applied mathematics. Statistical methods were used confidently, for example in an increasingly statistical view of biology. The first thinkers to attempt to combine inquiry of the type they saw in Darwin with exploration of human relationships, which, evolutionary theory implied, would be based on selective forces, were Freud in Austria and William James in the United States. Freud's theory of the functioning of the mind, and James' work on experimental psychology would have enormous impact on those that followed. 
Freud, in particular, created a framework which would appeal not only to those studying psychology, but artists and writers as well. One of the most persuasive advocates for the view of scientific treatment of philosophy would be John Dewey (1859–1952). He began, as Marx did, in an attempt to weld Hegelian idealism and logic to experimental science, for example in his Psychology of 1887. However, he abandoned Hegelian constructs. Influenced by both Charles Sanders Peirce and William James, he joined the movement in America called pragmatism. He then formulated his basic doctrine, enunciated in essays such as "The Influence of Darwin on Philosophy" (1910). This idea, based on his theory of how organisms respond, states that there are three phases to the process of inquiry: Problematic Situation, where the typical response is inadequate. Isolation of Data or subject matter. Reflective, which is tested empirically. With the rise of the idea of quantitative measurement in the physical sciences, for example Lord Rutherford's famous maxim that any knowledge that one cannot measure numerically "is a poor sort of knowledge", the stage was set for the conception of the humanities as being precursors to "social science." In 1924, prominent social scientists established the Pi Gamma Mu honor society for the social sciences. Among its key objectives were to promote interdisciplinary cooperation and develop an integrated theory of human personality and organization. Toward these ends, a journal for interdisciplinary scholarship in the various social sciences and lectureship grants were established. Interwar period Theodore Porter argued in The Rise of Statistical Thinking that the effort to provide a synthetic social science is a matter of both administration and discovery combined, and that the rise of social science was, therefore, marked by both pragmatic needs as much as by theoretical purity. An example of this is the rise of the concept of Intelligence Quotient, or IQ. It is unclear precisely what is being measured by IQ, but the measurement is useful in that it predicts success in various endeavors. The rise of industrialism had created a series of social science, economic, and political problems, particularly in managing supply and demand in their political economy, the management of resources for military and developmental use, the creation of mass education systems to train individuals in symbolic reasoning and problems in managing the effects of industrialization itself. The perceived senselessness of the "Great War" as it was then called, of 1914–18, now called World War I, based in what were perceived to be "emotional" and "irrational" decisions, provided an immediate impetus for a form of decision making that was more "scientific" and easier to manage. Simply put, to manage the new multi-national enterprises, private and governmental, required more data. More data required a means of reducing it to information upon which to make decisions. Numbers and charts could be interpreted more quickly and moved more efficiently than long texts. Conversely, the interdisciplinary and cross-disciplinary nature of scientific inquiry into human behavior and social and environmental factors affecting it have made many of the so-called hard sciences dependent on social science methodology. Examples of boundary blurring include emerging disciplines like social studies of medicine, neuropsychology, biocultural anthropology, and the history and sociology of science. 
Increasingly, quantitative and qualitative methods are being integrated in the study of human action and its implications and consequences. In the 1930s this new model of managing decision making became cemented with the New Deal in the US, and in Europe with the increasing need to manage industrial production and governmental affairs. Institutions such as The New School for Social Research, International Institute of Social History, and departments of "social research" at prestigious universities were meant to fill the growing demand for individuals who could quantify human interactions and produce models for decision making on this basis. Coupled with this pragmatic need was the belief that the clarity and simplicity of mathematical expression avoided systematic errors of holistic thinking and logic rooted in traditional argument. This trend, part of the larger movement known as modernism provided the rhetorical edge for the expansion of social sciences. Contemporary developments There continues to be little movement toward consensus on what methodology might have the power and refinement to connect a proposed "grand theory" with the various midrange theories which, with considerable success, continue to provide usable frameworks for massive, growing data banks (see consilience). See also Historiography, the study of the methodologies utilized by historians History of archaeology History of anthropology History of linguistics History of psychology History of sociology Outline of social science References Further reading Backhouse, Roger E., and Philippe Fontaine, eds. A historiography of the modern social sciences (Cambridge University Press, 2014) excerpt Lipset, Seymour M. ed. Politics and the Social Sciences (1969)
Pluriculturalism
Pluriculturalism is an approach to the self and others as complex, rich beings which act and react from the perspective of multiple identifications and experiences which combine to make up their pluricultural repertoire. Identity or identities are the by-products of experiences in different cultures and with people with different cultural repertoires. In effect, multiple identifications create a unique personality instead of, or more than, a static identity. An individual's pluriculturalism includes their own cultural diversity and their awareness and experience with the cultural diversity of others. It can be influenced by their job or occupational trajectory, geographic location, family history and mobility, leisure or occupational travel, personal interests or experience with media. The term pluricultural competence is a consequence of the idea of plurilingualism. There is a distinction between pluriculturalism and multiculturalism. Spain has been referred to as a pluricultural country, due to its nationalisms and regionalisms. See also Multiculturalism Cultural diversity Interculturalism Intercultural communication Polyethnicity
Historical archaeology
Historical archaeology is a form of archaeology dealing with places, things, and issues from the past or present when written records and oral traditions can inform and contextualize cultural material. These records can both complement and conflict with the archaeological evidence found at a particular site. Studies focus on literate, historical-period societies as opposed to non-literate, prehistoric societies. While they may not have generated the records themselves, the lives of people who had little need for written records, such as the working class, slaves, indentured labourers, and children, can also be the subject of study, provided those people lived within the historical period. The sites are found on land and underwater. Industrial archaeology, unless practiced at industrial sites from the prehistoric era, is a form of historical archaeology concentrating on the remains and products of industry and the Industrial era. Definition According to the overall definition given here, based on methodological and theoretical aspects, classical archaeology or Egyptology as well as medieval archaeology are disciplines of historical archaeology. In practice, however – mainly in the Americas – historical archaeology refers to the modern, post-1492 period, which in Europe is often referred to as post-medieval archaeology. Notable historical archaeologists Judy Birmingham John L. Cotter James Deetz J. C. Harrington Ivor Noël Hume Mark P. Leone Stanley South Further reading Connah, Grahame. 1988. "Of the hut I builded": The archaeology of Australia's history. Cambridge: Cambridge University Press. M. Hall and S. Silliman (eds.) 2006. Historical Archaeology. Oxford: Blackwell.
Legal history
Legal history or the history of law is the study of how law has evolved and why it has changed. Legal history is closely connected to the development of civilisations and operates in the wider context of social history. Certain jurists and historians of legal process have seen legal history as the recording of the evolution of laws and the technical explanation of how these laws have evolved with the view of better understanding the origins of various legal concepts; some consider legal history a branch of intellectual history. Twentieth-century historians viewed legal history in a more contextualised manner – more in line with the thinking of social historians. They have looked at legal institutions as complex systems of rules, players and symbols and have seen these elements interact with society to change, adapt, resist or promote certain aspects of civil society. Such legal historians have tended to analyse case histories from the parameters of social-science inquiry, using statistical methods, analysing class distinctions among litigants, petitioners and other players in various legal processes. By analyzing case outcomes, transaction costs, and numbers of settled cases, they have begun an analysis of legal institutions, practices, procedures and briefs that gives a more complex picture of law and society than the study of jurisprudence, case law and civil codes can achieve. Ancient world Ancient Egyptian law, dating as far back as 3000 BC, was based on the concept of Ma'at, and was characterised by tradition, rhetorical speech, social equality and impartiality. By the 22nd century BC, Ur-Nammu, an ancient Sumerian ruler, formulated the first extant law code, consisting of casuistic statements ("if... then..."). Around 1760 BC, King Hammurabi further developed Babylonian law, by codifying and inscribing it in stone. Hammurabi placed several copies of his law code throughout the kingdom of Babylon as stelae, for the entire public to see; this became known as the Codex Hammurabi. The most intact copy of these stelae was discovered in the 19th century by British Assyriologists, and has since been fully transliterated and translated into various languages, including English, German and French. Ancient Greek has no single word for "law" as an abstract concept, retaining instead the distinction between divine law (thémis), human decree (nomos) and custom (díkē). Yet Ancient Greek law contained major constitutional innovations in the development of democracy. Southern Asia Ancient India and China represent distinct traditions of law, and had historically independent schools of legal theory and practice. The Arthashastra, dating from the 400 BC, and the Manusmriti from 100 BCE were influential treatises in India, texts that were considered authoritative legal guidance. Manu's central philosophy was tolerance and pluralism, and was cited across South East Asia. During the Muslim conquests in the Indian subcontinent, sharia was established by the Muslim sultanates and empires, most notably Mughal Empire's Fatawa-e-Alamgiri, compiled by emperor Aurangzeb and various scholars of Islam. After British colonialism, Hindu tradition, along with Islamic law, was supplanted by the common law when India became part of the British Empire. Malaysia, Brunei, Singapore and Hong Kong also adopted the common law. Eastern Asia The eastern Asia legal tradition reflects a unique blend of secular and religious influences. 
Japan was the first country to begin modernising its legal system along western lines, by importing bits of the French, but mostly the German Civil Code. This partly reflected Germany's status as a rising power in the late nineteenth century. Similarly, traditional Chinese law gave way to westernisation towards the final years of the Qing dynasty in the form of six private law codes based mainly on the Japanese model of German law. Today Taiwanese law retains the closest affinity to the codifications from that period, because of the split between Chiang Kai-shek's nationalists, who fled there, and Mao Zedong's communists who won control of the mainland in 1949. The current legal infrastructure in the People's Republic of China was heavily influenced by soviet Socialist law, which essentially inflates administrative law at the expense of private law rights. Today, however, because of rapid industrialisation China has been reforming, at least in terms of economic (if not social and political) rights. A new contract code in 1999 represented a turn away from administrative domination. Furthermore, after negotiations lasting fifteen years, in 2001 China joined the World Trade Organization. Yassa of the Mongol Empire Canon law The legal history of the Catholic Church is the history of Catholic canon law, the oldest continuously functioning legal system in the West. Canon law originates much later than Roman law but predates the evolution of modern European civil law traditions. The cultural exchange between the secular (Roman/Barbarian) and ecclesiastical (canon) law produced the jus commune and greatly influenced both civil and common law. The history of Latin canon law can be divided into four periods: the jus antiquum, the jus novum, the jus novissimum and the Code of Canon Law. In relation to the Code, history can be divided into the jus vetus (all law before the Code) and the jus novum (the law of the Code, or jus codicis). Eastern canon law developed separately. In the twentieth century, canon law was comprehensively codified. On 27 May 1917, Pope Benedict XV codified the 1917 Code of Canon Law. John XXIII, together with his intention to call the Second Vatican Council, announced his intention to reform canon law, which culminated in the 1983 Code of Canon Law, promulgated by John Paul II on 25 January 1983. John Paul II also brought to a close the long process of codifying the Eastern Catholic canon law common to all 23 sui juris Eastern Catholic Churches on 18 October 1990 by promulgating the Code of Canons of the Eastern Churches. Islamic law One of the major legal systems developed during the Middle Ages was Islamic law and jurisprudence. A number of important legal institutions were developed by Islamic jurists during the classical period of Islamic law and jurisprudence. One such institution was the Hawala, an early informal value transfer system, which is mentioned in texts of Islamic jurisprudence as early as the 8th century. Hawala itself later influenced the development of the Aval in French civil law and the Avallo in Italian law. European laws Roman Empire Roman law was heavily influenced by Greek teachings. It forms the bridge to the modern legal world, over the centuries between the rise and decline of the Roman Empire. Roman law, in the days of the Roman republic and Empire, was heavily procedural and there was no professional legal class. Instead a lay person, iudex, was chosen to adjudicate. 
Precedents were not reported, so any case law that developed was disguised and almost unrecognised. Each case was to be decided afresh from the laws of the state, which mirrors the (theoretical) unimportance of judges' decisions for future cases in civil law systems today. During the 6th century AD in the Eastern Roman Empire, the Emperor Justinian codified and consolidated the laws that had existed in Rome so that what remained was one twentieth of the mass of legal texts from before. This became known as the Corpus Juris Civilis. As one legal historian wrote, "Justinian consciously looked back to the golden age of Roman law and aimed to restore it to the peak it had reached three centuries before." Middle Ages During the Byzantine Empire the Justinian Code was expanded and remained in force until the Empire fell, though it was never officially introduced to the West. Instead, following the fall of the Western Empire and in former Roman countries, the ruling classes relied on the Theodosian Code to govern natives and Germanic customary law for the Germanic incomers – a system known as folk-right – until the two laws blended together. Since the Roman court system had broken down, legal disputes were adjudicated according to Germanic custom by assemblies of learned lawspeakers in rigid ceremonies and in oral proceedings that relied heavily on testimony. After much of the West was consolidated under Charlemagne, law became centralized so as to strengthen the royal court system, and consequently case law, and abolished folk-right. However, once Charlemagne's kingdom definitively splintered, Europe became feudalistic, and law was generally not governed above the county, municipal or lordship level, thereby creating a highly decentralized legal culture that favored the development of customary law founded on localized case law. However, in the 11th century, crusaders, having pillaged the Byzantine Empire, returned with Byzantine legal texts including the Justinian Code, and scholars at the University of Bologna were the first to use them to interpret their own customary laws. Medieval European legal scholars began researching the Roman law and using its concepts and prepared the way for the partial resurrection of Roman law as the modern civil law in a large part of the world. There was, however, a great deal of resistance so that civil law rivaled customary law for much of the late Middle Ages. After the Norman conquest of England, which introduced Norman legal concepts into medieval England, the English King's powerful judges developed a body of precedent that became the common law. In particular, Henry II instituted legal reforms and developed a system of royal courts administered by a small number of judges who lived in Westminster and traveled throughout the kingdom. Henry II also instituted the Assize of Clarendon in 1166, which allowed for jury trials and reduced the number of trials by combat. Louis IX of France also undertook major legal reforms and, inspired by ecclesiastical court procedure, extended Canon-law evidence and inquisitorial-trial systems to the royal courts. In 1280 and 1295 measures were instituted by the Court of Arches and other authorities in London to improve the conduct of lawyers in the courts. Also, judges no longer moved on circuits becoming fixed to their jurisdictions, and jurors were nominated by parties to the legal dispute rather than by the sheriff. 
In addition, by the 10th century, the Law Merchant, first founded on Scandinavian trade customs, then solidified by the Hanseatic League, took shape so that merchants could trade using familiar standards, rather than the many splintered types of local law. A precursor to modern commercial law, the Law Merchant emphasised the freedom of contract and alienability of property. Modern European law The two main traditions of modern European law are the codified legal systems of most of continental Europe, and the English tradition based on case law. As nationalism grew in the 18th and 19th centuries, lex mercatoria was incorporated into countries' local law under new civil codes. Of these, the French Napoleonic Code and the German Bürgerliches Gesetzbuch became the most influential. As opposed to English common law, which consists of massive tomes of case law, codes in small books are easy to export and for judges to apply. However, today there are signs that civil and common law are converging. European Union law is codified in treaties, but develops through the precedent set down by the European Court of Justice. African law The African law system is based on common law and civilian law. Many legal systems in Africa were based on ethnic customs and traditions before colonization took over their original system. The people listened to their elders and relied on them as mediators during disputes. Several states didn't keep written records, as their laws were often passed orally. In the Mali Empire, the Kouroukan Fouga, was proclaimed in 1222–1236 AD as the official constitution of the state. It defined regulations in both constitutional and civil matters. The provisions of the constitution are still transmitted to this day by griots under oath. During colonization, authorities in Africa developed an official legal system called the Native Courts. After colonialism, the major faiths that stayed were Buddhism, Hinduism, and Judaism. United States The United States legal system developed primarily out of the English common law system (with the exception of the state of Louisiana, which continued to follow the French civilian system after being admitted to statehood). Some concepts from Spanish law, such as the prior appropriation doctrine and community property, still persist in some US states, particularly those that were part of the Mexican Cession in 1848. Under the doctrine of federalism, each state has its own separate court system, and the ability to legislate within areas not reserved to the federal government. See also Legal biography Association of Young Legal Historians (AYLH) Constitution of the Roman Republic Notes References Sadakat Kadri, The Trial: A History from Socrates to O.J. Simpson, HarperCollins 2005. Kempin, Jr., Frederick G. (1963). Legal History: Law and Social Change. Englewood Cliffs, New Jersey: Prentice-Hall. Further reading The Oxford History of the Laws of England. 13 Vols. Oxford University Press, 2003–. (Six volumes to date: Vol. I (Canon Law and Ecclesiastical Jurisdiction from 597 to the 1640s), vol. II (871–1216), vol. VI (1483–1558), vols. XI–XIII (1820–1914)) The Oxford International Encyclopedia of Legal History. Ed. Stanley N. Katz. 6 Vols. Oxford University Press, 2009. (OUP catalogue. Oxford Reference Online) Potz, Richard: Islam and Islamic Law in European Legal History, European History Online, Mainz: Institute of European History, 2011, retrieved: November 28, 2011. 
External links
The Legal History Project (resources and interviews)
Some legal history materials
The Schoyen Collection
The Roman Law Library, by Yves Lassard and Alexandr Koptev
CHD Centre for Legal History – Faculty of Law, University of Rennes 1
Centre for Legal History – Edinburgh Law School
The European Society for History of Law
Collection of Historical Statutory Material – Cornell Law Library
Historical Laws of Hong Kong Online – University of Hong Kong Libraries, Digital Initiatives
Basic Law Drafting History Online – University of Hong Kong Libraries, Digital Initiatives
14th century
The 14th century lasted from 1 January 1301 (represented by the Roman numerals MCCCI) to 31 December 1400 (MCD). It is estimated that the century witnessed the deaths of more than 45 million people from political and natural disasters in both Europe and the Mongol Empire. West Africa experienced economic growth and prosperity. In Europe, the Black Death claimed 25 million lives, wiping out one third of the European population, while the Kingdom of England and the Kingdom of France fought the protracted Hundred Years' War after the death of King Charles IV of France led to a claim to the French throne by King Edward III of England. This period is considered the height of chivalry and marks the beginning of strong separate identities for both England and France, as well as the foundation of the Italian Renaissance and the Ottoman Empire.

In Asia, Tamerlane (Timur) established the Timurid Empire, the third-largest empire in history to have been established by a single conqueror. Scholars estimate that Timur's military campaigns caused the deaths of 17 million people, amounting to about 5% of the world population at the time. At the same time, the Timurid Renaissance emerged. In the Arab world, the historian and political scientist Ibn Khaldun and the explorer Ibn Battuta made significant contributions. In India, the Bengal Sultanate separated from the Delhi Sultanate and became a major trading nation, described by Europeans as the richest country to trade with. The Mongol court was driven out of China and retreated to Mongolia, the Ilkhanate collapsed, the Chagatai Khanate dissolved and broke into two parts, and the Golden Horde lost its position as a great power in Eastern Europe. In Africa, the wealthy Mali Empire, a huge producer of gold, reached its territorial and economic height under the reign of Mansa Musa I of Mali, the wealthiest individual of medieval times, and perhaps the wealthiest ever. In the Americas, the Mexica founded the city of Tenochtitlan, while the Mississippian mound city of Cahokia was abandoned.

1301–1309
The Little Ice Age was a period of widespread cooling which, while conventionally defined as extending from around the 16th to the 19th centuries, is dated by some experts to a timespan from about 1300 to about 1850, during which average global temperatures dropped, particularly in Europe and North America. This created conditions for a shortened growing season and reduced crop yields that led to famines in those areas.
1305–1314: The trials of the Knights Templar, who are arrested and tried; Jacques de Molay, the last grand master of the Templars, is executed in 1314.
1309: King Jayanegara succeeds Kertarajasa Jayawardhana as ruler of Majapahit.
1309–1377: The Avignon papacy transfers the seat of the popes from Italy to France.

1310s
The Great Famine of 1315–1317 kills millions of people in Europe.
1318–1330: An Italian Franciscan friar, Mattiussi, visited Sumatra, Java, and Banjarmasin in Borneo. In his record he described the kingdom of Majapahit.

1320s
1320: Władysław I the Elbow-high is crowned King of Poland, which leads to its later unification.
1323: Malietoafaiga ordered cannibalism to be abolished in Tutuila (present-day American Samoa).
1325: Forced out of previous habitations, the Mexica found the city of Tenochtitlan.
1327: Tver Uprising against the Golden Horde.
1328: Tribhuwana Wijayatunggadewi succeeds Jayanegara as ruler of Majapahit.
1328–1333: Wang Dayuan, a traveller from Quanzhou, China during the Yuan dynasty, visited Luzon and Mindanao in the Philippines, many places in Southeast Asia, Sri Lanka and India, and reached Dhofar and Aden.

1330s
1335: The death of the Ilkhan Abu Said causes the disintegration of Mongol rule in Persia.
1336: The Vijayanagara Empire is founded in South India by Harihara I.
1337: The Hundred Years' War begins when Edward III of England lays claim to the French throne.

1340s
1343–1345: In Saint George's Night Uprising, pagan Estonians launch a last large-scale attempt to rid themselves of the non-indigenous Christian religion.
1345–1346: The French recruit troops and ships in Genoa, Monaco, and Nice.
1346: English forces led by Edward III defeat a French army led by Philip VI of France in the Battle of Crécy, a major point in the Hundred Years' War which marks the rise of the longbow as a dominant weapon in Western Europe.
1346: King Valdemar IV of Denmark sells the Duchy of Estonia to the Teutonic Order.
1347–1351: The Black Death kills around a third of the population of Europe.
1347: Adityawarman moved the capital of Dharmasraya and established the kingdom of Malayupura in Pagarruyung, West Sumatra.
1348: The 6.9-magnitude 1348 Friuli earthquake, centered in Northern Italy, was felt across Europe. Contemporaries linked the quake with the Black Death and the Great Famine, fueling fears that the Biblical Apocalypse had arrived.

1350s
1350: Ramathibodi I establishes the Ayutthaya Kingdom.
1350: Hayam Wuruk, styled Sri Rajasanagara, succeeds Tribhuwana Wijayatunggadewi as ruler of Majapahit; his reign is considered the empire's 'Golden Age'. Under its military commander Gajah Mada, Majapahit stretches over much of modern-day Indonesia.
1353: Fa Ngum established the Lan Xang kingdom in Laos.
1356: The Imperial Diet of the Holy Roman Empire, headed by Emperor Charles IV, issues the Golden Bull of 1356, establishing various constitutional aspects of the Empire, the most significant being the electoral college to elect future emperors.
1356: The Diet of the Hansa is held in Lübeck, formalising what had until then been only a loose alliance of trading cities in northern Europe and officially founding the Hanseatic League.
1357: Scotland retains its independence with the signing of the Treaty of Berwick, thus ending the Wars of Scottish Independence.
1357: In the Battle of Bubat, the Sundanese royal family is massacred by the Majapahit army by the order of Gajah Mada; the death toll includes the Sundanese king Lingga Buana and princess Dyah Pitaloka Citraresmi, who committed suicide.

1360s
1363: The Battle of Lake Poyang, a naval conflict between Chinese rebel groups led by Chen Youliang and Zhu Yuanzhang, takes place from August to October, constituting one of the largest naval battles in history.
1365: The Old Javanese text Nagarakertagama is written.
1366: The Tepanec tlatoani Acolnahuácatl accepts Acamapichtli as the first tlatoani of Tenochtitlan for the Mexica Empire.
1368: The end of the Mongol Yuan dynasty in China and the beginning of the Ming dynasty.

1370s
1371: In the Battle of Maritsa, the Serbs are defeated by the Ottomans, with most of the Serb nobility being killed.
1377: Majapahit sends a punitive expedition against Palembang in Sumatra. Palembang's prince, Parameswara (later Iskandar Syah), flees, eventually finding his way to Malacca and establishing it as a major international port.
1378: The Great Schism of the West splits the Catholic Church, eventually leading to three simultaneous popes; it is not resolved until 1417.
1378: Battle of the Vozha River between Russians and Mongols.
1378–1382: The Ciompi Revolt occurs in Florence.

1380s
1380: Russian principalities defeat the Golden Horde at the Battle of Kulikovo.
1381: John Wycliffe is dismissed from the University of Oxford for criticism of the Catholic Church, leading to the Lollardy movement in England.
1381: Peasants' Revolt in England.
1382: Khan Tokhtamysh captures Moscow.
1382: Barquq rises to power, founding the Burji dynasty, the Circassian Mamluk dynasty in Egypt.
1385: Battle of Aljubarrota between Portugal and Castile; Portugal maintains its independence.
1385: Union of Krewo between Poland and Lithuania.
1389: Battle of Kosovo between Serbs and Ottoman Turks; Prince Lazar, Sultan Murad I and Miloš Obilić are killed.
1389: Wikramawardhana succeeds Sri Rajasanagara as ruler of Majapahit.

1390–1400
1391: Anti-Jewish pogroms spread throughout Spain and Portugal, and many thousands of Jews are massacred.
1392: Taejo of Joseon establishes the Joseon Dynasty.
1396: The Battle of Nicopolis, in which the Ottoman Empire defeats a large Crusader army of knights and infantry from various Christian kingdoms including Hungary, France, the Holy Roman Empire, Burgundy and Wallachia.
1396: The Second Bulgarian Empire ends with the capture of its last stronghold fortress, Vidin, and its king Ivan Sratsimir by the Ottomans.
1397: The Kalmar Union is established, uniting Norway, Sweden and Denmark into one kingdom.
1397: The reign of Chimalpopoca, the third tlatoani of Tenochtitlan, begins.

Undated
Transition from the Medieval Warm Period to the Little Ice Age.
Crisis of the Late Middle Ages.
The poet Petrarch coins the term Dark Ages to describe the preceding 900 years in Europe, beginning with the fall of the Western Roman Empire in 476 through to the renewal embodied in the Renaissance.
Beginning of the Ottoman Empire and its early expansion into the Balkans.
The iwan vault of the Jamé Mosque of Isfahan, Isfahan, Iran, is built.
Early 14th century: Kao Ninga paints Monk Sewing (attributed) in the Kamakura period (Cleveland Museum collection).
An account of Buddha's life, translated earlier into Greek by Saint John of Damascus and widely circulated to Christians as the story of Barlaam and Josaphat, became so popular that the two were venerated as saints.
Singapore emerges for the first time as an important fortified city and trading centre.
Islam reaches Terengganu, on the Malay Peninsula, as evidenced by the Terengganu Inscription Stone.
The Hausa found several city-states in the south of modern Niger.
Work begins on the Great Enclosure at Great Zimbabwe, built of non-cemented, dressed stone. Research suggests the city's population ranged from fewer than 10,000 to 18,000 at its peak.

Inventions, discoveries, introductions
Music of the Ars nova.
Foundation of the University of Kraków.
The Chinese text Huolongjing by Jiao Yu describes fire lances, fire arrows, rocket launchers, land mines, naval mines, bombards, cannons, and hollow cast-iron cannonballs filled with gunpowder, and their use to set ablaze enemy camps.
The first pound lock in Europe is reportedly built in Vreeswijk, Netherlands, in 1373.
The Story of Civilization
The Story of Civilization (1935–1975), by husband and wife Will and Ariel Durant, is an 11-volume set of books covering both Eastern and Western civilizations for the general reader, with a particular emphasis on European (Western) history. The series was written over a span of four decades. The first six volumes of The Story of Civilization are credited to Will Durant alone, with Ariel recognized only in the acknowledgements. Beginning with The Age of Reason Begins, Ariel is credited as a co-author. In the preface to the first volume, Durant states his intention to make the series in 5 volumes, although this would not turn out to be the case. The series won a Pulitzer Prize for General Nonfiction in 1968 with the 10th volume in the series, Rousseau and Revolution. The volumes were best sellers and sold well for many years. Sets of them were frequently offered by book clubs. An unabridged audiobook production of all eleven volumes was produced by the Books on Tape company and was read by Alexander Adams (also known as Grover Gardner). Volumes I. Our Oriental Heritage (1935) This volume covers Near Eastern history until the fall of the Achaemenid Empire in the 330s BC, and the history of India, China, and Japan up to the 1930s. Full title: The Story of Civilization ~ 1 ~ Our Oriental Heritage ~ Being a History of Civilization in Egypt and the Near East to the Death of Alexander; and in India, China and Japan from the Beginning to Our Own Day; with an Introduction on the Nature and Foundations of Civilization. II. The Life of Greece (1939) This volume covers Ancient Greece and the Hellenistic Near East down to the Roman conquest. Full title: The Story of Civilization ~ 2 ~ The Life of Greece ~ A History of Greek Government, Industry, Manners, Morals, Religion, Philosophy, Science, Literature and Art from the Earliest Times to the Roman Conquest. III. Caesar and Christ (1944) The volume covers the history of Rome and of Christianity until the time of Constantine the Great. Full title: The Story of Civilization ~ 3 ~ Caesar and Christ ~ This Brilliantly Written History Surveys All Aspects of Roman Life ~ Politics, Economics, Literature, Art, Morals. It Ends with the Conflict of Pagan and Christian Forces and Raises the Curtain on the Great Struggle between Church and State. IV. The Age of Faith (1950) This volume covers the Middle Ages in both Europe and the Near East, from the time of Constantine I to that of Dante Alighieri. Full title: The Story of Civilization ~ 4 ~ The Age of Faith ~ A History of Medieval Civilization ~ Christian, Islamic, and Judaic ~ from Constantine to Dante ~ A.D. 325 - 1300. V. The Renaissance (1953) This volume covers the history of Italy from c.1300 to the mid 16th century, focusing on the Italian Renaissance. Full title: The Story of Civilization ~ 5 ~ The Renaissance ~ A History of Civilization in Italy from the Birth of Petrarch to the Death of Titian ~ 1304 to 1576. VI. The Reformation (1957) This volume covers the history of Europe outside of Italy from around 1300 to 1564, focusing on the Protestant Reformation. Full title: The Story of Civilization ~ 6 ~ The Reformation ~ A History of European Civilization from Wyclif to Calvin ~ 1300 - 1564. VII. The Age of Reason Begins (1961) This volume covers the history of Europe and the Near East from 1559 to 1648. Full title: The Story of Civilization ~ 7 ~ The Age of Reason Begins ~ A History of European Civilization in the Period of Shakespeare, Bacon, Montaigne, Rembrandt, Galileo and Descartes ~ 1558 - 1648. 
VIII. The Age of Louis XIV (1963) This volume covers the period of Louis XIV of France in Europe and the Near East. Full title: The Story of Civilization ~ 8 ~ The Age of Louis XIV ~ A History of European Civilization in the Period of Pascal, Molière, Cromwell, Milton, Peter the Great, Newton and Spinoza: 1648-1715. IX. The Age of Voltaire (1965) This volume covers the period of the Age of Enlightenment, as exemplified by Voltaire, focusing on the period between 1715 and 1756 in France, Britain, and Germany. Full title: The Story of Civilization ~ 9 ~ The Age of Voltaire ~ A History of Civilization in Western Europe from 1715 to 1756, with Special Emphasis on the Conflict between Religion and Philosophy. X. Rousseau and Revolution (1967) This volume centers on Jean-Jacques Rousseau and his times. It received the Pulitzer Prize for General Nonfiction in 1968. Full title: The Story of Civilization ~ 10 ~ Rousseau and Revolution ~ A History of Civilization in France, England, and Germany from 1756, and in the Remainder of Europe from 1715 to 1789. XI. The Age of Napoleon (1975) This volume centers on Napoleon I of France and his times. Full title: The Story of Civilization ~ 11 ~ The Age of Napoleon ~ A History of European Civilization from 1789 to 1815. Development history Editors on the series included M. Lincoln ("Max") Schuster and Michael Korda. Reception One volume, Rousseau and Revolution, won the Pulitzer Prize for General Non-Fiction in 1968. All eleven volumes were Book-of-the-Month Club selections and best-sellers with total sales of more than two million copies in nine languages. James H. Breasted's review of the first volume was highly negative. W. N. Brown was hardly more impressed. Henry James Forman, reviewing for The New York Times, found the first volume to be a masterpiece, as did the New York Herald Tribune. Michael Ginsberg was favorably disposed to the second volume, as was Edmund C. Richards. Reviews of the second volume from Time and Boston Evening Transcript were very positive. J.W. Swain noted in reviewing the third volume the book was written for a popular audience rather than scholars, and was successful at that. A review of the third volume in Time was positive. John Day published a mixed review of the third volume. Ralph Bates posted a negative review of the third volume for The New Republic. Sidney R. Packard, professor emeritus of history at Smith College, found the fourth volume to be quite good. Norman V. Hope had a similar impression. L.H. Carlson, for the Chicago Tribune, compared it to Jacob Burckhardt's works. Wallace K. Ferguson published a review of the fifth volume. Geoffrey Bruun published positive reviews of the fifth and sixth volumes for The New York Times. Garrett Mattingly, for The Saturday Review, lambasted the sixth volume but went on to say that Durant was widely-read and a capable storyteller. D. W. Brogan had a highly favorable impression of the seventh volume. A review in Time of the seventh volume was positive. J.H. Plumb found the eighth volume to be very poor, as did Stanley Mellon. Alfred J. Bingham found the ninth volume to be a "thoroughly enjoyable semi-popular history", and was effusive in his praise of the tenth volume. John H. Plumb was scathing in reviewing the eleventh volume. Joseph I. Shulim took a similar view. Alfred J. Bingham had a mixed yet favorable opinion. A review in The Saturday Review of the eleventh volume was very positive. 
See also
A Study of History
The Cartoon History of the Universe
Civilisation (TV series)
The Outline of History
The Rise of the West: A History of the Human Community
The Story of Philosophy
The Lessons of History
The Decline of the West
History of citizenship
History of citizenship describes the changing relation between an individual and the state, known as citizenship. Citizenship is generally identified not as an aspect of Eastern civilization but of Western civilization. There is a general view that citizenship in ancient times was a simpler relation than modern forms of citizenship, although this view has been challenged. While there is disagreement about when the relation of citizenship began, many thinkers point to the early city-states of ancient Greece, possibly as a reaction to the fear of slavery, although others see it as primarily a modern phenomenon dating back only a few hundred years. In Roman times, citizenship began to take on more of the character of a relationship based on law, with less political participation than in ancient Greece but a widening sphere of who was considered to be a citizen. In the Middle Ages in Europe, citizenship was primarily identified with commercial and secular life in the growing cities, and it came to be seen as membership in emerging nation-states. In modern democracies, citizenship has contrasting senses, including a liberal-individualist view emphasizing needs and entitlements and legal protections for essentially passive political beings, and a civic-republican view emphasizing political participation and seeing citizenship as an active relation with specific privileges and obligations. While citizenship has varied considerably throughout history, there are some common elements of citizenship over time. Citizenship bonds extend beyond basic kinship ties to unite people of different genetic backgrounds, that is, citizenship is more than a clan or extended kinship network. It generally describes the relation between a person and an overall political entity such as a city-state or nation and signifies membership in that body. It is often based on, or a function of, some form of military service or expectation of future military service. It is generally characterized by some form of political participation, although the extent of such participation can vary considerably from minimal duties such as voting to active service in government. And citizenship, throughout history, has often been seen as an ideal state, closely allied with freedom, an important status with legal aspects including rights, and it has sometimes been seen as a bundle of rights or a right to have rights. Last, citizenship almost always has had an element of exclusion, in the sense that citizenship derives meaning, in part, by excluding non-citizens from basic rights and privileges. Overview While a general definition of citizenship is membership in a political society or group, citizenship as a concept is difficult to define. Thinkers as far back as Aristotle realized that there was no agreed-upon definition of citizenship. And modern thinkers, as well, agree that the history of citizenship is complex with no single definition predominating. It is hard to isolate what citizenship means without reference to other terms such as nationalism, civil society, and democracy. According to one view, citizenship as a subject of study is undergoing transformation, with increased interest while the meaning of the term continues to shift. There is agreement citizenship is culture-specific: it is a function of each political culture. 
Further, how citizenship is seen and understood depends on the viewpoint of the person making the determination, such that a person from an upper-class background will have a different notion of citizenship than a person from a lower-class background. The relation of citizenship has never been fixed or static, but constantly changes within each society; according to one view, citizenship might "really have worked" only at select periods, such as when the Athenian politician Solon made reforms in the early Athenian state. The history of citizenship has sometimes been presented as a stark contrast between ancient citizenship and post-medieval times. One view is that citizenship should be studied as a long and direct progression throughout Western civilization, beginning from Ancient Greece or perhaps earlier and extending to the present; for example, the thinker Feliks Gross examined citizenship as the "history of the continuation of a single institution." Other views question whether citizenship can be examined as a linear process, growing over time, usually for the better, and see the linear progression approach as an oversimplification possibly leading to incorrect conclusions. According to this view, citizenship should not be considered as a "progressive realisation of the core meanings that are definitionally built into citizenship." Another caveat, offered by some thinkers, is to avoid judging citizenship from one era in terms of the standards of another era; according to this view, citizenship should be understood by examining it within the context of a city-state or nation, and trying to understand it as people from these societies understood it. The rise of citizenship has also been studied as an aspect of the development of law.

Table: contrasting senses of citizenship from ancient times to the present day, according to Peter Zarrow.

Ancient conceptions

Jewish people in the ancient world
One view is that the beginning of citizenship dates back to the ancient Israelites. These people developed an understanding of themselves as a distinct and unique people, different from the Egyptians or Babylonians. They had a written history, a common language, and a one-deity religion sometimes described as ethical monotheism. While most peoples developed a loose identity tied to a specific geographic location, the Jewish people kept their common identity despite being physically moved to different lands, such as when they were held captive as slaves in ancient Egypt or Babylon. The Jewish Covenant has been described as a binding agreement not just with a few people or tribal leaders, but between the whole nation of Israel, including men, women and children, and the Jewish deity Yahweh. Jews, like other tribal groups, did not see themselves as citizens per se, but they formed a strong attachment to their own group, such that people of different ethnicities were considered part of an "outgroup". This is in contrast to the modern understanding of citizenship as a way to accept people of different races and ethnicities under the umbrella of being citizens of a nation. That said, several connected issues deserve mention. First, ideologically the Israelites were strongly opposed to monarchy.
Samuel tried hard to dissuade the people from having a king by pointing out the tyrannical nature of monarchy: in 1 Samuel 8:10–18 he indicates that the king has the right to take men for military service, women for domestic service, and land and a tenth part of the harvest, flocks, and herds for the support of the monarchy, and to require state service by both the people and their animals. Second, kings play almost no role in the five books of Moses. In the four books that precede Deuteronomy, there is no reference to the role of the king of Israel, not even where we would most expect to find it, and the account of how the kingdom may be established gives the king no role in the nation's founding. Third, in Deuteronomy 17:16–17 the king is forbidden from acquiring many wives or horses and from amassing a fortune. This limits the king's powers: without diplomatic marriages, large mounted forces, or great wealth, he lacks the means to pursue exaggerated kingly ambitions. Almost the sole activity permitted to him is described in Deuteronomy 17:18: "When he is seated on his royal throne, he shall write a copy of this Torah on a scroll before the Levitical priests." The Bible also encourages strong citizenship, as shown in Abraham's argument with God over the planned destruction of Sodom and in the story of Gideon. One may conclude that while democracy arose in Athens, it was not liberal democracy; part of the liberal element of "liberal democracy" (limited government) arose in Jerusalem, and this is one of the many contributions of the Jews to the development of liberal democracy.

Ancient Greece

Polis citizenship
There is more widespread agreement that the first real instances of citizenship began in ancient Greece. While there were precursors of the relation in earlier societies, it emerged in readily discernible form in the Greek city-states which began to dot the shores of the Aegean Sea, the Black Sea, the Adriatic Sea, and elsewhere around the Mediterranean perhaps around the 8th century BCE. The modern distinction sometimes termed consent versus descent, that is, citizenship by choice versus birthright citizenship, has been traced back to ancient Greece. Thinkers such as J. G. A. Pocock have suggested that the modern ideal of citizenship was first articulated by the ancient Athenians and Romans, although Pocock suggested that the "transmission" of the sense of citizenship over two millennia was essentially a myth enshrouding western civilization. One writer suggests that despite China's long history, there never was a political entity within China similar to the Greek polis. To the ancients, citizenship was a bond between a person and the city-state. Before Greek times, a person was generally connected to a tribe or kin-group such as an extended family; citizenship added a layer to these ties, a non-kinship bond between the person and the state. Historian Geoffrey Hosking, in his 2005 Modern Scholar lecture course, suggested that citizenship in ancient Greece arose from an appreciation for the importance of freedom, and explained that the Greek sense of the polis, in which citizenship and the rule of law prevailed, was an important strategic advantage for the Greeks during their wars with Persia.
Greeks could see the benefits of having slaves, since their labor permitted slaveowners to have substantial free time, enabling participation in public life. While Greeks were spread out in many separate city-states, they had many things in common in addition to shared ideas about citizenship: the Mediterranean trading world, kinship ties, the common Greek language, a shared hostility to the so-called non-Greek-speaking or barbarian peoples, belief in the prescience of the oracle at Delphi, and later on the early Olympic Games which involved generally peaceful athletic competitions between city-states. City-states often feuded with each other; one view was that regular wars were necessary to perpetuate citizenship, since the seized goods and slaves helped make the city-state rich, and that a long peaceful period meant ruin for citizenship. An important aspect of polis citizenship was exclusivity. Polis meant both the political assembly as well as the entire society. Inequality of status was widely accepted. Citizens had a higher status than non-citizens, such as women, slaves or barbarians. For example, women were believed to be irrational and incapable of political participation, although a few writers, most notably Plato, disagreed. Methods used to determine whether someone could be a citizen or not could be based on wealth, identified by the amount of taxes one paid, or political participation, or heritage if both parents were citizens of the polis. The first form of citizenship was based on the way people lived in the ancient Greek times, in small-scale organic communities of the polis. Citizenship was not seen as a separate activity from the private life of the individual person, in the sense that there was not a distinction between public and private life. The obligations of citizenship were deeply connected into one's everyday life in the polis. The Greek sense of citizenship may have arisen from military necessity, since a key military formation demanded cohesion and commitment by each particular soldier. The phalanx formation had hoplite soldiers ranked shoulder-to-shoulder in a "compact mass" with each soldier's shield guarding the soldier to his left. If a single fighter failed to keep his position, then the entire formation could fall apart. Individual soldiers were generally protected provided that the entire mass stayed together. This technique called for large numbers of soldiers, sometimes involving most of the adult male population of a city-state, who supplied weapons at their own expense. The idea of citizenship, then, was that if each man had a say in whether the entire city-state should fight an adversary, and if each man was bound to the will of the group, then battlefield loyalty was much more likely. Political participation was thus linked with military effectiveness. In addition, the Greek city-states were the first instances in which judicial functions were separated from legislative functions in the law courts. Selected citizens served as jurors, and they were often paid a modest sum for their service. Greeks often despised tyrannical governments. In a tyrannical arrangement, there was no possibility of citizenship since political life was totally engineered to benefit the ruler. Spartan citizenship Several thinkers suggest that ancient Sparta, not Athens, was the originator of the concept of citizenship. Spartan citizenship was based on the principle of equality among a ruling military elite called Spartiates. 
They were "full Spartan citizens"—men who graduated from a rigorous regimen of military training and at age 30 received a land allotment called a kleros, although they had to keep paying dues to pay for food and drink as was required to maintain citizenship. In the Spartan approach to phalanx warfare, virtues such as courage and loyalty were particularly emphasized relative to other Greek city-states. Each Spartan citizen owned at least a minimum portion of the public land which was sufficient to provide food for a family, although the size of these plots varied. The Spartan citizens relied on the labor of captured slaves called helots to do the everyday drudgework of farming and maintenance, while the Spartan men underwent a rigorous military regimen, and in a sense it was the labor of the helots which permitted Spartans to engage in extensive military training and citizenship. Citizenship was viewed as incompatible with manual labor. Citizens ate meals together in a "communal mess". They were "frugally fed, ferociously disciplined, and kept in constant training through martial games and communal exercises," according to Hosking. As young men, they served in the military. It was seen as virtuous to participate in government when men grew older. Participation was required; failure to appear could entail a loss of citizenship. But the philosopher Aristotle viewed the Spartan model of citizenship as "artificial and strained", according to one account. While Spartans were expected to learn music and poetry, serious study was discouraged. Historian Ian Worthington described a "Spartan mirage" in the sense that the mystique about military invincibility tended to obscure weaknesses within the Spartan system, particularly their dependence on helots. In contrast with Athenian women, Spartan women could own property, and owned at one point up to 40% of the land according to Aristotle, and they had greater independence and rights, although their main task was not to rule the homes or participate in governance but rather to produce strong and healthy babies. Athenian citizenship In a book entitled Constitution of the Athenians, written in 350 BCE, the ancient Greek philosopher Aristotle suggested that ancient Greeks thought that being a citizen was a natural state, according to J. G. A. Pocock. It was an elitist notion, according to Peter Riesenberg, in which small scale communities had generally similar ideas of how people should behave in society and what constituted appropriate conduct. Geoffrey Hosking described a possible Athenian logic leading to participatory democracy: As a consequence, the original Athenian aristocratic constitution gradually became more inappropriate, and gave way to a more inclusive arrangement. In the early 6th century BCE, the reformer Solon replaced the Draconian constitution with the Solonian Constitution. Solon canceled all existing land debts, and enabled free Athenian males to participate in the assembly or ecclesia. In addition, he encouraged foreign craftsmen, particularly skilled in pottery, to move to Athens and offered citizenship by naturalization as an incentive. Solon expected that aristocratic Athenians would continue running affairs but nevertheless citizens had a "political voice in the Assembly." Subsequent reformers moved Athens even more towards direct democracy. 
The Greek reformer Cleisthenes in 508 BCE re-engineered Athenian society from organizations based on family-style groupings, or phratries, to larger mixed structures which combined people from different types of geographic areas (coastal areas and cities, hinterlands, and plains) into the same group. Cleisthenes abolished the tribes by "redistributing their identity so radically" that they ceased to exist. The result was that farmers, sailors and sheepherders came together in the same political unit, in effect lessening kinship ties as a basis for citizenship. In this sense, Athenian citizenship extended beyond basic bonds such as ties of family, descent, religion, race, or tribal membership, and reached towards the idea of a civic multiethnic state built on democratic principles. According to Feliks Gross, such an arrangement can succeed if people from different backgrounds can form constructive associations. The Athenian practice of ostracism, in which citizens could vote anonymously for a fellow citizen to be expelled from Athens for up to ten years, was seen as a way to pre-emptively remove a possible threat to the state without having to go through legal proceedings. It was intended to promote internal harmony. Athenian citizenship was based on obligations of citizens towards the community rather than rights given to its members. This was not a problem because people had a strong affinity with the polis; their personal destiny and the destiny of the entire community were strongly linked. Also, citizens of the polis saw obligations to the community as an opportunity to be virtuous. It was a source of honour and respect. According to one view, the citizenry was "its own master". The people were sovereign; there was no sovereignty outside of the people themselves. In Athens, citizens were both ruler and ruled. Further, important political and judicial offices were rotated to widen participation and prevent corruption, and all citizens had the right to speak and vote in the political assembly. Pocock explained that the Athenian conception was one of "laws that should govern everybody", in the sense of equality under the law, or the Greek term isonomia. Citizens had certain rights and duties: the rights included the chance to speak and vote in the common assembly, to stand for public office, to serve as jurors, to be protected by the law, to own land, and to participate in public worship; duties included an obligation to obey the law and to serve in the armed forces, which could be "costly" in terms of buying or making expensive war equipment or in risking one's own life, according to Hosking. Hosking noted that citizenship was "relatively narrowly distributed" and excluded all women, all minors, all slaves, all immigrants, and most colonists; that is, citizens who left their city to found another usually lost their rights in their city-state of origin. Many historians felt this exclusiveness was a weakness in Athenian society, according to Hosking, but he noted that there were perhaps 50,000 Athenian citizens overall, and that at most a tenth of these ever took part in an actual assembly at any one time. Hosking argued that if citizenship had been spread more widely, it would have hurt solidarity. Pocock expressed a similar sentiment, noting that citizenship requires a certain distance from the day-to-day drudgery of daily living.
Greek males solved this problem to some extent with the subjugation of women as well as the institution of slavery which freed their schedules so they could participate in the assembly. Pocock asked: for citizenship to happen, was it necessary to prevent free people from becoming "too much involved in the world of things"? Or, could citizenship be extended to working class persons, and if so, what does this mean for the nature of citizenship itself? Plato on citizenship The philosopher Plato envisioned a warrior class similar to the Spartan conception in that these persons did not engage in farming, business, or handicrafts, but their main duty was to prepare for war: to train, to exercise, to train, to exercise, constantly. Like the Spartan practice, Plato's idealized community was one of citizens who kept common meals to build common bonds. Citizenship status, in Plato's ideal view, was inherited. There were four separate classes. There were penalties for failing to vote. A key part of citizenship was obeying the law and being "deferent to the social and political system" and having internal self-control. Aristotle on citizenship Writing a generation after Plato, and in contrast with his teacher, Aristotle did not like Sparta's commune-oriented approach. He felt Sparta's land allocation system as well as the communal meals led to a world in which rich and poor were polarized. He recognized differences in citizenship patterns based on age: the young were "underdeveloped" citizens, while the elderly were "superannuated" citizens. And he noted that it was hard to classify the citizenship status of some persons, such as resident aliens who still had access to courts, or citizens who had lost their citizenship franchise. Still, Aristotle's conception of citizenship was that it was a legally guaranteed role in creating and running government. It reflected the division of labor which he believed was a good thing; citizenship, in his view, was a commanding role in society with citizens ruling over non-citizens. At the same time, there could not be a permanent barrier between the rulers and the ruled, according to Aristotle's conception, and if there was such a barrier, citizenship could not exist. Aristotle's sense of citizenship depended on a "rigorous separation of public from private, of polis from oikos, of persons and actions from things" which allowed people to interact politically with equals. To be truly human, one had to be an active citizen to the community: In Aristotle's view, "man is a political animal". Isolated men were not truly free, in his view. A beast was animal-like without self-control over passions and unable to coordinate with other beasts, and therefore could not be a citizen. And a god was so powerful and immortal that he or she did not need help from others. In Aristotle's conception, citizenship was possible generally in a small city-state since it required direct participation in public affairs with people knowing "one another's characters". What mattered, according to Pocock's interpretation of Aristotle, was that citizens had the freedom to take part in political discussions if they chose to do so. And citizenship was not merely a means to being free, but was freedom itself, a valued escape from the home-world of the oikos to the political world of the polis. It meant active sharing in civic life, meaning that all men rule, and are ruled, alternatively. 
Citizens were those who shared in deliberative and judicial office, and in that sense attained the status of citizenship. What citizens do should benefit not just a segment of society but everybody. Unlike Plato, Aristotle believed that women were incapable of citizenship since it did not suit their natures. In Aristotle's conception, humans are destined "by nature" to live in a political association and take short turns at ruling, inclusively, participating in making legislative, judicial and executive decisions. But Aristotle's sense of "inclusiveness" was limited to adult Greek males born in the polity: women, children, slaves, and foreigners (that is, resident aliens) were generally excluded from political participation.

Roman conceptions

Differences from Greece
Roman citizenship was similar to the Greek model but differed in substantive ways. Geoffrey Hosking argued that Greek ideas of citizenship in the city-state, such as the principles of equality under the law, civic participation in government, and the notion that "no one citizen should have too much power for too long", were carried forth into the Roman world. But unlike the Greek city-states, which enslaved captured peoples following a war, Rome offered relatively generous terms to its captives, including chances for captives to have a "second category of Roman citizenship". Conquered peoples could not vote in the Roman assembly but had full protection of the law, could make economic contracts, and could marry Roman citizens. They blended together with Romans in a culture sometimes described as Romanitas: ceremonies, public baths, games, and a common culture helped unite diverse groups within the empire. One view was that the Greek sense of citizenship was an "emancipation from the world of things" in which citizens essentially acted upon other citizens; material things were left back in the private domestic world of the oikos. But the Roman sensibility took into account to a greater extent that citizens could act upon material things as well as other citizens, in the sense of buying or selling property, possessions, titles, and goods. Accordingly, citizens often encountered other citizens on the basis of commerce, which often required regulation; this introduced a new level of complexity into the concept of citizenship.

Class concerns
A further departure from the Greek model was that the Roman government pitted upper-class patrician interests against the lower-order working groups known as the plebeian class in a dynamic arrangement, sometimes described as a "tense tug-of-war" between the dignity of the great man and the liberty of the small man. Through worker discontent, the plebs threatened to set up a rival city to Rome, and through negotiation around 494 BCE won the right to have their interests represented in government by officers known as tribunes. The Roman Republic, according to Hosking, tried to find a balance between the upper and lower classes. Writers such as Burchell have argued that citizenship meant different things depending on what social class one belonged to: for upper-class men, citizenship was an active chance to influence public life; for lower-class men, it was about respect for "private rights" or ius privatum.
A legal relation Pocock explained that a citizen came to be understood as a person "free to act by law, free to ask and expect the law's protection, a citizen of such and such a legal community, of such and such a legal standing in that community." An example was Saint Paul demanding fair treatment after his arrest by claiming to be a Roman citizen. Many thinkers including Pocock suggested that the Roman conception of citizenship had a greater emphasis than the Greek one of it being a legal relationship with the state, described as the "legal and political shield of a free person". And citizenship was believed to have had a "cosmopolitan character". Citizenship meant having rights to have possessions, immunities, expectations, which were "available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason." Citizens could "sue and be sued in certain courts". And the law, itself, was a kind of bond uniting people, in the sense of it being the results of past decisions by the assembly, such that citizenship came to mean "membership in a community of shared or common law". According to Pocock, the Roman emphasis on law changed the nature of citizenship: it was more impersonal, universal, multiform, having different degrees and applications. It included many different types of citizenship: sometimes municipal citizenship, sometimes empire-wide citizenship. Law continued to advance as a subject under the Romans. The Romans developed law into a kind of science known as jurisprudence. Law helped protect citizens: Specialists in law found ways to adapt the fixed laws, and to have the common law or jus gentium, work in harmony with natural law or ius naturale, which are rules common to all things. Property was protected by law, and served as a protection of individuals against the power of the state. In addition, unlike the Greek model where laws were mostly made in the assembly, Roman law was often determined in other places than official government bodies. Rules could originate through court rulings, by looking to past court rulings, by sovereign decrees, and the effect was that the assembly's power became increasingly marginalized. Expansion of citizenship In the Roman Empire, polis citizenship expanded from small scale communities to the entire empire. In the early years of the Roman Republic, citizenship was a prized relationship which was not widely extended. Romans realised that granting citizenship to people from all over the empire legitimized Roman rule over conquered areas. As the centuries went by, citizenship was no longer a status of political agency, but it had been reduced to a judicial safeguard and the expression of rule and law. The Roman conception of citizenship was relatively more complex and nuanced than the earlier Athenian conception, and it usually did not involve political participation. There was a "multiplicity of roles" for citizens to play, and this sometimes led to "contradictory obligations". Roman citizenship was not a single black-and-white category of citizen versus non-citizen, but rather there were more gradations and relationships possible. Women were respected to a greater extent with a secure status as what Hosking terms "subsidiary citizens". But the citizenship rules generally had the effect of building loyalty throughout the empire among highly diverse populations. 
The Roman statesman Cicero, while encouraging political participation, saw that too much civic activism could have consequences that were possibly dangerous and disruptive. David Burchell argued that in Cicero's time there were too many citizens pushing to "enhance their dignitas", and that the result of a "political stage" with too many actors all wanting to play a leading role was discord. The problem of extreme inequality of landed wealth led to a decline in the citizen-soldier arrangement, and was one of many causes leading to the dissolution of the Republic and rule by dictators. The Roman Empire gradually expanded the range of persons considered as "citizens", while the economic power of individuals declined and fewer men wanted to serve in the military. The granting of citizenship to wide swaths of non-Roman groups diluted its meaning, according to one account.

Decline of Rome
When the Western Roman Empire fell in 476 AD, the western part governed from Rome collapsed, while the eastern empire headquartered at Constantinople endured. Some thinkers suggest that as a result of these historical circumstances, western Europe evolved with two competing sources of authority, religious and secular, and that the ensuing separation of church and state was a "major step" in bringing forth the modern sense of citizenship. In the eastern half, which survived, religious and secular authority were merged in the one emperor. The eastern Roman emperor Justinian, who ruled the eastern empire from 527 to 565, thought that citizenship meant people living with honor, not causing harm, and "giving each their due" in relations with fellow citizens.

Early modern ideas of citizenship

Feudalism
In the feudal system, relationships were characterized as reciprocal, with bonds between lords and vassals going both ways: vassals promised loyalty and subsistence, while lords promised protection. The basis of the feudal arrangement was control over land. The loyalty of a person was not to a law, a constitution, or an abstract concept such as a nation, but to a person, namely the next level up, such as a knight, lord, or king. One view is that feudalism's reciprocal obligation system gave rise to the idea of the individual and the citizen. According to a related view, Magna Carta, while a sort of "feudal document", marked a transition away from feudalism, since the document was not a personal unspoken bond between nobles and the king but rather more like a contract between two parties, written in formal language, describing how the different parties were supposed to behave towards each other. Magna Carta gave assurances that the liberty, security and freedom of individuals were "inviolable". Gradually the personal ties linking vassals with lords were replaced with contractual and more impersonal relationships. The early days of medieval communes were marked by intensive citizenship, according to one view. Sometimes there was terrific religious activism, spurred by fanatics and religious zealotry, and as a result of the discord and religious violence, Europeans learned to value the "dutiful passive citizen" over the "self-directed religious zealot", according to another.

Renaissance Italy
According to historian Andrew C. Fix, Italy in the 14th century was much more urbanized than the rest of Europe, with major populations concentrated in cities like Milan, Rome, Genoa, Pisa, Florence, Venice and Naples.
Trade in spices with the Middle East, and new industries such as wool and clothing, led to greater prosperity, which in turn permitted greater education and study of the liberal arts, particularly among urbanized youth. A philosophy of Studia Humanitatis, later called humanism, emerged with an emphasis away from the church and towards secularism; thinkers reflected on the study of ancient Rome and ancient Greece including its ideas of citizenship and politics. Competition among the cities helped spur thinking. Fix suggested that of the northern Italian cities, it was Florence which most closely resembled a true Republic, whereas most Italian cities were "complex oligarchies ruled by groups of rich citizens called patricians, the commercial elite." Florence's city leaders figured that civic education was crucial to the protection of the Republic, so that citizens and leaders could cope with future unexpected crises. Politics, previously "shunned as unspiritual", came to be viewed as a "worthy and honorable vocation", and it was expected that most sectors of the public, from the richer commercial classes and patricians, to workers and the lower classes, should participate in public life. A new sense of citizenship began to emerge based on an "often turbulent internal political life in the towns", according to Fix, with competition among guilds and "much political debate and confrontation". Early European towns During the Renaissance and growth of Europe, medieval political scholar Walter Ullmann suggested that the essence of the transition was from people being subjects of a monarch or lord to being citizens of a city and later to a nation. A distinguishing characteristic of a city was having its own law, courts, and independent administration. And being a citizen often meant being subject to the city's law in addition to helping to choose officials. Cities were defensive entities, and its citizens were persons who were "economically competent to bear arms, to equip and train themselves." According to one theorist, the requirement that individual citizen-soldiers provide their own equipment for fighting helped to explain why Western cities evolved the concept of citizenship, while Eastern ones generally did not. And city dwellers who had fought alongside nobles in battles were no longer content with having a subordinate social status, but demanded a greater role in the form of citizenship. In addition to city administration as a way of participating in political decision-making, membership in guilds was an indirect form of citizenship in that it helped their members succeed financially; guilds exerted considerable political influence in the growing towns. Emerging nation-states During European Middle Ages, citizenship was usually associated with cities. Nobility in the aristocracy used to have privileges of a higher nature than commoners. The rise of citizenship was linked to the rise of republicanism, according to one account, since if a republic belongs to its citizens, then kings have less power. In the emerging nation-states, the territory of the nation was its land, and citizenship was an idealized concept. Increasingly, citizenship related not to a person such as a lord or count, but rather citizenship related a person to the state on the basis of more abstract terms such as rights and duties. Citizenship was increasingly seen as a result of birth, that is, a birthright. 
But nations often welcomed foreigners with vital skills and capabilities, and came to accept these new people under a process of naturalization. Increasing frequency of cases of naturalization helped people see citizenship as a relationship which was freely chosen by people. Citizens were people who voluntarily chose allegiance to the state, who accepted the legal status of citizenship with its rights and responsibilities, who obeyed its laws, who were loyal to the state. Great Britain The early modern period saw significant social change in Great Britain in terms of the position of individuals in society and the growing power of Parliament in relation to the monarch. The English Reformation ushered in political, constitutional, social and cultural change in the 16th century. Moreover, it defined a national identity for England and slowly, but profoundly, changed people's religious beliefs and established the Church of England. In the 17th century, there was renewed interest in Magna Carta. English common law judge Sir Edward Coke revived the idea of rights based on citizenship by arguing that Englishmen had historically enjoyed such rights. Passage of the Petition of Right in 1628 and Habeas Corpus Act in 1679 established certain liberties for subjects in statute. The idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. After the English Civil Wars (1642–1651) and the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689, which codified certain rights and liberties. The Parliament of Scotland passed the Claim of Right 1689. These Acts set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail. Across Europe, the Age of Enlightenment in the 17th and 18th centuries spread new ideas about liberty, reason and politics across the continent and beyond. The American Revolution British colonists across the Atlantic had grown up in a system in which local government was democratic, marked by participation by affluent men, but after the French and Indian War, colonists came to resent an increase in taxes imposed by Britain to offset expenses. What was particularly irksome to colonists was their lack of representation in the British Parliament, and the phrase no taxation without representation became a common grievance. The struggle between rebelling colonists and British troops was a time when citizenship "worked", according to one view. American and subsequent French declarations of rights were instrumental in linking the notion of fundamental rights to popular sovereignty in the sense that governments drew their legitimacy and authority from the consent of the governed. The Framers designed the United States Constitution to accommodate a rapidly growing republic by opting for representative democracy as opposed to direct democracy, but this arrangement challenged the idea of citizenship in the sense that citizens were, in effect, choosing other persons to represent them and take their place in government. The revolutionary spirit created a sense of "broadening inclusion". The Constitution specified a three-part structure of government with a federal government and state governments, but it did not specify the relation of citizenship. 
The Bill of Rights protected the rights of individuals from intrusion by the federal government, although it had little impact on judgements by the courts for the first 130 years after ratification. The term citizen was not defined by the Constitution until the Fourteenth Amendment was added in 1868, which defined United States citizenship to include "All persons born or naturalized in the United States, and subject to the jurisdiction thereof." The American Revolution demonstrated that it was plausible for Enlightenment ideas about how a government should be organized to actually be put into practice. The French Revolution The French Revolution marked major changes and has been widely seen as a watershed event in modern politics. Up until then, the main ties between people under the Ancien Regime were hierarchical, such that each person owed loyalty to the next person further up the chain of command; for example, serfs were loyal to local vassals, who in turn were loyal to nobles, who in turn were loyal to the king, who in turn was presumed to be loyal to God. Clergy and aristocracy had special privileges, including preferential treatment in law courts, and were exempt from taxes; this last privilege had the effect of placing the burden of paying for national expenses on the peasantry. One scholar who examined pre-Revolutionary France described powerful groups which stifled citizenship and included provincial estates, guilds, military governors, courts with judges who owned their offices, independent church officials, proud nobles, financiers and tax farmers. They blocked citizenship indirectly since they kept a small elite governing group in power, and kept regular people away from participating in political decision-making. These arrangements changed substantially during and after the French Revolution. Louis XVI mismanaged funds, vacillated, was blamed for inaction during a famine, causing the French people to see the interest of the king and the national interest as opposed. During the early stages of the uprising, the abolition of aristocratic privilege happened during a pivotal meeting on August 4, 1789, in which an aristocrat named Vicomte de Noailles proclaimed before the National Assembly that he would renounce all special privileges and would henceforward be known only as the "Citizen of Noailles." Other aristocrats joined him which helped to dismantle the Ancien Regime's seignorial rights during "one night of heated oratory", according to one historian. Later that month, the Assembly's Declaration of the Rights of Man and of the Citizen linked the concept of rights with citizenship and asserted that rights of man were "natural, inalienable, and sacred", that all men were "born free and equal, and that the aim of all political association is maintenance of their rights", according to historian Robert Bucholz. However, the document said nothing about the rights of women, although activist Olympe de Gouge issued a proclamation two years later which argued that women were born with equal rights to men. People began to identify a new loyalty to the nation as a whole, as citizens, and the idea of popular sovereignty earlier espoused by the thinker Rousseau took hold, along with strong feelings of nationalism. Louis XVI and his wife were guillotined. Citizenship became more inclusive and democratic, aligned with rights and national membership. 
The king's government was replaced with an administrative hierarchy at all levels, from a national legislature to even power at the local commune, such that power ran both up and down the chain of command. Loyalty became a cornerstone in the concept of citizenship, according to Peter Riesenberg. One analyst suggested that in the French Revolution, two often polar-opposite versions of citizenship merged: (1) the abstract idea of citizenship as equality before the law caused by the centralizing and rationalizing policies of absolute monarchs and (2) the idea of citizenship as a privileged status reserved for rule-makers, brought forth defensively by an aristocratic elite guarding its exclusiveness. According to one view by the German philosopher Max Stirner, the Revolution emancipated the citizen but not the individual, since the individuals were not the agents of change, but only the collective force of all individuals; in Stirner's sense, the "agent of change" was effectively the nation. The British thinker T. H. Marshall saw in the 18th century "serious growth" of civil rights, with major growth in the legal aspects of citizenship, often defended through courts of law. These civil rights extended citizenship's legal dimensions: they included the right to free speech, the right to a fair trial, and generally equal access to the legal system. Marshall saw the 18th century as signifying civil rights which was a precursor to political rights such as suffrage, and later, in the 20th century, social rights such as welfare. Early modern: 1700s–1800s After 1750, states such as Britain and France invested in massive armies and navies which were so expensive to maintain that the option of hiring mercenary soldiers became less attractive. Rulers found troops within the public, and taxed the public to pay for these troops, but one account suggested that the military buildup had a side-effect of undermining the military's autonomous political power. Another view corroborates the idea that military conscription spurred development of a broader role for citizens. A phenomenon known as the public sphere arose, according to philosopher Jürgen Habermas, as a space between authority and private life in which citizens could meet informally, exchange views on public matters, criticize government choices and suggest reforms. It happened in physical spaces such as public squares as well as in coffeehouses, museums, restaurants, as well as in media such as newspapers, journals, and dramatic performances. It served as a counterweight to government, a check on its power, since a bad ruling could be criticized by the public in places such as editorials. According to Schudson, the public sphere was a "playing field for citizenship". Eastern conceptions In the late-19th century, thinking about citizenship began to influence China. Discussion started of ideas (such as legal limits, definitions of monarchy and the state, parliaments and elections, an active press, public opinion) and of concepts (such as civic virtue, national unity, and social progress). Modern senses Transitions John Stuart Mill in his work On Liberty (1859) believed that there should be no distinctions between men and women, and that both were capable of citizenship. 
British sociologist Thomas Humphrey Marshall suggested that the changing patterns of citizenship were as follows: first, a civil relation in the sense of having equality before the law, followed by political citizenship in the sense of having the power to vote, and later a social citizenship in the sense of having the state support individual persons along the lines of a welfare state. Marshall argued in the middle of the 20th century that modern citizenship encompassed all three dimensions: civil, political, and social. He wrote that citizenship required a vital sense of community in the sense of a feeling of loyalty to a common civilization. Thinkers such as Marc Steinberg saw citizenship emerge from a class struggle interrelated with the principle of nationalism. People who were native-born or naturalised members of the state won a greater share of the rights out of "a continuing series of transactions between persons and agents of a given state in which each has enforceable rights and obligations", according to Steinberg. This give-and-take led to a common acceptance of the powers of both the citizen and the state. Nationalism emerged alongside these developments. Many thinkers suggest that notions of citizenship rights emerged from this spirit of each person identifying strongly with the nation of their birth. A modern type of citizenship is one which lets people participate in a number of different ways. Citizenship is not a "be-all end-all" relation, but only one of many types of relationships which a person might have. It has been seen as an "equalizing principle" in the sense that most other people have the same status. One theory sees different types of citizenship emanating out from concentric circles, from the town, to the state, to the world, and holds that citizenship can be studied by looking at which types of relations people value at any one time. The idea that participating in lawmaking is an essential aspect of citizenship continues to be expressed by different thinkers. For example, British journalist and pamphleteer William Cobbett said that the "greatest right", which he called the "right of rights", was having a share in the "making of the laws", and then submitting the laws to the "good of the whole." The idea of citizenship, and Western senses of government, began to emerge in Asia in the 19th and 20th centuries. In Meiji Japan, popular social forces exerted influence against traditional types of authority, and out of a period of negotiations and concessions by the state came a time of "expanding democracy", according to one account. Numerous cause-and-effect relations worked to bring about a Japanese version of citizenship: expanding military activity led to an enlarged state and territory, which furthered direct rule including the power of the military and the Japanese emperor, but this indirectly led to popular resistance, struggle, bargaining, and consequently an expanded role for citizens in early 20th-century Japan.
Citizenship today
The concept of citizenship is hard to isolate, since it relates to many other contextual aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. According to British politician Douglas Hurd, citizenship is essentially doing good to others.
When there are many different ethnic and religious groups within a nation, citizenship may be the only real bond which unites everybody as equals without discrimination; it is a "broad bond", as one writer described it. Citizenship links "a person with the state" and gives people a universal identity, as a legal member of a nation, in addition to their identity based on ties of ethnicity or an ethnic self. But clearly there are wide differences between ancient conceptions of citizenship and modern ones. While the modern conception still respects the idea of participation in the political process, that participation is usually carried out through "elaborate systems of political representation at a distance" such as representative democracy, and under the "shadow of a permanent professional administrative apparatus." Unlike the ancient patterns, modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are aware of their obligations to authorities, and they are aware that this bond "limits their personal political autonomy in a quite profound manner". But there is disagreement over whether the contrast between ancient and modern versions of citizenship is really that sharp; one theorist suggested that the supposedly "modern" aspects of so-called passive citizenship, such as tolerance, respect for others, and simply "minding one's own business", were present in ancient times too. Citizenship can be seen as both a status and an ideal. Sometimes mentioning the idea of citizenship implies a host of theories as well as the possibility of social reform, according to one view. It invokes a model of what a person should do in relation to the state, and suggests education or punishment for those who stray from the model. Several thinkers see the modern notion of individualism as being sometimes consistent with citizenship, and other times opposed to it. Accordingly, the modern individual and the modern citizen seem to be the same, but too much individualism can have the effect of leading to a "crisis of citizenship". Another thinker agreed that individualism can corrupt citizenship. A third sees citizenship as a substantial dilemma between the individual and society, and between the individual and the state, asking whether the focus of a person's efforts should be on the collective good or on the individual good. In a Marxist view, the individual and the citizen were both "essentially necessary" to each other in that neither could exist without the other, but both aspects within a person were essentially antagonistic to each other. Habermas suggested in his book The Structural Transformation of the Public Sphere that while citizenship widened to include more people, the public sphere shrank and became commercialized, devoid of serious debate, with media coverage of political campaigns having less focus on issues and more focus on sound bites and political scandals; in the process, citizenship became more common but meant less. Political participation declined for most people. Other thinkers echo that citizenship is a vortex for competing ideas and currents, sometimes working against each other, sometimes working in harmony. For example, sociologist T. H. Marshall suggested that citizenship embodied a contradiction between the "formal political equality of the franchise" and the "persistence of extensive social and economic inequality." In Marshall's sense, citizenship was a way to straddle both issues.
A wealthy person and a poor person were both equal in the sense of being citizens, but separated by economic inequality. Marshall saw citizenship as the basis for awarding social rights, and he made a case that extending such rights would not jeopardize the structure of social classes or end inequality. He saw capitalism as a dynamic system with constant clashes between citizenship and social class, and how these clashes played out determined how a society's political and social life would manifest itself. Citizenship was not always about including everybody, but was also a powerful force to exclude persons at the margins of society, such as outcasts, illegal immigrants and others. In this sense, citizenship was not only about getting rights and entitlements but was also a struggle to "reject claims of entitlement by those initially residing outside the core, and subsequently, of migrant and immigrant labour." One thinker, however, described democratic citizenship as generally inclusive.
Competing senses
Citizenship in the modern sense is often seen as having two widely divergent strains marked by tension between them.
Liberal-individualist view
The liberal-individualist conception of citizenship, or sometimes merely the liberal conception, has a concern that the individual's status may be undermined by government. The perspective suggests a language of "needs" and "entitlements" necessary for human dignity and is based on the reasoned pursuit of self-interest, or, more accurately, enlightened self-interest. The conception suggests a focus on the manufacture of material things as well as man's economic vitality, with society seen as a "market-based association of competitive individuals." From this view, citizens are sovereign, morally autonomous beings with duties to pay taxes, obey the law, engage in business transactions, and defend the nation if it comes under attack, but are essentially passive politically. This conception of citizenship has sometimes been termed conservative in the sense that passive citizens want to conserve their private interests, and that private people have a right to be left alone. This formulation of citizenship was expressed somewhat in the philosophy of John Rawls, who believed that every person in a society has an "equal right to a fully adequate scheme of equal basic rights and liberties" and that society has an obligation to try to benefit the "least advantaged members of society". But this sense of citizenship has been criticized; according to one view, it can lead to a "culture of subjects" with a "degeneration of public spirit", since economic man, or homo economicus, is too focused on material pursuits to engage in the civic activity required of true citizens.
Civic-republican view
A competing vision is that democratic citizenship may be founded on a "culture of participation". This orientation has sometimes been termed the civic-republican or classical conception of citizenship since it focuses on the importance of people practicing citizenship actively and finding places to do this. Unlike the liberal-individualist conception, the civic-republican conception emphasizes man's political nature, and sees citizenship as an active, not passive, activity.
A general problem with this conception, according to critics, is that if this model is implemented, it may bring about other issues such as the free rider problem, in which some people neglect basic citizenship duties and consequently get a free ride supported by the citizenship efforts of others. This view emphasizes the democratic participation inherent in citizenship, and can "channel legitimate frustrations and grievances", bring people together to focus on matters of common concern, and lead to a politics of empowerment, according to theorist Dora Kostakopoulou. Like the liberal-individualist conception, it is concerned about government running roughshod over individuals, but unlike the liberal-individualist conception, it is relatively more concerned that government will interfere with popular places to practice citizenship in the public sphere, rather than take away or lessen particular citizenship rights. This sense of citizenship has been described as "active and public citizenship", and has sometimes been called a "revolutionary idea". According to one view, most people today live as citizens according to the liberal-individualist conception but wish they lived more according to the civic-republican ideal.
Other views
The subject of citizenship, including political discussions about what exactly the term describes, can be a battleground for ideological debates. In Canada, citizenship and related issues such as civic education are "hotly contested." There continues to be sentiment within the academic community that trying to define one "unitary theory of citizenship" which would describe citizenship in every society, or even in any one society, would be a meaningless exercise. Citizenship has been described as "multi-layered belongings": different attachments, different bonds and allegiances. This is the view of Hebert and Wilkinson, who suggest there is not one single perspective on citizenship but "multiple citizenship" relations, since each person belongs to many different groups which define him or her. Sociologist Michael Schudson examined changing patterns of citizenship in US history and suggested there were four basic periods. The colonial era was marked by property-owning white males who delegated authority to "gentlemen", and most people did not participate as citizens, according to his research. Early elections did not generate much interest, were characterized by low voter turnout, and tended to reflect an existing social hierarchy. Representative assemblies "barely existed" in the 18th century, according to Schudson. Political parties became prominent in the 19th century to win lucrative patronage jobs, and citizenship meant party loyalty. The 20th-century citizenship ideal was having an "informed voter", choosing rationally (i.e., voting) based on information from sources such as newspapers and books. Citizenship came to be seen as a basis for rights and entitlements from government. Schudson predicted the emergence of what he called the monitorial citizen: persons engaged in watching for issues such as corruption and government violations of rights. Schudson chronicled changing patterns in which citizenship expanded to include formerly disenfranchised groups such as women and minorities while parties declined. Interest groups influenced legislators directly via lobbying. Politics retreated to being a peripheral concern for citizens, who were often described as "self-absorbed".
In 21st-century America, citizenship is generally considered to be a legal marker recognizing that a person is an American. Duty is generally not part of citizenship. Citizens generally do not see themselves as having a duty to provide assistance to one another, although officeholders are seen as having a duty to the public. Rather, citizenship is a bundle of rights which includes being able to get assistance from the federal government. A similar pattern marks the idea of citizenship in many Western-style nations. Most Americans do not think much about citizenship except perhaps when applying for a passport and traveling internationally. Feliks Gross sees 20th-century America as an "efficient, pluralistic and civic system that extended equal rights to all citizens, irrespective of race, ethnicity and religion." According to Gross, the US can be considered a "model of a modern civic and democratic state", although discrimination and prejudice still survive. The exception is that persons living within the borders of America illegally see citizenship as a major issue. Nevertheless, one of the constants is that scholars and thinkers continue to agree that the concept of citizenship is hard to define, and lacks a precise meaning.
See also
Citizenship
Citizenship in the United States
Cosmopolitanism
Global citizenship
Transnational citizenship
External links
Bürger, Bürgertum, Bürgerlichkeit, a historical overview of the related term Bürger focused on Germany
Ontology
Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event. Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices. Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology. The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name. Definition Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain. Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. 
This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms. The word ontology has its roots in the ancient Greek terms ὄν (on, meaning "being") and λογία (logia, meaning "study" or "discourse"), and thus literally means "the study of being". The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.
Basic concepts
Being
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived". Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like. Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or it does not, with no intermediary states or degrees. The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing.
Particulars and universals
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars.
Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain. Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways how two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations. Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red. States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts. Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events. Concrete and abstract objects Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them. Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. 
According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it. Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects. Other concepts Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and facts it explains. An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers. Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way how things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds. In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. 
This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year". Branches There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts. Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases. Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology. Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization. Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion. Metaontology studies the underlying concepts, assumptions, and methods of ontology. 
Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being. Schools of thought Realism and anti-realism The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework. In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects. Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation. Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects. 
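The possible-worlds analysis of modality mentioned earlier can be stated more formally. The following is a minimal sketch in standard modal-logic notation, under the simplifying assumption that every possible world is relevant; the symbols W for the set of possible worlds, ◇ for possibility, and □ for necessity are textbook conventions rather than terms taken from this article:

```latex
% Minimal sketch of possible-world truth conditions, assuming a set W of
% possible worlds and ignoring accessibility restrictions for simplicity.
\begin{align*}
  \Diamond\varphi \text{ is true} &\iff \exists w \in W :\ \varphi \text{ is true at } w
  && \text{(possibly $\varphi$: true in at least one world)} \\
  \Box\varphi \text{ is true} &\iff \forall w \in W :\ \varphi \text{ is true at } w
  && \text{(necessarily $\varphi$: true in all worlds)}
\end{align*}
```

Read this way, the claim that Haruki Murakami could have been born at a different date is possibly true because at least one possible world makes it true, while "three plus two equals five" is necessarily true because it holds in every possible world.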
Scientific realists say that the scientific description of the world is an accurate representation of reality. It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments. Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.
By number of categories
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything. The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but about the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist.
By fundamental categories
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties. Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space.
This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle. Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts. In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Others The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties. 
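The contrast between constituent and relational ontologies can be illustrated with a programming analogy. The sketch below is only an informal illustration under invented assumptions; the class names and the chosen properties are made up for this purpose and are not drawn from the philosophical literature. In the first, constituent-style model, the object just is a bundle of its properties; in the second, relational-style model, the object is a bare marker whose properties are kept outside it and merely linked to it, echoing the pin-cushion picture.

```python
# Informal analogy only: two toy ways to model a concrete particular.
# (Class names and properties are invented for this illustration.)
from dataclasses import dataclass

# Constituent picture: the object is nothing over and above its bundled properties.
@dataclass(frozen=True)
class BundleObject:
    properties: frozenset

lemon_as_bundle = BundleObject(frozenset({"yellow", "sour", "round"}))

# Relational picture: the object is a bare particular; properties live outside it
# and are connected to it only by an external "exemplifies" mapping.
@dataclass(frozen=True)
class BareParticular:
    identifier: str

lemon = BareParticular("lemon-1")
exemplifies = {lemon: {"yellow", "sour", "round"}}

# In the constituent model, a different set of properties is a different object;
# in the relational model, the very same particular can come to stand in new relations.
print(lemon_as_bundle.properties)
print(lemon.identifier, exemplifies[lemon])
```

On this analogy, bundle theories correspond to the first style of model, while substratum-style and relational views correspond to the second.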
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists. The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves. Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things. Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception. Methods Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology. Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. 
In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist. Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness. Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them. Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue. In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.
Related fields
Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being.
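The ontological engineering workflow described above, and the kind of formal, machine-processable encoding it aims at, can be illustrated with a small sketch. The example below is purely illustrative and uses the college-database setting mentioned in the lead of this article; the domain, class names, relation, and individuals are invented for the illustration and do not come from any particular ontology language or methodology. It shows the basic pattern an ontology engineer specifies: categories arranged in a subclass hierarchy, a relation with domain and range constraints, and individual instances whose asserted facts can be checked against those constraints.

```python
# Toy domain ontology for a college database (hypothetical example).
from dataclasses import dataclass
from typing import Optional

@dataclass
class OntologyClass:
    name: str
    parent: Optional["OntologyClass"] = None

    def is_a(self, other: "OntologyClass") -> bool:
        # True if this class is `other` or one of its descendants.
        current: Optional["OntologyClass"] = self
        while current is not None:
            if current is other:
                return True
            current = current.parent
        return False

@dataclass
class Relation:
    name: str
    domain: OntologyClass
    range: OntologyClass

@dataclass
class Individual:
    name: str
    cls: OntologyClass

# Categories of the invented college domain, with a small subclass hierarchy.
Agent = OntologyClass("Agent")
Person = OntologyClass("Person", parent=Agent)
Student = OntologyClass("Student", parent=Person)
Course = OntologyClass("Course")

# A relation with domain and range constraints.
enrolled_in = Relation("enrolled_in", domain=Student, range=Course)

def assert_fact(rel: Relation, subject: Individual, obj: Individual) -> str:
    # Reject facts that violate the relation's domain or range constraints.
    if not (subject.cls.is_a(rel.domain) and obj.cls.is_a(rel.range)):
        raise ValueError(f"{rel.name} does not apply to {subject.name} and {obj.name}")
    return f"{subject.name} {rel.name} {obj.name}"

alice = Individual("Alice", Student)
logic101 = Individual("Logic 101", Course)
print(assert_fact(enrolled_in, alice, logic101))  # Alice enrolled_in Logic 101
```

In practice such models are usually written in dedicated ontology languages and checked by reasoners rather than hand-coded in this way, but the ingredients, namely classes, relations, constraints, and instances, are the same ones the engineering methodologies above are concerned with.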
Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality. Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology. Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities. The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature. Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology. History The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. 
Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being. Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself. The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor. In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos. René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. 
Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds. Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations. At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual. Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains. See also References Notes Citations Sources External links
Gellner's theory of nationalism
Gellner's theory of nationalism was developed by Ernest Gellner over a number of publications from around the early 1960s to his 1995 death. Gellner discussed nationalism in a number of works, starting with Thought and Change (1964), and he most notably developed it in Nations and Nationalism (1983). His theory is modernist. Characteristics Gellner defined nationalism as "primarily a political principle which holds that the political and the national unit should be congruent" and as the general imposition of a high culture on society, where previously low cultures had taken up the lives of the majority, and in some cases the totality, of the population. It means the general diffusion of a school-mediated, academy-supervised idiom, codified for the requirements of a reasonably precise bureaucratic and technological communication. It is the establishment of an anonymous impersonal society, with mutually substitutable atomised individuals, held together above all by a shared culture of this kind, in place of the previous complex structure of local groups, sustained by folk cultures reproduced locally and idiosyncratically by the micro-groups themselves. Gellner analyzed nationalism from a historical perspective. He saw the history of humanity culminating in the discovery of modernity, with nationalism as a key functional element. Modernity, through changes in the political and economic system, is tied to the popularization of education, which, in turn, is tied to the unification of language. However, as modernization spread around the world, it did so slowly, and in numerous places cultural elites were able to resist cultural assimilation and defend their own culture and language successfully. For Gellner, nationalism was a sociological condition and a likely but not guaranteed (he noted exceptions in multilingual states like Switzerland, Belgium and Canada) result of modernisation, the transition from agrarian to industrial society. His theory focused on the political and cultural aspects of that transition. In particular, he focused on the unifying and culturally homogenising roles of the educational systems, national labour markets and improved communication and mobility in the context of urbanisation. He thus argued that nationalism was highly compatible with industrialisation and served the purpose of filling the ideological void left by both the disappearance of the prior agrarian society culture and the political and economic system of feudalism, which it legitimised. Thomas Hylland Eriksen lists these as "some of the central features of nationalism" in Gellner's theory: a shared, formal educational system; cultural homogenisation and "social entropy"; central monitoring of the polity, with extensive bureaucratic control; linguistic standardisation; national identification as an abstract community; cultural similarity as a basis for political legitimacy; and anonymity, with single-stranded social relationships. Gellner also provided a typology of "nationalism-inducing and nationalism-thwarting situations". 
Gellner criticised a number of other theoretical explanations of nationalism, including the "naturality theory", which states that it is "natural, self-evident and self-generating", a basic quality of human beings, and a neutral or a positive quality; its dark version, the "Dark Gods theory", which sees nationalism as an inevitable expression of basic human atavistic, irrational passions; Elie Kedourie's idealist argument that it was an accidental development, an intellectual error of disseminating unhelpful ideas, not related to industrialisation; and the Marxist theory in which nations appropriated the leading role of social classes. On October 24, 1995, at Warwick University, Gellner debated one of his former students, Anthony D. Smith, in what became known as the Warwick Debates. Smith presented an ethnosymbolist view, Gellner a modernist one. The debate has been described as epitomizing their positions. Influence Gellner is considered one of the leading theoreticians on nationalism. Eriksen notes that "nobody contests Ernest Gellner's central place in the research on nationalism over the last few decades". O'Leary refers to the theory as "the best-known modernist explanatory theory of nationalism". Criticisms Gellner's theory has been subject to various criticisms: It is too functionalist, as it explains the phenomenon with reference to the eventual historical outcome that industrial society could not 'function' without nationalism. It misreads the relationship between nationalism and industrialisation. It accounts poorly for national movements of ancient Rome and Greece, since it insists that nationalism is tied to modernity and so cannot exist without clearly defined modern industrialisation. It fails to account both for nationalism in non-industrial society and for resurgences of nationalism in post-industrial society. It fails to account for nationalism in 16th-century Europe. It cannot explain the passions generated by nationalism and why anyone should fight and die for a country. It fails to take into account either the role of war and the military in fostering both cultural homogenisation and nationalism or the relationship between militarism and compulsory education. It has been compared to technological determinism, as it disregards the views of individuals. Philip Gorski has argued that modernization theorists, such as Gellner, have gotten the timing of nationalism wrong: nationalism existed prior to modernity, and even had medieval roots. 
Anthropomorphism
Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. It is considered to be an innate tendency of human psychology. Personification is the related attribution of human form and characteristics to abstract concepts such as nations, emotions, and natural forces, such as seasons and weather. Both have ancient roots as storytelling and artistic devices, and most cultures have traditional fables with anthropomorphized animals as characters. People have also routinely attributed human emotions and behavioral traits to wild as well as domesticated animals. Etymology Anthropomorphism and anthropomorphization derive from the verb form anthropomorphize, itself derived from the Greek ánthrōpos ("human") and morphē ("form"). It is first attested in 1753, originally in reference to the heresy of applying a human form to the Christian God. Examples in prehistory From the beginnings of human behavioral modernity in the Upper Paleolithic, about 40,000 years ago, examples of zoomorphic (animal-shaped) works of art occur that may represent the earliest known evidence of anthropomorphism. One of the oldest known is the Löwenmensch figurine, an ivory sculpture from Germany, a human-shaped figurine with the head of a lioness or lion, determined to be about 32,000 years old. It is not possible to say what these prehistoric artworks represent. A more recent example is The Sorcerer, an enigmatic cave painting from the Trois-Frères Cave, Ariège, France: the figure's significance is unknown, but it is usually interpreted as some kind of great spirit or master of the animals. In either case there is an element of anthropomorphism. This anthropomorphic art has been linked by archaeologist Steven Mithen with the emergence of more systematic hunting practices in the Upper Palaeolithic. He proposes that these are the product of a change in the architecture of the human mind, where anthropomorphism allowed hunters to identify empathetically with hunted animals and better predict their movements. In religion and mythology In religion and mythology, anthropomorphism is the perception of a divine being or beings in human form, or the recognition of human qualities in these beings. Ancient mythologies frequently represented the divine as deities with human forms and qualities. They resembled human beings not only in appearance and personality; they exhibited many human behaviors that were used to explain natural phenomena, creation, and historical events. The deities fell in love, married, had children, fought battles, wielded weapons, and rode horses and chariots. They feasted on special foods, and sometimes required sacrifices of food, beverage, and sacred objects to be made by human beings. Some anthropomorphic deities represented specific human concepts, such as love, war, fertility, beauty, or the seasons. Anthropomorphic deities exhibited human qualities such as beauty, wisdom, and power, and sometimes human weaknesses such as greed, hatred, jealousy, and uncontrollable anger. Greek deities such as Zeus and Apollo often were depicted in human form exhibiting both commendable and despicable human traits. Anthropomorphism in this case is, more specifically, anthropotheism. From the perspective of adherents to religions in which humans were created in the form of the divine, the phenomenon may be considered theomorphism, or the giving of divine qualities to humans. 
Anthropomorphism has cropped up as a Christian heresy, particularly prominently with Audianism in third-century Syria, but also fourth-century Egypt and tenth-century Italy. This often was based on a literal interpretation of the Genesis creation myth: "So God created humankind in his image, in the image of God he created them; male and female he created them". Hindus do not reject the concept of a deity in the abstract unmanifested, but note practical problems. The Bhagavad Gita, Chapter 12, Verse 5, states that it is much more difficult for people to focus on a deity that is unmanifested than one with form, remarking on the usage of anthropomorphic icons (murtis) that adherents can perceive with their senses. Criticism Some religions, scholars, and philosophers objected to anthropomorphic deities. The earliest known criticism was that of the Greek philosopher Xenophanes (570–480 BCE) who observed that people model their gods after themselves. He argued against the conception of deities as fundamentally anthropomorphic: Xenophanes said that "the greatest god" resembles man "neither in form nor in mind". Both Judaism and Islam reject an anthropomorphic deity, believing that God is beyond human comprehension. Judaism's rejection of an anthropomorphic deity began with the prophets, who explicitly rejected any likeness of God to humans. Their rejection grew further after the Islamic Golden Age in the tenth century, which Maimonides codified in the twelfth century, in his thirteen principles of Jewish faith. In the Ismaili interpretation of Islam, assigning attributes to God as well as negating any attributes from God (via negativa) both qualify as anthropomorphism and are rejected, as God cannot be understood by either assigning attributes to Him or taking them away. The 10th-century Ismaili philosopher Abu Yaqub al-Sijistani suggested the method of double negation; for example: "God is not existent" followed by "God is not non-existent". This glorifies God from any understanding or human comprehension. In secular thought, one of the most notable criticisms began in 1600 with Francis Bacon, who argued against Aristotle's teleology, which declared that everything behaves as it does in order to achieve some end, in order to fulfill itself. Bacon pointed out that achieving ends is a human activity and to attribute it to nature misconstrues it as humanlike. Modern criticisms followed Bacon's ideas such as critiques of Baruch Spinoza and David Hume. The latter, for instance, embedded his arguments in his wider criticism of human religions and specifically demonstrated in what he cited as their "inconsistence" where, on one hand, the Deity is painted in the most sublime colors but, on the other, is degraded to nearly human levels by giving him human infirmities, passions, and prejudices. In Faces in the Clouds, anthropologist Stewart Guthrie proposes that all religions are anthropomorphisms that originate in the brain's tendency to detect the presence or vestiges of other humans in natural phenomena. Some scholars argue that anthropomorphism overestimates the similarity of humans and nonhumans and therefore could not yield accurate accounts. In literature Religious texts There are various examples of personification in both the Hebrew Bible and Christian New Testaments, as well as in the texts of some other religions. Fables Anthropomorphism, also referred to as personification, is a well-established literary device from ancient times. 
The story of "The Hawk and the Nightingale" in Hesiod's Works and Days preceded Aesop's fables by centuries. Collections of linked fables from India, the Jataka Tales and Panchatantra, also employ anthropomorphized animals to illustrate principles of life. Many of the stereotypes of animals that are recognized today, such as the wily fox and the proud lion, can be found in these collections. Aesop's anthropomorphisms were so familiar by the first century CE that they colored the thinking of at least one philosopher: Apollonius noted that the fable was created to teach wisdom through fictions that are meant to be taken as fictions, contrasting them favorably with the poets' stories of the deities that are sometimes taken literally. Aesop, "by announcing a story which everyone knows not to be true, told the truth by the very fact that he did not claim to be relating real events". The same consciousness of the fable as fiction is to be found in other examples across the world, one example being a traditional Ashanti way of beginning tales of the anthropomorphic trickster-spider Anansi: "We do not really mean, we do not really mean that what we are about to say is true. A story, a story; let it come, let it go." Fairy tales Anthropomorphic motifs have been common in fairy tales from the earliest ancient examples set in a mythological context to the great collections of the Brothers Grimm and Perrault. The Tale of Two Brothers (Egypt, 13th century BCE) features several talking cows and in Cupid and Psyche (Rome, 2nd century CE) Zephyrus, the west wind, carries Psyche away. Later an ant feels sorry for her and helps her in her quest. Modern literature Building on the popularity of fables and fairy tales, children's literature began to emerge in the nineteenth century with works such as Alice's Adventures in Wonderland (1865) by Lewis Carroll, The Adventures of Pinocchio (1883) by Carlo Collodi and The Jungle Book (1894) by Rudyard Kipling, all employing anthropomorphic elements. This continued in the twentieth century with many of the most popular titles having anthropomorphic characters, examples being The Tale of Peter Rabbit (1901) and later books by Beatrix Potter; The Wind in the Willows by Kenneth Grahame (1908); Winnie-the-Pooh (1926) and The House at Pooh Corner (1928) by A. A. Milne; and The Lion, the Witch, and the Wardrobe (1950) and the subsequent books in The Chronicles of Narnia series by C. S. Lewis. In many of these stories the animals can be seen as representing facets of human personality and character. As John Rowe Townsend remarks, discussing The Jungle Book in which the boy Mowgli must rely on his new friends the bear Baloo and the black panther Bagheera, "The world of the jungle is in fact both itself and our world as well". A notable work aimed at an adult audience is George Orwell's Animal Farm, in which all the main characters are anthropomorphic animals. Non-animal examples include Rev. W. Awdry's Railway Series stories featuring Thomas the Tank Engine and other anthropomorphic locomotives. The fantasy genre developed from mythological, fairy tale, and Romance motifs sometimes have anthropomorphic animals as characters. The best-selling examples of the genre are The Hobbit (1937) and The Lord of the Rings (1954–1955), both by J. R. R. Tolkien, books peopled with talking creatures such as ravens, spiders, and the dragon Smaug and a multitude of anthropomorphic goblins and elves. John D. 
Rateliff calls this the "Doctor Dolittle Theme" in his book The History of the Hobbit, and Tolkien saw this anthropomorphism as closely linked to the emergence of human language and myth: "...The first men to talk of 'trees and stars' saw things very differently. To them, the world was alive with mythological beings... To them the whole of creation was 'myth-woven and elf-patterned'." Richard Adams developed a distinctive take on anthropomorphic writing in the 1970s: his debut novel, Watership Down (1972), featured rabbits that could talk, with their own distinctive language (Lapine) and mythology, and included a police-state warren, Efrafa. Despite this, Adams attempted to ensure his characters' behavior mirrored that of wild rabbits, engaging in fighting, copulating and defecating, drawing on Ronald Lockley's study The Private Life of the Rabbit as research. Adams returned to anthropomorphic storytelling in his later novels The Plague Dogs (1977) and Traveller (1988). By the 21st century, the children's picture book market had expanded massively. Perhaps a majority of picture books have some kind of anthropomorphism, with popular examples being The Very Hungry Caterpillar (1969) by Eric Carle and The Gruffalo (1999) by Julia Donaldson. Anthropomorphism in literature and other media led to a sub-culture known as furry fandom, which promotes and creates stories and artwork involving anthropomorphic animals, and the examination and interpretation of humanity through anthropomorphism. This can often be shortened in searches as "anthro", used by some as an alternative term to "furry". Anthropomorphic characters have also been a staple of the comic book genre. The most prominent is Neil Gaiman's The Sandman, which had a huge impact on how characters that are physical embodiments of concepts are written in the fantasy genre. Other examples include the mature Hellblazer (with personified political and moral ideas) and Fables, along with its spin-off series Jack of Fables, which was unique for its anthropomorphic representations of literary techniques and genres. Various Japanese manga and anime have used anthropomorphism as the basis of their stories. Examples include Squid Girl (anthropomorphized squid), Hetalia: Axis Powers (personified countries), Upotte!! (personified guns), Arpeggio of Blue Steel and Kancolle (personified ships). In film Some of the most notable examples are the Walt Disney characters the Magic Carpet from Disney's Aladdin franchise, Mickey Mouse, Donald Duck, Goofy, and Oswald the Lucky Rabbit; the Looney Tunes characters Bugs Bunny, Daffy Duck, and Porky Pig; and an array of others from the 1920s to present day. In the Disney/Pixar franchises Cars and Planes, all the characters are anthropomorphic vehicles, while in Toy Story, they are anthropomorphic toys. Other Pixar franchises feature anthropomorphic characters as well: Monsters, Inc. features anthropomorphic monsters, and Finding Nemo features anthropomorphic sea animals (such as fish, sharks, and whales). Timothy Laurie has discussed the anthropomorphic animal characters of the DreamWorks franchise Madagascar. Other DreamWorks franchises, like Shrek, feature fairy tale characters, and Blue Sky Studios (20th Century Fox) franchises like Ice Age feature anthropomorphic extinct animals. SpongeBob SquarePants likewise features anthropomorphic sea animals (such as sea sponges, starfish, octopuses, crabs, whales, puffer fish, lobsters, and zooplankton). 
All of the characters in Walt Disney Animation Studios' Zootopia (2016) are anthropomorphic animals living in an entirely nonhuman civilization. The live-action/animated franchise Alvin and the Chipmunks by 20th Century Fox centers on anthropomorphic talking and singing chipmunks. The Chipettes, a group of female singing chipmunks, also feature prominently in some of the franchise's films. In television Since the 1960s, anthropomorphism has also been represented in various animated television shows such as Biker Mice From Mars (1993–1996) and SWAT Kats: The Radical Squadron (1993–1995). Teenage Mutant Ninja Turtles, first aired in 1987, features four pizza-loving anthropomorphic turtles with a great knowledge of ninjutsu, led by their anthropomorphic rat sensei, Master Splinter. Nickelodeon's longest-running animated TV series, SpongeBob SquarePants (1999–present), revolves around SpongeBob, a yellow sea sponge, living in the underwater town of Bikini Bottom with his anthropomorphic marine life friends. Cartoon Network's animated series The Amazing World of Gumball (2011–2019) is about anthropomorphic animals and inanimate objects. All of the characters in Hasbro Studios' TV series My Little Pony: Friendship Is Magic (2010–2019) are anthropomorphic fantasy creatures, with most of them being ponies living in the pony-inhabited land of Equestria. The Netflix original series Centaurworld focuses on a warhorse who gets transported to a Dr. Seuss-like world full of centaurs who possess the bottom half of any animal, as opposed to the traditional horse. In the American animated TV series Family Guy, one of the show's main characters, Brian, is a dog. Brian shows many human characteristics – he walks upright, talks, smokes, and drinks Martinis – but also acts like a normal dog in other ways; for example, he cannot resist chasing a ball and barks at the mailman, believing him to be a threat. In a similar case, BoJack Horseman, an American Netflix adult animated black comedy series, takes place in an alternate world where humans and anthropomorphic animals live side by side, and centers around the life of BoJack Horseman, a humanoid horse who was a one-hit wonder on a popular 1990s sitcom, Horsin' Around, and lives off the show's residuals in the present day. Multiple main characters of the series are other animals who possess human body form and other human-like traits and identity as well. Mr. Peanutbutter, a humanoid dog, lives a mostly human life: he speaks American English, walks upright, owns a house, drives a car, is in a romantic relationship with a human woman, Diane (in this series, as animals and humans are seen as equal, relationships like this are not seen as bestiality but as ordinary human sexuality), and has a successful career in television. However, he also exhibits dog traits: he sleeps in a human-size dog bed, gets arrested for having a drag race with the mailman, and is once forced to wear a dog cone after he gets stitches in his arm. The PBS Kids animated series Let's Go Luna! centers on an anthropomorphic female Moon who speaks, sings, and dances. She comes down out of the sky to serve as a tutor of international culture to the three main characters: a boy frog and wombat and a girl butterfly, who are supposed to be preschool children traveling, with a circus run by their parents, a world populated by anthropomorphic animals. 
The French-Belgian animated series Mush-Mush & the Mushables takes place in a world inhabited by Mushables, which are anthropomorphic fungi, along with other critters such as beetles, snails, and frogs. In video games Sonic the Hedgehog, a video game franchise debuting in 1991, features a speedy blue hedgehog as the main protagonist. This series' characters are almost all anthropomorphic animals such as foxes, cats, and other hedgehogs who are able to speak and walk on their hind legs like normal humans. As with most anthropomorphisms of animals, clothing is of little or no importance: some characters are fully clothed while others wear only shoes and gloves. Another popular example in video games is the Super Mario series, debuting in 1985 with Super Mario Bros., whose main antagonist belongs to a fictional species of anthropomorphic turtle-like creatures known as Koopas. Other games in the series, and others in the greater Mario franchise, introduced similar characters such as Yoshi, Donkey Kong and many others. Art history Claes Oldenburg Claes Oldenburg's soft sculptures are commonly described as anthropomorphic. His sculptures, which depict common household objects, were considered Pop Art. Reproducing these objects, often at a greater size than the original, Oldenburg created his sculptures out of soft materials. The anthropomorphic qualities of the sculptures were mainly in their sagging and malleable exterior which mirrored the not-so-idealistic forms of the human body. In "Soft Light Switches" Oldenburg creates a household light switch out of vinyl. The two identical switches, in a dulled orange, insinuate nipples. The soft vinyl references the aging process as the sculpture wrinkles and sinks with time. Minimalism In the essay "Art and Objecthood", Michael Fried makes the case that "literalist art" (minimalism) becomes theatrical by means of anthropomorphism. The viewer engages the minimalist work, not as an autonomous art object, but as a theatrical interaction. Fried references a conversation in which Tony Smith answers questions about his six-foot cube, "Die". Fried implies an anthropomorphic connection by means of "a surrogate person, that is, a kind of statue." The minimalist decision of "hollowness" in much of their work was also considered by Fried to be "blatantly anthropomorphic". This "hollowness" contributes to the idea of a separate inside; an idea mirrored in the human form. Fried considers the Literalist art's "hollowness" to be "biomorphic" as it references a living organism. Post-minimalism Curator Lucy Lippard's 1966 show Eccentric Abstraction set the stage for Briony Fer's writing on a post-minimalist anthropomorphism. Reacting to Fried's interpretation of minimalist art's "looming presence of objects which appear as actors might on a stage", Fer links the artists in Eccentric Abstraction to a new form of anthropomorphism. She puts forth the thoughts of Surrealist writer Roger Caillois, who speaks of the "spatial lure of the subject, the way in which the subject could inhabit their surroundings." Caillois uses the example of an insect who "through camouflage does so in order to become invisible... and loses its distinctness." For Fer, the anthropomorphic qualities of imitation found in the erotic, organic sculptures of artists Eva Hesse and Louise Bourgeois are not necessarily for strictly "mimetic" purposes. Instead, like the insect, the work must come into being in the "scopic field... which we cannot view from outside." 
Mascots For branding, merchandising, and representation, figures known as mascots are now often employed to personify sports teams, corporations, and major events such as the World's Fair and the Olympics. These personifications may be simple human or animal figures, such as Ronald McDonald or the donkey that represents the United States's Democratic Party. Other times, they are anthropomorphic items, such as "Clippy" or the "Michelin Man". Most often, they are anthropomorphic animals such as the Energizer Bunny or the San Diego Chicken. The practice is particularly widespread in Japan, where cities, regions, and companies all have mascots, collectively known as yuru-chara. Two of the most popular are Kumamon (a bear who represents Kumamoto Prefecture) and Funassyi (a pear who represents Funabashi, a suburb of Tokyo). Animals Other examples of anthropomorphism include the attribution of human traits to animals, especially domesticated pets such as dogs and cats. Examples of this include thinking a dog is smiling simply because it is showing his teeth, or a cat mourns for a dead owner. Anthropomorphism may be beneficial to the welfare of animals. A 2012 study by Butterfield et al. found that utilizing anthropomorphic language when describing dogs created a greater willingness to help them in situations of distress. Previous studies have shown that individuals who attribute human characteristics to animals are less willing to eat them, and that the degree to which individuals perceive minds in other animals predicts the moral concern afforded to them. It is possible that anthropomorphism leads humans to like non-humans more when they have apparent human qualities, since perceived similarity has been shown to increase prosocial behavior toward other humans. A study of how animal behaviors were discussed on the television series Life found that the script very often used anthropomorphisms. In science In science, the use of anthropomorphic language that suggests animals have intentions and emotions has traditionally been deprecated as indicating a lack of objectivity. Biologists have been warned to avoid assumptions that animals share any of the same mental, social, and emotional capacities of humans, and to rely instead on strictly observable evidence. In 1927 Ivan Pavlov wrote that animals should be considered "without any need to resort to fantastic speculations as to the existence of any possible subjective states". More recently, The Oxford companion to animal behaviour (1987) advised that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion". Some scientists, like William M Wheeler (writing apologetically of his use of anthropomorphism in 1911), have used anthropomorphic language in metaphor to make subjects more humanly comprehensible or memorable. Despite the impact of Charles Darwin's ideas in The Expression of the Emotions in Man and Animals (Konrad Lorenz in 1965 called him a "patron saint" of ethology) ethology has generally focused on behavior, not on emotion in animals. The study of great apes in their own environment and in captivity has changed attitudes to anthropomorphism. In the 1960s the three so-called "Leakey's Angels", Jane Goodall studying chimpanzees, Dian Fossey studying gorillas and Biruté Galdikas studying orangutans, were all accused of "that worst of ethological sins – anthropomorphism". 
The charge was brought about by their descriptions of the great apes in the field; it is now more widely accepted that empathy has an important part to play in research. De Waal has written: "To endow animals with human emotions has long been a scientific taboo. But if we do not, we risk missing something fundamental, about both animals and us." Alongside this has come increasing awareness of the linguistic abilities of the great apes and the recognition that they are tool-makers and have individuality and culture. Writing of cats in 1992, veterinarian Bruce Fogle points to the fact that "both humans and cats have identical neurochemicals and regions in the brain responsible for emotion" as evidence that "it is not anthropomorphic to credit cats with emotions such as jealousy". In computing In science fiction, an artificially intelligent computer or robot, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is an example of anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction. One example of anthropomorphism would be to believe that one's computer is angry at them because they insulted it; another would be to believe that an intelligent robot would naturally find a woman attractive and be driven to mate with her. Scholars sometimes disagree with each other about whether a particular prediction about an artificial intelligence's behavior is logical, or whether the prediction constitutes illogical anthropomorphism. An example that might initially be considered anthropomorphism, but is in fact a logical statement about an artificial intelligence's behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here, a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution. The conscious use of anthropomorphic metaphor is not intrinsically unwise; ascribing mental processes to the computer, under the proper circumstances, may serve the same purpose as it does when humans do it to other people: it may help persons to understand what the computer will do, how their actions will affect the computer, how to compare computers with humans, and conceivably how to design computer programs. However, inappropriate use of anthropomorphic metaphors can result in false beliefs about the behavior of computers, for example by causing people to overestimate how "flexible" computers are. According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking." Computers overturn the childhood hierarchical taxonomy of "stones (non-living) → plants (living) → animals (conscious) → humans (rational)", by introducing a non-human "actor" that appears to regularly behave rationally. 
Much of computing terminology derives from anthropomorphic metaphors: computers can "read", "write", or "catch a virus". Information technology presents no clear correspondence with any other entities in the world besides humans; the options are either to leverage an emotional, imprecise human metaphor, or to reject imprecise metaphor and make use of more precise, domain-specific technical terms. People often grant an unnecessary social role to computers during interactions. The underlying causes are debated; Youngme Moon and Clifford Nass propose that humans are emotionally, intellectually and physiologically biased toward social activity, and so when presented with even tiny social cues, deeply infused social responses are triggered automatically. This may allow incorporation of anthropomorphic features into computers/robots to enable more familiar "social" interactions, making them easier to use. Alleged examples of anthropomorphism toward AI have included: Google engineer Blake Lemoine's widely derided 2022 claim that the Google LaMDA chatbot was sentient; the 2017 granting of honorary Saudi Arabian citizenship to the robot Sophia; and the reactions to the chatbot ELIZA in the 1960s. Psychology Foundational research In psychology, the first empirical study of anthropomorphism was conducted in 1944 by Fritz Heider and Marianne Simmel. In the first part of this experiment, the researchers showed a 2-and-a-half-minute long animation of several shapes moving around on the screen in varying directions at various speeds. When subjects were asked to describe what they saw, they gave detailed accounts of the intentions and personalities of the shapes. For instance, the large triangle was characterized as a bully, chasing the other two shapes until they could trick the large triangle and escape. The researchers concluded that when people see objects making motions for which there is no obvious cause, they view these objects as intentional agents (individuals that deliberately make choices to achieve goals). Modern psychologists generally characterize anthropomorphism as a cognitive bias. That is, anthropomorphism is a cognitive process by which people use their schemas about other humans as a basis for inferring the properties of non-human entities in order to make efficient judgements about the environment, even if those inferences are not always accurate. Schemas about humans are used as the basis because this knowledge is acquired early in life, is more detailed than knowledge about non-human entities, and is more readily accessible in memory. Anthropomorphism can also function as a strategy to cope with loneliness when other human connections are not available. Three-factor theory Since making inferences requires cognitive effort, anthropomorphism is likely to be triggered only when certain aspects about a person and their environment are true. Psychologist Adam Waytz and his colleagues created a three-factor theory of anthropomorphism to describe these aspects and predict when people are most likely to anthropomorphize. The three factors are: Elicited agent knowledge, or the amount of prior knowledge held about an object and the extent to which that knowledge is called to mind. Effectance, or the drive to interact with and understand one's environment. Sociality, the need to establish social connections. When elicited agent knowledge is low and effectance and sociality are high, people are more likely to anthropomorphize. 
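As a rough illustration of the qualitative prediction above, the toy function below restates the three-factor theory in code; the numeric scale and the scoring formula are invented for this sketch and do not correspond to any validated psychological model.

```python
def anthropomorphism_likelihood(elicited_agent_knowledge, effectance, sociality):
    """Toy restatement of Waytz et al.'s three-factor theory.

    Inputs are rough ratings from 0 (low) to 1 (high). This is not a
    validated model; it only encodes the qualitative claim that low agent
    knowledge combined with high effectance and sociality makes
    anthropomorphizing more likely.
    """
    return (1 - elicited_agent_knowledge) * (effectance + sociality) / 2

# A lonely person trying to make sense of an unfamiliar gadget:
print(anthropomorphism_likelihood(elicited_agent_knowledge=0.1,
                                  effectance=0.9, sociality=0.8))  # high (about 0.77)
# An engineer who knows the device well and has rich social contact:
print(anthropomorphism_likelihood(elicited_agent_knowledge=0.9,
                                  effectance=0.3, sociality=0.2))  # low (about 0.03)
```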
Various dispositional, situational, developmental, and cultural variables can affect these three factors, such as need for cognition, social disconnection, cultural ideologies, uncertainty avoidance, etc. Developmental perspective Children appear to anthropomorphize and use egocentric reasoning from an early age and use it more frequently than adults. Examples of this are describing a storm cloud as "angry" or drawing flowers with faces. This penchant for anthropomorphism is likely because children have acquired vast amounts of socialization, but not as much experience with specific non-human entities, and thus have less developed alternative schemas for their environment. In contrast, autistic children may tend to describe anthropomorphized objects in purely mechanical terms (that is, in terms of what they do) because they have difficulties with theory of mind (ToM) according to past research. A 2018 study has shown that autistic people are more prone to object personification, suggesting that autistic empathy and ToM may be not only more complex but also more all-encompassing. The double empathy problem challenges the notion that autistic people have difficulties with ToM. Effect on learning Anthropomorphism can be used to assist learning. Specifically, anthropomorphized words and describing scientific concepts with intentionality can improve later recall of these concepts. In mental health In people with depression, social anxiety, or other mental illnesses, emotional support animals are a useful component of treatment partially because anthropomorphism of these animals can satisfy the patients' need for social connection. In marketing Anthropomorphism of inanimate objects can affect product buying behavior. When products seem to resemble a human schema, such as the front of a car resembling a face, potential buyers evaluate that product more positively than if they do not anthropomorphize the object. People also tend to trust robots to do more complex tasks such as driving a car or childcare if the robot resembles humans in ways such as having a face, voice, and name; mimicking human motions; expressing emotion; and displaying some variability in behavior. See also Aniconism – antithetic concept Animism Anthropic principle Anthropocentrism Anthropology Anthropomorphic maps Anthropopathism Anthropomorphized food Cynocephaly Furry fandom Great Chain of Being Human-animal hybrid Humanoid Moe anthropomorphism National personification Nature fakers controversy Pareidolia – seeing faces in everyday objects Pathetic fallacy Prosopopoeia Speciesism Talking animals in fiction Tashbih Zoomorphism 
Western values
"Western values" are a set of values strongly associated with the West which generally posit the importance of an individualistic culture. They are often seen as stemming from Judeo-Christian values and the Age of Enlightenment, although since the 20th century they have become marked by other sociopolitical aspects of the West, such as free-market capitalism, feminism, liberal democracy, the scientific method, and the legacy of the sexual revolution. Background Western values were historically adopted around the world in large part due to colonialism and post-colonial dominance by the West, and are influential in the discourse around and justification of these phenomena. This has induced some opposition to Western values and spurred a search for alternative values in some countries, though Western values are argued by some to have underpinned non-Western peoples' quest for human rights, and to be more global in character than often assumed. The World wars forced the West to introspect on its application of its values to itself, as internal warfare and the rise of the Nazis within Europe, who openly opposed Western values, had greatly weakened it; after World War II and the start of the post-colonial era, global institutions such as the United Nations were founded with a basis in Western values. Western values have been used to explain a variety of phenomena relating to the global dominance and success of the West, such as the emergence of modern science and technology. They have been disseminated around the world through several mediums, such as through the spread of Western sports. The global esteem which Western values are held in has been considered by some to be leading to a harmful decline of non-Western cultures and values. Reception A constant theme of debate around Western values has been around their universal applicability or lack thereof; in modern times, as various non-Western nations have risen, they have sought to oppose certain Western values, with even Western countries also backing down to some extent from championing its own values in what some see as a contested transition to a post-Western era of the world. Western values is also often contrasted with Asian values of the East, which among other factors highly posits communitarianism and a deference to authority instead. The adoption of Western values among immigrants to the West has also been scrutinised, with some Westerners opposing immigration from the Muslim world or other parts of the non-West due to a perceived incompatibility of values; others support immigration on the basis of multiculturalism. See also Anti-Western sentiment Asian values Eurocentrism European values Western education References Western culture Sociology
Anthropometry
Anthropometry refers to the measurement of the human individual. An early tool of physical anthropology, it has been used for identification, for the purposes of understanding human physical variation, in paleoanthropology and in various attempts to correlate physical with racial and psychological traits. Anthropometry involves the systematic measurement of the physical properties of the human body, primarily dimensional descriptors of body size and shape. Because commonly used methods for analysing living standards were often inadequate, anthropometric history has become a useful tool for historians in answering questions that interest them. Today, anthropometry plays an important role in industrial design, clothing design, ergonomics and architecture, where statistical data about the distribution of body dimensions in the population are used to optimize products. Changes in lifestyles, nutrition, and ethnic composition of populations lead to changes in the distribution of body dimensions (e.g. the rise in obesity) and require regular updating of anthropometric data collections. History The history of anthropometry includes and spans various concepts, both scientific and pseudoscientific, such as craniometry, paleoanthropology, biological anthropology, phrenology, physiognomy, forensics, criminology, phylogeography, human origins, and cranio-facial description, as well as correlations between various anthropometrics and personal identity, mental typology, personality, cranial vault and brain size, and other factors. At various times in history, applications of anthropometry have ranged from accurate scientific description and epidemiological analysis to rationales for eugenics and overtly racist social movements. One of its misuses was the discredited pseudoscience of phrenology. Individual variation Auxologic Auxologic is a broad term covering the study of all aspects of human physical growth. Height Human height varies greatly between individuals and across populations for a variety of complex biological, genetic, and environmental factors, among others. Due to methodological and practical problems, its measurement is also subject to considerable error in statistical sampling. The average height in genetically and environmentally homogeneous populations is often proportional across a large number of individuals. Exceptional height variation (around 20% deviation from a population's average) within such a population is sometimes due to gigantism or dwarfism, which are caused by specific genes or endocrine abnormalities. It is important to note that a great degree of variation occurs between even the most 'common' bodies (66% of the population), and as such no person can be considered 'average'. In the most extreme population comparisons, for example, the average female height in Bolivia is among the shortest in the world, while the average male height in the Dinaric Alps is among the tallest. Similarly, individual extremes range from the shortest verified adult, Chandra Bahadur Dangi, to the tallest, Robert Wadlow. The age range where most females stop growing is 15–18 years and the age range where most males stop growing is 18–21 years. Weight Human weight varies extensively both individually and across populations, with the most extreme documented adult examples being Lucia Zarate, among the lightest persons ever recorded, and Jon Brower Minnoch, the heaviest person ever recorded, and with national averages ranging from lows in Bangladesh to highs in Micronesia. 
Organs Adult brain size varies considerably in both females and males, with the male average somewhat larger than the female average. The right cerebral hemisphere is typically larger than the left, whereas the cerebellar hemispheres are typically of more similar size. The size of the human stomach also varies significantly in adults, with one study showing wide ranges in both volume and weight. Male and female genitalia exhibit considerable individual variation, with penis size differing substantially and vaginal size differing significantly in healthy adults. Aesthetic Human beauty and physical attractiveness have been preoccupations throughout history which often intersect with anthropometric standards. Cosmetology, facial symmetry, and waist–hip ratio are three such examples where measurements are commonly thought to be fundamental. Evolutionary science Anthropometric studies today are conducted to investigate the evolutionary significance of differences in body proportion between populations whose ancestors lived in different environments. Human populations exhibit climatic variation patterns similar to those of other large-bodied mammals, following Bergmann's rule, which states that individuals in cold climates will tend to be larger than ones in warm climates, and Allen's rule, which states that individuals in cold climates will tend to have shorter, stubbier limbs than those in warm climates. On a microevolutionary level, anthropologists use anthropometric variation to reconstruct small-scale population history. For instance, John Relethford's studies of early 20th-century anthropometric data from Ireland show that the geographical patterning of body proportions still exhibits traces of the invasions by the English and Norse centuries ago. Similarly, anthropometric indices, namely comparisons of human stature, have been used to illustrate anthropometric trends. One such study was conducted by Jörg Baten and Sandew Hira and was based on the anthropological finding that human height is strongly determined by the quality of nutrition, which was historically higher in more developed countries. The research was based on datasets for Southern Chinese contract migrants who were sent to Suriname and Indonesia and included 13,000 individuals. Measuring instruments 3D body scanners Today anthropometry can be performed with three-dimensional scanners. A global collaborative study to examine the uses of three-dimensional scanners for health care was launched in March 2007. The Body Benchmark Study will investigate the use of three-dimensional scanners to calculate volumes and segmental volumes of an individual body scan. The aim is to establish whether the Body Volume Index has the potential to be used as a long-term computer-based anthropometric measurement for health care. In 2001 the UK conducted the largest sizing survey to date using scanners. Since then several national surveys have followed in the UK's pioneering steps, notably SizeUSA, SizeMexico, and SizeThailand, the latter still ongoing. SizeUK showed that the nation had become taller and heavier but not as much as expected. Since 1951, when the last women's survey had taken place, the average weight for women had gone up from 62 to 65 kg. However, recent research has shown that the posture of the participant significantly influences the measurements taken, that the precision of 3D body scanners may not be high enough for industry tolerances, and that the measurements taken may not be relevant to all applications (e.g. garment construction). 
Despite the current limitations described above, 3D body scanning has been suggested as a replacement for body-measurement prediction technologies, which (despite their great appeal) have yet to be as reliable as real human data. Baropodographic Baropodographic devices fall into two main categories: (i) floor-based and (ii) in-shoe. The underlying technology is diverse, ranging from piezoelectric sensor arrays to light refraction (Gefen 2007; Rosenbaum and Becker 1997), but the ultimate form of the data generated by all modern technologies is either a 2D image or a 2D image time series of the pressures acting under the plantar surface of the foot. From these data, other variables may be calculated. The spatial and temporal resolutions of the images generated by commercial pedobarographic systems range from approximately 3 to 10 mm and 25 to 500 Hz, respectively. Finer resolution is limited by current sensor technology. Such resolutions yield approximately 500 sensors within the contact area of a typical adult human foot (surface area of approximately 100 cm2). For a stance phase duration of approximately 0.6 seconds during normal walking, approximately 150,000 pressure values, depending on the hardware specifications, are recorded for each step. Neuroimaging Direct measurements involve examinations of brains from corpses, or more recently, imaging techniques such as MRI, which can be used on living persons. Such measurements are used in research on neuroscience and intelligence. Brain volume data and other craniometric data are used in mainstream science to compare modern-day animal species and to analyze the evolution of the human species in archeology. Epidemiology and medical anthropology Anthropometric measurements also have uses in epidemiology and medical anthropology, for example in helping to determine the relationship between various body measurements (height, weight, percentage body fat, etc.) and medical outcomes. Anthropometric measurements are frequently used to diagnose malnutrition in resource-poor clinical settings. Forensics and criminology Forensic anthropologists study the human skeleton in a legal setting. A forensic anthropologist can assist in the identification of a decedent through various skeletal analyses that produce a biological profile. Forensic anthropologists utilize the Fordisc program to help in the interpretation of craniofacial measurements with regard to ancestry determination. One part of a biological profile is a person's ancestral affinity. People with significant European or Middle Eastern ancestry generally have little to no prognathism; a relatively long and narrow face; a prominent brow ridge that protrudes forward from the forehead; a narrow, tear-shaped nasal cavity; a "silled" nasal aperture; tower-shaped nasal bones; a triangular-shaped palate; and an angular and sloping eye orbit shape. People with considerable African ancestry typically have a broad and round nasal cavity; no dam or nasal sill; Quonset hut-shaped nasal bones; notable facial projection in the jaw and mouth area (prognathism); a rectangular-shaped palate; and a square or rectangular eye orbit shape.
People with considerable East Asian ancestry are often characterized by relatively small prognathism; no nasal sill or dam; an oval-shaped nasal cavity; tent-shaped nasal bones; a horseshoe-shaped palate; and a rounded and non-sloping eye orbit shape. Many of these characteristics are only a matter of frequency among those of particular ancestries: the presence or absence of one or more of them does not automatically classify an individual into an ancestral group. Ergonomics Ergonomics professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity. This includes physical ergonomics in relation to human anatomy and physiological and biomechanical characteristics; cognitive ergonomics in relation to perception, memory, reasoning, and motor response, including human–computer interaction, mental workloads, decision making, skilled performance, human reliability, work stress, training, and user experiences; organizational ergonomics in relation to metrics of communication, crew resource management, work design, schedules, teamwork, participation, community, cooperative work, new work programs, virtual organizations, and telework; environmental ergonomics in relation to human metrics affected by climate, temperature, pressure, vibration, and light; visual ergonomics; and others. Biometrics Biometrics refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological versus behavioral characteristics. Subclasses include dermatoglyphics and soft biometrics. United States military research The US military has conducted over 40 anthropometric surveys of U.S. military personnel between 1945 and 1988, including the 1988 Army Anthropometric Survey (ANSUR) of men and women with its 240 measures. Statistical data from these surveys encompass over 75,000 individuals. Civilian American and European Surface Anthropometry Resource Project CAESAR began in 1997 as a partnership between government (represented by the US Air Force and NATO) and industry (represented by SAE International) to collect and organize the most extensive sampling of consumer body measurements for comparison. The project collected and organized data on 2,400 U.S. and Canadian and 2,000 European civilians, and a database was developed. This database records the anthropometric variability of men and women, aged 18–65, of various weights, ethnic groups, genders, geographic regions, and socio-economic statuses. The study was conducted from April 1998 to early 2000 and included three scans per person: a standing pose, a full-coverage pose, and a relaxed seated pose. Data collection methods were standardized and documented so that the database can be consistently expanded and updated. High-resolution measurements of body surfaces were made using 3D surface anthropometry. This technology can capture hundreds of thousands of points in three dimensions on the human body surface in a few seconds. It has many advantages over the old measurement system using tape measures, anthropometers, and other similar instruments.
It provides detail about the surface shape as well as the 3D locations of measurements relative to each other, and enables easy transfer to Computer-Aided Design (CAD) or Manufacturing (CAM) tools. The resulting scan is independent of the measurer, making it easier to standardize. Automatic landmark recognition (ALR) technology was used to extract anatomical landmarks from the 3D body scans automatically. Eighty landmarks were placed on each subject. More than 100 univariate measures were provided, over 60 from the scan and approximately 40 using traditional measurements. Demographic data such as age, ethnic group, gender, geographic region, education level, present occupation, family income, and more were also captured (Robinette and Daanen, "Precision of the CAESAR scan-extracted measurements", Applied Ergonomics, vol. 37, issue 3, May 2007, pp. 259–265). Fashion design Scientists working for private companies and government agencies conduct anthropometric studies to determine a range of sizes for clothing and other items. For instance, measurements of the foot are used in the manufacture and sale of footwear: measurement devices may be used either to determine a retail shoe size directly (e.g. the Brannock Device) or to determine the detailed dimensions of the foot for custom manufacture (e.g. ALINEr). See also References Further reading Anthropometric Survey of Army Personnel: Methods and Summary Statistics 1988 ISO 7250: Basic human body measurements for technological design, International Organization for Standardization, 1998. ISO 8559: Garment construction and anthropometric surveys — Body dimensions, International Organization for Standardization, 1989. ISO 15535: General requirements for establishing anthropometric databases, International Organization for Standardization, 2000. ISO 15537: Principles for selecting and using test persons for testing anthropometric aspects of industrial products and designs, International Organization for Standardization, 2003. ISO 20685: 3-D scanning methodologies for internationally compatible anthropometric databases, International Organization for Standardization, 2005. External links Anthropometry at the Centers for Disease Control and Prevention Anthropometry and Biomechanics at NASA Anthropometry data at faculty of Industrial Design Engineering at Delft University of Technology Manual for Obtaining Anthropometric Measurements Free Full Text Prepared for the US Access Board: Anthropometry of Wheeled Mobility Project Report Free Full Text Civilian American and European Surface Anthropometry Resource Project—CAESAR at SAE International Biological anthropology Biometrics Ergonomics Forensic disciplines Human anatomy Human body Measurement Medical imaging Physiognomy Physiology Racism
Pre-Columbian era
In the history of the Americas, the pre-Columbian era, also known as the pre-contact era, spans from the initial peopling of the Americas in the Upper Paleolithic to the onset of European colonization, which began with Christopher Columbus's voyage in 1492. This era encompasses the history of Indigenous cultures prior to significant European influence, which in some cases did not occur until decades or even centuries after Columbus's arrival. During the pre-Columbian era, many civilizations developed permanent settlements, cities, agricultural practices, civic and monumental architecture, major earthworks, and complex societal hierarchies. Some of these civilizations had declined by the time of the establishment of the first permanent European colonies, around the late 16th to early 17th centuries, and are known primarily through archaeological research of the Americas and oral histories. Other civilizations, contemporaneous with the colonial period, were documented in European accounts of the time. For instance, the Maya civilization maintained written records, which were often destroyed by Christian Europeans such as Diego de Landa, who viewed them as pagan but sought to preserve native histories. Despite the destruction, a few original documents have survived, and others were transcribed or translated into Spanish, providing modern historians with valuable insights into ancient cultures and knowledge. Historiography Before the development of archaeology in the 19th century, historians of the pre-Columbian period mainly interpreted the records of the European conquerors and the accounts of early European travelers and antiquaries. It was not until the nineteenth century that the work of people such as John Lloyd Stephens, Eduard Seler, and Alfred Maudslay, and institutions such as the Peabody Museum of Archaeology and Ethnology of Harvard University, led to the reconsideration and criticism of the early European sources. Now, the scholarly study of pre-Columbian cultures is most often based on scientific and multidisciplinary methodologies. Genetics The haplogroup most commonly associated with Indigenous Amerindian genetics is Y-chromosome haplogroup Q1a3a. Researchers have found genetic evidence that the Q1a3a haplogroup has been in South America since at least 18,000 BCE. Y-chromosome DNA, like mtDNA, differs from other nuclear chromosomes in that the majority of the Y-chromosome is unique and does not recombine during meiosis. This has the effect that the historical pattern of mutations can easily be studied. The pattern indicates Indigenous peoples of the Americas experienced two very distinctive genetic episodes: first with the initial peopling of the Americas and second with European colonization of the Americas. The former is the determinant factor for the number of gene lineages and founding haplotypes present in today's Indigenous populations. Human settlement of the Americas occurred in stages from the Bering Sea coastline, with an initial 20,000-year layover on Beringia for the founding population. The microsatellite diversity and distributions of the Y lineage specific to South America indicate that certain Amerindian populations have been isolated since the initial colonization of the region. The Na-Dené, Inuit, and Indigenous Alaskan populations exhibit haplogroup Q-M242 (Y-DNA) mutations, however, and are distinct from other Indigenous peoples with various mtDNA mutations. 
This suggests that the earliest migrants into the northern extremes of North America and Greenland derived from later populations. Settlement of the Americas Asian nomadic Paleo-Indians are thought to have entered the Americas via the Bering Land Bridge (Beringia), now the Bering Strait, and possibly along the coast. Genetic evidence found in Indigenous peoples' maternally inherited mitochondrial DNA (mtDNA) supports the theory of multiple genetic populations migrating from Asia. After crossing the land bridge, they moved southward along the Pacific coast and through an interior ice-free corridor. Throughout millennia, Paleo-Indians spread throughout the rest of North and South America. Exactly when the first people migrated into the Americas is the subject of much debate. One of the earliest identifiable cultures was the Clovis culture, with sites dating from some 13,000 years ago. However, older sites dating back to 20,000 years ago have been claimed. Some genetic studies estimate the colonization of the Americas dates from between 40,000 and 13,000 years ago. The chronology of migration models is currently divided into two general approaches. The first is the short chronology theory with the first movement beyond Alaska into the Americas occurring no earlier than 14,000–17,000 years ago, followed by successive waves of immigrants. The second belief is the long chronology theory, which proposes that the first group of people entered the hemisphere at a much earlier date, possibly 50,000–40,000 years ago or earlier. Artifacts have been found in both North and South America which have been dated to 14,000 years ago, and accordingly humans have been proposed to have reached Cape Horn at the southern tip of South America by this time. In that case, the Inuit would have arrived separately and at a much later date, probably no more than 2,000 years ago, moving across the ice from Siberia into Alaska. North America Lithic and Archaic periods The North American climate was unstable as the ice age receded during the Lithic stage. It finally stabilized about 10,000 years ago; climatic conditions were then very similar to today's. Within this time frame, roughly about the Archaic Period, numerous archaeological cultures have been identified. Lithic stage and early Archaic period The unstable climate led to widespread migration, with early Paleo-Indians soon spreading throughout the Americas, diversifying into many hundreds of culturally distinct tribes. The Paleo-Indians were hunter-gatherers, likely characterized by small, mobile bands consisting of approximately 20 to 50 members of an extended family. These groups moved from place to place as preferred resources were depleted and new supplies were sought. During much of the Paleo-Indian period, bands are thought to have subsisted primarily through hunting now-extinct giant land animals such as mastodon and ancient bison. Paleo-Indian groups carried a variety of tools, including distinctive projectile points and knives, as well as less distinctive butchering and hide-scraping implements. The vastness of the North American continent, and the variety of its climates, ecology, vegetation, fauna, and landforms, led ancient peoples to coalesce into many distinct linguistic and cultural groups. This is reflected in the oral histories of the indigenous peoples, described by a wide range of traditional creation stories which often say that a given people have been living in a certain territory since the creation of the world. 
Throughout thousands of years, paleo-Indian people domesticated, bred, and cultivated many plant species, including crops that now constitute 50–60% of worldwide agriculture. In general, Arctic, Subarctic, and coastal peoples continued to live as hunters and gatherers, while agriculture was adopted in more temperate and sheltered regions, permitting a dramatic rise in population. Middle Archaic period After the migration or migrations, it was several thousand years before the first complex societies arose, the earliest emerging about seven to eight thousand years ago. As early as 5500 BCE, people in the Lower Mississippi Valley at Monte Sano and other sites in present-day Louisiana, Mississippi, and Florida were building complex earthwork mounds, probably for religious purposes. Beginning in the late twentieth century, archeologists have studied, analyzed, and dated these sites, realizing that the earliest complexes were built by hunter-gatherer societies, whose people occupied the sites on a seasonal basis. Watson Brake, a large complex of eleven platform mounds, was constructed beginning in 3400 BCE and added to over 500 years. This has changed earlier assumptions that complex construction arose only after societies had adopted agriculture, and become sedentary, with stratified hierarchy and usually ceramics. These ancient people had organized to build complex mound projects under a different social structure. Late Archaic period Until the accurate dating of Watson Brake and similar sites, the oldest mound complex was thought to be Poverty Point, also located in the Lower Mississippi Valley. Built about 1500 BCE, it is the centerpiece of a culture extending over 100 sites on both sides of the Mississippi. The Poverty Point site has earthworks in the form of six concentric half-circles, divided by radial aisles, together with some mounds. The entire complex is nearly a mile across. Mound building was continued by succeeding cultures, who built numerous sites in the middle Mississippi and Ohio River valleys as well, adding effigy mounds, conical and ridge mounds, and other shapes. Woodland period The Woodland period of North American pre-Columbian cultures lasted from roughly 1000 BCE to 1000 CE. The term was coined in the 1930s and refers to prehistoric sites between the Archaic period and the Mississippian cultures. The Adena culture and the ensuing Hopewell tradition during this period built monumental earthwork architecture and established continent-spanning trade and exchange networks. This period is considered a developmental stage without any massive changes in a short period but instead has a continuous development in stone and bone tools, leatherworking, textile manufacture, tool production, cultivation, and shelter construction. Some Woodland people continued to use spears and atlatls until the end of the period when they were replaced by bows and arrows. Mississippian culture The Mississippian culture was spread across the Southeast and Midwest of what is today the United States, from the Atlantic coast to the edge of the plains, from the Gulf of Mexico to the Upper Midwest, although most intensively in the area along the Mississippi River and Ohio River. One of the distinguishing features of this culture was the construction of complexes of large earthen mounds and grand plazas, continuing the mound-building traditions of earlier cultures. They grew maize and other crops intensively, participated in an extensive trade network, and had a complex stratified society. 
The Mississippians first appeared around 1000 CE, following and developing out of the less agriculturally intensive and less centralized Woodland period. The largest urban site of these people, Cahokia—located near modern East St. Louis, Illinois—may have reached a population of over 20,000. Other chiefdoms were constructed throughout the Southeast, and its trade networks reached to the Great Lakes and the Gulf of Mexico. At its peak, between the 12th and 13th centuries, Cahokia was the most populous city in North America. (Larger cities did exist in Mesoamerica and the Andes.) Monks Mound, the major ceremonial center of Cahokia, remains the largest earthen construction of the prehistoric Americas. The culture reached its peak in about 1200–1400 CE, and in most places, it seems to have been in decline before the arrival of Europeans. Many Mississippian peoples were encountered by the expedition of Hernando de Soto in the 1540s, mostly with disastrous results for both sides. Unlike the Spanish expeditions in Mesoamerica, which conquered vast empires with relatively few men, the de Soto expedition wandered the American Southeast for four years, becoming more bedraggled, losing more men and equipment, and eventually arriving in Mexico as a fraction of its original size. The local people fared much worse though, as the fatalities of diseases introduced by the expedition devastated the populations and produced much social disruption. By the time Europeans returned a hundred years later, nearly all of the Mississippian groups had vanished, and vast swaths of their territory were virtually uninhabited. Ancestral Puebloans The Ancestral Puebloans thrived in what is now the Four Corners region in the United States. It is commonly suggested that the culture of the Ancestral Puebloans emerged during the Early Basketmaker II Era during the 12th century BCE. The Ancestral Puebloans were a complex Oasisamerican society that constructed kivas, multi-story houses, and apartment blocks made from stone and adobe, such as the Cliff Palace of Mesa Verde National Park in Colorado and the Great Houses in Chaco Canyon, New Mexico. The Puebloans also constructed a road system that stretched from Chaco Canyon to Kutz Canyon in the San Juan Basin. The Ancestral Puebloans are also known as "Anasazi", though the term is controversial, as the present-day Pueblo peoples consider the term to be derogatory, due to the word tracing its origins to a Navajo word meaning "ancestor enemies". Hohokam The Hohokam thrived in the Sonoran desert in what is now the U.S. state of Arizona and the Mexican state of Sonora. The Hohokam were responsible for the construction of a series of irrigation canals that led to the successful establishment of Phoenix, Arizona via the Salt River Project. The Hohokam also established complex settlements such as Snaketown, which served as an important commercial trading center. After 1375 CE, Hohokam society collapsed and the people abandoned their settlements, likely due to drought. Mogollon The Mogollon resided in the present-day states of Arizona, New Mexico, and Texas as well as Sonora and Chihuahua. Like most other cultures in Oasisamerica, the Mogollon constructed sophisticated kivas and cliff dwellings. In the village of Paquimé, the Mogollon are revealed to have housed pens for scarlet macaws, which were introduced from Mesoamerica through trade. Sinagua The Sinagua were hunter-gatherers and agriculturalists who lived in central Arizona. 
Like the Hohokam, they constructed kivas and great houses as well as ballcourts. Several of the Sinagua ruins include Montezuma Castle, Wupatki, and Tuzigoot. Salado The Salado resided in the Tonto Basin in southeastern Arizona from 1150 CE to the 15th century. Archaeological evidence suggests that they traded with far-away cultures, as evidenced by the presence of seashells from the Gulf of California and macaw feathers from Mexico. Most of the cliff dwellings constructed by the Salado are primarily located in Tonto National Monument. Iroquois The Iroquois League of Nations or "People of the Long House" was a politically advanced, democratic society, which is thought by some historians to have influenced the United States Constitution, with the Senate passing a resolution to this effect in 1988. Other historians have contested this interpretation and believe the impact was minimal or did not exist, pointing to numerous differences between the two systems and the ample precedents for the constitution in European political thought. Calusa The Calusa were a complex paramountcy/kingdom that resided in southern Florida. Instead of agriculture, the Calusa economy relied on abundant fishing. According to Spanish sources, the "king's house" at Mound Key was large enough to house 2,000 people. The Calusa ultimately collapsed into extinction at around 1750 after succumbing to diseases introduced by the Spanish colonists. Wichita The Wichita people were a loose confederation that consisted of sedentary agriculturalists and hunter-gatherers who resided in the eastern Great Plains. They lived in permanent settlements and even established a city called Etzanoa, which had a population of 20,000 people. The city was eventually abandoned around the 18th century after it was encountered by Spanish conquistadors Jusepe Gutierrez and Juan de Oñate. Historic tribes When the Europeans arrived, Indigenous peoples of North America had a wide range of lifeways from sedentary, agrarian societies to semi-nomadic hunter-gatherer societies. Many formed new tribes or confederations in response to European colonization. These are often classified by cultural regions, loosely based on geography. These can include the following: Arctic, including Aleut, Inuit, and Yupik peoples Subarctic Northeastern Woodlands Southeastern Woodlands Great Plains Great Basin Northwest Plateau Northwest Coast California Southwest Numerous pre-Columbian societies were sedentary, such as the Tlingit, Haida, Chumash, Mandan, Hidatsa, and others, and some established large settlements, even cities, such as Cahokia, in what is now Illinois. Mesoamerica Mesoamerica is the region extending from central Mexico south to the northwestern border of Costa Rica that gave rise to a group of stratified, culturally related agrarian civilizations spanning an approximately 3,000-year period before the visits to the Caribbean by Christopher Columbus. Mesoamerican is the adjective generally used to refer to that group of pre-Columbian cultures. This refers to an environmental area occupied by an assortment of ancient cultures that shared religious beliefs, art, architecture, and technology in the Americas for more than three thousand years. Between 2000 and 300 BCE, complex cultures began to form in Mesoamerica. Some matured into advanced pre-Columbian Mesoamerican civilizations such as the Olmec, Teotihuacan, Mayas, Zapotecs, Mixtecs, Huastecs, Purepecha, Toltecs, and Mexica/Aztecs. 
The Mexica civilization is also known as the Aztec Triple Alliance since they were three smaller kingdoms loosely united together. These Indigenous civilizations are credited with many inventions: building pyramid temples, mathematics, astronomy, medicine, writing, highly accurate calendars, fine arts, intensive agriculture, engineering, an abacus calculator, and complex theology. They also invented the wheel, but it was used solely as a toy. In addition, they used native copper, silver, and gold for metalworking. Archaic inscriptions on rocks and rock walls all over northern Mexico (especially in the state of Nuevo León) demonstrate an early propensity for counting. Their number system was base 20 and included zero. These early count markings were associated with astronomical events and underscore the influence that astronomical activities had upon Mesoamerican people before the arrival of Europeans. Many of the later Mesoamerican civilizations carefully built their cities and ceremonial centers according to specific astronomical events. The biggest Mesoamerican cities, such as Teotihuacan, Tenochtitlan, and Cholula, were among the largest in the world. These cities grew as centers of commerce, ideas, ceremonies, and theology, and they radiated influence outwards onto neighboring cultures in central Mexico. While many city-states, kingdoms, and empires competed with one another for power and prestige, Mesoamerica can be said to have had five major civilizations: the Olmecs, Teotihuacan, the Toltecs, the Mexica, and the Mayas. These civilizations (except for the politically fragmented Maya) extended their reach across Mesoamerica—and beyond—like no others. They consolidated power and distributed influence in matters of trade, art, politics, technology, and theology. Other regional power players made economic and political alliances with these civilizations over 4,000 years. Many made war with them, but almost all peoples found themselves within one of their spheres of influence. Regional communications in ancient Mesoamerica have been the subject of considerable research. There is evidence of trade routes starting as far north as the Mexico Central Plateau, and going down to the Pacific coast. These trade routes and cultural contacts then went on as far as Central America. These networks operated with various interruptions from pre-Olmec times and up to the Late Classical Period (600–900 CE). Olmec civilization The earliest known civilization in Mesoamerica is the Olmec. This civilization established the cultural blueprint by which all succeeding indigenous civilizations would follow in Mexico. Pre-Olmec civilization began with the production of pottery in abundance, around 2300 BCE in the Grijalva River delta. Between 1600 and 1500 BCE, the Olmec civilization had begun, with the consolidation of power at their capital, a site today known as San Lorenzo Tenochtitlán near the coast in southeast Veracruz. The Olmec influence extended across Mexico, into Central America, and along the Gulf of Mexico. They transformed many peoples' thinking toward a new way of government, pyramid temples, writing, astronomy, art, mathematics, economics, and religion. Their achievements paved the way for the Maya civilization and the civilizations in central Mexico. Teotihuacan civilization The decline of the Olmec resulted in a power vacuum in Mexico. Emerging from that vacuum was Teotihuacan, first settled in 300 BCE. 
By 150 CE, Teotihuacan had risen to become the first true metropolis of what is now called North America. Teotihuacan established a new economic and political order never before seen in Mexico. Its influence stretched across Mexico into Central America, founding new dynasties in the Maya cities of Tikal, Copan, and Kaminaljuyú. Teotihuacan's influence over the Maya civilization cannot be overstated: it transformed political power, artistic depictions, and the nature of economics. Within the city of Teotihuacan was a diverse and cosmopolitan population. Most of the regional ethnicities of Mexico were represented in the city, such as Zapotecs from the Oaxaca region. They lived in apartment communities where they worked their trades and contributed to the city's economic and cultural prowess. Teotihuacan's economic pull impacted areas in northern Mexico as well. It was a city whose monumental architecture reflected a monumental new era in Mexican civilization, declining in political power about 650 CE but lasting in cultural influence for the better part of a millennium, to around 950 CE. Maya civilization Contemporary with Teotihuacan's greatness was that of the Maya civilization. The period between 250 CE and 650 CE was a time of intense flourishing of Maya accomplishments. While the many Maya city-states never achieved political unity on the order of the central Mexican civilizations, they exerted tremendous intellectual influence upon Mexico and Central America. The Maya built some of the most elaborate cities on the continent and made innovations in mathematics, astronomy, and calendrics. The Maya also developed the only true writing system native to the Americas, using pictographs and syllabic elements in the form of texts and codices inscribed on stone, pottery, wood, or perishable books made from bark paper. Huastec civilization The Huastecs were a Maya ethnic group that migrated northwards to the Gulf Coast of Mexico. The Huastecs are considered to be distinct from the Maya civilization, as they separated from the main Maya branch at around 2000 BCE and did not possess the Maya script. Other accounts also suggest that the Huastecs migrated as a result of the Classic Maya collapse around the year 900 CE. Zapotec civilization The Zapotecs were a civilization that thrived in the Oaxaca Valley from the late 6th century BCE until their downfall at the hands of the Spanish conquistadors. The city of Monte Albán was an important religious center for the Zapotecs and served as the capital of the empire from 700 BCE to 700 CE. The Zapotecs resisted the expansion of the Aztecs until they were subjugated in 1502 under Aztec emperor Ahuitzotl. After the Spanish conquest of the Aztec Empire, the Zapotecs resisted Spanish rule until King Cosijopii I surrendered in 1563. Mixtec civilization Like the Zapotecs, the Mixtecs thrived in the Oaxaca Valley. The Mixtecs consisted of separate independent kingdoms and city-states, rather than a single unified empire. The Mixtecs were eventually conquered by the Aztecs and remained under Aztec control until the Spanish conquest. The Mixtecs saw the Spanish conquest as an opportunity for liberation and established agreements with the conquistadors that allowed them to preserve their cultural traditions, though a relatively small number of Mixtec communities resisted Spanish rule. Totonac civilization The Totonac civilization was concentrated in the present-day states of Veracruz and Puebla.
The Totonacs were responsible for establishing cities such as El Tajín as important commercial trading centers. The Totonacs would later assist in the Spanish conquest of the Aztec Empire, seeing it as an opportunity to liberate themselves from Aztec military imperialism. Toltec civilization The Toltec civilization was established in the 8th century CE. The Toltec Empire expanded its political borders to as far south as the Yucatán peninsula, including the Maya city of Chichen Itza. The Toltecs established vast trading relations with other Mesoamerican civilizations in Central America and with the Puebloans in present-day New Mexico. During the Post-Classic era, the Toltecs collapsed in the early 12th century due to famine and civil war. The Toltec civilization was so influential that many later groups, such as the Aztecs, claimed to be descended from it. Aztec/Mexica/Triple Alliance civilization With the decline of the Toltec civilization came political fragmentation in the Valley of Mexico. Into this new political game of contenders to the Toltec throne stepped outsiders: the Mexica. They were also a desert people, one of seven groups who formerly called themselves "Azteca", in memory of Aztlán, but they changed their name after years of migrating. Since they were not from the Valley of Mexico, they were initially seen as crude and unrefined in the ways of the Nahua civilization. Through political maneuvers and ferocious martial skills, they managed to rule Mexico as the head of the 'Triple Alliance', which included two other Aztec cities, Texcoco and Tlacopan. Latecomers to Mexico's central plateau, the Mexica nevertheless thought of themselves as heirs of the civilizations that had preceded them. For them, arts, sculpture, architecture, engraving, feather-mosaic work, and the calendar were a bequest from the former inhabitants of Tula, the Toltecs. The Mexica-Aztecs were the rulers of much of central Mexico by about 1400 (while Yaquis, Coras, and Apaches commanded sizable regions of northern desert), having subjugated most of the other regional states by the 1470s. At their peak, the Valley of Mexico, where the Aztec Empire presided, saw its population grow to nearly one million people during the late Aztec period (1350–1519). Their capital, Tenochtitlan, is the site of modern-day Mexico City. At its peak, it was one of the largest cities in the world, with population estimates of 200,000–300,000. The market established there was the largest ever seen by the conquistadores on their arrival. Tarascan/Purépecha civilization Initially, the lands that would someday comprise the powerful Tarascan Empire were inhabited by several independent communities. Around 1300, however, the first Cazonci, Tariacuri, united these communities and built them into one of the most advanced civilizations in Mesoamerica. Their capital at Tzintzuntzan was just one of many cities; there were ninety more under its control. The Tarascan Empire was among the largest in Central America, so it is no surprise that it routinely came into conflict with the neighboring Aztec Empire. Out of all the civilizations in its area, the Tarascan Empire was the most prominent in metallurgy, harnessing copper, silver, and gold to create items such as tools, decorations, and even weapons and armor. Bronze was also used. The great victories of the Tarascans over the Aztecs cannot be overstated. Nearly every war they fought in resulted in a Tarascan victory.
Because the Tarascan Empire had few links to the former Toltec Empire, it was also quite independent in culture from its neighbors. The Aztecs, Tlaxcaltec, Olmec, Mixtec, Maya, and others, by contrast, were very similar to each other, because they were all directly preceded by the Toltecs and therefore shared closely related cultures. The Tarascans, however, possessed a unique religion, as well as other distinctive cultural traits. Tlaxcala republic Tlaxcala was a Nahua republic and confederation in central Mexico. The Tlaxcalans had fiercely resisted Aztec expansion during the Flower Wars ever since the Aztecs expelled them from the Lake Texcoco area. The Tlaxcalans later allied with the Spanish conquistadors under Hernán Cortés, seeing an opportunity to liberate themselves from the Aztecs, and with the conquistadors' help they successfully defeated the Aztecs. The Spaniards rewarded the Tlaxcalans for their assistance in defeating the Aztecs, allowing them to preserve their culture. The Tlaxcalans would once again assist the Spaniards during the Mixtón War and the conquest of Guatemala. Cuzcatlan Cuzcatlan was a Pipil confederacy of kingdoms and city-states located in present-day El Salvador. According to legend, Cuzcatlan was established by Toltec migrants during the Classic Maya collapse, in approximately 1200 CE. During the Spanish conquest of El Salvador, Cuzcatlan was forced to surrender to conquistador Pedro de Alvarado in 1528. Lenca The Lenca people were composed of several distinct multilingual confederations and city-states in present-day El Salvador and Honduras. Cities such as Yarumela were important commercial centers for the Lenca. During the Spanish conquest, several Lenca leaders such as Lempira resisted conversion to Christianity, while others converted peacefully. Nicarao An offshoot of the Pipil people from El Salvador, the Nicarao people were a tribal confederation that flourished in present-day Nicaragua. The migration of the Nicarao is theorized to have been linked to the fall of the city of Teotihuacan and of the Toltec city of Tula. The Nicarao civilization was disestablished during the Spanish conquest of Nicaragua in 1522. Nicoya kingdom The Nicoya kingdom was an elective monarchy that thrived on the Nicoya peninsula in Costa Rica. It existed from 800 CE until the Spanish arrival in the 16th century. South America By the first millennium, South America's vast rainforests, mountains, plains, and coasts were the home of millions of people. Estimates vary, but figures of 30–50 million are often given, and as many as 100 million by some accounts. Some groups formed permanent settlements. Among those groups were Chibcha-speaking peoples ("Muisca" or "Muysca"), Valdivia, Quimbaya, Calima, the Marajoara culture, and the Tairona. The Muisca of Colombia, postdating the Herrera Period, the Valdivia of Ecuador, the Quechuas, and the Aymara of Peru and Bolivia were the four most important sedentary Amerindian groups in South America. Since the 1970s, numerous geoglyphs have been discovered on deforested land in the Amazon rainforest, Brazil, supporting Spanish accounts of complex and ancient Amazonian civilizations, such as Kuhikugu. The Upano Valley sites in present-day eastern Ecuador predate all known complex Amazonian societies. The theory of pre-Columbian contact across the South Pacific Ocean between South America and Polynesia has received support from several lines of evidence, although solid confirmation remains elusive.
A diffusion by human agents has been put forward to explain the pre-Columbian presence in Oceania of several cultivated plant species native to South America, such as the bottle gourd (Lagenaria siceraria) or sweet potato (Ipomoea batatas). Direct archaeological evidence for such pre-Columbian contacts and transport has not emerged. Similarities noted in the names of edible roots in Maori and Ecuadorian languages ("kumari") and Melanesian and Chilean ("gaddu") have been inconclusive. A 2007 paper published in PNAS put forward DNA and archaeological evidence that domesticated chickens had been introduced into South America via Polynesia by late pre-Columbian times. These findings were challenged by a later study published in the same journal, that cast doubt on the dating calibration used and presented alternative mtDNA analyses that disagreed with a Polynesian genetic origin. The origin and dating remain an open issue. Whether or not early Polynesian–American exchanges occurred, no compelling human-genetic, archaeological, cultural, or linguistic legacy of such contact has turned up. Norte Chico civilization On the north-central coast of present-day Peru, Norte Chico or Caral-Supe (as known in Peru) was a civilization that emerged around 3200 BCE (contemporary with urbanism's rise in Mesopotamia). It had a cluster of large-scale urban settlements of which the Sacred City of Caral, in the Supe Valley, is one of the largest and best-studied sites. The civilization did not know machinery or pottery but still managed to develop trade, especially cotton and dehydrated fish. It was a hierarchical society that managed its ecosystems and had intercultural exchange. Its economy was heavily dependent on agriculture and fishing on the nearby coast. It is considered one of the cradles of civilization in the world and Caral-Supe is the oldest known civilization in the Americas. Valdivia culture The Valdivia culture was concentrated on the coast of Ecuador. Their existence was recently discovered by archeological findings. Their culture is among the oldest found in the Americas, spanning from 3500 to 1800 BCE. The Valdivia lived in a community of houses built in a circle or oval around a central plaza. They were sedentary people who lived off farming and fishing, though occasionally they hunted for deer. From the remains that have been found, scholars have determined that Valdivians cultivated maize, kidney beans, squash, cassava, chili peppers, and cotton plants, the last of which was used to make clothing. Valdivian pottery initially was rough and practical, but it became showy, delicate, and big over time. They generally used red and gray colors, and the polished dark red pottery is characteristic of the Valdivia period. In its ceramics and stone works, the Valdivia culture shows a progression from the most simple to much more complicated works. Cañari people The Cañari were the indigenous natives of today's Ecuadorian provinces of Cañar and Azuay. They were an elaborate civilization with advanced architecture and complex religious beliefs. The Inca destroyed and burned most of their remains. The Cañari's old city was replaced twice, first by the Incan city of Tumebamba and later on the same site by the colonial city of Cuenca. The city was also believed to be the site of El Dorado, the city of gold from the mythology of Colombia. The Cañari were most notable for having repelled the Incan invasion with fierce resistance for many years until they fell to Tupac Yupanqui. 
Many of their descendants are still present in Cañar. The majority did not mix with the colonists or become Mestizos. Chavín civilization The Chavín, a Peruvian preliterate civilization, established a trade network and developed agriculture by 900 BCE, according to some estimates and archeological finds. Artifacts were found at a site called Chavín in modern Peru, high in the Andes. The Chavín civilization spanned from 900 to 300 BCE. Muisca confederation The Chibcha-speaking communities were the most numerous, the most territorially extensive, and the most socio-economically developed of the pre-Hispanic Colombians. By the 8th century, the indigenous people had established their civilization in the northern Andes. At one point, the Chibchas occupied part of what is now Panama and the high plains of the Eastern Sierra of Colombia. The areas that they occupied in Colombia were the present-day Departments of Santander (North and South), Boyacá, and Cundinamarca. This is where the first farms and industries were developed. It is also where the independence movement originated. They are currently the richest areas in Colombia. The Chibcha developed the most populous zone between the Maya region and the Inca Empire. Next to the Quechua of Peru and the Aymara in Bolivia, the Chibcha of the eastern and north-eastern highlands of Colombia developed the most notable culture among the sedentary Indigenous peoples in South America. In the Colombian Andes, the Chibcha comprised several tribes who spoke similar languages (Chibcha). They included the following: the Muisca, Guane, Lache, Cofán, and Chitareros. Tairona confederation The Tairona civilization thrived in the Sierra Nevada de Santa Marta mountain range in northern Colombia. Studies suggest that the civilization thrived from the 1st century CE until the Spanish arrival in the 16th century. The descendants of the Tairona, such as the Kogi, are among the few indigenous groups in the Americas to have escaped full colonial conquest and retained the majority of their indigenous culture. Moche civilization The Moche thrived on the north coast of Peru from about 100 to 800 CE. The heritage of the Moche is seen in their elaborate burials. Some were recently excavated by UCLA's Christopher B. Donnan in association with the National Geographic Society. As skilled artisans, the Moche were a technologically advanced people. They traded with distant peoples such as the Maya. What has been learned about the Moche is based on the study of their ceramic pottery; the carvings reveal details of their daily lives. The Larco Museum of Lima, Peru, has an extensive collection of such ceramics. They show that the people practiced human sacrifice, had blood-drinking rituals, and that their religion incorporated non-procreative sexual practices (such as fellatio). Wari Empire The Wari Empire was located in the western portion of Peru and existed from the 6th century to the 11th century. Wari, as the former capital city was called, is located 11 km (6.8 mi) northeast of the city of Ayacucho. This city was the center of a civilization that covered much of the highlands and coast of Peru. The best-preserved remnants, besides the Wari ruins, are the recently discovered Northern Wari ruins near the city of Chiclayo, and Cerro Baul in Moquegua. Also well known are the Wari ruins of Pikillaqta ("Flea Town"), a short distance southeast of Cusco en route to Lake Titicaca.
Tiwanaku Empire The Tiwanaku empire was based in western Bolivia and extended into present-day Peru and Chile from 300 to 1000 CE. Tiwanaku is recognized by Andean scholars as one of the most important South American civilizations before the birth of the Inca Empire in Peru; it was the ritual and administrative capital of a major state power for approximately five hundred years. The ruins of the ancient city state are near the south-eastern shore of Lake Titicaca in Tiwanaku Municipality, Ingavi Province, La Paz Department, west of La Paz. Inca Empire Holding their capital at the great cougar-shaped city of Cusco, Peru, the Inca civilization dominated the Andes region from 1438 to 1533. Known as Tawantinsuyu, or "the land of the four regions", in Quechua, the Inca civilization was highly distinct and developed. Inca rule extended to nearly a hundred linguistic or ethnic communities, some 9 to 14 million people connected by a 40,000-kilometer road system. Cities were built with precise stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture. There is evidence of excellent metalwork and even successful brain surgery in the Inca civilization. Aymara kingdoms The Aymara kingdoms consisted of a confederation of separate diarchies that lasted from 1151, after the fall of Tiwanaku, until 1477, when they were conquered by the Inca Empire. The Aymara kingdoms were primarily located in the Altiplano in Bolivia as well as some parts of Peru and Chile. Arawaks and Caribs Archeologists have discovered evidence of the earliest known inhabitants of the Venezuelan area in the form of leaf-shaped flake tools, together with chopping and plano-convex scraping implements exposed on the high riverine terraces of the Pedregal River in western Venezuela. Late Pleistocene hunting artifacts, including spear tips, come from a similar site in northwestern Venezuela known as El Jobo. According to radiocarbon dating, these date from 13,000 to 7000 BCE. Taima-Taima, yellow Muaco, and El Jobo in Falcón are some of the sites that have yielded archeological material from these times. These groups co-existed with megafauna like megatherium, glyptodonts, and toxodonts. It is not known how many people lived in Venezuela before the Spanish Conquest; it may have been around a million people, who, in addition to the ancestors of today's peoples, included groups such as the Arawaks, Caribs, and Timoto-cuicas. The number was much reduced after the Conquest, mainly through the spread of new diseases from Europe. There were two main north–south axes of the pre-Columbian population, producing maize in the west and manioc in the east. Large parts of the Llanos plains were cultivated through a combination of slash-and-burn and permanent settled agriculture, growing mainly maize and tobacco. The indigenous peoples of Venezuela had already encountered crude oils and asphalts that seeped up through the ground to the surface. Known to the locals as mene, the thick, black liquid was primarily used for medicinal purposes, as an illumination source, and for the caulking of canoes. In the 16th century, when Spanish colonization began in Venezuelan territory, the population of several indigenous peoples such as the Mariches (descendants of the Caribes) declined. Diaguita confederation The Diaguita consisted of several distinct chiefdoms across the Argentine Northwest. The Diaguita culture emerged around 1000 CE, after it replaced the Las Ánimas complex.
The Diaguita resisted Spanish colonialism during the Calchaquí Wars until they were forced to surrender and submit to Spanish rule in 1667. Taíno The Taíno people were fragmented into numerous chiefdoms across the Greater Antilles, the Lucayan Archipelago, and the northern Lesser Antilles. The Taíno were the first pre-Columbian people to encounter Christopher Columbus during his voyage in 1492. The Taíno would later be subject to slavery by the Spanish colonists under the encomienda system until they were deemed virtually extinct in 1565. Huetar kingdoms The Huetar people were a major ethnic group that lived in Costa Rica. The Huetar were composed of several independent kingdoms, such as the western kingdom ruled by Garabito and the eastern kingdom ruled by El Guarco and Correque. After their annexation into Spanish administration, the descendants of the Huetar currently reside in the Quitirrisí reserve. Marajoara culture The Marajoara culture flourished on Marajó Island at the mouth of the Amazon River in northern Brazil between 800 and 1400 CE. The Marajoara consisted of a complex society that built mounds and constructed sophisticated settlements. The indigenous people of the area adopted methods of large-scale agriculture through the use of terra preta, which would support complex chiefdoms. Studies suggest that the civilization housed around 100,000 inhabitants. Kuhikugu Located in the Xingu Indigenous Park in Brazil, Kuhikugu consisted of an urban complex that housed around 50,000 inhabitants and 20 settlements. The civilization was likely established by the ancestors of the Kuikuro people. The people also constructed roads, bridges, and trenches for defensive purposes and were purported to be farmers, as evidenced by the fields of cassava and the use of terra preta. Like most other Amazonian civilizations, the disappearance of Kuhikugu was largely attributed to Old World diseases introduced by European colonists. Cambeba Also known as the Omagua, Umana, and Kambeba, the Cambeba are an indigenous people in Brazil's Amazon basin. The Cambeba were a populous, organized society in the late pre-Columbian era whose population suffered a steep decline in the early years of the Columbian Exchange. The Spanish explorer Francisco de Orellana traversed the Amazon River during the 16th century and reported densely populated regions running hundreds of kilometers along the river. These populations left no lasting monuments, possibly because they used local wood as their construction material as stone was not locally available. While it is possible Orellana may have exaggerated the level of development among the Amazonians, their semi-nomadic descendants have the odd distinction among tribal indigenous societies of a hereditary, yet landless, aristocracy. Archaeological evidence has revealed the continued presence of semi-domesticated orchards, as well as vast areas of land enriched with terra preta. Both of these discoveries, along with Cambeba ceramics discovered within the same archaeological levels suggest that a large and organized civilization existed in the area. Upano Valley cultures In the Upano River valley of eastern Ecuador, several cities were established by the Upano and Kilamope cultures around 500 BCE. The cities in the Upano Valley consisted of agricultural societies that cultivated crops such as corn, manioc and sweet potato. The cities fell into decline around 600 CE. 
Agricultural development Early inhabitants of the Americas developed agriculture, selectively breeding wild maize (corn) from small ears into the much larger size that is familiar today. Potatoes, cassava, tomatoes, tomatillos (a husked green tomato), pumpkins, chili peppers, squash, beans, pineapple, sweet potatoes, the grains quinoa and amaranth, cocoa beans, vanilla, onion, peanuts, strawberries, raspberries, blueberries, blackberries, papaya, and avocados were among the other plants grown by natives. Over two-thirds of all types of food crops grown worldwide are native to the Americas. Early Indigenous peoples began using fire in a widespread manner. Intentional burning of vegetation was taken up to mimic the effects of natural fires that tended to clear forest understories, thereby making travel easier and facilitating the growth of herbs and berry-producing plants that were important for both food and medicines. This created the pre-Columbian savannas of North America. While not as widespread as in Afro-Eurasia, indigenous Americans did have livestock. Domesticated turkeys were common in Mesoamerica and some regions of North America; they were valued for their meat, feathers, and, possibly, eggs. There is documentation of Mesoamericans utilizing hairless dogs, especially the Xoloitzcuintle breed, for their meat. Andean societies had llamas and alpacas for meat and wool, as well as for beasts of burden. Guinea pigs were raised for meat in the Andes. Iguanas and a range of wild animals, such as deer and peccary, were another source of meat in Mexico, Central America, and northern South America. By the 15th century, maize had been transmitted from Mexico and was being farmed in the Mississippi embayment, as far as the East Coast of the United States, and as far north as southern Canada. Potatoes were used by the Inca, and chocolate was used by the Aztecs. See also 1491: New Revelations of the Americas Before Columbus by Charles C. Mann Before the Revolution, 2013 book by Daniel K. Richter List of pre-Columbian cultures Metallurgy in pre-Columbian America Periodization of pre-Columbian Peru Population history of indigenous peoples of the Americas Pre-Columbian trans-oceanic contact theories Pre-Columbian history of Brazil References Bibliography External links Collection: "Pre-Columbian Central and South America" from the University of Michigan Museum of Art Ancient American art at the Denver Art Museum Art of the Americas at the Cleveland Museum of Art Historical eras
0.767079
0.998705
0.766085
Quaternary
The Quaternary is the current and most recent of the three periods of the Cenozoic Era in the geologic time scale of the International Commission on Stratigraphy (ICS). It follows the Neogene Period and spans from 2.58 million years ago to the present. The Quaternary Period is divided into two epochs: the Pleistocene (2.58 million years ago to 11.7 thousand years ago) and the Holocene (11.7 thousand years ago to today); a proposed third epoch, the Anthropocene, was rejected in 2024 by the IUGS, the governing body of the ICS. The Quaternary Period is typically defined by the cyclic growth and decay of continental ice sheets related to the Milankovitch cycles and the associated climate and environmental changes that they caused. Research history In 1759, Giovanni Arduino proposed that the geological strata of northern Italy could be divided into four successive formations or "orders". The term "quaternary" was introduced by Jules Desnoyers in 1829 for sediments of France's Seine Basin that clearly seemed to be younger than Tertiary Period rocks. The Quaternary Period follows the Neogene Period and extends to the present. The Quaternary covers the time span of glaciations classified as the Pleistocene, and includes the present interglacial time-period, the Holocene. This places the start of the Quaternary at the onset of Northern Hemisphere glaciation approximately 2.6 million years ago (mya). Prior to 2009, the Pleistocene was defined as beginning 1.805 million years ago, so the current definition of the Pleistocene includes a portion of what was, prior to 2009, defined as the Pliocene. Quaternary stratigraphers usually worked with regional subdivisions. From the 1970s, the International Commission on Stratigraphy (ICS) tried to make a single geologic time scale based on GSSPs, which could be used internationally. The Quaternary subdivisions were defined based on biostratigraphy instead of paleoclimate. This led to the problem that the proposed base of the Pleistocene was at 1.805 million years ago, long after the start of the major glaciations of the northern hemisphere. The ICS then proposed to abolish use of the name Quaternary altogether, which appeared unacceptable to the International Union for Quaternary Research (INQUA). In 2009, it was decided to make the Quaternary the youngest period of the Cenozoic Era, with its base at 2.588 mya and including the Gelasian Stage, which was formerly considered part of the Neogene Period and Pliocene Epoch. This was later revised to 2.58 mya. The Anthropocene was proposed as a third epoch to mark the anthropogenic impact on the global environment starting with the Industrial Revolution, or about 200 years ago. The Anthropocene was rejected as a geological epoch in 2024 by the International Union of Geological Sciences (IUGS), the governing body of the ICS. Geology The 2.58 million years of the Quaternary represent the time during which recognisable humans existed. Over this geologically short time period there has been relatively little change in the distribution of the continents due to plate tectonics. The Quaternary geological record is preserved in greater detail than that for earlier periods. 
The major geographical changes during this time period included the emergence of the straits of Bosphorus and Skagerrak during glacial epochs, which respectively turned the Black Sea and Baltic Sea into fresh water lakes, followed by their flooding (and return to salt water) by rising sea level; the periodic filling of the English Channel, forming a land bridge between Britain and the European mainland; the periodic closing of the Bering Strait, forming the land bridge between Asia and North America; and the periodic flash flooding of the Scablands of the American Northwest by glacial water. The current extent of Hudson Bay, the Great Lakes, and other major lakes of North America is a consequence of the Canadian Shield's readjustment since the last ice age; different shorelines have existed over the course of Quaternary time. Climate The climate was one of periodic glaciations with continental glaciers moving as far from the poles as 40 degrees latitude. Glaciation took place repeatedly during the Quaternary ice age (a term coined by Schimper in 1839), which began with the start of the Quaternary about 2.58 Mya and continues to the present day. In 1821, a Swiss engineer, Ignaz Venetz, presented an article in which he suggested the presence of traces of the passage of a glacier at a considerable distance from the Alps. This idea was initially disputed by another Swiss scientist, Louis Agassiz, but when he undertook to disprove it, he ended up affirming his colleague's hypothesis. A year later, Agassiz raised the hypothesis of a great glacial period that would have had long-reaching general effects. This idea gained him international fame and led to the establishment of the Glacial Theory. In time, thanks to the refinement of geology, it has been demonstrated that there were several periods of glacial advance and retreat and that past temperatures on Earth were very different from today. In particular, the Milankovitch cycles of Milutin Milankovitch are based on the premise that variations in incoming solar radiation are a fundamental factor controlling Earth's climate. During this time, substantial glaciers advanced and retreated over much of North America and Europe, parts of South America and Asia, and all of Antarctica. Flora and fauna There was a major extinction of large mammals globally during the Late Pleistocene Epoch. Many forms such as sabre-toothed cats, mammoths, mastodons, glyptodonts, etc., became extinct worldwide. Others, including horses, camels, and American cheetahs, became extinct in North America. The Great Lakes formed and giant mammals thrived in parts of North America and Eurasia not covered in ice. These mammals became extinct when the glacial period ended about 11,700 years ago. Modern humans evolved about 315,000 years ago. During the Quaternary Period, mammals, flowering plants, and insects dominated the land. See also List of Quaternary volcanic eruptions References External links Subcommission on Quaternary Stratigraphy Stratigraphical charts for the Quaternary Version history of the global Quaternary chronostratigraphical charts (from 2004b) Silva, P.G. C. Zazo, T. Bardají, J. Baena, J. Lario, A. Rosas, J. Van der Made. 2009, "Tabla Cronoestratigrafíca del Cuaternario en la Península Ibérica - V.2". [Versión PDF, 3.6 Mb]. Asociación Española para el Estudio del Cuaternario (AEQUA), Departamento de Geología, Universidad de Salamanca, Spain. 
(Correlation chart of European Quaternary and cultural stages and fossils) Welcome to the XVIII INQUA-Congress, Bern, 2011 Quaternary (chronostratigraphy scale) Geological periods Physical geography Physical oceanography
0.767109
0.998615
0.766046
Medieval renaissances
The medieval renaissances were periods of cultural renewal across medieval Western Europe. These are generally seen as occurring in three phases: the Carolingian Renaissance (8th and 9th centuries), the Ottonian Renaissance (10th century), and the Renaissance of the 12th century. The term was first used by medievalists in the 19th century, by analogy with the historiographical concept of the 15th and 16th century Italian Renaissance. This was notable since it marked a break with the dominant historiography of the time, which saw the Middle Ages as a Dark Age. The term has always been a subject of debate and criticism, particularly on how widespread such renewal movements were and on the validity of comparing them with the Italian Renaissance. History of the concept The term 'renaissance' was first used as a name for a period in medieval history in the 1830s, with the birth of medieval studies. It was coined by Jean-Jacques Ampère. Pre-Carolingian renaissances As Pierre Riché points out, the expression "Carolingian Renaissance" does not imply that Western Europe was barbaric or obscurantist before the Carolingian era. The centuries following the collapse of the Roman Empire in the West did not see an abrupt disappearance of the ancient schools, from which emerged Martianus Capella, Cassiodorus and Boethius, essential icons of the Roman cultural heritage in the Middle Ages, thanks to which the disciplines of the liberal arts were preserved. The fall of the Western Roman Empire saw the "Vandal Renaissance" of Kings Thrasamund and Hilderic in late 5th and early 6th century North Africa, where ambitious architectural projects were commissioned, the Vandal kings dressed in Roman imperial style with Roman triumphal rulership symbols, and intellectual traditions, poetry and literature flourished. Classical education and the Romano-African elite's opulent lifestyle were maintained, as seen in the plentiful classicizing texts which emerged in this period. The 7th century saw the "Isidorian Renaissance" in the Visigothic Kingdom of Hispania, in which the sciences flourished and the integration of Christian and pre-Christian thought occurred, while the spread of Irish monastic schools (scriptoria) over Europe laid the groundwork for the Carolingian Renaissance. There was a similar flourishing in the Northumbrian Renaissance of the 7th and 8th centuries. Carolingian renaissance (8th and 9th centuries) The Carolingian Renaissance was a period of intellectual and cultural revival in the Carolingian Empire occurring from the late eighth century to the ninth century, the first of the three medieval renaissances. It occurred mostly during the reigns of the Carolingian rulers Charlemagne and Louis the Pious. It was supported by the scholars of the Carolingian court, notably Alcuin of York. For moral betterment, the Carolingian renaissance reached for models drawn from the example of the Christian Roman Empire of the 4th century. During this period there was an increase in literature, writing, the arts, architecture, jurisprudence, liturgical reforms and scriptural studies. Charlemagne's Admonitio generalis (789) and his Epistola de litteris colendis served as manifestos. The effects of this cultural revival, however, were largely limited to a small group of court literati: "it had a spectacular effect on education and culture in Francia, a debatable effect on artistic endeavors, and an immeasurable effect on what mattered most to the Carolingians, the moral regeneration of society," John Contreni observes. 
Beyond their efforts to write better Latin, to copy and preserve patristic and classical texts, and to develop a more legible, classicizing script (the Carolingian minuscule that Renaissance humanists took to be Roman and employed as humanist minuscule, from which early modern Italic script developed), the secular and ecclesiastical leaders of the Carolingian Renaissance for the first time in centuries applied rational ideas to social issues, providing a common language and writing style that allowed for communication across most of Europe. One of the primary efforts was the creation of a standardized curriculum for use at the recently created schools. Alcuin led this effort and was responsible for the writing of textbooks, creation of word lists, and establishing the trivium and quadrivium as the basis for education. Art historian Kenneth Clark was of the view that by means of the Carolingian Renaissance, Western civilization survived by the skin of its teeth. The use of the term renaissance to describe this period is contested because the changes it brought about were confined almost entirely to the clergy, and because the period lacked the wide-ranging social movements of the later Italian Renaissance. Instead of being a rebirth of new cultural movements, the period was more an attempt to recreate the previous culture of the Roman Empire. The Carolingian Renaissance in retrospect also has some of the character of a false dawn, in that its cultural gains were largely dissipated within a couple of generations, a perception voiced by Walahfrid Strabo (died 849) in his introduction to Einhard's Life of Charlemagne. Similar processes occurred in Southeast Europe with the Christianization of Bulgaria and the introduction of the liturgy in the Old Bulgarian language and of the Cyrillic script, created in Bulgaria a few years before the reign of Simeon I of Bulgaria, during the reign of his father Boris I of Bulgaria. Clement of Ohrid and Naum of Preslav created (or rather compiled) the new alphabet, which was called Cyrillic and was declared the official alphabet in Bulgaria in 893. The Old Church Slavonic language was declared as official in the same year. In the following centuries the liturgy in the Bulgarian language and the alphabet were adopted by many other Slavic peoples and countries. The Golden Age of medieval Bulgarian culture is the period of Bulgarian cultural prosperity during the reign of emperor Simeon I the Great (889–927). The term was coined by Spiridon Palauzov in the mid 19th century. During this period there was an increase in literature, writing, the arts, architecture and liturgical reforms. Ottonian renaissance (10th and 11th centuries) The Ottonian Renaissance was a limited renaissance of logic, science, economy and art in central and southern Europe that accompanied the reigns of the first three emperors of the Saxon Dynasty, all named Otto: Otto I (936–973), Otto II (973–983), and Otto III (983–1002), and which in large part depended upon their patronage. Pope Sylvester II and Abbo of Fleury were leading figures in this movement. The Ottonian Renaissance began after Otto's marriage to Adelaide (951) united the kingdoms of Italy and Germany, bringing the West closer to Byzantium; the cause of Christian (political) unity was furthered by his imperial coronation in 963. The period is sometimes extended to cover the reign of Henry II as well, and, rarely, the Salian dynasts. 
The term is generally confined to Imperial court culture conducted in Latin in Germany. It is sometimes also known as the Renaissance of the 10th century, so as to include developments outside Germania, or as the Year 1000 Renewal, as it came right at the end of the 10th century. It was shorter than the preceding Carolingian Renaissance and to a large extent a continuation of it; this has led historians such as Pierre Riché to prefer describing it as a 'third Carolingian renaissance', covering the 10th century and running over into the 11th century, with the 'first Carolingian renaissance' occurring during Charlemagne's own reign and the 'second Carolingian renaissance' happening under his successors. The Ottonian Renaissance is recognized especially in the arts and architecture, invigorated by renewed contact with Constantinople, in some revived cathedral schools, such as that of Bruno of Cologne, in the production of illuminated manuscripts from a handful of elite scriptoria, such as Quedlinburg, founded by Otto in 936, and in political ideology. The Imperial court became the center of religious and spiritual life, led by the example of women of the royal family: Matilda, the literate mother of Otto I, his sister Gerberga of Saxony, his consort Adelaide, and Empress Theophanu. 12th-century Renaissance The Renaissance of the 12th century was a period of many changes at the outset of the High Middle Ages. It included social, political and economic transformations, and an intellectual revitalization of Western Europe with strong philosophical and scientific roots. For some historians these changes paved the way to later achievements such as the literary and artistic movement of the Italian Renaissance in the 15th century and the scientific developments of the 17th century. After the collapse of the Western Roman Empire, Western Europe had entered the Middle Ages with great difficulties. Apart from depopulation and other factors, most scientific treatises of classical antiquity, written in Greek, had become unavailable. Philosophical and scientific teaching of the Early Middle Ages was based upon the few Latin translations and commentaries on ancient Greek scientific and philosophical texts that remained in the Latin West. This scenario changed during the renaissance of the 12th century. The increased contact with the Islamic world in Spain and Sicily, the Crusades, the Reconquista, as well as increased contact with Byzantium, allowed Europeans to seek and translate the works of Hellenic and Islamic philosophers and scientists, especially the works of Aristotle. The development of medieval universities allowed them to aid materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European university put many of these texts at the center of its curriculum, with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendent." In Northern Europe, the Hanseatic League was founded in the 12th century, with the foundation of the city of Lübeck in 1158–1159. Many northern cities of the Holy Roman Empire became Hanseatic cities, including Hamburg, Stettin, Bremen and Rostock. Hanseatic cities outside the Holy Roman Empire were, for instance, Bruges, London and the Polish city of Danzig (Gdańsk). In Bergen and Novgorod the league had factories and middlemen. 
In this period the Germans started colonizing Eastern Europe beyond the Empire, into Prussia and Silesia. In the late 13th century Westerners became more aware of the Far East. Marco Polo is the most commonly known documenter due to his popular book Il Milione, but he was neither the first nor the only traveller on the Silk Road to China. Several Christian missionaries such as William of Rubruck, Giovanni da Pian del Carpini, Andrew of Longjumeau, Odoric of Pordenone, Giovanni de Marignolli, Giovanni di Monte Corvino, and other travelers such as Niccolò da Conti also contributed to the knowledge of and interest in the far eastern lands. The translation of texts from other cultures, especially ancient Greek works, was an important aspect of both this Twelfth-Century Renaissance and the later Renaissance (of the 15th century), the relevant difference being that Latin scholars of this earlier period focused almost entirely on translating and studying Greek and Arabic works of natural science, philosophy and mathematics, while the later Renaissance focused on literary and historical texts. A new method of learning called scholasticism developed in the late 12th century from the rediscovery of the works of Aristotle; the works of medieval Jewish and Islamic thinkers influenced by him, notably Maimonides, Avicenna (see Avicennism) and Averroes (see Averroism); and the Christian philosophers influenced by them, most notably Albertus Magnus, Bonaventure and Abélard. Those who practiced the scholastic method believed in empiricism and supporting Roman Catholic doctrines through secular study, reason, and logic. Other notable scholastics ("schoolmen") included Roscelin and Peter Lombard. One of the main questions during this time was the problem of universals. Prominent non-scholastics of the time included Anselm of Canterbury, Peter Damian, Bernard of Clairvaux, and the Victorines. The most famous of the scholastic practitioners was Thomas Aquinas (later declared a Doctor of the Church), who led the move away from Platonic and Augustinian thought and towards the Aristotelian. During the High Middle Ages in Europe, there was increased innovation in means of production, leading to economic growth. These innovations included the windmill, the manufacture of paper, the spinning wheel, the magnetic compass, eyeglasses, the astrolabe, and Hindu-Arabic numerals. See also Continuity thesis Golden Age of medieval Bulgarian culture Tarnovo Literary School Art School of Tarnovo Painting of the Tarnovo Artistic School Architecture of the Tarnovo Artistic School References Medieval culture Medieval studies Christendom
0.777912
0.984725
0.766029
Development studies
Development studies is an interdisciplinary branch of social science. Development studies is offered as a specialized master's degree at a number of universities around the world. It has grown in popularity as a subject of study since the early 1990s, and has been most widely taught and researched in developing countries and countries with a colonial history, such as the UK, where the discipline originated. Students of development studies often choose careers in international organisations such as the United Nations, World Bank, non-governmental organisations (NGOs), media and journalism houses, private sector development consultancy firms, corporate social responsibility (CSR) bodies and research centers. Professional bodies Throughout the world, a number of professional bodies for development studies have been founded: Europe: European Association of Development Research and Training Institutes (EADI) Latin America: Consejo Latinoamericano de Ciencias Sociales (CLACSO) Asia: Asian Political and International Studies Association (APISA) Africa: Council for the Development of Social Science Research in Africa (CODESRIA) and Organization for Social Science Research in Eastern and Southern Africa (OSSREA) Arabic world: Arab Institutes and Centers for Economic and Social Development Research (AICARDES) The common umbrella organisation of these associations is the Inter-regional Coordinating Committee of Development Associations (ICCDA). In the UK and Ireland, the Development Studies Association is a major source of information for research on and studying in development studies. Its mission is to connect and promote those working on development research. Disciplines of development studies Development issues include: Adult education Area studies Anthropology Community development Demography Development aid Development communication Development theory Diaspora studies Ecology Economic development Economic History Environmental studies Geography Gender studies Governance History of economic thought Human rights Human security Indigenous rights Industrial relations Industrialization International business International development International relations Journalism Media Studies Migration studies Partnership Peace and conflict studies Pedagogy Philosophy Political philosophy Population studies Postcolonialism Psychology Public administration Public health Rural development Queer studies Sociology Social policy Social development Social work Sustainable development Urban studies Women's studies History The emergence of development studies as an academic discipline in the second half of the twentieth century is in large part due to increasing concern about economic prospects for the third world after decolonisation. In the immediate post-war period, development economics, a branch of economics, arose out of previous studies in colonial economics. By the 1960s, an increasing number of development economists felt that economics alone could not fully address issues such as political effectiveness and educational provision. Development studies arose as a result of this, initially aiming to integrate ideas of politics and economics. Since then, it has become an increasingly inter- and multi-disciplinary subject, encompassing a variety of social scientific fields. 
In recent years, the use of political economy analysis (the application of the analytical techniques of economics to assess and explain the political and social factors that either enhance or limit development) has become increasingly widespread as a way of explaining the success or failure of reform processes. The era of modern development is commonly deemed to have commenced with the inauguration speech of Harry S. Truman in 1949. In Point Four of his speech, with reference to Latin America and other poor nations, he said: More than half the people of the world are living in conditions approaching misery. Their food is inadequate. They are victims of disease. Their economic life is primitive and stagnant. Their poverty is a handicap and a threat both to them and to more prosperous areas. For the first time in history, humanity possesses the knowledge and the skill to relieve the suffering of these people. But development studies has since also taken an interest in lessons of past development experiences of Western countries. More recently, the emergence of human security – a new, people-oriented approach to understanding and addressing global security threats – has led to a growing recognition of a relationship between security and development. Human security argues that inequalities and insecurity in one state or region have consequences for global security and that it is thus in the interest of all states to address underlying development issues. This relationship with studies of human security is but one example of the interdisciplinary nature of development studies. Global Research cooperation between researchers from countries in the Global North and the Global South, so-called North–South research partnerships, allows development studies to consider more diverse perspectives on development and other strongly value-driven issues, and can thus contribute new findings to the field of research. See also Global South Development Magazine City development index Colonization Community development Development (disambiguation) Development Cooperation Issues Development Cooperation Stories Development Cooperation Testimonials Economic development Human rights Human security Industrialization International development North-South research partnerships Postdevelopment theory Right to development Social development Social work Sustainable development World-systems theory References Further reading Breuer, Martin. "Development" (2015). University Bielefeld: Center for InterAmerican Studies. Pradella, Lucia and Marois, Thomas, eds. (2015) Polarizing Development: Alternatives to Neoliberalism and the Crisis. Pluto Press. Sachs, Wolfgang, ed. (1992) The Development Dictionary: A Guide to Knowledge as Power. Zed Books. External links Global South Development Magazine Development Studies Internet Resources Studying Development – International Development Studies course directory
0.774098
0.989547
0.766007
Sphere sovereignty
In neo-Calvinism, sphere sovereignty, also known as differentiated responsibility, is the concept that each sphere (or sector) of life has its own distinct responsibilities and authority or competence, and stands equal to other spheres of life. Sphere sovereignty involves the idea of an all-encompassing created order, designed and governed by God. This created order includes societal communities (such as those for purposes of education, worship, civil justice, agriculture, economy and labor, marriage and family, artistic expression, etc.), their historical development, and their abiding norms. The principle of sphere sovereignty seeks to affirm and respect creational boundaries and historical differentiation. Sphere sovereignty implies that no one area of life or societal community is sovereign over another. Each sphere has its own created integrity. Neo-Calvinists hold that since God created everything "after its own kind," diversity must be acknowledged and appreciated. For instance, the different God-given norms for family life and economic life should be recognized, such that a family does not properly function like a business. Similarly, neither faith-institutions (e.g. churches) nor an institution of civil justice (i.e. the state) should seek totalitarian control, or any regulation of human activity outside their limited competence, respectively. The concept of sphere sovereignty became a general principle in European countries governed by Christian democratic political parties, who held it as an integral part of their ideology. The promotion of sphere sovereignty by Christian democrats led to the creation of corporatist welfare states throughout the world. Historical background Sphere sovereignty is an alternative to the worldviews of ecclesiasticism and secularism (especially in its statist form). During the Middle Ages, a form of papal monarchy assumed that God rules over the world through the church. Ecclesiasticism was widely evident in the arts. Religious themes were encouraged by art's primary patron, the church. Similarly, politics in the Middle Ages often consisted of political leaders doing as the church instructed. The church also supervised both the economic guilds and agriculture. In the family sphere, the church regulated sexual activity and procreation. In the educational sphere, several universities were founded by religious orders. During the Renaissance, the rise of a secularist worldview accompanied the emergence of a wealthy merchant class. Some merchants became patrons of the arts, independent of the church. Protestantism later made civil government, the arts, family, education, and economics officially free from ecclesiastical control. While Protestantism maintained a full-orbed or holistically religious view of life as distinguished from ecclesiasticism, the later secular Enlightenment sought to rid society of religion entirely. Sphere sovereignty was first formulated at the turn of the 20th century by the neo-Calvinist theologian and Dutch prime minister Abraham Kuyper and further developed by philosopher Herman Dooyeweerd. Kuyper based the idea of sphere sovereignty partially on the Christian view of existence coram Deo: every part of human life exists equally and directly "before the face of God." For Kuyper, this meant that sphere sovereignty involved a certain form of separation of church and state and a separation of state and other societal spheres, or anti-statism. 
As Christian democratic political parties were formed, they adopted the principle of sphere sovereignty, with both Protestants and Roman Catholics agreeing "that the principles of sphere sovereignty and subsidiarity boiled down to the same thing", although this was at odds with Dooyeweerd's development of sphere sovereignty, which he held to be significantly distinct from subsidiarity. Applications The doctrine of sphere sovereignty has many applications. The institution of the family, for example, does not come from the state, the church, or from contingent social factors, but derives from the original creative act of God (it is a creational institution). It is the task of neither the state nor the church to define the family or to promulgate laws upon it. This duty is reserved to the Word of God, held by Protestantism to be sovereign, i.e., beyond the control of either church or state. The family (defined as the covenantal commitment of one man and one woman to each other and to their offspring) is not instituted by the state nor by any other external power, but proceeds naturally from the heads of households, who are directly responsible to God. However, when a particular family fails in its own responsibilities, institutions of civil governance are authorized to seek rectification of relevant civil injustices. Neither the state nor the church can dictate predetermined conclusions to a scientific organization, school or university. Applicable laws are those relative to that sphere only, so that the administration of schools should rest with those who are legitimately in charge of them, according to their specific competences and skills. Similarly, in a trade organization, the rules of trade only should be applied, and their leaders should be drawn from their own ranks of expertise. Similarly, agriculture does not derive its laws from the government but from the laws of nature. Whenever a government presumes to regulate outside its sovereignty, those serving within the affected sphere should protest that the State is interfering in their internal affairs. The question is the proper role of civil governance and the intrinsic, principled limits within which it can act without interfering in the sovereignty of other spheres. Criticisms For Kuyper, because the Netherlands included multiple religious-ideological (or worldview) communities, each of these should form its own "pillar", with its own societal institutions like schools, news media, hospitals, etc. This resulted in a pillarized society. Kuyper himself founded the Vrije Universiteit, where ministers for the Reformed Churches in the Netherlands would be educated without interference by the Dutch state, because in Kuyper's view educating ministers lies beyond the sphere of civil government. Kuyper also helped establish the Anti-Revolutionary Party (a Reformed political party), several Reformed newspapers, and the Reformed Churches in the Netherlands (an independent Reformed church). Addressing the emergence of pillarization in the context of Kuyper's view of sphere sovereignty, Peter S. Heslam states, 'Indeed, it could be argued that if Dutch society had been of a more "homogenous" nature—rather than manifesting a roughly tripartite ideological divide between Catholics, Protestants, and Humanists—sphere sovereignty would still have been practicable whereas verzuiling [i.e., pillarization] would not have been necessary'. 
Some see the development of pillarization in the Netherlands as a failure of Kuyper to properly limit the state to its own sphere among other societal spheres, and to distinguish societal spheres from other worldview communities. See also Corporatism Separation of church and state Subsidiarity (Catholicism), a distinct concept, sometimes confused with sphere sovereignty References External links Calvinist theology Christian democracy Sovereignty
0.786312
0.974122
0.765964
Madness and Civilization
Madness and Civilization: A History of Insanity in the Age of Reason (1961) is an examination by Michel Foucault of the evolution of the meaning of madness in the cultures and laws, politics, philosophy, and medicine of Europe, from the Middle Ages until the end of the 18th century, and a critique of the idea of history and of the historical method. Although he uses the language of phenomenology to describe the influence of social structures in the history of the Othering of insane people from society, Madness and Civilization marks Foucault's philosophic progress from phenomenology toward something like structuralism (a label Foucault himself always adamantly rejected). Background Philosopher Michel Foucault developed Madness and Civilization from his earlier works in the field of psychology, his personal psychological difficulties, and his professional experiences working in a mental hospital. He wrote the book between 1955 and 1959, when he held cultural-diplomatic and educational posts in Poland and Germany, as well as in Sweden as director of a French cultural centre at the University of Uppsala. Summary In Madness and Civilization, Foucault traces the cultural evolution of the concept of insanity (madness) in three phases: the Renaissance, the Classical Age, and the Modern era. Middle Ages In the Middle Ages, society distanced lepers from itself, while in the "Classical Age" the object of social segregation was moved from lepers to madmen, but in a different way. The lepers of the Middle Ages were certainly considered dangerous, but they were not the object of a radical rejection, as is demonstrated by the fact that leper hospitals were almost always located near the city gates, removed from but not invisible to the community. The relative presence of the leper reminded everyone of the duty of Christian charity, and therefore played a positive role in society. Renaissance In the Renaissance, art portrayed insane people as possessing wisdom (knowledge of the limits of the world), whilst literature portrayed the insane as people who reveal the distinction between what men are and what men pretend to be. Renaissance art and literature further depicted insane people as intellectually engaged with reasonable people, because their madness represented the mysterious forces of cosmic tragedy. Foucault contrasts the Renaissance image of the ship of fools with later conceptions of confinement. The Renaissance, rather than locking up madmen, ensured their circulation, so that the madman as a "passenger" and "passing being" became the symbol of the human condition: "Madness is the anticipation of death". Yet Renaissance intellectualism began to develop an objective way of thinking about and describing reason and unreason, compared with the subjective descriptions of madness from the Middle Ages. Classical Age At the dawn of the Age of Reason in the 17th century, there occurred "the Great Confinement" of insane people in the countries of Europe; the initial management of insane people was to segregate them to the margins of society, and then to physically separate them from society by confinement, with other anti-social people (prostitutes, vagrants, blasphemers, et al.), into new institutions, such as the General Hospital of Paris. According to Foucault, the creation of the "general hospital" corresponds to Descartes's Meditations, and the desire to eliminate the irrational from philosophical discourse. "Classical reason" would have produced a "fracture" in the history of madness. 
Moreover, Christian European society perceived such anti-social people as being in moral error, for having freely chosen lives of prostitution, vagrancy, blasphemy, unreason, etc. To correct such moral errors, society's new institutions for confining outcast people featured way-of-life regimes composed of punishment-and-reward programs meant to compel the inmates to reverse their choices of lifestyle. The socio-economic forces that promoted this institutional confinement included the legalistic need for an extrajudicial social mechanism with the legal authority to physically separate socially undesirable people from mainstream society, and the need to control the wages and employment of poor people living in workhouses, whose availability lowered the wages of freeman workers. The conceptual distinction between the mentally insane and the mentally sane was a social construct produced by the practices of the extrajudicial separation of a human being from free society to institutional confinement. In turn, institutional confinement conveniently made insane people available to medical doctors, who were then beginning to view madness as a natural object of study, and then as an illness to be cured. Modern era The Modern era began at the end of the 18th century, with the creation of medical institutions for confining mentally insane people under the supervision of medical doctors. Those institutions were the product of two cultural motives: (i) the new goal of curing the insane away from poor families; and (ii) the old purpose of confining socially undesirable people to protect society. Those two distinct social purposes soon were forgotten, and the medical institution became the only place for the administration of therapeutic treatments for madness. Although nominally more enlightened in scientific and diagnostic perspective, and compassionate in the clinical treatment of insane people, the modern medical institution remained as cruelly controlling as were mediaeval treatments for madness. In the preface to the 1961 edition of Madness and Civilization, Foucault said that: Reception In the critical volume Foucault (1985), the philosopher José Guilherme Merquior said that the value of Madness and Civilization as intellectual history was diminished by errors of fact and of interpretation that undermine Foucault's thesis: that social forces determine the meanings of madness and society's responses to the mental disorder of the person. Specifically problematic was his selective citation of data, which ignored contradictory historical evidence of preventive imprisonment and physical cruelty towards insane people during the historical periods when Foucault said society perceived the mad as wise people; such institutional behaviors were allowed by the culture of Christian Europeans who considered madness worse than sin. Nonetheless, Merquior said that, like the book Life Against Death (1959) by Norman O. Brown, Foucault's Madness and Civilization is "a call for the liberation of the Dionysian id", and gave inspiration for Anti-Oedipus: Capitalism and Schizophrenia (1972), by the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari. In his 1994 essay "Phänomenologie des Krankengeistes" ('Phenomenology of the Sick Spirit'), philosopher Gary Gutting said: [T]he reactions of professional historians to Foucault's Histoire de la folie [1961] seem, at first reading, ambivalent, not to say polarized. 
There are many acknowledgements of its seminal role, beginning with Robert Mandrou's early review in [the Annales d'Histoire Economique et Sociale], characterizing it as a "beautiful book" that will be "of central importance for our understanding of the Classical period." Twenty years later, Michael MacDonald confirmed Mandrou's prophecy: "Anyone who writes about the history of insanity in early modern Europe must travel in the spreading wake of Michel Foucault's famous book, Madness and Civilization." Later endorsements included Jan Goldstein, who said: "For both their empirical content and their powerful theoretical perspectives, the works of Michel Foucault occupy a special and central place in the historiography of psychiatry;" and Roy Porter: "Time has proved Madness and Civilization [to be by] far the most penetrating work ever written on the history of madness." However, despite Foucault being hailed as a herald of "the new cultural history", there was much criticism. In Psychoanalysis and Male Homosexuality (1995), Kenneth Lewes said that Madness and Civilization is an example of the "critique of the institutions of psychiatry and psychoanalysis" that occurred as part of the "general upheaval of values in the 1960s." Lewes added that the history Foucault presents in Madness and Civilization is similar to, but more profound than, The Myth of Mental Illness (1961) by Thomas Szasz. See also Anti-psychiatry Cogito and the History of Madness The Archaeology of Knowledge Notes References External links Some images and paintings that appear in the book 1961 non-fiction books Anti-psychiatry books French-language books French non-fiction books Books about mental health Plon (publisher) books Books about social history Works by Michel Foucault
0.770902
0.993594
0.765964