Social skills
A social skill is any competence facilitating interaction and communication with others where social rules and relations are created, communicated, and changed in verbal and nonverbal ways. The process of learning these skills is called socialization. Lack of such skills can cause social awkwardness.
Interpersonal skills are actions used to effectively interact with others. Interpersonal skills relate to categories of dominance vs. submission, love vs. hate, affiliation vs. aggression, and control vs. autonomy (Leary, 1957). Positive interpersonal skills include persuasion, active listening, delegation, and stewardship, among others. Social psychology, an academic discipline focused on research relating to social functioning, studies how interpersonal skills are learned through societal-based changes in attitude, thinking, and behavior.
Enumeration and categorization
Social skills are the tools that enable people to communicate, learn, ask for help, get needs met in appropriate ways, get along with others, make friends, develop healthy relationships, protect themselves, and in general, be able to interact with the society harmoniously. Social skills build essential character traits like trustworthiness, respectfulness, responsibility, fairness, caring, and citizenship. These traits help build an internal moral compass, allowing individuals to make good choices in thinking and behavior, resulting in social competence.
The important social skills identified by the Employment and Training Administration are:
Coordination – Adjusting actions in relation to others' actions.
Mentoring – Teaching and helping others learn how to do something (e.g. being a study partner).
Negotiation – Discussion aimed at reaching an agreement.
Persuasion – The action or fact of persuading someone or of being persuaded to do or believe something.
Service orientation – Actively looking for ways to evolve compassionately and grow psycho-socially with people.
Social perceptiveness – Being aware of others' reactions and able to respond in an understanding manner.
Social skills are goal-oriented, with both main goals and sub-goals. For example, when a new employee initiates a workplace interaction with a senior employee, the main goal may be to gather information, while the sub-goal is to establish the rapport needed to achieve that main goal. In his study of consciousness, Takeo Doi distinguished these levels as tatemae, the conventions and verbal expressions, and honne, the true motive behind the conventions.
Causes of deficits
Gresham (1998) categorized deficits in social skills as failure to recognize and display social skills, failure to imitate appropriate models, and failure to perform acceptable behavior in particular situations across developmental and transitional stages. Social skill deficits also hinder adult adjustment for children with behavioral challenges.
Alcohol misuse
Social skills are often significantly impaired in people suffering from alcoholism, owing to the long-term neurotoxic effects of alcohol misuse on the brain, especially the prefrontal cortex. The social skills typically impaired by alcohol abuse include perceiving facial emotions, perceiving prosody, and theory of mind; the ability to understand humor is also often impaired. Impairments in social skills can likewise occur in individuals with fetal alcohol spectrum disorders. These deficits persist throughout the affected people's lives and may worsen over time due to the effects of aging on the brain.
ADHD and hyperkinetic disorder
People with ADHD and hyperkinetic disorder often have difficulties with social skills, such as social interaction. Approximately half of children and adolescents with ADHD experience peer rejection, compared to 10–15 percent of non-ADHD youth. Adolescents with ADHD are less likely to develop close friendships and romantic relationships; their peers usually regard them as immature or as social outcasts, except for peers who themselves have ADHD or related conditions, or a high tolerance for such symptoms. As they mature, however, forming such relationships becomes easier. Training in social skills, behavioral modification, and medication have some beneficial effects. To reduce the emergence of later psychopathology, it is important for youth with ADHD to form friendships with people who are not involved in deviant or delinquent activities and who do not have significant mental illnesses or developmental disabilities. Poor peer relationships can contribute to major depression, criminality, school failure, and substance use disorders.
Autistic spectrum disorders
Individuals with autistic spectrum disorders, including autism and Asperger syndrome, are often characterized by their deficiency in social functioning. The concept of social skills has been questioned in relation to the autistic spectrum. In response to the needs of autistic children, Romanczyk has suggested adopting a comprehensive model of social skill acquisition based on behavioral modification, rather than specific responses tailored to social contexts.
Anxiety and depression
Individuals with few opportunities to socialize with others often struggle with social skills, which can create a downward spiral for people with mental illnesses such as anxiety or depression. Anxiety over interpersonal evaluation and fear of negative reactions from others produce excessive expectations of failure or social rejection, leading people to avoid or withdraw from social interactions. Individuals who experience significant social anxiety often struggle when communicating with others and may have impaired abilities to demonstrate social cues and behaviors appropriately.
The use of social media can also cause anxiety and depression. A study with a sample of 3,560 students suggests that problematic internet use may be present in about 4% of high school students in the United States and may be associated with depression. About a quarter of respondents (28.51%) reported spending fifteen or more hours per week on the internet. Other studies, however, show positive effects from internet use.
Depression can also cause people to avoid opportunities to socialize, which impairs their social skills, and makes socialization unattractive.
Anti-social behaviors
The authors of the book Snakes in Suits: When Psychopaths Go to Work explore psychopathy in the workplace. The FBI consultants describe a five-phase model of how a typical psychopath climbs to and maintains power. Traits exhibited by these individuals include superficial charm, insincerity, egocentricity, manipulativeness, grandiosity, lack of empathy, low agreeableness, exploitativeness, independence, rigidity, stubbornness, and dictatorial tendencies. Babiak and Hare say that for corporate psychopaths success is the best revenge, that their problem behaviors are repeated "ad infinitum" because they have little insight, and that their proto-emotions, such as "anger, frustration, and rage", are refracted as irresistible charm. The authors note that a lack of emotional literacy and moral conscience is often confused with toughness, the ability to make hard decisions, and effective crisis management. Babiak and Hare also emphasize a finding from their studies of psychopaths: psychopaths cannot be influenced by any sort of therapy.
At the University at Buffalo in New York, Emily Grijalva has investigated narcissism in business; she found there are two forms of narcissism, "vulnerable" and "grandiose", and that a "moderate" level of grandiose narcissism is linked to becoming an effective manager. Grandiose narcissists are characterized as confident, possessing an unshakable belief that they are superior even when it is unwarranted; they can be charming, pompous show-offs, and can also be selfish, exploitative, and entitled. Jens Lange and Jan Crusius at the University of Cologne, Germany, associate "malicious-benign" envy with narcissistic social climbers in the workplace. They find that grandiose narcissists are less prone to low self-esteem and neuroticism and are less susceptible to the anxiety and depression that can affect vulnerable narcissists when coupled with envy. They characterize vulnerable narcissists as those who "believe they are special, and want to be seen that way–but are just not that competent, or charming." As a result, their self-esteem fluctuates a lot; they tend to be self-conscious and passive, but also prone to outbursts of potentially violent aggression if their inflated self-image is threatened. Richard Boyatzis says this is an unproductive form of emotional expression that the person cannot share constructively, reflecting a lack of appropriate skills. Eddie Brummelman, a social and behavioral scientist at the University of Amsterdam in the Netherlands, and Brad Bushman at Ohio State University in Columbus say studies show that in Western culture narcissism is on the rise as focus shifts to the self rather than to relationships, and they conclude that all narcissism is socially undesirable ("unhealthy feelings of superiority"). David Kealy at the University of British Columbia in Canada states that narcissism might help temporarily, but in the long run it is better to be true to oneself, have personal integrity, and be kind to others.
Management
Behavioral therapy
Behaviorism interprets social skills as learned behaviors that function to facilitate social reinforcement. According to Schneider & Byrne (1985), operant conditioning procedures for training social skills had the largest effect size, followed by modeling, coaching, and social-cognitive techniques. Behavior analysts prefer the term behavioral skills over social skills. Behavioral skills training to build social and other skills is used with a variety of populations, including in packages to treat addictions, as in the community reinforcement approach and family training (CRAFT).
Behavioral skills training is also used for people with borderline personality disorder, depression, and developmental disabilities. Typically, behaviorists try to develop what are considered cusp skills: critical skills that open access to a variety of environments. The rationale for this approach to treatment is that people meet a variety of social problems and, by practicing in a safe environment, can reduce the stress and punishment such encounters bring. It also addresses how they can increase reinforcement by having the correct skills.
See also
References
External links
National Association of School Psychologists on Social Skills
Harry Potter and the Methods of Rationality
Harry Potter and the Methods of Rationality (HPMOR) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, to March 14, 2015, totaling 122 chapters and over 660,000 words. It adapts the story of Harry Potter to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking, allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. The fan fiction spans one year, covering Harry's first year in Hogwarts. HPMOR has inspired other works of fan fiction, art, and poetry.
Plot
In this fan fiction's alternate universe to the Harry Potter series, Lily Potter magically made her sister Petunia Evans prettier, letting her marry Oxford professor Michael Verres. They adopt their orphaned nephew Harry James Potter as Harry James Potter-Evans-Verres and homeschool him in science and rationality. When Harry turns 11, Petunia and Professor McGonagall inform him and Michael about the wizarding world and Harry's defeat of Lord Voldemort. Harry becomes irritated by wizarding society's inconsistencies and backwardness. When boarding the Hogwarts Express, Harry befriends Draco Malfoy rather than Ron Weasley and teaches him science. Harry also befriends Hermione Granger over their shared scientific inclinations.
At Hogwarts, the Sorting Hat sends both Harry and Hermione to Ravenclaw and Draco to Slytherin. As school begins, Harry earns the trust of McGonagall, bonds with Professor Quirrell (who strives to resurrect the teaching of battle magic), and tests magic through the scientific method with Hermione. Harry invents partial transfiguration, which transmutes parts of wholes by applying timeless physics. Draco reluctantly accepts Harry's evidence against the Malfoys' bigotry toward muggle-borns and informs him that Dumbledore burned his innocent mother, Narcissa, alive.
After winter break, Quirrell procures a Dementor to teach students the Patronus charm. Though Hermione and Harry initially fail, Harry recognizes Dementors as shadows of death and invents the True Patronus charm, destroying the Dementor. After Harry teaches him to cast a regular Patronus, Draco discovers Harry can speak Parseltongue. Quirrell reveals himself to Harry as a snake Animagus and convinces him to help spirit a supposedly manipulated Bellatrix Black out of Azkaban, exposing Harry to the horrors the prisoners endure, while Dumbledore comes to believe that Voldemort is back. After a confrontation, Dumbledore tells Harry that the Order of the Phoenix made him murder Narcissa to stop Voldemort from taking hostages.
Hermione establishes the organization S.P.H.E.W. to protest misogyny in heroism and to fight bullies. This causes widespread chaos, and the group's activities are put on pause. Hermione and Draco are manipulated into believing that she attempted to murder him, and Harry pays his fortune to Lucius Malfoy to save Hermione from Azkaban. A surprised Lucius accepts and withdraws Draco from Hogwarts. The wizarding world theorizes that Quirrell is David Monroe, a long-missing opponent of Voldemort. A mountain troll enters Hogwarts and kills Hermione before Harry manages to kill it. Grieving, Harry vows to resurrect Hermione and preserves her body. Harry absolves the Malfoys of guilt in Hermione's murder in exchange for Lucius returning his money, exonerating Hermione, and returning Draco to Hogwarts.
Quirrell starts eating unicorns, supposedly to delay death from a disease. Near the end of the year, he captures Harry, revealing himself as Voldemort's spirit possessing Quirrell and explaining how he framed and murdered Hermione by proxy. He coerces Harry into helping him steal the Philosopher's Stone, an artifact for performing true transmutation (transfiguration is otherwise temporary), by promising to resurrect Hermione. They succeed, but Dumbledore appears and tries to seal Voldemort outside time; Voldemort endangers Harry, forcing Dumbledore to seal himself instead.
Voldemort's spirit abandons Quirrell and takes on a body using the Stone; he and Harry resurrect Hermione with the power of the Stone and Harry's True Patronus. Voldemort murders Quirrell as a human sacrifice for a ritual that gives Hermione a Horcrux and the powers of a mountain troll and a unicorn, rendering her near-immortal. Knowing Harry is prophesied to destroy the world, Voldemort holds Harry at gunpoint, strips him naked, summons his Death Eaters, forces Harry into a magical oath never to risk destroying the world, and orders his murder. Harry improvises a partial Transfiguration into carbon nanotubes that beheads every Death Eater and maims Voldemort. Harry then stuns Voldemort, wipes his memory, and transfigures him into the jewel of his own ring. Harry claims the Stone and stages a scene to make it look as if "David Monroe" died defeating Voldemort and resurrected Hermione.
After the battle, Harry receives Dumbledore's letters, learning Dumbledore gambled the world's future on him due to prophecies and let Harry inherit his positions and assets. Harry helps a grieving Draco find his mother, Narcissa, and plans with the resurrected Hermione to overhaul wizarding society by destroying Azkaban with the True Patronus and using the Philosopher's Stone to grant everyone immortality.
History
Yudkowsky wrote Harry Potter and the Methods of Rationality to promote the rationality skills he advocates on his community blog LessWrong. According to him, "I'd been reading a lot of Harry Potter fan fiction at the time the plot of HPMOR spontaneously burped itself into existence inside my mind, so it came out as a Harry Potter story, [...] If I had to rationalize it afterward, I'd say the Potterverse is a very rich environment for a curious thinker, and there's a large number of potential readers who would enter at least moderately familiar with the Harry Potter universe."
Yudkowsky has used HPMOR to assist the launch of the Center for Applied Rationality, which teaches courses based on his work.
David Whelan suggested that Yudkowsky rewrite HPMOR to remove the elements of the Harry Potter setting and sell it as an original story, avoiding copyright infringement as E. L. James did with Fifty Shades, which was originally a Twilight fan fiction. Yudkowsky refused, saying, "That's not possible in this case. HPMOR is fundamentally linked to, and can only be understood against the background of, the original Harry Potter novels. Numerous scenes are meant to be understood in the light of other scenes in the original HP."
After HPMOR concluded in 2015, Yudkowsky's readers held many worldwide wrap parties in celebration.
Reception
Critical response
Harry Potter and the Methods of Rationality is highly popular on FanFiction.Net, though it has also caused significant polarization among readers. In 2011, Daniel D. Snyder of The Atlantic recorded how HPMOR "caused uproar in the fan fiction community, drawing both condemnations and praise" on online message boards "for its blasphemous—or brilliant—treatment of the canon." In 2015, David Whelan of Vice described HPMOR as "the most popular Harry Potter book you've never heard of" and claimed, "Most people agree that it's brilliantly written, challenging, and—curiously—mind altering."
HPMOR has received positive mainstream reception. Hugo Award-winning science fiction author David Brin positively reviewed HPMOR for The Atlantic in 2010, saying, "It's a terrific series, subtle and dramatic and stimulating… I wish all Potter fans would go here, and try on a bigger, bolder and more challenging tale." In 2014, American politician Ben Wikler lauded HPMOR on The Guardian as "the #1 fan fiction series of all time," saying it was "told with enormous gusto, and with emotional insight into that kind of mind," and comparing Harry to his friend Aaron Swartz's skeptical attitude. Writing for The Washington Post, legal scholar William Baude praised HPMOR as "the best Harry Potter book ever written, though it is not written by J.K. Rowling" in 2014 and "one of my favorite books written this millennium" in 2015. In 2015, Vakasha Sachdev of Hindustan Times described HPMOR as "a thinking person's story about magic and heroism" and how "the conflict between good and evil is represented as a battle between knowledge and ignorance," eliciting his praise. In 2017, Carol Pinchefsky of Syfy lauded HPMOR as "something brilliant" and "a platform on which the writer bounces off complex ideas in a way that's accessible and downright fun." In a 2019 interview for The Sydney Morning Herald, young adult writer Lili Wilkinson said that she adores HPMOR; according to her, "It not only explains basically all scientific theory, from economics to astrophysics, but it also includes the greatest scene where Malfoy learns about DNA and has to confront his pureblood bigotry." Rhys McKay hailed HPMOR in a 2019 article for Who as "one of the best fanfics ever written" and "a familiar yet all-new take on the Wizarding world."
James D. Miller, an economics professor at Smith College and one of Yudkowsky's acquaintances, praised HPMOR in his 2012 book Singularity Rising as an "excellent marketing strategy" for Yudkowsky's "pseudoscientific-sounding" beliefs due to its carefully crafted lessons about rationality. Though he criticized Yudkowsky as "profoundly arrogant" for believing that making people more rational would make them more likely to agree with his ideas, he nonetheless agreed that such an effort would gain him more followers.
Accolades
The HPMOR fan audiobook was a Parsec Awards finalist in 2012 and 2015.
Translations
Russian
On July 17, 2018, Mikhail Samin, a former head of the Russian Pastafarian Church who had previously published The Gospel of the Flying Spaghetti Monster in Russian, launched a non-commercial crowdfunding campaign on Planeta.ru, alongside about 200 helpers, to print a three-volume edition of the Russian translation of Harry Potter and the Methods of Rationality. Lin Lobaryov, the former lead editor of Mir Fantastiki, compiled the books. Samin's campaign reached its goal of 1.086 million ₽ (approximately US$17,000) within 30 hours; it ended on September 30 with 11.4 million ₽ collected (approximately US$175,000) from 7,278 people, and for a day it was the biggest Russian crowdfunding project, until a fundraiser hosted on CrowdRepublic for the Russian translation of Gloomhaven surpassed it.
Though Samin originally planned to print 1000 copies of HPMOR, his campaign's unprecedented success led him to print twenty-one times that number. Yudkowsky supported Samin's efforts and wrote an exclusive introduction for HPMOR's Russian printing, though the campaign's popularity surprised him. Samin's publication project is the largest-scale printing of HPMOR on record, surpassing many previous low-circulation fan printings; he sent some Russian copies to libraries and others to schools as prizes for Olympiad winners. J.K. Rowling and her agents refused the Russian publishing house Eksmo's request for commercial publication of HPMOR.
Other
HPMOR has Czech, Chinese, French, German, Hebrew, Indonesian, Italian, Japanese, Norwegian, Spanish, Swedish, and Ukrainian translations.
See also
My Immortal and Hogwarts School of Prayer and Miracles, two near-universally condemned Harry Potter fan fictions
All the Young Dudes, a similarly praised Harry Potter fan fiction
References
External links
The Methods of Rationality Podcast (Full cast audiobook available as a podcast)
Rhetoric
Rhetoric is the art of persuasion. It is one of the three ancient arts of discourse (trivium) along with grammar and logic/dialectic. As an academic discipline within the humanities, rhetoric aims to study the techniques that speakers or writers use to inform, persuade, and motivate their audiences. Rhetoric also provides heuristics for understanding, discovering, and developing arguments for particular situations.
Aristotle defined rhetoric as "the faculty of observing in any given case the available means of persuasion", and since mastery of the art was necessary for victory in a case at law, for passage of proposals in the assembly, or for fame as a speaker in civic ceremonies, he called it "a combination of the science of logic and of the ethical branch of politics". Aristotle also identified three persuasive audience appeals: logos, pathos, and ethos. The five canons of rhetoric, or phases of developing a persuasive speech, were first codified in classical Rome: invention, arrangement, style, memory, and delivery.
From Ancient Greece to the late 19th century, rhetoric played a central role in Western education in training orators, lawyers, counsellors, historians, statesmen, and poets.
Uses
Scope
Scholars have debated the scope of rhetoric since ancient times. Although some have limited rhetoric to the specific realm of political discourse, to many modern scholars it encompasses every aspect of culture. Contemporary studies of rhetoric address a much more diverse range of domains than was the case in ancient times. While classical rhetoric trained speakers to be effective persuaders in public forums and in institutions such as courtrooms and assemblies, contemporary rhetoric investigates human discourse writ large. Rhetoricians have studied the discourses of a wide variety of domains, including the natural and social sciences, fine art, religion, journalism, digital media, fiction, history, cartography, and architecture, along with the more traditional domains of politics and the law.
Because the ancient Greeks valued public political participation, rhetoric emerged as an important curriculum for those desiring to influence politics. Rhetoric is still associated with its political origins. However, even the original instructors of Western speech—the Sophists—disputed this limited view of rhetoric. According to Sophists like Gorgias, a successful rhetorician could speak convincingly on a topic in any field, regardless of his experience in that field. This suggested rhetoric could be a means of communicating any expertise, not just politics. In his Encomium of Helen, Gorgias even applied rhetoric to fiction by seeking, for his amusement, to prove the blamelessness of the mythical Helen of Troy in starting the Trojan War.
Plato defined the scope of rhetoric according to his negative opinions of the art. He criticized the Sophists for using rhetoric to deceive rather than to discover truth. In Gorgias, one of his Socratic Dialogues, Plato defines rhetoric as the persuasion of ignorant masses within the courts and assemblies. Rhetoric, in Plato's opinion, is merely a form of flattery and functions similarly to culinary arts, which mask the undesirability of unhealthy food by making it taste good. Plato considered any speech of lengthy prose aimed at flattery as within the scope of rhetoric. Some scholars, however, contest the idea that Plato despised rhetoric and instead view his dialogues as a dramatization of complex rhetorical principles.
Aristotle both redeemed rhetoric from his teacher and narrowed its focus by defining three genres of rhetoric—deliberative, forensic or judicial, and epideictic. Yet, even as he provided order to existing rhetorical theories, Aristotle generalized the definition of rhetoric to be the ability to identify the appropriate means of persuasion in a given situation based upon the art of rhetoric (technê). This made rhetoric applicable to all fields, not just politics. Aristotle viewed the enthymeme based upon logic (especially, based upon the syllogism) as the basis of rhetoric.
Aristotle also outlined generic constraints that focused the rhetorical art squarely within the domain of public political practice. He restricted rhetoric to the domain of the contingent or probable: those matters that admit multiple legitimate opinions or arguments.
Logic has changed since the time of Aristotle. Modal logic, for example, has undergone a major development that also modifies rhetoric.
The contemporary neo-Aristotelian and neo-Sophistic positions on rhetoric mirror the division between the Sophists and Aristotle. Neo-Aristotelians generally study rhetoric as political discourse, while the neo-Sophistic view contends that rhetoric cannot be so limited. Rhetorical scholar Michael Leff characterizes the conflict between these positions as viewing rhetoric as a "thing contained" versus a "container". The neo-Aristotelian view threatens the study of rhetoric by restraining it to such a limited field, ignoring many critical applications of rhetorical theory, criticism, and practice. Simultaneously, the neo-Sophists threaten to expand rhetoric beyond a point of coherent theoretical value.
In more recent years, people studying rhetoric have tended to enlarge its object domain beyond speech. Kenneth Burke asserted humans use rhetoric to resolve conflicts by identifying shared characteristics and interests in symbols. People engage in identification, either to assign themselves or another to a group. This definition of rhetoric as identification broadens the scope from strategic and overt political persuasion to the more implicit tactics of identification found in an immense range of sources.
Among the many scholars who have since pursued Burke's line of thought, James Boyd White sees rhetoric as a broader domain of social experience in his notion of constitutive rhetoric. Influenced by theories of social construction, White argues that culture is "reconstituted" through language. Just as language influences people, people influence language. Language is socially constructed, and depends on the meanings people attach to it. Because language is not rigid and changes depending on the situation, the very usage of language is rhetorical. An author, White would say, is always trying to construct a new world and to persuade his or her readers to share that world within the text.
People engage in rhetoric any time they speak or produce meaning. Even in the field of science, via practices which were once viewed as being merely the objective testing and reporting of knowledge, scientists persuade their audience to accept their findings by sufficiently demonstrating that their study or experiment was conducted reliably and resulted in sufficient evidence to support their conclusions.
The vast scope of rhetoric is difficult to define. Political discourse remains the paradigmatic example for studying and theorizing specific techniques and conceptions of persuasion or rhetoric.
As a civic art
Throughout European history, rhetoric meant persuasion in public and political settings such as assemblies and courts. Because of its associations with democratic institutions, rhetoric is commonly said to flourish in open and democratic societies with rights of free speech, free assembly, and political enfranchisement for some portion of the population. Those who classify rhetoric as a civic art believe that rhetoric has the power to shape communities, form the character of citizens, and greatly affect civic life.
Rhetoric was viewed as a civic art by several of the ancient philosophers. Aristotle and Isocrates were two of the first to see rhetoric in this light. In Antidosis, Isocrates states, "We have come together and founded cities and made laws and invented arts; and, generally speaking, there is no institution devised by man which the power of speech has not helped us to establish." With this statement he argues that rhetoric is a fundamental part of civic life in every society and that it has been necessary in the foundation of all aspects of society. He further argues in Against the Sophists that rhetoric, although it cannot be taught to just anyone, is capable of shaping the character of man. He writes, "I do think that the study of political discourse can help more than any other thing to stimulate and form such qualities of character." Aristotle, writing several years after Isocrates, supported many of his arguments and argued for rhetoric as a civic art.
In the words of Aristotle, in the Rhetoric, rhetoric is "...the faculty of observing in any given case the available means of persuasion". According to Aristotle, this art of persuasion could be used in public settings in three different ways: "A member of the assembly decides about future events, a juryman about past events: while those who merely decide on the orator's skill are observers. From this it follows that there are three divisions of oratory—(1) political, (2) forensic, and (3) the ceremonial oratory of display". Eugene Garver, in his critique of Aristotle's Rhetoric, confirms that Aristotle viewed rhetoric as a civic art. Garver writes, "Rhetoric articulates a civic art of rhetoric, combining the almost incompatible properties of techne and appropriateness to citizens." Each of Aristotle's divisions plays a role in civic life and can be used in a different way to affect the polis.
Because rhetoric is a public art capable of shaping opinion, some of the ancients, including Plato, found fault with it. They claimed that while it could be used to improve civic life, it could be used equally easily to deceive or manipulate, and that the masses were incapable of analyzing or deciding anything on their own and would therefore be swayed by the most persuasive speeches. Thus, civic life could be controlled by whoever could deliver the best speech. Plato explores the problematic moral status of rhetoric twice: in Gorgias and in The Phaedrus, a dialogue best known for its commentary on love.
More trusting in the power of rhetoric to support a republic, the Roman orator Cicero argued that art required something more than eloquence. A good orator needed also to be a good man, a person enlightened on a variety of civic topics. He describes the proper training of the orator in his major text on rhetoric, De Oratore, which he modeled on Plato's dialogues.
Modern works continue to support the claims of the ancients that rhetoric is an art capable of influencing civic life. In Political Style, Robert Hariman claims that "questions of freedom, equality, and justice often are raised and addressed through performances ranging from debates to demonstrations without loss of moral content". James Boyd White argues that rhetoric is capable not only of addressing issues of political interest but that it can influence culture as a whole. In his book, When Words Lose Their Meaning, he argues that words of persuasion and identification define community and civic life. He states that words produce "the methods by which culture is maintained, criticized, and transformed".
Rhetoric remains relevant as a civic art. In speeches, as well as in non-verbal forms, rhetoric continues to be used as a tool to influence communities from local to national levels.
As a political tool
Political parties employ "manipulative rhetoric" to advance their party-line goals and lobbyist agendas, using it to portray themselves as champions of compassion, freedom, and culture even while implementing policies that appear to contradict these claims. Such rhetoric serves as a form of political propaganda, presented to sway and maintain public opinion and to garner a positive image, potentially at the expense of suppressing dissent or criticism. An example is a government freezing bank accounts or regulating internet speech, ostensibly to protect the vulnerable and preserve freedom of expression, despite these actions contradicting those very values and rights.
Rhetorical language originated in Ancient Greece with a group known as the Sophists, who wanted to teach the Athenians to speak persuasively so they could navigate the courts and senate. This form of persuasive speech arose alongside a new form of government then being experimented with: democracy. Consequently, people began to fear that persuasive speech would overpower truth. Aristotle, however, believed that this technique was an art and that persuasive speech could have truth and logic embedded within it. Rhetorical speech nonetheless remained popular and was used by many scholars and philosophers.
As a course of study
The study of rhetoric trains students to speak and/or write effectively, and to critically understand and analyze discourse. It is concerned with how people use symbols, especially language, to reach agreement that permits coordinated effort.
Rhetoric as a course of study has evolved since its ancient beginnings, and has adapted to the particular exigencies of various times, venues, and applications ranging from architecture to literature. Although the curriculum has transformed in a number of ways, it has generally emphasized the study of principles and rules of composition as a means for moving audiences.
Rhetoric began as a civic art in Ancient Greece, where students were trained to develop tactics of oratorical persuasion, especially in legal disputes. Rhetoric originated in a school of pre-Socratic philosophers known as the Sophists. Demosthenes and Lysias emerged as major orators during this period, and Isocrates and Gorgias as prominent teachers. Modern teachings continue to reference these rhetoricians and their work in discussions of classical rhetoric and persuasion.
Rhetoric was taught in universities during the Middle Ages as one of the three original liberal arts or trivium (along with logic and grammar). During the medieval period, political rhetoric declined as republican oratory died out and the emperors of Rome garnered increasing authority. With the rise of European monarchs, rhetoric shifted into courtly and religious applications. Augustine exerted strong influence on Christian rhetoric in the Middle Ages, advocating the use of rhetoric to lead audiences to truth and understanding, especially in the church. The study of liberal arts, he believed, contributed to rhetorical study: "In the case of a keen and ardent nature, fine words will come more readily through reading and hearing the eloquent than by pursuing the rules of rhetoric." Poetry and letter writing became central to rhetorical study during the Middle Ages. After the fall of the Roman republic, poetry became a tool for rhetorical training since there were fewer opportunities for political speech. Letter writing was the primary way business was conducted both in state and church, so it became an important aspect of rhetorical education.
Rhetorical education became more restrained as style and substance separated in 16th-century France, and attention turned to the scientific method. Influential scholars like Peter Ramus argued that the processes of invention and arrangement should be elevated to the domain of philosophy, while rhetorical instruction should be chiefly concerned with the use of figures and other forms of the ornamentation of language. Scholars such as Francis Bacon developed the study of "scientific rhetoric" which rejected the elaborate style characteristic of classical oration. This plain language carried over to John Locke's teaching, which emphasized concrete knowledge and steered away from ornamentation in speech, further alienating rhetorical instruction—which was identified wholly with such ornamentation—from the pursuit of knowledge.
In the 18th century, rhetoric assumed a more social role, leading to the creation of new education systems (predominantly in England): "Elocution schools" in which girls and women analyzed classic literature, most notably the works of William Shakespeare, and discussed pronunciation tactics.
The study of rhetoric underwent a revival with the rise of democratic institutions during the late 18th and early 19th centuries. Hugh Blair was a key early leader of this movement. In his most famous work, Lectures on Rhetoric and Belles Lettres, he advocates rhetorical study for common citizens as a resource for social success. Many American colleges and secondary schools used Blair's text throughout the 19th century to train students of rhetoric.
Political rhetoric also underwent renewal in the wake of the U.S. and French revolutions. The rhetorical studies of ancient Greece and Rome were resurrected as speakers and teachers looked to Cicero and others to inspire defenses of the new republics. Leading rhetorical theorists included John Quincy Adams of Harvard, who advocated the democratic advancement of rhetorical art. Harvard's founding of the Boylston Professorship of Rhetoric and Oratory sparked the growth of the study of rhetoric in colleges across the United States. Harvard's rhetoric program drew inspiration from literary sources to guide organization and style, and studied the rhetoric used in political communication to illustrate how political figures persuade audiences. William G. Allen became the first American college professor of rhetoric, at New-York Central College, 1850–1853.
Debate clubs and lyceums also developed as forums in which common citizens could hear speakers and sharpen debate skills. The American lyceum in particular was seen as both an educational and social institution, featuring group discussions and guest lecturers. These programs cultivated democratic values and promoted active participation in political analysis.
Throughout the 20th century, rhetoric developed as a concentrated field of study, with the establishment of rhetorical courses in high schools and universities. Courses such as public speaking and speech analysis apply fundamental Greek theories (such as the modes of persuasion: ethos, pathos, and logos) and trace rhetorical development through history. Rhetoric earned a more esteemed reputation as a field of study with the emergence of Communication Studies departments and of Rhetoric and Composition programs within English departments in universities, and in conjunction with the linguistic turn in Western philosophy. Rhetorical study has broadened in scope, and is especially used by the fields of marketing, politics, and literature.
Another area of rhetoric is the study of cultural rhetorics, which is the communication that occurs between cultures and the study of the way members of a culture communicate with each other. These ideas can then be studied and understood by other cultures, in order to bridge gaps in modes of communication and help different cultures communicate effectively with each other. James Zappen defines cultural rhetorics as the idea that rhetoric is concerned with negotiation and listening, not persuasion, which differs from ancient definitions. Some ancient rhetoric was disparaged because its persuasive techniques could be used to teach falsehoods. Communication as studied in cultural rhetorics is focused on listening and negotiation, and has little to do with persuasion.
Canons
Rhetorical education focused on five canons, which serve as a guide to creating persuasive messages and arguments:
inventio (invention) – the process that leads to the development and refinement of an argument.
dispositio (disposition, or arrangement) – used to determine how an argument should be organized for greatest effect, usually beginning with the exordium.
elocutio (style) – determining how to present the arguments.
memoria (memory) – the process of learning and memorizing the speech and persuasive messages.
pronuntiatio (presentation) and actio (delivery) – the gestures, pronunciation, tone, and pace used when presenting the persuasive arguments—the Grand Style.
Memory was added much later to the original four canons.
Music
During the Renaissance, rhetoric enjoyed a resurgence, and as a result nearly every author who wrote about music before the Romantic era discussed rhetoric. Joachim Burmeister wrote in 1601, "there is only little difference between music and the nature of oration". Christoph Bernhard, in the latter half of the century, said "...until the art of music has attained such a height in our own day, that it may indeed be compared to a rhetoric, in view of the multitude of figures".
Knowledge
Epistemology and rhetoric have been compared to one another for decades, but the specifics of their similarities have gone undefined. Since the scholar Robert L. Scott stated that "rhetoric is epistemic", rhetoricians and philosophers alike have struggled to define concretely the expanse of implications these words hold. Those who have identified this inconsistency maintain that Scott's relation is important but requires further study.
The root of the issue lies in the ambiguous use of the term rhetoric itself, as well as of the epistemological terms knowledge, certainty, and truth. Though counterintuitive and vague, Scott's claims are accepted by some academics, who then use them to draw different conclusions. Sonja K. Foss, for example, takes the view that "rhetoric creates knowledge", whereas James Herrick writes that rhetoric assists people in forming beliefs, which are defined as knowledge once they become widespread in a community.
It is unclear whether Scott holds that certainty is an inherent part of establishing knowledge, as his references to the term are abstract. He is not alone: the debate over certainty's role has persisted in philosophical circles since long before he added rhetoric to it. An overwhelming majority does support the concept of certainty as a requirement for knowledge, but parties diverge at the definition of certainty: one definition maintains that certainty is subjective and feeling-based, the other that it is a byproduct of justification.
The more commonly accepted definition of rhetoric claims it is synonymous with persuasion. For rhetorical purposes, this definition, like many others, is too broad. The same issue presents itself with definitions that are too narrow. Rhetoricians in support of the epistemic view of rhetoric have yet to agree in this regard.
Philosophical teachings refer to knowledge as justified true belief. The Gettier problem, however, explores the room for fallacy in this concept, and it therefore undermines the argument of Richard A. Cherwitz and James A. Hikins, who adopt the justified-true-belief standpoint in arguing that rhetoric is epistemic. Celeste Condit Railsback takes a different approach, drawing from Ray E. McKerrow's system of belief based on validity rather than certainty.
William D. Harpine addresses the problem of unclear definitions in theories that "rhetoric is epistemic" in his 2004 article "What Do You Mean, Rhetoric Is Epistemic?", in which he seeks the most appropriate definitions for the terms "rhetoric", "knowledge", and "certainty". According to Harpine, certainty is either objective or subjective. Although both Scott's theory and Cherwitz and Hikins's theory deal with some form of certainty, Harpine believes that knowledge need not be either objectively or subjectively certain. As for "rhetoric", Harpine argues that defining rhetoric as "the art of persuasion" is the best choice in the context of this theoretical approach to rhetoric as epistemic. Harpine then presents two methods of approaching the idea of rhetoric as epistemic based on these definitions: one centers on Alston's view that one's beliefs are justified if formed by one's normal doxastic practices, while the other draws on the causal theory of knowledge. Both approaches avoid Gettier's problems and do not rely on unclear conceptions of certainty.
The discussion of rhetoric and epistemology raises the question of ethics: is it ethical for rhetoric to present itself as a branch of knowledge? Scott raises this question, addressing the issue not through ambiguity in the definitions of other terms but by arguing against subjectivity regarding certainty. Ultimately, according to Thomas O. Sloane, rhetoric and epistemology exist as counterparts, working toward the same purpose of establishing knowledge, with the common enemy of subjective certainty.
History and development
Rhetoric is persuasive speech that holds people to a common purpose and therefore facilitates collective action. During the fifth century BCE, Athens had become an active metropolis, and the Greek city-state was experimenting with a new form of government: democracy, from demos, "the people". Political and cultural identity was tied to the city, and the citizens of Athens formed institutions to support these new processes: the Senate, jury trials, and forms of public discussion. People, however, needed to learn how to navigate these new institutions, and with no way of passing on information other than word of mouth, the Athenians needed an effective strategy for informing the people. A group of wandering Sicilians, later known as the Sophists, began teaching the Athenians persuasive speech, with the goal of navigating the courts and senate. The Sophists became speech teachers; their name derives from sophia, Greek for "wisdom" and the root of philosophy, "love of wisdom", and sophist came to be a common term for someone who sold wisdom for money. Although there is no clear understanding of why the Sicilians took up educating the Athenians in persuasive speech, the Athenians did indeed come to rely on it in public speaking and in the new political processes, and the Sophists' training led to many victories in legal cases and public debates. This ultimately raised concerns that falsehood would prevail over truth, as highly trained, persuasive speakers could knowingly misinform.
Rhetoric has its origins in Mesopotamia. Some of the earliest examples of rhetoric can be found in the Akkadian writings of the princess and priestess Enheduanna. Enheduanna was the first named author in history, and her writing exhibits numerous rhetorical features that would later become canon in Ancient Greece. Her "The Exaltation of Inanna" includes an exordium, argument, and peroration, as well as elements of ethos, pathos, and logos, and repetition and metonymy. She is also known for describing her process of invention in "The Exaltation of Inanna", moving between first- and third-person address to relate her composing process in collaboration with the goddess Inanna, reflecting a mystical enthymeme in drawing upon a Cosmic audience.
Later examples of early rhetoric can be found in the Neo-Assyrian Empire during the time of Sennacherib.
In ancient Egypt, rhetoric had existed since at least the Middle Kingdom period. The five canons of eloquence in ancient Egyptian rhetoric were silence, timing, restraint, fluency, and truthfulness. The Egyptians held eloquent speaking in high esteem. Egyptian rules of rhetoric specified that "knowing when not to speak is essential, and very respected, rhetorical knowledge", making rhetoric a "balance between eloquence and wise silence". They also emphasized "adherence to social behaviors that support a conservative status quo" and they held that "skilled speech should support, not question, society".
In ancient China, rhetoric dates back to the Chinese philosopher Confucius. The tradition of Confucianism emphasized the use of eloquence in speaking.
The use of rhetoric can also be found in the ancient Biblical tradition.
Ancient Greece
In Europe, organized thought about public speaking began in ancient Greece.
In ancient Greece, the earliest mention of oratorical skill occurs in Homer's Iliad, in which heroes like Achilles, Hector, and Odysseus were honored for their ability to advise and exhort their peers and followers (the laos or army) to wise and appropriate action. With the rise of the democratic polis, speaking skill was adapted to the needs of the public and political life of cities in ancient Greece. Greek citizens used oratory to make political and judicial decisions, and to develop and disseminate philosophical ideas. For modern students, it can be difficult to remember that the wide use and availability of written texts is a phenomenon that was just coming into vogue in Classical Greece. In Classical times, many of the great thinkers and political leaders performed their works before an audience, usually in the context of a competition or contest for fame, political influence, and cultural capital; in fact, many of them are known only through the texts that their students, followers, or detractors wrote down. Rhētōr was the Greek term for "orator": a rhētōr was a citizen who regularly addressed juries and political assemblies and who was thus understood to have gained some knowledge about public speaking in the process, though general facility with language was often referred to as logôn techne, "skill with arguments" or "verbal artistry".
Possibly the first study of the power of language may be attributed to the philosopher Empedocles, whose theories on human knowledge would provide a basis for many future rhetoricians. The first written manual is attributed to Corax and his pupil Tisias. Their work, as well as that of many of the early rhetoricians, grew out of the courts of law; Tisias, for example, is believed to have written judicial speeches that others delivered in the courts.
Rhetoric evolved as an important art, one that provided the orator with the forms, means, and strategies for persuading an audience of the correctness of the orator's arguments. Today the term rhetoric can be used at times to refer only to the form of argumentation, often with the pejorative connotation that rhetoric is a means of obscuring the truth. Classical philosophers believed quite the contrary: the skilled use of rhetoric was essential to the discovery of truths, because it provided the means of ordering and clarifying arguments.
Sophists
Teaching in oratory was popularized in the fifth century BCE by itinerant teachers known as sophists, the best known of whom were Protagoras, Gorgias, and Isocrates. Aspasia of Miletus is believed to be one of the first women to engage in private and public rhetorical activities as a Sophist. The Sophists were a disparate group who travelled from city to city, teaching in public places to attract students and offer them an education. Their central focus was on logos, or what we might broadly refer to as discourse, its functions and powers. They defined parts of speech, analyzed poetry, parsed close synonyms, invented argumentation strategies, and debated the nature of reality. They claimed to make their students better, or, in other words, to teach virtue. They thus claimed that human excellence was not an accident of fate or a prerogative of noble birth, but an art or technê that could be taught and learned. They were thus among the first humanists.
Several Sophists also questioned received wisdom about the gods and the Greek culture, which they believed was taken for granted by Greeks of their time, making these Sophists among the first agnostics. For example, they argued that cultural practices were a function of convention or nomos rather than blood or birth or phusis. They argued further that the morality or immorality of any action could not be judged outside of the cultural context within which it occurred. The well-known phrase, "Man is the measure of all things" arises from this belief. One of the Sophists' most famous, and infamous, doctrines has to do with probability and counter arguments. They taught that every argument could be countered with an opposing argument, that an argument's effectiveness derived from how "likely" it appeared to the audience (its probability of seeming true), and that any probability argument could be countered with an inverted probability argument. Thus, if it seemed likely that a strong, poor man were guilty of robbing a rich, weak man, the strong poor man could argue, on the contrary, that this very likelihood (that he would be a suspect) makes it unlikely that he committed the crime, since he would most likely be apprehended for the crime. They also taught and were known for their ability to make the weaker (or worse) argument the stronger (or better). Aristophanes famously parodies the clever inversions that sophists were known for in his play The Clouds.
The word "sophistry" developed negative connotations in ancient Greece that continue today, but in ancient Greece, Sophists were popular and well-paid professionals, respected for their abilities and also criticized for their excesses.
According to William Keith and Christian Lundberg, as Greek society shifted towards more democratic values, the Sophists taught the newly democratic society the importance of persuasive speech and strategic communication for its new governmental institutions.
Isocrates
Isocrates, like the Sophists, taught public speaking as a means of human improvement, but he worked to distinguish himself from the Sophists, whom he saw as claiming far more than they could deliver. He suggested that while an art of virtue or excellence did exist, it was only one piece, and the least, in a process of self-improvement that relied much more on native talent, desire, constant practice, and the imitation of good models. Isocrates believed that practice in speaking publicly about noble themes and important questions would improve the character of both speaker and audience while also offering the best service to a city. Isocrates was an outspoken champion of rhetoric as a mode of civic engagement. He thus wrote his speeches as "models" for his students to imitate in the same way that poets might imitate Homer or Hesiod, seeking to inspire in them a desire to attain fame through civic leadership. His was the first permanent school in Athens and it is likely that Plato's Academy and Aristotle's Lyceum were founded in part as a response to Isocrates. Though he left no handbooks, his speeches ("Antidosis" and "Against the Sophists" are most relevant to students of rhetoric) became models of oratory and keys to his entire educational program. He was one of the canonical "Ten Attic Orators". He influenced Cicero and Quintilian, and through them, the entire educational system of the west.
Plato
Plato outlined the differences between true and false rhetoric in a number of dialogues—particularly the Gorgias and Phaedrus, dialogues in which Plato disputes the sophistic notion that the art of persuasion (the Sophists' art, which he calls "rhetoric"), can exist independent of the art of dialectic. Plato claims that since Sophists appeal only to what seems probable, they are not advancing their students and audiences, but simply flattering them with what they want to hear. While Plato's condemnation of rhetoric is clear in the Gorgias, in the Phaedrus he suggests the possibility of a true art wherein rhetoric is based upon the knowledge produced by dialectic. He relies on a dialectically informed rhetoric to appeal to the main character, Phaedrus, to take up philosophy. Thus Plato's rhetoric is actually dialectic (or philosophy) "turned" toward those who are not yet philosophers and are thus unready to pursue dialectic directly. Plato's animosity against rhetoric, and against the Sophists, derives not only from their inflated claims to teach virtue and their reliance on appearances, but from the fact that his teacher, Socrates, was sentenced to death after Sophists' efforts.
Some scholars, however, see Plato not as an opponent of rhetoric but rather as a nuanced rhetorical theorist who dramatized rhetorical practice in his dialogues and imagined rhetoric as more than just oratory.
Aristotle
Aristotle defined rhetoric as a counterpart (antistrophos) to dialectic: "Let rhetoric [be defined as] an ability [dynamis], in each [particular] case, to see the available means of persuasion." As "a counterpart of dialectic", rhetoric is an art of practical civic reasoning, applied to deliberative, judicial, and "display" speeches in political assemblies, lawcourts, and other public gatherings.
Aristotle was a student of Plato who set forth an extended treatise on rhetoric that still repays careful study today. In the first sentence of The Art of Rhetoric, Aristotle says that "rhetoric is the antistrophos of dialectic". As the "antistrophe" of a Greek ode responds to and is patterned after the structure of the "strophe" (they form two sections of the whole and are sung by two parts of the chorus), so the art of rhetoric follows and is structurally patterned after the art of dialectic because both are arts of discourse production. While dialectical methods are necessary to find truth in theoretical matters, rhetorical methods are required in practical matters such as adjudicating somebody's guilt or innocence when charged in a court of law, or adjudicating a prudent course of action to be taken in a deliberative assembly.
For Plato and Aristotle, dialectic involves persuasion, so when Aristotle says that rhetoric is the antistrophos of dialectic, he means that rhetoric as he uses the term has a domain or scope of application that is parallel to, but different from, the domain or scope of application of dialectic. Claude Pavur explains that "[t]he Greek prefix 'anti' does not merely designate opposition, but it can also mean 'in place of'".
Aristotle's treatise on rhetoric systematically describes civic rhetoric as a human art or skill. It is more of an objective theory than it is an interpretive theory with a rhetorical tradition. Aristotle's art of rhetoric emphasizes persuasion as the purpose of rhetoric. His definition of rhetoric as "the faculty of observing in any given case the available means of persuasion", essentially a mode of discovery, limits the art to the inventional process; Aristotle emphasizes the logical aspect of this process. A speaker supports the probability of a message by logical, ethical, and emotional proofs.
Aristotle identifies three steps or "offices" of rhetoric—invention, arrangement, and style—and three different types of rhetorical proof:
Ethos – Aristotle's theory of character and how the character and credibility of a speaker can influence an audience to consider him/her to be believable—there being three qualities that contribute to a credible ethos: perceived intelligence, virtuous character, and goodwill
Pathos – the use of emotional appeals to alter the audience's judgment through metaphor, amplification, storytelling, or presenting the topic in a way that evokes strong emotions in the audience
Logos – the use of reasoning, either inductive or deductive, to construct an argument
Aristotle emphasized enthymematic reasoning as central to the process of rhetorical invention, though later rhetorical theorists placed much less emphasis on it. An "enthymeme" follows the form of a syllogism but omits either the major or the minor premise. An enthymeme is persuasive because the audience provides the missing premise. Because the audience participates in providing the missing premise, they are more likely to be persuaded by the message.
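To illustrate (using the stock textbook example about Socrates, not an example drawn from Aristotle's text), the full syllogism and its enthymematic compression can be set side by side:

```latex
% Full syllogism: both premises stated explicitly.
% Major premise: All humans are mortal.
% Minor premise: Socrates is a human.
% Conclusion:    Therefore, Socrates is mortal.
\frac{\text{All humans are mortal} \qquad \text{Socrates is a human}}
     {\text{Socrates is mortal}}

% Enthymeme: the major premise is suppressed;
% the audience supplies "All humans are mortal" for themselves.
\frac{\text{Socrates is a human}}{\text{Socrates is mortal}}
```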
Aristotle identified three different types or genres of civic rhetoric:
Forensic (also known as judicial) concerned with determining the truth or falseness of events that took place in the past and issues of guilt—for example, in a courtroom
Deliberative (also known as political) concerned with determining whether or not particular actions should or should not be taken in the future—for example, making laws
Epideictic (also known as ceremonial) concerned with praise and blame, values, right and wrong, demonstrating beauty and skill in the present—for example, a eulogy or a wedding toast
Another Aristotelian doctrine was the idea of topics (also referred to as common topics or commonplaces). Though the term had a wide range of application (as a memory technique or compositional exercise, for example) it most often referred to the "seats of argument"—the list of categories of thought or modes of reasoning—that a speaker could use to generate arguments or proofs. The topics were thus a heuristic or inventional tool designed to help speakers categorize and thus better retain and apply frequently used types of argument. For example, since we often see effects as "like" their causes, one way to invent an argument (about a future effect) is by discussing the cause (which it will be "like"). This and other rhetorical topics derive from Aristotle's belief that there are certain predictable ways in which humans (particularly non-specialists) draw conclusions from premises. Based upon and adapted from his dialectical Topics, the rhetorical topics became a central feature of later rhetorical theorizing, most famously in Cicero's work of that name.
India
India's Struggle for Independence offers a vivid description of the culture of newspaper reading and discussion that sprang up in village India of the early 1870s. This reading and discussion was the focal point of origin of the modern Indian rhetorical movement. Long before this, ancients such as Kautilya, Birbal, and others had engaged in a great deal of discussion and persuasion.
Keith Lloyd argued that much of the recital of the Vedas can be likened to the recital of ancient Greek poetry. Lloyd proposed including the Nyāya Sūtras in the field of rhetorical studies, exploring its methods within their historical context, comparing its approach to the traditional logical syllogism, and relating it to modern perspectives of Stephen Toulmin, Kenneth Burke, and Chaim Perelman.
Nyāya is a Sanskrit word which means "just" or "right" and refers to "the science of right and wrong reasoning".
Sūtra is also a Sanskrit word which means string or thread; here it refers to a collection of aphorisms in the form of a manual. Each sutra is a short rule, usually consisting of one or two sentences. An example of a sutra is: "Reality is truth, and what is true is so, irrespective of whether we know it is, or are aware of that truth." The Nyāya Sūtras is an ancient Indian Sanskrit text composed by Aksapada Gautama. It is the foundational text of the Nyaya school of Hindu philosophy. The text may have been composed by more than one author, over a period of time. Radhakrishnan and Moore placed its origin around the third century BCE, though some of its contents are certainly of the post-Christian era. The ancient school of Nyaya extended over a period of one thousand years, beginning with Gautama about 550 BCE and ending with Vatsyayana about 400 CE.
Nyaya provides insight into Indian rhetoric. Nyaya presents an argumentative approach with which a rhetor can decide about any argument. In addition, it proposes an approach to thinking about cultural tradition which is different from Western rhetoric. Whereas Toulmin emphasizes the situational dimension of argumentative genre as the fundamental component of any rhetorical logic, Nyaya extends this situational view to the surrounding cultural tradition.
Some of India's famous rhetors include Kabir Das, Rahim Das, Chanakya, and Chandragupta Maurya.
Rome
Cicero
For the Romans, oration became an important part of public life. Cicero was chief among Roman rhetoricians and remains the best known ancient orator and the only orator who both spoke in public and produced treatises on the subject. Rhetorica ad Herennium, formerly attributed to Cicero but now considered to be of unknown authorship, is one of the most significant works on rhetoric and is still widely used as a reference today. It is an extensive reference on the use of rhetoric, and in the Middle Ages and Renaissance, it achieved wide publication as an advanced school text on rhetoric.
Cicero charted a middle path between the competing Attic and Asiatic styles to become considered second only to Demosthenes among history's orators. His works include the early and very influential De Inventione (On Invention, often read alongside Ad Herennium as the two basic texts of rhetorical theory throughout the Middle Ages and into the Renaissance), De Oratore (a fuller statement of rhetorical principles in dialogue form), Topics (a rhetorical treatment of common topics, highly influential through the Renaissance), Brutus (a discussion of famous orators), and Orator (a defense of Cicero's style). Cicero also left a large body of speeches and letters which would establish the outlines of Latin eloquence and style for generations.
The rediscovery of Cicero's speeches (such as the defense of Archias) and letters (to Atticus) by Italians like Petrarch helped to ignite the Renaissance.
Cicero championed the learning of Greek (and Greek rhetoric), contributed to Roman ethics, linguistics, philosophy, and politics, and emphasized the importance of all forms of appeal (emotion, humor, stylistic range, irony, and digression in addition to pure reasoning) in oratory. But perhaps his most significant contribution to subsequent rhetoric, and education in general, was his argument that orators learn not only about the specifics of their case (the hypothesis) but also about the general questions from which they derived (the theses). Thus, in giving a speech in defense of a poet whose Roman citizenship had been questioned, the orator should examine not only the specifics of that poet's civic status, he should also examine the role and value of poetry and of literature more generally in Roman culture and political life. The orator, said Cicero, needed to be knowledgeable about all areas of human life and culture, including law, politics, history, literature, ethics, warfare, medicine, and even arithmetic and geometry. Cicero gave rise to the idea that the "ideal orator" be well-versed in all branches of learning: an idea that was rendered as "liberal humanism", and that lives on today in liberal arts or general education requirements in colleges and universities around the world.
Quintilian
Quintilian began his career as a pleader in the courts of law; his reputation grew so great that Vespasian created a chair of rhetoric for him in Rome. The culmination of his life's work was the Institutio Oratoria (Institutes of Oratory, or alternatively, The Orator's Education), a lengthy treatise on the training of the orator, in which he discusses the training of the "perfect" orator from birth to old age and, in the process, reviews the doctrines and opinions of many influential rhetoricians who preceded him.
In the Institutes, Quintilian organizes rhetorical study through the stages of education that an aspiring orator would undergo, beginning with the selection of a nurse. Aspects of elementary education (training in reading and writing, grammar, and literary criticism) are followed by preliminary rhetorical exercises in composition (the progymnasmata) that include maxims and fables, narratives and comparisons, and finally full legal or political speeches. The delivery of speeches within the context of education or for entertainment purposes became widespread and popular under the term "declamation".
This work was available only in fragments in medieval times, but the discovery of a complete copy at the Abbey of St. Gall in 1416 led to its emergence as one of the most influential works on rhetoric during the Renaissance.
Quintilian's work describes not just the art of rhetoric, but the formation of the perfect orator as a politically active, virtuous, publicly minded citizen. His emphasis was on the ethical application of rhetorical training, in part in reaction against the tendency in Roman schools toward standardization of themes and techniques. At the same time that rhetoric was becoming divorced from political decision making, rhetoric rose as a culturally vibrant and important mode of entertainment and cultural criticism in a movement known as the "Second Sophistic", a development that gave rise to the charge (made by Quintilian and others) that teachers were emphasizing style over substance in rhetoric.
Medieval to Enlightenment
After the breakup of the western Roman Empire, the study of rhetoric continued to be central to the study of the verbal arts. However, the study of the verbal arts went into decline for several centuries, followed eventually by a gradual rise in formal education, culminating in the rise of medieval universities. Rhetoric transmuted during this period into the arts of letter writing and sermon writing. As part of the trivium, rhetoric was secondary to the study of logic, and its study was highly scholastic: students were given repetitive exercises in the creation of discourses on historical subjects or on classic legal questions.
Although he is not commonly regarded as a rhetorician, St. Augustine (354–430) was trained in rhetoric and was at one time a professor of Latin rhetoric in Milan. After his conversion to Christianity, he became interested in using these "pagan" arts for spreading his religion. He explores this new use of rhetoric in De doctrina Christiana, which laid the foundation of what would become homiletics, the rhetoric of the sermon. Augustine asks why "the power of eloquence, which is so efficacious in pleading either for the erroneous cause or the right", should not be used for righteous purposes.
One early concern of the medieval Christian church was its attitude to classical rhetoric itself. Jerome (d. 420) complained, "What has Horace to do with the Psalms, Virgil with the Gospels, Cicero with the Apostles?" Augustine is also remembered for arguing for the preservation of pagan works and fostering a church tradition that led to conservation of numerous pre-Christian rhetorical writings.
Rhetoric would not regain its classical heights until the Renaissance, but new writings did advance rhetorical thought. Boethius (c. 480–524), in his brief Overview of the Structure of Rhetoric, continues Aristotle's taxonomy by placing rhetoric in subordination to philosophical argument or dialectic. The introduction of Arab scholarship from European relations with the Muslim empire (in particular Al-Andalus) renewed interest in Aristotle and Classical thought in general, leading to what some historians call the 12th century Renaissance. A number of medieval grammars and studies of poetry and rhetoric appeared.
Late medieval rhetorical writings include those of St. Thomas Aquinas (c. 1225–1274), Matthew of Vendôme (Ars Versificatoria, c. 1175), and Geoffrey of Vinsauf (Poetria Nova, 1200–1216). Pre-modern female rhetoricians, outside of Socrates' friend Aspasia, are rare; but medieval rhetoric produced by women either in religious orders, such as Julian of Norwich (d. 1415), or the very well-connected Christine de Pizan (c. 1364 – c. 1430), did occur although it was not always recorded in writing.
In his 1943 Cambridge University doctoral dissertation in English, Canadian Marshall McLuhan (1911–1980) surveys the verbal arts from approximately the time of Cicero down to the time of Thomas Nashe (1567–c. 1601). His dissertation is still noteworthy for undertaking to study the history of the verbal arts together as the trivium, even though the developments that he surveys have been studied in greater detail since he undertook his study. As noted below, McLuhan became one of the most widely publicized communication theorists of the 20th century.
Another interesting record of medieval rhetorical thought can be seen in the many animal debate poems popular in England and the continent during the Middle Ages, such as The Owl and the Nightingale (13th century) and Geoffrey Chaucer's Parliament of Fowls.
Sixteenth century
Renaissance humanism defined itself broadly as disfavoring medieval scholastic logic and dialectic and favoring instead the study of classical Latin style, grammar, philology, and rhetoric.
One influential figure in the rebirth of interest in classical rhetoric was Erasmus (c. 1466–1536). His 1512 work, De Duplici Copia Verborum et Rerum (also known as Copia: Foundations of the Abundant Style), was widely published (it went through more than 150 editions throughout Europe) and became one of the basic school texts on the subject. Its treatment of rhetoric is less comprehensive than the classic works of antiquity, but provides a traditional treatment of res-verba (matter and form). Its first book treats the subject of elocutio, showing the student how to use schemes and tropes; the second book covers inventio. Much of the emphasis is on abundance of variation (copia means "plenty" or "abundance", as in copious or cornucopia), so both books focus on ways to introduce the maximum amount of variety into discourse. For instance, in one section of the De Copia, Erasmus presents two hundred variations of the sentence "Always, as long as I live, I shall remember you" ("Semper dum vivam tui meminero"). Another of his works, the extremely popular The Praise of Folly, also had considerable influence on the teaching of rhetoric in the later 16th century. Its orations in favour of qualities such as madness spawned a type of exercise popular in Elizabethan grammar schools, later called adoxography, which required pupils to compose passages in praise of useless things.
Juan Luis Vives (1492–1540) also helped shape the study of rhetoric in England. A Spaniard, he was appointed in 1523 to the Lectureship of Rhetoric at Oxford by Cardinal Wolsey, and was entrusted by Henry VIII to be one of the tutors of Mary. Vives fell into disfavor when Henry VIII divorced Catherine of Aragon and left England in 1528. His best-known work was a book on education, De Disciplinis, published in 1531, and his writings on rhetoric included De Ratione Dicendi (1533), De Consultatione (1533), and a treatise on letter writing, De Conscribendis Epistolis (1536).
It is likely that many well-known English writers were exposed to the works of Erasmus and Vives (as well as those of the Classical rhetoricians) in their schooling, which was conducted in Latin (not English), often included some study of Greek, and placed considerable emphasis on rhetoric.
The mid-16th century saw the rise of vernacular rhetorics—those written in English rather than in the Classical languages. Adoption of works in English was slow, however, due to the strong scholastic orientation toward Latin and Greek. Leonard Cox's The Art or Crafte of Rhetoryke (second edition published in 1532) is the earliest text on rhetoric in English; it was, for the most part, a translation of the work of Philipp Melanchthon. Thomas Wilson's The Arte of Rhetorique (1553) presents a traditional treatment of rhetoric, covering, for instance, the standard five canons of rhetoric. Other notable works included Angel Day's The English Secretorie (1586, 1592), George Puttenham's The Arte of English Poesie (1589), and Richard Rainolde's The Foundacion of Rhetorike (1563).
During this same period, a movement began that would change the organization of the school curriculum in Protestant and especially Puritan circles and that led to rhetoric losing its central place. A French scholar, Petrus Ramus (1515–1572), dissatisfied with what he saw as the overly broad and redundant organization of the trivium, proposed a new curriculum. In his scheme of things, the five components of rhetoric no longer lived under the common heading of rhetoric. Instead, invention and disposition were determined to fall exclusively under the heading of dialectic, while style, delivery, and memory were all that remained for rhetoric. Ramus was martyred during the French Wars of Religion. His teachings, seen as inimical to Catholicism, were short-lived in France but found a fertile ground in the Netherlands, Germany, and England.
One of Ramus' French followers, Audomarus Talaeus (Omer Talon), published his rhetoric, Institutiones Oratoriae, in 1544. This work emphasized style, and became so popular that it was mentioned in John Brinsley's Ludus literarius; or The Grammar Schoole (1612) as being the rhetoric "most used in the best schooles". Many other Ramist rhetorics followed in the next half-century, and by the 17th century, their approach became the primary method of teaching rhetoric in Protestant and especially Puritan circles. John Milton (1608–1674) wrote a textbook in logic or dialectic in Latin based on Ramus' work.
Ramism could not exert any influence on the established Catholic schools and universities, which remained loyal to Scholasticism, or on the new Catholic schools and universities founded by members of the Society of Jesus or the Oratorians, as can be seen in the Jesuit curriculum (in use up to the 19th century across the Christian world) known as the Ratio Studiorum. If the influence of Cicero and Quintilian permeates the Ratio Studiorum, it is through the lenses of devotion and the militancy of the Counter-Reformation. The Ratio was indeed imbued with a sense of the divine, of the incarnate logos, that is of rhetoric as an eloquent and humane means to reach further devotion and further action in the Christian city, which was absent from Ramist formalism. The Ratio is, in rhetoric, the answer to Ignatius Loyola's practice, in devotion, of "spiritual exercises". This complex oratorical-prayer system is absent from Ramism.
Seventeenth century
In New England and at Harvard College (founded 1636), Ramus and his followers dominated. However, in England, several writers influenced the course of rhetoric during the 17th century, many of them carrying forward the dichotomy that had been set forth by Ramus and his followers during the preceding decades. This century also saw the development of a modern, vernacular style that looked to English, rather than to Greek, Latin, or French models.
Francis Bacon (1561–1626), although not a rhetorician, contributed to the field in his writings. One of the concerns of the age was to find a suitable style for the discussion of scientific topics, which needed above all a clear exposition of facts and arguments, rather than an ornate style. Bacon in his The Advancement of Learning criticized those who are preoccupied with style rather than "the weight of matter, worth of subject, soundness of argument, life of invention, or depth of judgment". On matters of style, he proposed that the style conform to the subject matter and to the audience, that simple words be employed whenever possible, and that the style should be agreeable.
Thomas Hobbes (1588–1679) also wrote on rhetoric. Along with a shortened translation of Aristotle's Rhetoric, Hobbes also produced a number of other works on the subject. Sharply contrarian on many subjects, Hobbes, like Bacon, also promoted a simpler and more natural style that used figures of speech sparingly.
Perhaps the most influential development in English style came out of the work of the Royal Society (founded in 1660), which in 1664 set up a committee to improve the English language. Among the committee's members were John Evelyn (1620–1706), Thomas Sprat (1635–1713), and John Dryden (1631–1700). Sprat regarded "fine speaking" as a disease, and thought that a proper style should "reject all amplifications, digressions, and swellings of style" and instead "return back to a primitive purity and shortness".
While the work of this committee never went beyond planning, John Dryden is often credited with creating and exemplifying a new and modern English style. His central tenet was that the style should be proper "to the occasion, the subject, and the persons". As such, he advocated the use of English words whenever possible instead of foreign ones, as well as vernacular, rather than Latinate, syntax. His own prose (and his poetry) became exemplars of this new style.
Eighteenth century
Arguably one of the most influential schools of rhetoric during the 18th century was Scottish Belletristic rhetoric, exemplified by such professors of rhetoric as Hugh Blair whose Lectures on Rhetoric and Belles Lettres saw international success in various editions and translations, and Lord Kames with his influential Elements of Criticism.
Another notable figure in 18th century rhetoric was Maria Edgeworth, a novelist and children's author whose work often parodied the male-centric rhetorical strategies of her time. In her 1795 "An Essay on the Noble Science of Self-Justification," Edgeworth presents a satire of Enlightenment rhetoric's science-centrism and the Belletristic Movement. She was called "the great Maria" by Sir Walter Scott, with whom she corresponded, and by modern scholars is noted as "a transgressive and ironic reader" of the 18th century rhetorical norms.
Modern
At the turn of the 20th century, there was a revival of rhetorical study manifested in the establishment of departments of rhetoric and speech at academic institutions, as well as the formation of national and international professional organizations. The early interest in rhetorical studies was a movement away from elocution as taught in English departments in the United States, and an attempt to refocus rhetorical studies from delivery-only to civic engagement and a "rich complexity" of the nature of rhetoric.
By the 1930s, advances in mass media technology led to a revival of the study of rhetoric, language, persuasion, and political rhetoric and its consequences. The linguistic turn in philosophy also contributed to this revival. The term rhetoric came to be applied to media forms other than verbal language, e.g. visual rhetoric, "temporal rhetorics", and the "temporal turn" in rhetorical theory and practice.
The rise of advertising and of mass media such as photography, telegraphy, radio, and film brought rhetoric more prominently into people's lives. The discipline of rhetoric has been used to study how advertising persuades, and to help understand the spread of fake news and conspiracy theories on social media.
Notable theorists
Kenneth Burke Burke was a rhetorical theorist, philosopher, and poet. Many of his works are central to modern rhetorical theory: Counter-Statement (1931), A Grammar of Motives (1945), A Rhetoric of Motives (1950), and Language as Symbolic Action (1966). Among his influential concepts are "identification", "consubstantiality", and the "dramatistic pentad". He described rhetoric as "the use of language as a symbolic means of inducing cooperation in beings that by nature respond to symbols". Whereas Aristotle was more interested in constructing rhetoric, Burke was interested in "debunking" it.
The Groupe μ This interdisciplinary team contributed to the renovation of elocutio in the context of poetics and modern linguistics, significantly with Rhétorique générale (1970) and Rhétorique de la poésie (1977).
Marshall McLuhan McLuhan was a media theorist whose theories and whose choice of objects of study are important to the study of rhetoric. McLuhan's book The Mechanical Bride was a compilation of exhibits of ads and other materials from popular culture with short essays involving rhetorical analyses of the persuasive strategies in each item. McLuhan later shifted the focus of his rhetorical analysis and began to consider how communication media themselves affect us as persuasive devices. His famous dictum "the medium is the message" highlights the significance of the medium itself. This shift in focus led to his two most widely known books, The Gutenberg Galaxy and Understanding Media. These books represent an inward turn to attending to one's consciousness in contrast to the more outward orientation of other rhetoricians toward sociological considerations and symbolic interaction. No other scholar of the history and theory of rhetoric was as widely publicized in the 20th century as McLuhan.
Chaïm Perelman Perelman was among the most important argumentation theorists of the 20th century. His chief work is the Traité de l'argumentation—la nouvelle rhétorique (1958), with Lucie Olbrechts-Tyteca, which was translated into English as The New Rhetoric: A Treatise on Argumentation. Perelman and Olbrechts-Tyteca move rhetoric from the periphery to the center of argumentation theory. Among their most influential concepts are "dissociation", "the universal audience", "quasi-logical argument", and "presence".
I. A. Richards Richards was a literary critic and rhetorician. His The Philosophy of Rhetoric is an important text in modern rhetorical theory. In this work, he defined rhetoric as "a study of misunderstanding and its remedies", and introduced the influential concepts tenor and vehicle to describe the components of a metaphor—the main idea and the concept to which it is compared.
Stephen Toulmin Toulmin was a philosopher whose The Uses of Argument is an important text in modern rhetorical theory and argumentation theory.
Richard M. Weaver Weaver was a rhetorical and cultural critic known for his contributions to the new conservatism. He focused on the ethical implications of rhetoric in his books Language is Sermonic and The Ethics of Rhetoric. According to Weaver there are four types of argument, and through the argument type a rhetorician habitually uses a critic can discern their worldview. Those who prefer the argument from genus or definition are idealists. Those who argue from similitude, such as poets and religious people, see the connectedness between things. The argument from consequence sees a cause and effect relationship. Finally the argument from circumstance considers the particulars of a situation and is an argument preferred by liberals.
Methods of analysis
Criticism seen as a method
Rhetoric can be analyzed by a variety of methods and theories. One such method is criticism. When those using criticism analyze instances of rhetoric, what they do is called rhetorical criticism. According to rhetorical critic Jim A. Kuypers, "The use of rhetoric is an art, and as such, it does not lend itself well to scientific methods of analysis. Criticism is an art as well, and as such is particularly well suited for examining rhetorical creations." He asserts that criticism is a method of generating knowledge just as the scientific method is a method for generating knowledge.
Edwin Black wrote on this point that, "Methods, then, admit of varying degrees of personality. And criticism, on the whole, is near the indeterminate, contingent, personal end of the methodological scale. In consequence of this placement, it is neither possible nor desirable for criticism to be fixed into a system, for critical techniques to be objectified, for critics to be interchangeable for purposes of replication, or for rhetorical criticism to serve as the handmaiden of quasi-scientific theory."
Jim A. Kuypers sums this idea of criticism as art in the following manner: "In short, criticism is an art, not a science. It is not a scientific method; it uses subjective methods of argument; it exists on its own, not in conjunction with other methods of generating knowledge (i.e., social scientific or scientific)... [I]nsight and imagination top statistical applications when studying rhetorical action."
Strategies
Rhetorical strategies are the efforts made by authors or speakers to persuade or inform their audiences. According to James W. Gray, there are various argument strategies used in writing. He describes four of these as argument from analogy, argument from absurdity, thought experiments, and inference to the best explanation.
Criticism
Modern rhetorical criticism explores the relationship between text and context; that is, how an instance of rhetoric relates to circumstances. Since the aim of rhetoric is to be persuasive, the level to which the rhetoric in question persuades its audience is what must be analyzed, and later criticized. In determining the extent to which a text is persuasive, one may explore the text's relationship with its audience, purpose, ethics, argument, evidence, arrangement, delivery, and style.
In his Rhetorical Criticism: A Study in Method, Edwin Black states, "It is the task of criticism not to measure... discourses dogmatically against some parochial standard of rationality but, allowing for the immeasurable wide range of human experience, to see them as they really are." While "as they really are" is debatable, rhetorical critics explain texts and speeches by investigating their rhetorical situation, typically placing them in a framework of speaker/audience exchange. The antithetical view places the rhetor at the center of creating that which is considered the extant situation; i.e., the agenda and spin.
Additional theoretical approaches
Following the neo-Aristotelian approaches to criticism, scholars began to derive methods from other disciplines, such as history, philosophy, and the social sciences. The importance of critics' personal judgment receded as the analytical dimension of criticism gained momentum. Throughout the 1960s and 1970s, methodological pluralism replaced the singular neo-Aristotelian method. Methodological rhetorical criticism is typically done by deduction, in which a broad method is used to examine a specific case of rhetoric. Such methods include:
Ideological criticism engages rhetoric as it suggests the beliefs, values, assumptions, and interpretations held by the rhetor or the larger culture
Ideological criticism also treats ideology as an artifact of discourse, one that is embedded in key terms (called "ideographs") as well as material resources and discursive embodiment.
Cluster criticism seeks to help the critic understand the rhetor's worldview (developed by Kenneth Burke)
This means identifying terms that are "clustered" around key symbols in the rhetorical artifact and the patterns in which they appear.
Frame analysis looks for how rhetors construct an interpretive lens in their discourse
In short, how they make certain facts more noticeable than others. It is particularly useful for analyzing products of the news media.
Genre criticism assumes certain situations call for similar needs and expectations within the audience, therefore calling for certain types of rhetoric
It studies rhetoric in different times and locations, looking at similarities in the rhetorical situation and the rhetoric that responds to them. Examples include eulogies, inaugural addresses, and declarations of war.
Narrative criticism assumes that narratives help organize experiences in order to endow meaning to historical events and transformations
Narrative criticism focuses on the story itself and how the construction of the narrative directs the interpretation of the situation.
By the mid-1980s the study of rhetorical criticism began to move away from precise methodology towards conceptual issues. Conceptually-driven criticism operates more through abduction, according to scholar James Jasinski, who argues that this type of criticism can be thought of as a back-and-forth between the text and the concepts, which are being explored at the same time. The concepts remain "works in progress", and understanding develops through the analysis of a text.
Criticism is considered rhetorical when it focuses on the way some types of discourse react to situational exigencies—problems or demands—and constraints. Modern rhetorical criticism concerns how the rhetorical case or object persuades, defines, or constructs the audience. In modern terms, rhetoric includes, but is not limited to, speeches, scientific discourse, pamphlets, literary work, works of art, and pictures. Contemporary rhetorical criticism has maintained aspects of early neo-Aristotelian thinking through close reading, which attempts to explore the organization and stylistic structure of a rhetorical object. Using close textual analysis means rhetorical critics use the tools of classical rhetoric and literary analysis to evaluate the style and strategy used to communicate the argument.
Purpose of criticism
Rhetorical criticism serves several purposes. For one, it hopes to help form or improve public taste. It helps educate audiences and develops them into better judges of rhetorical situations by reinforcing ideas of value, morality, and suitability. Rhetorical criticism can thus contribute to the audience's understanding of themselves and society.
According to Jim A. Kuypers, a second purpose for performing criticism should be to enhance our appreciation and understanding. "[W]e wish to enhance both our own and others' understanding of the rhetorical act; we wish to share our insights with others, and to enhance their appreciation of the rhetorical act. These are not hollow goals, but quality of life issues. By improving understanding and appreciation, the critic can offer new and potentially exciting ways for others to see the world. Through understanding we also produce knowledge about human communication; in theory this should help us to better govern our interactions with others." Criticism is a humanizing activity in that it explores and highlights qualities that make us human.
Animal rhetoric
Rhetoric is practiced by social animals in a variety of ways. For example, birds use song, various animals warn members of their species of danger, chimpanzees have the capacity to deceive through communicative keyboard systems, and deer stags compete for the attention of mates. While these might be understood as rhetorical actions (attempts at persuading through meaningful actions and utterances), they can also be seen as rhetorical fundamentals shared by humans and animals. The study of animal rhetoric has been called "biorhetorics".
The self-awareness required to practice rhetoric might be difficult to notice and acknowledge in some animals. However, some animals are capable of acknowledging themselves in a mirror, and therefore, they might be understood to be self-aware and engaged in rhetoric when practicing some form of language.
Anthropocentrism plays a significant role in human-animal relationships, reflecting and perpetuating binaries in which humans assume they are beings that have extraordinary qualities while they regard animals as beings that lack those qualities. This dualism is manifested in other forms as well, such as reason and sense, mind and body, and ideal and phenomenal, in which the first category of each pair (reason, mind, and ideal) represents and belongs only to humans. By becoming aware of and overcoming these dualistic conceptions, including the one between humans and animals, humans' knowledge of themselves and the world can become more complete and holistic. The relationship between humans and animals (as well as the rest of the natural world) is often defined by the human rhetorical act of naming and categorizing animals through scientific and folk labeling. The act of naming partially defines the rhetorical relationships between humans and animals, though both may engage in rhetoric beyond human naming and categorizing.
Some animals have a sort of phronesis which enables them to "learn and receive instruction" with rudimentary understanding of some significant signs. Those animals practice deliberative, judicial, and epideictic rhetoric, deploying ethos, pathos, and logos with gesture and preen, sing and growl. Since animals offer models of rhetorical behavior and interaction that are physical, even instinctual, but perhaps no less artful, transcending our accustomed focus on verbal language and concepts of consciousness will help people interested in rhetoric and communication to promote human-animal rhetoric.
Comparative rhetoric
Comparative rhetoric is a practice and methodology that developed in the late twentieth century to broaden the study of rhetoric beyond the dominant rhetorical tradition that has been constructed and shaped in western Europe and the U.S. As a research practice, comparative rhetoric studies past and present cultures across the globe to reveal diversity in the uses of rhetoric and to uncover rhetorical perspectives, practices, and traditions that have been historically underrepresented or dismissed. As a methodology, comparative rhetoric constructs a culture's rhetorical perspectives, practices, and traditions on their own terms, in their own contexts, as opposed to using European or American theories, terminology, or framing.
Comparative rhetoric is comparative in that it illuminates how rhetorical traditions relate to one another, while seeking to avoid binary depictions or value judgments. This can reveal issues of power within and between cultures as well as new or under-recognized ways of thinking, doing, and being that challenge or enrich the dominant Euro-American tradition and provide a fuller account of rhetorical studies.
Robert T. Oliver is credited as the first scholar who recognized the need to study non-Western rhetorics in his 1971 publication Communication and Culture in Ancient India and China. George A. Kennedy has been credited for the first cross-cultural overview of rhetoric in his 1998 publication Comparative Rhetoric: An Historical and Cross-cultural Introduction. Though Oliver's and Kennedy's works contributed to the birth of comparative rhetoric, given the newness of the field, they both used Euro-American terms and theories to interpret non-Euro-American cultures' practices.
LuMing Mao, Xing Lu, Mary Garrett, Arabella Lyon, Bo Wang, Hui Wu, and Keith Lloyd have published extensively on comparative rhetoric, helping to shape and define the field. In 2013, LuMing Mao edited a special issue on comparative rhetoric in Rhetoric Society Quarterly, focusing on comparative methodologies in the age of globalization. In 2015, LuMing Mao and Bo Wang coedited a symposium featuring position essays by a group of leading scholars in the field. In their introduction, Mao and Wang emphasize the fluid and cross-cultural nature of rhetoric: "Rhetorical knowledge, like any other knowledge, is heterogeneous, multidimensional, and always in the process of being created." The symposium includes "A Manifesto: The What and How of Comparative Rhetoric", demonstrating the first collective effort to identify and articulate comparative rhetoric's definition, goals, and methodologies. The tenets of this manifesto are engaged within many later works that study or utilize comparative rhetoric.
Automatic detection of rhetorical figures
As natural language processing has developed, so has interest in automatically detecting rhetorical figures. The major focus has been on detecting specific figures, such as chiasmus, epanaphora, and epiphora, using classifiers trained with labeled data. A major obstacle to achieving high accuracy with these systems is the shortage of labeled data for these tasks, but with recent advances in language modeling, such as few-shot learning, it may be possible to detect more rhetorical figures with less data.
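As a minimal illustration of the surface pattern such systems target, the sketch below flags candidate epanaphora, i.e. runs of consecutive sentences that open with the same words. It is a hand-rolled heuristic for illustration only: the function names, the naive sentence splitter, and the two-word window are assumptions, not part of any published detector, and the classifier-based systems described above go well beyond this.

```python
# Minimal sketch: flag candidate epanaphora (consecutive sentences
# sharing their opening words). Trained classifiers score candidates
# against labeled examples; this only shows the surface pattern.
import re

def opening_words(sentence: str, n: int = 2) -> tuple:
    """Return the first n lowercased word tokens of a sentence."""
    return tuple(re.findall(r"[a-z']+", sentence.lower())[:n])

def find_epanaphora(text: str, n: int = 2, min_run: int = 2):
    """Yield runs of consecutive sentences sharing their first n words."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    run = [sentences[0]] if sentences else []
    for prev, cur in zip(sentences, sentences[1:]):
        if opening_words(prev, n) and opening_words(prev, n) == opening_words(cur, n):
            run.append(cur)
        else:
            if len(run) >= min_run:
                yield run
            run = [cur]
    if len(run) >= min_run:
        yield run

# Example: Churchill's "we shall fight" passage exhibits epanaphora.
speech = ("We shall fight on the beaches. We shall fight on the landing "
          "grounds. We shall fight in the fields.")
for candidate in find_epanaphora(speech):
    print(candidate)
```

Running this prints the three-sentence run as one candidate; a trained system would then judge whether such a candidate is a deliberate figure rather than accidental repetition.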
Academic journals
Argumentation and Advocacy
College Composition and Communication
College English
Enculturation
Harlot
Kairos
Peitho
Present Tense
Relevant Rhetoric
Rhetoric Review
Rhetoric Society Quarterly (RSQ)
XChanges
See also
Glossary of rhetoric
List of political slogans
List of speeches
Artes liberales
Civic humanism
Composition studies
Conversation theory
Demagogy
Discourse analysis
Grammarian (Greco-Roman world)
Language and thought
Multimodality
New rhetoric
Pedagogy
Persuasion technology
Propaganda
Speechwriting
Technical communication
Notes
References
Citations
Sources
Primary sources
The standard collection of Greek and Latin primary texts on rhetoric is the Loeb Classical Library of the Harvard University Press, published with an English translation on the facing page.
Secondary sources
Further reading
External links
Wikibooks: Rhetoric and Composition
Digital literacy

Digital literacy is an individual's ability to find, evaluate, and communicate information using typing or digital media platforms. It is a combination of both technical and cognitive abilities in using information and communication technologies to create, evaluate, and share information.
While digital literacy initially focused on digital skills and stand-alone computers, the advent of the internet and social media use has shifted some of its focus to mobile devices. Similar to other evolving definitions of literacy that recognize the cultural and historical ways of making meaning, digital literacy does not replace traditional methods of interpreting information but rather extends the foundational skills of these traditional literacies. Digital literacy should be considered a part of the path towards acquiring knowledge.
History
Research into digital literacies has fallen along the tracks of information literacy. This draws from traditions of information literacy and research into media literacy which rely on socio-cognitive traditions, as well as research into multimodal composition, which relies on anthropological methodologies. Digital literacy is built on the expanding role of social science research in the field of literacy as well as on concepts of visual literacy, computer literacy, and information literacy. The concept has evolved throughout the 20th and into the 21st century from a technical definition of skills and competencies to a broader comprehension of interacting with digital technologies.
Digital literacy is often discussed in the context of its precursor, media literacy. Media literacy education began in the United Kingdom and the United States due to war propaganda in the 1930s and the rise of advertising in the 1960s, respectively. Manipulative messaging and the increase in various forms of media further concerned educators. Educators began to promote media literacy education to teach individuals how to judge and assess the media messages they were receiving. The ability to critique digital and media content allows individuals to identify biases and evaluate messages independently.
Historically, digital literacy focused on source evaluation. Digital and media literacy include the ability to examine and comprehend the meaning of messages, judge credibility, and assess the quality of a digital work.
With the rise of file sharing on services such as Napster, an ethics element began to be included in definitions of digital literacy. Frameworks for digital literacy began to include goals and objectives such as becoming a socially responsible member of one's community by spreading awareness and helping others find digital solutions at home, work, or on a national platform.
Digital literacy may also include the production of multimodal texts. This definition refers more to reading and writing on a digital device but includes the use of any modes across multiple mediums that stress semiotic meaning beyond graphemes. It also involves knowledge of producing other forms of media, like recording and uploading video.
Overall, digital literacy shares many defining principles with other fields that use modifiers in front of literacy to define ways of being and domain-specific knowledge or competence. The term has grown in popularity in education and higher education settings and is used in both international and national standards.
Academic and pedagogical concepts
The pedagogy of digital literacy has begun to move across disciplines. In academia, digital literacy is a part of the computing subject area alongside computer science and information technology, while some literacy scholars have argued for expanding the framing beyond information and communication technologies and into literacy education overall.
Given the many varied implications that digital literacy has on students and educators, pedagogy has responded by emphasizing four specific models of engaging with digital mediums. Those four models are text-participating, code-breaking, text-analyzing, and text-using. These methods present students (and other learners) with the ability to fully engage with the media, but also enhance the way the individual can relate the digital text to their lived experiences.
21st-century skills
Digital literacy requires certain skill sets that are interdisciplinary in nature. Warschauer and Matuchniak (2010) list three skill sets, or 21st century skills, that individuals need to master in order to be digitally literate: information, media, and technology; learning and innovation skills; and life and career skills. Aviram et al. assert that in order to be competent in life and career skills, it is also necessary to be able to exercise flexibility and adaptability, initiative and self-direction, social and cross-cultural skills, productivity and accountability, and leadership and responsibility. Digital literacy is composed of several distinct literacies, so there is no need to search for similarities and differences among them; media literacy and information literacy are among its components.
Aviram and Eshet-Alkalai contend that five types of literacies are encompassed in the umbrella term that is digital literacy.
Reproduction literacy: the ability to use digital technology to create a new piece of work or combine existing pieces of work to make it your own.
Photo-visual literacy: the ability to read and deduce information from visuals.
Branching literacy: the ability to successfully navigate in the non-linear medium of digital space.
Information literacy: the ability to search, locate, assess and critically evaluate information found on the web and on-shelf in libraries.
Socio-emotional literacy: the social and emotional aspects of being present online, whether through socializing and collaborating or simply consuming content.
Artificial intelligence (AI)
Digital literacy skills continue to develop with the rapid advancements of artificial intelligence (AI) technologies in the 21st century. AI technologies are designed to simulate human intelligence through the use of complex systems such as machine learning algorithms, natural language processing, and robotics. As the field advances and transforms aspects of everyday life such as education, workplaces, and public services, individuals must develop the skills to appropriately understand and use these tools.
As these technologies have emerged, so have different attempts at defining AI literacy: the ability to understand the basic techniques and concepts behind AI in different products and services and how to use them effectively. Many framings leverage existing digital literacy frameworks and apply an AI lens to the skills and competencies. Common elements of these frameworks include:
Know and understand: know the basic functions of AI and how to use AI applications
Use and apply: applying AI knowledge, concepts and applications in different scenarios
Evaluate and create: higher-order thinking skills (e.g., evaluate, appraise, predict, design)
Ethical issues: considering fairness, accountability, transparency, and safety with AI
As AI continues to advance and become more integrated into daily life, being AI literate will be critical for individuals and organizations to effectively engage with AI technologies and to take advantage of their potential benefits while mitigating their potential risks and challenges.
In society
Digital literacy is necessary for the correct use of various digital platforms. Literacy in social network services and Web 2.0 sites helps people stay in contact with others, pass on timely information, and even buy and sell goods and services. Digital literacy can also prevent people from being taken advantage of online, as photo manipulation, e-mail fraud, and phishing often fool the digitally illiterate, costing victims money and making them vulnerable to identity theft. However, the perpetrators of such manipulation and fraud are themselves digitally literate, exploiting technical trends and patterns to deceive their victims; remaining one step ahead of them is itself a reason to be digitally literate.
The emergence of social media has paved the way for people to communicate and connect with one another in new and different ways. Websites like Facebook and Twitter (now X), as well as personal websites and blogs, have enabled a new type of journalism that is subjective, personal, and "represents a global conversation that is connected through its community of readers." These online communities foster group interactivity among the digitally literate. Social media also help users establish a digital identity or a "symbolic digital representation of identity attributes." Without digital literacy or the assistance of someone who is digitally literate, one cannot possess a personal digital identity (this is closely allied to web literacy).
Research has demonstrated that the differences in the level of digital literacy depend mainly on age and education level, while the influence of gender is decreasing. Among young people, digital literacy is high in its operational dimension. Young people rapidly move through hypertext and have a familiarity with different kinds of online resources. However, young people's skills in critically evaluating content found online show a deficit. With the rise of digital connectivity among young people, concerns about digital safety are higher than ever. A study conducted in Poland, commissioned by the Ministry of National Education, measured the digital literacy of parents with regard to digital and online safety. It concluded that parents often overestimate their level of knowledge but clearly influence their children's attitudes and behavior towards the digital world. It suggests that, with proper training programs, parents can gain the knowledge needed to teach their children the safety precautions necessary to navigate the digital space.
Digital divide
Digital divide refers to disparities (such as those between the developed and the developing world) concerning access to and the use of information and communication technologies (ICT), such as computer hardware, software, and the Internet, among people. Individuals within societies that lack the economic resources to build ICT infrastructure do not have adequate digital literacy, which means that their digital skills are limited. The divide can be explained by Max Weber's social stratification theory, which focuses on access to production rather than ownership of capital. Production means having access to ICT so that individuals can interact and produce information or create a product, without which they cannot participate in learning, collaboration, and production processes. Digital literacy and digital access have become increasingly important competitive differentiators for individuals using the internet. In the article "The Great Class Wedge and the Internet's Hidden Costs", Jen Schradie discusses how social class can affect digital literacy. This creates a digital divide.
Research published in 2012 found that the digital divide, as defined by access to information technology, does not exist amongst youth in the United States. Young people report being connected to the internet at rates of 94–98%. There remains, however, a civic opportunity gap, where youth from poorer families and those attending lower socioeconomic status schools are less likely to have opportunities to apply their digital literacy. The digital divide is also defined as emphasizing the distinction between the "haves" and "have-nots", and presents all data separately for rural, urban, and central-city categories. Also, existing research on the digital divide reveals the existence of personal categorical inequalities between young and old people. An additional interpretation identified the gap between technology accessed by youth outside and inside the classroom.
Participation gap
Media theorist Henry Jenkins coined the term participation gap and distinguished it from the digital divide. According to Jenkins, in countries like the United States, where nearly everyone has access to the internet, the concept of the digital divide does not provide enough insight. As such, Jenkins uses the term participation gap to develop a more nuanced view of access to the internet. Instead of referring to the "haves" vs the "have-nots" of digital technologies, Jenkins proposes that the participation gap concerns who has sustained access to and competency with digital technologies in an age of media convergence. Jenkins states that students learn different sets of technology skills if they only have access to the internet in a library or school. In particular, he observes that students who have access to the internet at home have more opportunities to develop their skills and face fewer limitations, such as the computer time limits and website filters commonly used in libraries. The participation gap is often discussed in relation to millennials, who, as of 2008, when this study was conducted, were the oldest generation born into the age of technology. Since 2008, more technology has been integrated into the classroom. The issue is that some students have internet access at home equivalent to what they interact with in class, while others only have access at school or in a library and therefore do not get the same quantity or quality of digital experience. This creates the participation gap and hampers the development of digital literacy.
Digital rights
Digital rights are an individual's rights to freedom of expression and opinion in an online setting, rooted in theoretical and practical human rights. They encompass the individual's privacy rights when using the Internet and are essentially concerned with how an individual uses different technologies and how content is distributed and mediated. Government officials and policymakers use digital rights as a springboard for enacting and developing policies and laws that secure rights online in the same way that we hold rights offline. Private organizations that operate their own online infrastructures also develop rights specific to their platforms. In today's world, most, if not all, materials have shifted into an online setting, and public policy has had a major influence in supporting this movement. Going beyond traditional academics, ethical rights such as copyright, citizenship, and conversation can be applied to digital literacy because tools and materials can now easily be copied, borrowed, stolen, and repurposed, as literacy is collaborative and interactive, especially in a networked world.
Digital citizenship
Digital citizenship refers to the "right to participate in society online". It is connected to the notion of state-based citizenship, which is determined by the country or region in which one was born, and concerns being a dutiful citizen who participates in the electoral process and in public life online through mass media. A literate digital citizen possesses the skills to read, write, and interact with online communities via screens and has an orientation towards social justice. This is best described in the article Digital Citizenship during a Global Pandemic: Moving beyond Digital Literacy: "Critical digital civic literacy, as is the case of democratic citizenship more generally, requires moving from learning about citizenship to participating and engaging in democratic communities face-to-face, online, and in all the spaces in between." Through the various digital skills and literacies one gains, one is able to effectively solve social problems that arise on social platforms. Additionally, digital citizenship has three online dimensions: higher wages, democratic participation, and better communication opportunities, all arising from the digital skills acquired. Digital citizenship also refers to online awareness and the ability to be safe and responsible online. This idea came from the rise of social media in the past decade, which has enhanced global connectivity and enabled faster interaction. The idea of a good 'digital citizen' directly correlates with knowing, for example, how to react to instances of predatory online behaviors such as cyberbullying.
Digital natives and digital immigrants
Marc Prensky coined and popularized the terms digital natives and digital immigrants. A digital native is an individual born into the digital age who has used and applied digital skills from a young age, whereas a digital immigrant is an individual who adopts technology later in life. These two groups have interacted with technology differently from birth, creating a generational gap that directly shapes each group's unique relationship with digital literacy. Digital natives drove the creation of ubiquitous information systems (UIS). These systems include mobile phones, laptop computers, and personal digital assistants, as well as digital technology embedded in cars and buildings (smart cars and smart homes), creating a new and unique technological experience.
Carr claims that digital immigrants, although they adapt to the same technology as natives, possess a sort of "accent" that prevents them from communicating the way natives do. Research shows that, due to the brain's malleable nature, technology has changed the way today's students read, perceive, and process information. Marc Prensky believes this is a problem because today's students have a vocabulary and skill set that educators (digital immigrants, at the time of his writing) may not fully understand.
Statistics and popular representations of the elderly portray them as digital immigrants. For example, in Canada in 2010, 29% of citizens aged 75 and older, and 60% of those aged 65–74, had browsed the internet in the past month. Conversely, internet activity reached almost 100% among 15- to 24-year-old citizens.
However, the concept of the digital native has been contested. Two studies found that students over the age of 30 were more likely than their younger peers to possess characteristics of digital natives; 58% of the students who participated were over 30 years old. One study, conducted by Margaryan, Littlejohn, and Vojt (2011), found that while college students born after 1984 frequently used the internet and other digital technology, they made limited use of technology for educational and socializing purposes. Another study, conducted at Hong Kong University, found that young students use technology to consume entertainment and ready-made content rather than to create or engage with academic content.
Applications
In education
Society is trending toward technology dependence, and it is now necessary to implement digital technology in education; this often includes having computers in the classroom, using educational software to teach curricula, and making course materials available to students online. Students are often taught literacy skills such as how to verify credible sources online, cite websites, and prevent plagiarism. Google and Wikipedia are frequently used by students "for everyday life research" and are just two common tools that facilitate modern education. Digital technology has impacted the way materials are taught in the classroom. With the use of technology rising in this century, educators are altering traditional forms of teaching to include course material on concepts related to digital literacy.
Educators have also turned to social media platforms to communicate and share ideas with one another. Social media and social networks have become a crucial part of the information landscape. Social media allows educators to communicate and collaborate with one another without having to use traditional educational tools. Restrictions such as time and location can be overcome with the use of social media-based education.
New models of learning are being developed with digital literacy in mind. Several countries have developed their own models emphasizing ways of finding and implementing new digital didactics, identifying opportunities and trends via surveys of educators and college instructors. Additionally, these new models of classroom learning have aided in promoting global connectivity, enabling students to become globally minded citizens. According to one study by Stacy Delacruz, Virtual Field Trips (VFTs), a new form of multimedia presentation, have gained popularity over the years because they offer the "opportunity for students to visit other places, talk to experts and participate in interactive learning activities without leaving the classroom". They have been used as a vessel for supporting cross-cultural collaboration amongst schools, producing "improved language skills, greater classroom engagement, deeper understandings of issues from multiple perspectives, and an increased sensitivity to multicultural differences". They also allow students to be the creators of their own digital content, a core standard of the International Society for Technology in Education (ISTE).
The COVID-19 pandemic pushed education into a more digital and online experience in which teachers had to adapt to new levels of digital competency with software to keep the education system running. As academic institutions discontinued in-person activity, different online meeting platforms were used for communication. An estimated 84% of the global student body was affected by the sudden closures, revealing a clear disparity in student and school preparedness for digital education, due in large part to a divide in the digital skills and literacy of both students and educators. Some countries were better prepared: Croatia, for example, had already begun digitalizing its schools countrywide. In a pilot initiative, 920 instructors and over 6,000 pupils from 151 schools received computers, tablets, and presentation equipment, as well as improved connectivity and teacher training, so that when the pandemic struck, the pilot schools were ready to begin offering online programs within two days.
The switch to online learning raised concerns about learning effectiveness, exposure to cyber risks, and lack of socialization, prompting changes in how students learn much-needed digital skills and develop digital literacy. In response, the DQ (Digital Intelligence) Institute designed a common framework for enhancing digital literacy, digital skills, and digital readiness. Attention was also directed to the development of digital literacy in higher education.
A study in Spain measured the digital knowledge of 4,883 teachers across all education levels over recent school years and found that they needed further training to advance new learning models for the digital age. The proposed training programs used the joint framework of INTEF (National Institute of Educational Technologies and Teacher Training) as a reference.
In Europe, the Digital Competence of Educators framework (DigCompEdu) was developed to address and promote digital literacy. It is divided into six branches: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and facilitating learners' digital competence. The European Commission also developed the Digital Education Action Plan, which draws on the experience of the COVID-19 pandemic to learn how technology is being used on a large scale for education and to adapt the systems used for learning and training in the digital age. The plan is divided into two strategic priorities: fostering the development of a high-performing digital education ecosystem, and enhancing digital skills and competencies for the digital transformation.
Digital competences
In 2013, the Open Universiteit Nederland released an article defining twelve digital competence areas. These areas are based on the knowledge and skills people have to acquire to be digitally literate.
A. General knowledge and functional skills. Knowing the basics of digital devices and using them for elementary purposes.
B. Use in everyday life. Being able to integrate digital technologies into the activities of everyday life.
C. Specialized and advanced competence in work and creative expression. Being able to use ICT (Information and Communication Technologies) to express your creativity and improve your professional performance.
D. Technology-mediated communication and collaboration. Being able to connect, share, communicate, and collaborate with others effectively in a digital environment.
E. Information processing and management. Using technology to improve your ability to gather, analyze, and judge the relevance and purpose of digital information.
F. Privacy and security. Being able to protect your privacy and take appropriate security measures.
G. Legal and ethical aspects. Behaving appropriately in a socially responsible way in the digital environment and being aware of the legal and ethical aspects of the use of ICT.
H. Balanced attitude towards technology. Demonstrating an informed, open-minded, and balanced attitude towards an information society and the use of digital technologies.
I. Understanding and awareness of the role of ICT in society. Understanding the broader context of use and development of ICT.
J. Learning about and with digital technologies. Exploring emerging technologies and integrating them.
K. Informed decisions on appropriate digital technologies. Being aware of the most relevant or common technologies.
L. Seamless use demonstrating self-efficacy. Confidently and creatively applying digital technologies to increase personal and professional effectiveness and efficiency.
The competencies build on one another. Competencies A, B, and C represent the basic knowledge and skills a person must have to be fully digitally literate; once these three are acquired, they serve as the foundation for acquiring the remaining competencies.
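As a minimal illustrative sketch (a hypothetical encoding in Python, not part of the Open Universiteit model itself), the twelve areas can be represented as data with A, B, and C marked as foundational, so a simple check reports whether the foundation is in place:

```python
# Illustrative only: the twelve competence areas (A-L) encoded as data,
# together with the article's claim that A, B, and C are foundational.

ALL_AREAS = set("ABCDEFGHIJKL")
FOUNDATIONAL = {"A", "B", "C"}

def assess(acquired):
    """Report whether the foundational competencies (A-C) are in place."""
    missing = FOUNDATIONAL - acquired
    if missing:
        return "missing foundational areas: " + ", ".join(sorted(missing))
    return f"foundation in place; {len(acquired & ALL_AREAS)} of 12 areas acquired"

print(assess({"A", "B"}))            # missing foundational areas: C
print(assess({"A", "B", "C", "F"}))  # foundation in place; 4 of 12 areas acquired
```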
Digital writing
University of Southern Mississippi professor Suzanne Mckee-Waddell conceptualized digital composition as the ability to integrate multiple forms of communication technologies and research to create a better understanding of a topic. Digital writing is a pedagogy increasingly taught in universities. It focuses on the impact technology has had on various writing environments; it is not simply the process of using a computer to write. Educators in favor of digital writing argue that it is necessary because "technology fundamentally changes how writing is produced, delivered, and received." The goal of teaching digital writing is for students to increase their ability to produce a relevant, high-quality product, instead of just a standard academic paper.
One aspect of digital writing is the use of hypertext or LaTeX. As opposed to printed text, hypertext invites readers to explore information in a non-linear fashion. Hypertext consists of traditional text and hyperlinks that send readers to other texts. These links may refer to related terms or concepts (such is the case on Wikipedia), or they may enable readers to choose the order in which they read. The process of digital writing requires the composer to make unique "decisions regarding linking and omission." These decisions "give rise to questions about the author's responsibilities to the [text] and objectivity."
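To make the non-linear structure concrete, the following minimal sketch (illustrative only; the page names, texts, and the read helper are invented for this example, not drawn from the article) models hypertext in Python as a small graph in which each node pairs text with named links, so the reading order is chosen by the reader rather than fixed by the author:

```python
# A hypothetical hypertext: each node holds text plus named links.
pages = {
    "intro": {
        "text": "Digital writing blends prose with hyperlinks.",
        "links": {"hypertext": "hypertext", "history": "history"},
    },
    "hypertext": {
        "text": "Hypertext lets readers move through texts non-linearly.",
        "links": {"back": "intro"},
    },
    "history": {
        "text": "Linked documents predate the web.",
        "links": {"back": "intro"},
    },
}

def read(start, choices):
    """Follow a reader's chosen links from a starting node.

    The visit order is decided by the reader, not the author, which is
    the non-linearity described above.
    """
    path, node = [start], start
    for choice in choices:
        node = pages[node]["links"][choice]  # jump to the linked node
        path.append(node)
    return path

print(read("intro", ["hypertext", "back", "history"]))
# -> ['intro', 'hypertext', 'intro', 'history']
```

Two readers supplying different choices traverse the same pages in different orders; which links exist at all is the author's "linking and omission" decision the quoted passage refers to.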
In the workforce
The 2014 Workforce Innovation and Opportunity Act (WIOA) defines digital literacy skills as a workforce preparation activity. In the modern world, employees are expected to be digitally literate, with full digital competence. Those who are digitally literate are more likely to be economically secure, as many jobs require a working knowledge of computers and the Internet to perform basic tasks. Additionally, digital technologies such as mobile devices, production suites, and collaboration platforms are ubiquitous in most office workplaces and are often crucial to daily tasks, since many white-collar jobs today are performed primarily using digital devices and technology. Many of these jobs require proof of digital literacy for hiring or promotion; sometimes companies administer their own tests to employees, or official certification is required. A study on the role of digital literacy in the EU labour market found that individuals were more likely to be employed the more digitally literate they were.
As technology has become cheaper and more readily available, more blue-collar jobs have required digital literacy as well. Manufacturers and retailers, for example, are expected to collect and analyze data about productivity and market trends to stay competitive. Construction workers often use computers to increase employee safety.
In entrepreneurship
The acquisition of digital literacy is also important when it comes to starting and growing new ventures. The emergence of the World Wide Web and other digital platforms has led to a plethora of new digital products and services that can be bought and sold. Entrepreneurs are at the forefront of this development, using digital tools or infrastructure to deliver physical products, digital artifacts, or internet-enabled service innovations. Research has shown that digital literacy for entrepreneurs consists of four levels (basic usage, application, development, and transformation) and three dimensions (cognitive, social, and technical). At the lowest level, entrepreneurs need to be able to use access devices and basic communication technologies while balancing safety and information needs. As they move to higher levels of digital literacy, entrepreneurs can master and manipulate more complex digital technologies and tools, enhancing the absorptive capacity and innovative capability of their venture. In a similar vein, if small to medium enterprises (SMEs) can adapt to dynamic shifts in technology, they can take advantage of trends, marketing campaigns, and communication with consumers to generate more demand for their goods and services. Moreover, if entrepreneurs are digitally literate, online platforms like social media can further help businesses receive feedback and generate community engagement that could boost their performance and brand image. A research paper published in The Journal of Asian Finance, Economics, and Business suggests that digital literacy has the greatest influence on the performance of SME entrepreneurs. The authors argue that their findings can help craft performance development strategies for SME entrepreneurs, showing the essential contribution of digital literacy to developing business and marketing networks. Additionally, the study found that digitally literate entrepreneurs can communicate with and reach wider markets than non-digitally literate entrepreneurs through web management and e-commerce platforms supported by data analysis and coding. That said, constraints do exist for SMEs using e-commerce, including a lack of technical understanding of information technologies and the high cost of internet access (especially in rural or underdeveloped areas).
Global impact
The United Nations included digital literacy in its Sustainable Development Goals for 2030, under thematic indicator 4.4.2, which encourages the development of digital literacy proficiency in teens and adults to facilitate educational and professional opportunities and growth. International initiatives like the Global Digital Literacy Council (GDLC) and the Coalition for Digital Intelligence (CDI) have also highlighted the need for digital literacy and strategies to address it on a global scale. In 2019, the CDI, under the umbrella of the DQ Institute, created the Common Framework for Digital Literacy, Skills, and Readiness, which conceptualizes eight areas of digital life (identity, use, safety, security, emotional intelligence, communication, literacy, and rights), three levels of maturity (citizenship, creativity, and competitiveness), and three components of competency (knowledge, attitudes and values, and skills, corresponding to what, why, and how). The UNESCO Institute for Statistics (UIS) also works to create, gather, map, and assess common frameworks on digital literacy across member states around the world.
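The grid structure of that framework can be shown with a short sketch (an illustrative encoding only, not an official artifact of the DQ Institute): crossing the three dimensions yields one addressable competency per combination.

```python
# Illustrative encoding of the Common Framework's three dimensions.
AREAS = ["identity", "use", "safety", "security",
         "emotional intelligence", "communication", "literacy", "rights"]
MATURITY_LEVELS = ["citizenship", "creativity", "competitiveness"]
COMPONENTS = {"knowledge": "what", "attitudes and values": "why", "skills": "how"}

# 8 areas x 3 levels x 3 components = 72 addressable competencies.
competencies = [(a, m, c) for a in AREAS
                for m in MATURITY_LEVELS
                for c in COMPONENTS]
print(len(competencies))  # 72
```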
In an attempt to narrow the digital divide, on September 26, 2018, the United States Senate Foreign Relations Committee passed legislation to help provide internet access in developing countries via the H.R. 600 Digital Global Access Policy Act. The legislation was based on Senator Ed Markey's Digital Age Act, first introduced to the Senate in 2016. After the act passed through the Senate, Markey stated: "American ingenuity created the internet and American leadership should help bring its power to the developing world. Bridging the global digital divide can help promote prosperity, strengthen democracy, expand educational opportunity and lift some of the world's poorest and most vulnerable out of poverty. The Digital GAP Act is a passport to the 21st-century digital economy, linking the people of the developing world to the most successful communications and commerce tool in history. I look forward to working with my colleagues to get this legislation signed into law and to harness the power of the internet to help the developing world."
The Philippines' Education Secretary Jesli Lapus has emphasized the importance of digital literacy in Filipino education. He claims a resistance to change is the main obstacle to improving the nation's education in the globalized world. In 2008, Lapus was inducted into Certiport's "Champions of Digital Literacy" Hall of Fame for his work emphasizing digital literacy.
A 2011 study by the Southern African Linguistics & Applied Language Studies program observed South African university students' digital literacy. While their courses required some degree of digital literacy, very few students actually had access to a computer, and many had to pay others to type their work because their own digital literacy was almost nonexistent. The findings show that class, ignorance, and inexperience still affect the access to learning that South African university students need.
See also
Computer literacy
Cyber self-defense
Data literacy
Digital citizen
Digital intelligence
Digital rhetoric
Digital rights
Fact-checking
Information literacies
Media literacy
Web literacy
References
Bibliography
Vuorikari, R., Punie, Y., Gomez, S. C., & Van Den Brande, G. (2016). DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: The Conceptual Reference Model (No. JRC101254). Institute for Prospective Technological Studies, Joint Research Centre. https://ec.europa.eu/jrc/en/digcomp and https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/digcomp-20-digital-competence-framework-citizens-update-phase-1-conceptual-reference-model
External links
digitalliteracy.gov An initiative of the Obama administration to serve as a valuable resource to practitioners who are delivering digital literacy training and services in their communities.
digitalliteracy.org A Clearinghouse of Digital Literacy and Digital Inclusion best practices from around the world.
DigitalLiteracy.us A reference guide for public educators on the topic of digital literacy.
Digital divide
Literacy
Strategic leadership
Processes
Strategic leadership provides techniques that focus organizations as they decide on their purpose and on the business practices critical to remaining competitive and relevant. The ability to learn and adapt has become vital for sustainability; failure to adapt to changing technology, climate change, and economic factors risks the organization becoming obsolete.
Remaining successful requires a different way of thinking about how to marshal resources and deliver services. Strategic leadership balances a focused analytical perspective with the human dimension of strategy making (as documented by the Park Li Group). It is important to engage the entire business in a strategic dialogue in order to lay the foundation for building winning organizations that can define, commit to, adjust, and adapt their strategy quickly as needed.
Strategy execution
The analytical dimension and the human dimension
Leaders face the continuing challenge of how to meet the expectations of those who placed them there. Addressing these expectations usually takes the form of strategic decisions and actions. For a strategy to succeed, the leader must be able to adjust it as conditions require. But leaders cannot learn enough, fast enough, and do enough on their own to effectively adapt the strategy and then define, shape, and execute the organizational response. If leaders are to win, they must rely on the prepared minds of employees throughout the organization to understand the strategic intent and then both carry out the current strategy and adapt it in real time. The challenge is not only producing a winning strategy at a point in time but getting employees smart enough and motivated enough to execute the strategy and change it as conditions change. This requires the leader to focus as much on the process used to develop the strategy (the human dimension) as on the content of the strategy (the analytical dimension).
General approaches
Leaders recognize the need to incorporate aspects of both the analytical and human dimensions to effectively drive the organization forward, but how this insight translates into action varies significantly from leader to leader.
These differences are largely driven by the bias leaders have for how they divide their time between the two dimensions. This bias is reflected in how leaders answer questions such as the following:
What is their primary role as a chief strategist?
What is their job as a leader during ongoing strategy making?
What type of team should their strategy-making create?
When is strategy making finished?
How leaders answer these questions will ultimately impact their ability to deliver a winning strategy because their responses indicate whether and how they build and lead an organization that is aligned and committed to a particular agenda.
Question 1: What is their primary role as a chief strategist?
Should the focus be on being the architect of the strategy product or being the architect of the strategy process? Is their primary job to come up with the right strategy, or is it to manage a process to achieve this outcome?
Analytical: From an analytical perspective, the chief strategist's job is to be the “architect of the perfect strategy product.” Leaders holding this perspective see the strategy itself as the outcome, and managing the process is either ignored or delegated, frequently to individuals who lack line of sight to the senior person. Their concerns center on organizing and mastering the data, developing the arguments, and looking for that burst of insight that will drive the organization's competitive advantage and provide the foundation for future success.
Human: Answering the same question from the perspective of the human dimension, the chief strategist's job is to be the “architect of the perfect strategy process.” Leaders holding this perspective see the process as the primary outcome, and the product, while important, can and should be built by others. There is a recognition that the product will necessarily evolve so the more important endpoint is to build the capacity for strategic thinking across the group so that change, when it occurs, can be absorbed more quickly and more completely.
Question 2: What is their job as a leader during ongoing strategy making?
Linked to the first question, this second question focuses on how leaders conceptualize their role as they participate in the ongoing strategy process. Is it to provide bold, clear leadership that elicits confidence in their personal capabilities as “hero”, or is it to serve as a “coach and guide” who enables others to perform and stand in the limelight?
Analytical: Analytical leaders feel the need to personally come up with the right answer. If they are to be the leader, they must be the ones with the solutions. They feel obligated to lead from the front on strategic issues, demonstrating expertise through business insights and customer knowledge, skillfully outsmarting the competition, and outguessing the marketplace. These leaders are seen as visionary, smart leaders comfortably assuming star status as they fill the role of a Homeric hero.
Human: These leaders view themselves as coaches or guides, believing that the organization's strategy is only as good as the breadth and depth of the understanding and commitment that it attracts. Responsibility for developing the strategy is widely dispersed but carefully coordinated. These leaders focus on guiding and responding while building commitment and empowerment among those building the strategy.
Question 3: What type of team should their strategy-making create?
This third question recognizes that every strategy process defines a community and creates a team. This is true whether the leader is aware of it or not and whether the leader manages it or not. The question being asked is, “Does the strategy-making create an exclusive club of capable thinkers, or create a broad base of ownership and commitment leading to a sense of citizenship across a much larger group?”
Analytical: The analytical approach to strategy creates an exclusive “inner circle” of thinkers who are in the know and make most of the decisions. Being part of this group feels good because it is similar to being part of a private society. The common element that binds society members together is their close-knit exclusiveness and the extraordinary access and understanding of the data and thinking that leads to the strategy. This smaller group is well versed in the views of the leader and the data and knows how the different pieces of the strategy fit together.
Human: A leader focusing on the human dimension is concerned about building a sense of citizenship among a much larger group of people. It is built around a process that invites much broader participation and relies on input from many others outside of the top team. The aim is to create a sense of belonging and ownership across the organization. In this situation, many more people feel they can have an informed opinion about the overall strategy. They believe they have been part of its development and that they can influence the outcome. In that sense, it is their strategy.
Question 4: When is strategy making finished?
Most leaders have an idea of how strategy making and time are related. The question being asked is, "Is strategy making a discrete set of sequential activities with a defined start and stop? Or is strategy something that is continually reforming itself, never quite complete or perfected but always in a state of evolution?" At its essence, the question is, "In the organization, is the strategy process fundamentally linear, with a defined beginning and end, or is it fundamentally iterative, with no defined endpoint?"
Analytical: From the analytical view, good strategy-making follows a linear process with each task being “checked off” as it is completed. As set out in many strategy texts, it is a set of reasonably well-defined steps leading to a fully formed plan of execution. Effectively, the strategy is set for a defined time period and executed.
Human: Leaders who lean to the human dimension see strategy as a continuing work in process, something that is more free-flowing, never truly complete but continuously being shaped as interactions occur with customers and competitors and as new issues and knowledge emerge from the people throughout the organization. They are comfortable circling back on key ideas and frequently will drive the strategy process to re-visit critical assumptions and, based on the insights gained, alter course. For these individuals, changes in strategy are markers of leadership success, not leadership failure.
Incorporating both analytical and human dimensions
To integrate both dimensions into strategy making in a way that creates a winning outcome and gets the whole organization understanding and committed to this common agenda requires leaders who are clear about the strategic capacity of each of their internal stakeholder groups and who have the perspective and insights to lead in a way that incorporates both dimensions as the strategy is developed. The steps described below are intended to provide the leader with techniques to do that. Taken collectively, they define a process that incorporates both the analytical and human dimensions while challenging individuals throughout the organization to raise the quality and quantity of their strategic thinking and their strategic leadership.
Standardize vocabulary and agree on a toolset
Strategy making that enlists large groups of employees needs a common vocabulary and a common set of tools in order to be effective. Deciding on a vocabulary is not difficult, but it does need to be done with intent and with a sense of discipline. The number of terms used during strategy making seems at times almost endless and includes such words as Vision, Mission, Fact Base, KPI, Goal, Objective, Scorecard, Driver, Strategic Action Plan, Strategic Issue Analysis, Governing Principle, and Metric, to name a few. Establishing a common vocabulary begins and ends with getting alignment around three questions: "What does X mean?", "Why and when is it used?", and "Is X necessary in developing the strategy and building understanding and ownership for it over time?"
Closely linked to the need for a common vocabulary is the need for a common set of frameworks or tools to build your strategy. In many cases, toolsets come with their own embedded vocabulary. Some leaders use relatively more elaborate tools such as shareholder value add (SVA), computer modeling, and scenario planning.
Other leaders tend toward simplicity. Jack Welch described his toolset as a series of 5 questions with the answers ultimately leading up to what he called “the Big Aha.” His 5 questions included:
What does the playing field look like now?
What has the competition been up to?
What have we been up to?
What's around the corner?
What is our winning move?
There is a great deal of useful vocabulary and many fine toolsets in the strategy marketplace and no shortage of advocates for one or another of these. The important outcome is that the leader, as the executive leading the strategy process, needs to select a vocabulary and a toolset, use it consistently over time, and require others in the senior and middle ranks of the organization to do the same.
Finally, when deciding what vocabulary and toolset are best to use while working across large populations, simpler is usually better. The simpler the language and the fewer the tools, the more accessible the strategy becomes to larger groups of people, and the more people can understand it, know how they should think and talk about it, and identify how they can contribute. Some situations require more sophisticated (i.e., more complicated) tools because there is a need for much more thorough analytics. Many do not. The right balance point between comprehensiveness and simplicity will provide enough analytical complexity to adequately describe the marketplace, the customers, what you do, and how you will compete, but nothing more than that. Simplicity, where it can be found, makes a significant difference when working across a large population.
Broaden and strengthen senior managers as a strategic leadership team
Broadening and strengthening the team at the senior levels of the organization begins with an objective assessment of whether there actually is a working strategy currently in place and, if there is, the state of understanding and ownership for it in the organization.
The lack of clarity and ownership deeper in the organization leads to 1) misallocated resources because people are working at cross purposes, 2) excessive leadership time spent correcting and clarifying the direction because others are not convinced, or they fail to understand it, and 3) poor execution of the strategy due to diffuse and differing priorities. Perhaps most importantly, it directly impacts organizational agility because there is no broad understanding and agreement on the current strategy, so subsequent changes to the strategy make no more sense than the original agenda.
Leaders can address these dynamics by broadening out the understanding and ownership of the strategy to a much larger group without sacrificing the sense of commitment at the top of the organization. Having this larger group of managers accountable for successfully defining and executing a strategy is not only critical to building winning strategies, but if done in a way that includes both the analytical and the human dimensions, it is incredibly energizing for the organization. This is especially true in those cultures and organizations where the decision-making is traditionally held more closely by a relatively small group of senior people.
The mechanics of how to broaden the senior team will vary depending on cultural and organizational considerations. The key is to create a common context for both the “what” and the “why” of the strategy that serves as a critical touchstone for the broader leadership team. In most cases, the process creates a group of 50–100 or more people who recognize that they are collectively accountable for the success of the entire strategy and not just their piece of it. These steps lay the foundation for partnering with the middle of the organization by setting the stage for the senior team to speak with one voice to the middle managers.
Build a strategy support team to serve as champions for the strategy process
With varying degrees of success, many leaders get their strategy making to this point and then either stop or their process stalls. A major reason is the lack of understanding of, and commitment to, the steps required to build more effective strategic leadership practices and a strategic dialogue in the operating groups below the senior managers. These groups, and especially their leadership teams, frequently do not know how to proceed, and there is no consistent in-house resource to assist them. The net effect is that the sense of excitement and momentum generated at the top of the house in the earlier stages of the strategy process is lost, and the strategy team of employees is derailed before it even gets started. One of the best ways to address this is to identify and train a cadre of high-potential line managers in the middle of the organization who can serve as champions of the strategy process to those both above and below them. In this sense, they serve both as a catalyst for the process and as a bridge between formulation and implementation. They do not replace the leadership role of the senior teams in each of these operating groups, but they do serve as a critical additional resource dedicated to creating momentum and fostering consistency. This can be especially important if the strategy requires changes in the organizational culture as well as the business model. This resource also helps to ensure that the day-to-day running of the business is not neglected as the demands of building a large-scale strategy dialogue come into play.
The make-up of this strategy support team (SST) generally includes one or more people from each of the operating groups, usually 2–3 levels down from the senior person. The skills and behaviors required of these individuals are a blend of both the analytical and the human dimensions; too much emphasis on one dimension over the other undermines the effectiveness of the role. In partnership with the senior team from their operating group, the members of the SST serve as coaches and guides for the strategy process as it unfolds. In this capacity, they reinforce expectations and teach methods for building and sustaining a strategy dialogue in their respective groups, ensure that the local strategy product is of uniform quality (including vocabulary and tools), and foster behavioral and organizational alignment over time. Additional roles for these individuals might include facilitator, tracker and chaser, transfer agent for successes and failures across the businesses, and writer when required.
In addition to serving as a resource to those around them, SST members have the unique opportunity to participate in the strategy discussion 2–3 levels above their normal level of discourse. The role is also an excellent training ground for those involved, and it gives the senior executive direct access to the middle of the organization while observing the performance of these high-potential line managers.
Raise the bar for more effective strategic leadership in the middle of the organization
For many middle managers, participating effectively in the strategy development process is as much a question of training as of doing. Building understanding and skills on topics such as the vocabulary and toolset, marketplace dynamics and the associated ambiguity, strategy storytelling, and their own individual strategic leadership strengths and weaknesses are all aspects of a process that can ignite a sense of understanding and commitment across the middle of the organization in a way that leverages the human fabric.
A key insight that drives this outcome is the recognition that most middle managers, regardless of cultural background, want to commit to something and belong to something that is more than who they are as individuals. It is the leader's job to give managers opportunities in which they can make such commitments. In all instances, providing these settings includes asking the managers to be storytellers of the organizational strategy to those around them. Doing this requires middle managers to understand and embrace both the analytical and human dimensions of strategy making. It also creates a much smarter and more prepared middle manager who has publicly committed to the strategy and is in a much stronger position to make local decisions as the strategy evolves.
Localize the strategy story at the lower levels of the organization and engage these levels with the question, “What does this mean for me and my team?”
While front-line supervisors and their teams are in most instances the largest portion of the population, the strategy-making work to be done with this group is relatively simple. Their needs center largely on context, community, and clarity. Engaging this group in a discussion of the basic business model and the organizational strategy provides critical context and gives meaning to their work. Their participation in shaping the local strategy builds understanding, ownership, and a sense of partnership with the larger organization.
Strategy making with this group begins with the organization's strategy story. Using middle managers in this role allows these individuals to raise their own strategic leadership bar. And it is through these middle managers that the organizational story becomes more accessible in those settings and situations that they know much more intimately than senior managers.
Ultimately, the strategy only comes alive and communities are built when it is used to set the broad context and is followed by a much more detailed local discussion addressing the question, “What does this mean for me and my team?” The combination of the analytical and human dimensions applied to this group provides a platform of understanding among the rank and file for what the strategy is, what it means to them and why it needs to continue to evolve over time. This in turn increases the willingness of this critically important but difficult to reach population to recognize the inevitable changes in strategy as markers of leadership success rather than leadership failure and in the process it builds and strengthens organizational agility.
Moving the “we/they” line
In every organization, there is a line that can be drawn. Above the line, generally at the more senior levels of the organization, people use the word “we” to imply collective responsibility for success and failure. People in this group say things like, “We did this well.” “We should have done this better.” “We need to discuss this more.” “We should have planned this out more carefully.” Below the line, generally at lower levels of the organization, people use the word “they” to imply that things are being done to them by others and frequently these things are not good. People in this group say things like, “They messed up.” “They should have done that better.” “They should have planned this more carefully.”
Effective strategy processes move the "we/they" line down in the organization so that more people use the word "we" and take ownership for making things happen and making things better. Good strategic leadership practices, with the right balance of the analytical dimension and the human dimension and the discipline and commitment to see the process through during strategy formulation and implementation, can be a strong driver to take the "we/they" line much deeper into the organization. A deep "we" line produces winning strategies because those in the "we" are much more willing and able to meet the demands of perpetual change.
Building prepared minds on a large scale begins and ends with the senior person focusing on being the architect of the strategy process as much as the product. The focus is on working the middle ground between the analytical and the human dimensions, not giving up on the clarity that comes from the analytical rigor nor the broad-based commitment and organizational agility that comes from addressing the human dimension. Ultimately a deep “we” line is a signal that employees are developing, evolving, modulating, fine-tuning and executing a strategy concurrently.
Definition of Leadership
"Leadership is about capacity: the capacity of leaders to listen and observe, to use their expertise as a starting point to encourage dialogue between all levels of decision-making, to establish processes and transparency in decision-making, to articulate their own value and visions clearly but not impose them. Leadership is about setting and not just reacting to agendas, identifying problems, and initiating change that makes for substantial improvement rather than managing change" (Pearce, 2008). Rowe states that strategic leadership is the ability to influence others to voluntarily make day-to-day decisions that enhance the long-term viability of the organization while at the same time maintaining its short-term financial stability.
Strategic leaders are defined as those who have the organizational ability, coupled with strategic orientation, to translate strategy into action; align people and organizations; determine effective strategic intervention points; and develop strategic competencies. A strategic leader displays dissatisfaction or restlessness with the present, absorptive capacity, adaptive capacity, and wisdom. Davies highlights the concept of "adaptive capacity", which enables leaders to change and learn, asserting that "mastering chaos, complexity, and change requires new ways of 'seeing and thinking'" (Sanders, 1998). A strategic leader is future-oriented: a strategic leader's eyes are always on the horizon, not just on the near at hand. A strategic leader influences "the organization by aligning their systems, culture, and organizational structure to ensure consistency with the strategy" (Beatty and Quinn, 2010, p. 7). Influencing employees to voluntarily make decisions that enhance the organization is the most important part of strategic leadership. In both instances, a strategic leader prepares for the future, considering both the long-term goal and the organization's current contextual setting.
A leadership model introduced by Beatty and Quinn consists of three components: who, how, and what. The three interdependent processes of this model are thinking, acting, and influencing (Beatty and Quinn, 2010). Strategic leaders have the ability to determine effective intervention points. This means that the strategy of an effective leader is to develop new visions, create new strategies, and move in a new, sometimes unexpected, direction. At these strategic opportunity points, the most important component is the timing of when to intervene and how to direct change, rather than the particular intervention put in place. Strategic leaders think strategically. Strategic thinking, as Beatty and Quinn state, involves gathering information, making connections, and filtering it to "form ideas and strategies that are focused, relevant, and sound" (Beatty and Quinn, 2010, p. 5). The significance of strategic leadership "is making decisions about whether and when to act" (Beatty and Quinn, 2010, p. 6).
Leadership is about innovators and change agents: seeing the big picture, thinking strategically about how to attain goals, and working (with the help of others) to achieve them (Kouzes and Posner, 2009, p. 20). Strategic orientation is the ability to be innovative in connecting long-range visions and concepts to daily work. Quong and Walker (2010) grounded their work in defining the field's terms and segments. In their article Seven Principles of Strategic Leadership, they describe a framework of seven principles: 1. Strategic leaders are futures-oriented and have a futures strategy; 2. Strategic leaders are evidence-based and research-led; 3. Strategic leaders get things done; 4. Strategic leaders open new horizons; 5. Strategic leaders are fit to lead; 6. Strategic leaders make good partners; and 7. Strategic leaders do the "next" right thing.
The Role of Strategic Leadership in Organizations
There are various strategic leadership styles. Since strategic leadership is such a broad topic, Rowe differentiates between strategic, visionary, and managerial leaders (Rowe, 2001). Strategic leadership presumes a shared vision of what an organization is to be, so that the day-to-day decision-making or emergent strategy process is consistent with this vision. Managerial leaders influence only the actions and decisions of those with whom they work. They are involved in situations and contexts characteristic of day-to-day activities and are concerned with, and more comfortable in, functional areas of responsibility. In contrast, visionary leadership is future-oriented and concerned with risk-taking, and visionary leaders are not dependent on their organizations for their sense of who they are. Visionary leaders work from high-risk positions and seek out risky ventures, especially when the rewards are high (Rowe, 2001).
Strategic Leadership in the Education System
Barron (1995) defines strategic leadership as practicing existing abilities and skills while influencing others to train in new formats for new leadership models. Specifically, to achieve successful educational management within the organization, leaders should think strategically about where changes are needed and why. For instance, new leaders should possess three fundamental skills: problem-solving, decision-making, and creative/critical thinking. Also, educators, administrators, and other practitioners should be trained in educational management and continually apply this training in new leadership roles. As a result, the outcomes of the educational environment will be influenced by total quality leadership. Barron (1995) accordingly concludes that "strategic leadership is demonstrated by individuals in all areas of the educational environment who possess skills to create and communicate vision and effect change through interactive leadership."
Strategic Leadership in the Non-Profit Sector
Very little research in the field of strategic leadership has considered the sector in which leadership occurs. As a result, most theory development in strategic leadership has assumed that it occurs in the for-profit sector. Several theoretical articles have been published on the role and influence of nonprofit executives generally. In their 2010 study, Phipps and Burbach determined that the role of a public executive differs from that of a business executive, including different informational, interpersonal, and decisional roles. According to Phipps and Burbach (2010), a 2005 study by Taliento and Silverman shows the difference between the role of a corporate CEO and that of a nonprofit CEO. Their conclusions were based on interviews with crossover leaders who had led both for-profit and nonprofit organizations. The study identified five areas in which nonprofit strategic leaders adapt the practices of for-profit strategic leaders:
• Smaller scope of authority
• A wider range of stakeholders who expect consensus
• The need for innovative metrics to monitor performance
• The requirement that nonprofit CEOs pay more attention to communications
• The challenge of building an effective organization with limited resources and training.
The study concluded that "there is reason to believe that strategic leaders contribute to nonprofit organizational performance in ways consistent with strategic leadership theory. However, there is evidence in the study suggesting that the exercise of strategic leadership is different in the nonprofit context" (Phipps & Burbach, 2010).
Leadership remains one of the most relevant aspects of the organizational context, yet defining it is challenging. "The difficulty of arriving at a simple, cut-and-dried definition of strategic leadership is underscored in the literature on the subject" (Beatty and Quinn, 2010, p. 3). The definition of leadership varies from situation to situation. Strategic leadership filters the applicable information, creating an environment where learning can take place. Strategic leadership is a combined responsibility of the leader, the follower, and the organization. Leadership presents challenges that call forth the best in people and bring them together around a shared sense of purpose. With intentionality, alignment, and a higher purpose, the work between the leader and the followers creates synergy. Whatever the style of leadership, the various styles can support one another to achieve the goals of the organization. Strategic leadership can only be achieved when the leader is strategic in their approach to the matters of the organization.
References
Further reading
Beatty, K., & Quinn, L. (2010). Strategic Command: Taking the Long View for Organizational Success. Leadership in Action, 30(1), 3–7.
Barron, B. G., & Henderson, M. V. (1995). Strategic leadership: A theoretical and operational definition. Journal of Instructional Psychology, 22(2), 178.
Kouzes, J., & Posner, B. (2009). To Lead, Create a Shared Vision. Harvard Business Review, 87, 20–21.
Pearce, C. (2008). Follow the Leaders. MIT Sloan Management Review.
Phipps, K. A., & Burbach, M. E. (2010). Strategic Leadership in the Nonprofit Sector: Opportunities for Research. Journal of Behavioral and Applied Management, 137–154.
Rowe, W. G. (2001). Creating Wealth in Organizations: The Role of Strategic Leadership. The Academy of Management Executive, 81–94.
Davies, B. J., & Davies, B. (2004). Strategic leadership. School Leadership & Management, 24(1).
Quong, T., & Walker, A. (2010). Seven Principles of Strategic Leadership. International Studies in Educational Administration (ISEA), 38(1).
Sanders, T. I. (1998). Strategic Thinking and the New Science. New York: Free Press.
Leadership
Change management | 0.783778 | 0.97478 | 0.764011 |
High-context and low-context cultures
In anthropology, high-context and low-context cultures are ends of a continuum describing how explicit the messages exchanged in a culture are and how important context is in communication. The distinction between high- and low-context cultures is intended to draw attention to variations in both spoken and non-spoken forms of communication. The continuum depicts how people communicate with others through their range of communication abilities: utilizing gestures, relations, body language, verbal messages, or non-verbal messages.
"High-" and "low-" context cultures typically refer to language groups, nationalities, or regional communities. However, the concept may also apply to corporations, professions, and other cultural groups, as well as to settings such as online and offline communication.
High-context cultures often exhibit less-direct verbal and nonverbal communication, utilizing small communication gestures and reading more meaning into these less-direct messages. Low-context cultures are the opposite: direct verbal communication is needed to understand a message properly, and communication relies heavily on explicit verbal skills.
The model of high-context and low-context cultures offers a popular framework in intercultural-communication studies but has been criticized as lacking empirical validation.
History of differing context cultures
These concepts were first introduced by the anthropologist Edward T. Hall in his 1959 book "The Silent Language." Cultures and communication in which the context of the message has great importance to structuring actions are referred to as high context. High-context defines cultures that are usually relational and collectivist, and which most highlight interpersonal relationships. Hall identifies high-context cultures as those in which harmony and the well-being of the group are preferred over individual achievement. In low-context communication, members' communication must be more explicit, direct, and elaborate, because individuals are not expected to have knowledge of each other's histories or backgrounds, and communication is not necessarily shaped by long-standing relationships between speakers. Because low-context communication concerns more direct messages, the meaning of these messages is more dependent on the words being spoken than on the interpretation of more subtle or unspoken cues.
Characteristics of high-context and low-context cultures
Denotation and connotation
High-context cultures are related to connotation. People within high-context cultures tend to be more aware and observant of facial expressions, body language, changes in tone, and other aspects of communication that are not directly spoken. Denotation tends to be attributed to low-context culture. In low-context cultures, people communicate more directly by explicitly stating what they want to communicate.
Interpersonal relationships
Individualism and collectivism are related to low-context and high-context cultures, respectively. Within high-context cultures, people rely on their networks of friends and family, viewing their relationships as part of one large community. In low-context cultures, relationships are not viewed as central to identity. People within low-context cultures see their relationships as much looser, and the lines between networks of people are more flexibly drawn.
Interaction
When people from different cultures and communication styles work together, misunderstandings and conflicts can arise. Low-context communicators might seem blunt or impolite to those from high-context societies, while high-context communicators might appear distant or evasive to those from low-context societies.
Understanding whether a culture is high- or low-context can dramatically improve communication effectiveness. In high-context cultures, where much of the communication is implicit, knowing the context allows individuals to pick up on non-verbal cues and indirect messages, thus facilitating smoother interactions. Conversely, in low-context cultures, recognizing the need for explicit communication helps in providing clear and direct information, which can avoid misunderstandings. This understanding is relevant to global business environments, which benefit from clear communication.
Examples of higher- and lower-context cultures
Cultural contexts are not absolutely "high" or "low". Instead, a comparison between cultures may find communication differences to a greater or lesser degree. Typically a high-context culture will be relational, collectivist, intuitive, and contemplative. Such cultures place a high value on interpersonal relationships, and group members form a very close-knit community. Typically a low-context culture will be less close-knit, and so individuals communicating will have fewer relational cues when interpreting messages. Therefore, more explicit information must be included in the message so it is not misinterpreted. Not all individuals in a culture can be defined by cultural stereotypes, and there will be variations within a national culture in different settings. For example, Hall describes how Japanese culture has both low- and high-context situations. However, understanding the broad tendencies of predominant cultures can help inform and educate individuals on how to better facilitate communication between individuals of different cultural backgrounds.
Although the concept of high- and low-context cultures is usually applied in the field of analyzing national cultures, it can also be used to describe scientific or corporate cultures or specific settings such as airports or law courts. A simplified example mentioned by Hall is that scientists working in "hard science" fields (like chemistry and physics) tend to have lower-context cultures: because their knowledge and models have fewer variables, they will typically include less context for each event they describe. In contrast, scientists working with living systems need to include more context because there can be significant variables which impact the research outcomes.
Croucher's study examines the assertion that culture influences communication style (high/low-context) preference. Data was gathered in India, Ireland, Thailand, and the United States where the results confirm that "high-context nations (India and Thailand) prefer the avoiding and obliging conflict styles more than low-context nations (Ireland and the United States), whereas low-context nations prefer the uncompromising and dominating communication style more than high-context nations."
Individuals and groups operating in low-context cultures communicate explicitly and elaborately, since they cannot assume prior knowledge of each member's history or background; they tend to take what is said literally and prefer to have thorough knowledge before a task or a meeting.
In addition, Hall identified Japanese, Arab, and Southern European peoples as high-context, and Germans, Scandinavians and other Northern Europeans, and Americans as low-context.
Cultures and languages are defined as higher- or lower-context along a spectrum; Hall proposed such a spectrum of national cultures, ranging from "high-context" to "low-context". He notes a similar difference between Navajo speakers and English speakers in a United States school.
Cultural context can also shift and evolve. For instance, one study has argued that both Japan and Finland (high-context cultures) are becoming lower-context under the increasing influence of Western European and United States culture.
Case studies
US, China, and Korea
Kim Donghoon conducted a study to test the major aspects of high-context versus low-context culture concepts. The study collected three samples from different cultures - the US, China, and Korea - with 96 business managers surveyed in the American and Chinese samples and 50 managers in the Korean sample. According to Hall's theory, the Chinese and Korean samples represented higher-context cultures while the American sample represented a lower-context culture. The study tested 16 items, covering various aspects of the high-versus-low context concept, including social orientation, responsibility, confrontation, communication, commitment, and dealing with new situations.
The results show significant differences between the American, Chinese, and Korean samples on 15 out of 16 items, with 11 items significant at the .01 level, one at the .05 level, and three at the .10 level. The composite score also indicates a significant difference among the three samples at the .01 level. The American sample scored the lowest compared to the two "Oriental samples," which aligns with Hall's concept. Overall, this study provides further evidence to support the high versus low-context culture concepts with Chinese, Korean, and American participants. The study suggests that in high-context cultures, such as China and Korea, people tend to be more socially oriented, less confrontational, and more complacent with existing ways of living compared to people from low-context cultures like the US.
Sino-American Language through automobile advertisement
A study from 2019 found that due to the different cultural backgrounds of China and the United States, the use of language in automotive advertising varies a lot.
For example, Chinese automobile advertisements were found to belong to the high-context category, characterized by vagueness and implicitness. Much of the information is carried by the context of the advertisement, which includes shared history, relationships, and cultural norms and values (allusions to Chinese poems, for example).
On the other hand, American automobile advertisements are categorized as low context, characterized by straightforwardness and frankness. This aligns with low-context cultures where communication is more explicit, direct, and elaborate.
Russia and Romania
A case study of 30 Romanian and 30 Russian employees compared high- and low-context cultures, and the results strongly suggested that Russia and Romania are both high-context cultures.
Mexico and the U.S.
This study is a result of a cross-cultural examination between students from the United States, a low-context culture, and Mexico, a high-context culture, to study the reasons people communicate in each culture. There were 225 Mexican participants from three different undergraduate universities in Mexico City and 447 participants from Kent State University in the U.S. The case study looked into culture shock experienced by Mexicans studying in the U.S. The hypotheses tested indicated the high-context culture in Mexico would provide different motives for communication when compared with the low-context culture of the U.S.
The results found that U.S. participants used communication for pleasure more often than Mexican participants. Pleasure, affection, and inclusion were the highest motives for communication in both cultures and control was the lowest for both cultures.
Brazil and Germany
A study published in 2022 investigates how varying cultural communication styles affect the outcomes of social group divisions. Blending concepts from theories of group dynamics and cultural communication, Kathrin Burmann and Thorsten Semrau examined 54 teams in the banking sector in Germany (a low-context culture) and Brazil (a high-context culture). The results show that in Germany, known for direct communication, social divisions often lead to task conflicts, harming team performance. In Brazil, where communication tends to be more indirect, they did not observe the same negative consequences.
Overlap and contrast between context cultures
The categories of context cultures are not totally separate. Both often take many aspects of the other's cultural communication abilities and strengths into account. The terms high- and low-context cultures are not classified with strict individual characteristics or boundaries. Instead, many cultures tend to have a mixture or at least some concepts that are shared between them, overlapping the two context cultures.
Ramos suggests that "in low context culture, communication members' communication must be more explicit. As such, what is said is what is meant, and further analysis of the message is usually unnecessary." This implies that communication is quite direct and detailed, because members of the culture are not expected to have knowledge of each other's histories, past experiences, or backgrounds.
The Encyclopedia of Diversity and Social Justice states that "high context defines cultures that are relational and collectivist, and which most highlight interpersonal relationships. Cultures and communication in which context is of great importance to structuring actions are referred to as high context." In such cultures, people are highly perceptive of actions. Furthermore, cultural aspects such as tradition, ceremony, and history are also highly valued. Because of this, many features of cultural behavior in high-context cultures, such as individual roles and expectations, do not need much detailed or thought-out explanation.
According to Watson, "the influence of cultural variables interplays with other key factors – for example, social identities, those of age, gender, social class, and ethnicity; this may include a stronger or weaker influence." A similarity that the two communication styles share is their influence on social characteristics such as age, gender, social class, and ethnicity. For example, for someone who is older and more experienced within a society, the need for social cues may be higher or lower depending on the communication style. The same applies to the other characteristics in varied countries.
On the other hand, certain intercultural communication skills are unique to each culture, and it is important to note that overlaps in communication techniques appear within subgroups in social interactions or family settings. Many large cultures contain subcultures, making communication and classification more complicated than a simple low-context/high-context scale suggests. The diversity within a main culture shows how the high-low scale differs depending on social settings such as school, work, home, and other countries; this variation is what allows the scale to fluctuate even if a large culture is categorized as primarily one or the other.
Online
Punctuation marks and emojis are important tools in digital communication. High-context users actively embrace these tools to enhance their communication styles and contribute to the efficiency and meaning of digital interactions. In high-context cultures, where communication relies on implicit understanding and cultural cues, the use of tools reflects specific cultural norms. These tools, which can be interpreted differently across cultures, emphasize the need for clarity in cross-cultural digital interactions.
Such markers can express emotion. Adding an exclamation mark to a sentence ("Thank you!") can emphasize a sense of gratitude. Similarly, using a smiley emoji or emoticon, such as "Thank you :)", can add a friendly and cheerful tone to the message.
Punctuation can clarify meaning. For example, "Come" and "Come!" can have different nuances. The use of an exclamation mark often indicates emphasis or positivity from the speaker.
Miscommunication within cultural contexts
Between each type of culture context there will be forms of miscommunication, because of differences in gestures, social cues, and intercultural adjustments; it is important to recognize these differences and learn how to avoid miscommunication. Since cultures differ, especially from a global standpoint where language also creates a barrier for communication, social interactions specific to a culture normally require a range of appropriate communication abilities that an opposing culture may not understand or share. This significance carries into many situations, such as the workplace, which can be prone to diversified cultures and opportunities for collaboration. Awareness of miscommunication between high- and low-context cultures within the workplace or other intercultural settings promotes cohesion within a group through flexibility and the ability to understand one another.
How higher context relates to other cultural metrics
Diversity
Families, subcultures, and in-groups typically favor higher-context communication. Groups that are able to rely on a common background may not need to use words as explicitly to understand each other. Settings and cultures where people come together from a wider diversity of backgrounds such as international airports, large cities, or multi-national firms, tend to use lower-context communication forms.
Language
Hall links language to culture through the work of Sapir and Whorf on linguistic relativity. A trade language will typically need to explicitly explain more of the context than a dialect which can assume a high level of shared context. Because a low-context setting cannot rely on a shared understanding of potentially ambiguous messages, low-context cultures tend to give more information or to be precise in their language. In contrast, a high-context language like Japanese, Chinese, or Korean can use a high number of homophones but still be understood by a listener who knows the context.
Elaborated and restricted codes
The concept of elaborated and restricted codes was introduced by sociologist Basil Bernstein in his book Class, Codes and Control. The use of an elaborated code indicates that the speaker and listener do not share significant amounts of common knowledge, and hence they may need to "spell out" their ideas more fully: elaborated codes tend to be more context-independent. In contrast, the use of restricted codes indicates that speakers and listeners do share a great deal of common background and perspectives, and hence much more can be taken for granted and thus expressed implicitly or through nuance: restricted codes tend to be more context-dependent.
Restricted codes are commonly used in high-context culture groups, where group members share the same cultural background and can easily understand the implicit meanings "between the lines" without further elaboration. Conversely, in cultural groups with low context, where people share less common knowledge or 'value individuality above group identification', elaborated codes are necessary to avoid misunderstanding.
Collectivism and individualism
The concepts of collectivism and individualism have been applied to high- and low-context cultures by Dutch psychologist Geert Hofstede in his Cultural Dimensions Theory. Collectivist societies prioritize the group over the individual, and vice versa for individualist ones. In high-context cultures, language may be used to assist and maintain relationship-building and to focus on process. India and Japan are typically high-context, highly collectivistic cultures, where business is done by building relationships and maintaining respectful communication.
Individualistic cultures promote the development of individual values and independent social groups. Individualism may lead to communicating to all people in a group in the same way, rather than offering hierarchical respect to certain members. Because individualistic cultures may value cultural diversity, a more explicit way of communicating is often required to avoid misunderstanding. Language may be used to achieve goals or exchange information. The USA and Australia are typically low-context, highly individualistic cultures, where transparency and competition in business are prized.
Stability and durability of tradition
High-context cultures tend to be more stable, as their communication is more economical, fast, efficient, and satisfying; but these are gained at the price of devoting time to preprogramming cultural background, and their high stability might come with a price of a high barrier for development. By contrast, low-context cultures tend to change more rapidly and drastically, allowing extension to happen at faster rates. This also means that low-context communication may fail due to the overload of information, which makes culture lose its screening function.
Therefore, higher-context cultures tend to correlate with cultures that also have a strong sense of tradition and history, and change little over time. For example, Native Americans in the United States have higher-context cultures with a strong sense of tradition and history, compared to general American culture. Focusing on tradition creates opportunities for higher-context messages between individuals of each new generation, and the high-context culture feeds back to the stability hence allowing the tradition to be maintained. This is in contrast to lower-context cultures in which the shared experiences upon which communication is built can change drastically from one generation to the next, creating communication gaps between parents and children, as in the United States.
Facial expression and gesture
Culture also affects how individuals interpret other people's facial expressions. An experiment performed by the University of Glasgow shows that different cultures have different understanding of the facial expression signals of the six basic emotions, which are the so-called "universal language of emotion"—happiness, surprise, fear, disgust, anger and sadness. In high-context cultures, facial expressions and gestures take on greater importance in conveying and understanding a message, and the receiver may require more cultural context to understand "basic" displays of emotions.
Marketing and advertising perspective
Cultural differences in advertising and marketing may also be explained through high- and low-context cultures. One study on McDonald's online advertising compared Japan, China, Korea, Hong Kong, Pakistan, Germany, Denmark, Sweden, Norway, Finland, and the United States, and found that in high-context countries, the advertising used more colors, movements, and sounds to give context, while in low-context cultures the advertising focused more on verbal information and linear processes. While a low-context approach might be more successful in cultures with direct communication styles, a high-context marketing strategy might be more beneficial in cultures where communication is indirect and largely dependent on context.
Website communication
Website design among cross-cultural barriers includes factoring in decisions about culture-sensitive color meanings, layout preferences, animation, and sounds. In a case study conducted by the IT University of Copenhagen, it was found that websites catering to high-context cultures tended to have more detailed and advanced designs, including various images and animations. Low-context websites had less animation and more stagnant images, with more details on information. The images found on the websites used in the study promoted individualistic and collectivist characteristics within the low-context and high-context websites, respectively. The low-context websites had multiple images of individuals, while the high-context websites contained images and animations of groups and communities.
Limitations of the model
In a 2008 meta-analysis of 224 articles published between 1990 and 2006, Peter W. Cardon concluded that the model was "unsubstantiated and underdeveloped".
See also
Phatic expression
Taarof
Further reading
Hall, Edward T. (1976). Beyond Culture. Anchor Books.
Samovar, Larry A., & Porter, Richard E. (2004). Communication Between Cultures. 5th ed. Thomson Wadsworth.
Triangulation (social science) | In the social sciences, triangulation refers to the application and combination of several research methods in the study of the same phenomenon. By combining multiple observers, theories, methods, and empirical materials, researchers hope to overcome the weakness or intrinsic biases and the problems that come from single method, single-observer, and single-theory studies.
It is popularly used in sociology. "The concept of triangulation is borrowed from navigational and land surveying techniques that determine a single point in space with the convergence of measurements taken from two other distinct points."
Triangulation can be used in both quantitative and qualitative studies as an alternative to traditional criteria like reliability and validity.
Purpose
The purpose of triangulation in qualitative research is to increase the credibility and validity of the results. Several scholars have aimed to define triangulation throughout the years.
Cohen and Manion (2000) define triangulation as an "attempt to map out, or explain more fully, the richness and complexity of human behavior by studying it from more than one standpoint."
Altrichter et al. (2008) contend that triangulation "gives a more detailed and balanced picture of the situation."
According to O'Donoghue and Punch (2003), triangulation is a "method of cross-checking data from multiple sources to search for regularities in the research data."
Types
Denzin (2006) identified four basic types of triangulation:
Data triangulation: involves time, space, and persons; it uses multiple sources of data that all have a similar focus.
Investigator triangulation: involves multiple researchers in an investigation.
Theory triangulation: involves using more than one theoretical scheme in the interpretation of the phenomenon.
Methodological triangulation: involves using more than one method to gather data, such as interviews, observations, questionnaires, and documents.
See also
Data cleansing
Data editing
Iterative proportional fitting for a method of data enhancement applied in statistics, economics and computer science
References
Cohen, L., Manion, L., & Morrison, K. (2000). Research Methods in Education. 5th ed. London: Routledge. p. 25.
Project method | The project method is a medium of instruction which was introduced during the 18th century into the schools of architecture and engineering in Europe when graduating students had to apply the skills and knowledge they had learned in the course of their studies to problems they had to solve as practicians of their trade, for example, designing a monument, building a steam engine. In the early 20th Century, William Heard Kilpatrick expanded the project method into a philosophy of education. His device is child-centered and based in progressive education. Both approaches are used by teachers worldwide to this day. Unlike traditional education, proponents of the project method attempt to allow the student to solve problems with as little teacher direction as possible. The teacher is seen more as a facilitator than a deliver of knowledge and information.
Students in a project method environment should be allowed to explore and experience their environment through their senses and, in a sense, direct their own learning by their individual interests. Very little is taught from textbooks, and the emphasis is on experiential learning rather than rote learning and memorization. A project method classroom focuses on democracy and collaboration to solve "purposeful" problems.
Kilpatrick devised four classes of projects for his method: construction (such as writing a play), enjoyment (such as experiencing a concert), problem (for instance, discussing a complex social problem like poverty), and specific learning (learning of skills such as swimming).
Understanding Media | Understanding Media: The Extensions of Man is a 1964 book by Marshall McLuhan, in which the author proposes that the media, not the content that they carry, should be the focus of study. He suggests that the medium affects the society in which it plays a role mainly by the characteristics of the medium rather than the content. The book is considered a pioneering study in media theory.
McLuhan pointed to the light bulb as an example. A light bulb does not have content in the way that a newspaper has articles or a television has programs, yet it is a medium that has a social effect; that is, a light bulb enables people to create spaces during nighttime that would otherwise be enveloped by darkness. He describes the light bulb as a medium without any content. McLuhan states that "a light bulb creates an environment by its mere presence".
More controversially, he postulated that content had little effect on society—in other words, it did not matter if television broadcasts children's shows or violent programming. He noted that all media have characteristics that engage the viewer in different ways; for instance, a passage in a book could be reread at will, but a movie had to be screened again in its entirety to study any individual part of it.
The book is the source of the well-known phrase "the medium is the message". It was a leading indicator of the upheaval of local cultures by increasingly globalized values, and it greatly influenced academics, writers, and social theorists. The book offered a radical analysis of social change and of how society is shaped by, and reflected in, communications media.
Summary
Throughout Understanding Media, McLuhan uses historical quotes and anecdotes to probe the ways in which new forms of media change the perceptions of societies, with specific focus on the effects of each medium as opposed to the content that is transmitted by each medium. McLuhan identified two types of media: "hot" media and "cool" media, drawing from French anthropologist Lévi-Strauss' distinction between hot and cold societies (Taunton, Matthew (2019). Red Britain: The Russian Revolution in Mid-Century Culture, p. 223).
This terminology does not refer to the temperature or emotional intensity, nor some kind of classification, but to the degree of participation. Cool media are those that require high participation from users, due to their low definition (the receiver/user must fill in missing information). Since many senses may be used, they foster involvement. Conversely, hot media are low in audience participation due to their high resolution or definition. Film, for example, is defined as a hot medium, since in the context of a dark movie theater, the viewer is completely captivated, and one primary sense—visual—is filled in high definition. In contrast, television is a cool medium, since many other things may be going on and the viewer has to integrate all of the sounds and sights in the context.
In Part One, McLuhan discusses the differences between hot and cool media and the ways that one medium translates the content of another medium. Briefly, "the content of a medium is always another medium".
In Part Two, McLuhan analyzes each medium (circa 1964) in a manner that exposes the form, rather than the content of each medium. In order, McLuhan covers:
The Spoken Word;
The Written Word (i.e., manuscript or incunabulum);
Roads and Paper Routes;
Numbers;
Clothing;
Housing;
Money;
Clocks;
The Print (i.e., pictorial lithograph or woodcut);
Comics;
The Printed Word (i.e., typography);
The Wheel;
The Bicycle and Airplane;
The Photograph;
The Press;
The Motorcar;
Ads;
Games;
The Telegraph;
The Typewriter;
The Telephone;
The Phonograph;
Movies;
Radio;
Television;
Weapons; and
Automation.
Concept of "media"
McLuhan uses interchangeably the words medium, media, and technology.
For McLuhan a medium is "any extension of ourselves" or, more broadly, "any new technology". Consequently, in addition to forms such as newspapers, television, and radio, McLuhan includes the light bulb, cars, speech, and language in his definition of media: all of these, as technologies, mediate our communication; their forms or structures affect how we perceive and understand the world around us.
McLuhan says that conventional pronouncements fail in studying media because they focus on content, which blinds them to the psychic and social effects that define the medium's true significance. McLuhan observes that any medium "amplifies or accelerates existing processes", introducing a "change of scale or pace or shape or pattern into human association, affairs, and action", which results in "psychic, and social consequences". This is the real "meaning or message" brought by a medium, a social and psychic message, and it depends solely on the medium itself, regardless of the 'content' emitted by it. This is basically the meaning of "the medium is the message".
To demonstrate the flaws of the common belief that the message resides in how the medium is used (the content), McLuhan provides the example of mechanization, pointing out that regardless of the product (e.g., corn flakes or Cadillacs), the impact on workers and society is the same.
In a further exemplification of the common unawareness of the real meaning of media, McLuhan says that people "describe the scratch but not the itch". As an example of "media experts" who follow this fundamentally flawed approach, McLuhan quotes a statement from "General" David Sarnoff (head of RCA), calling it the "voice of the current somnambulism". Each medium "adds itself on to what we already are", realizing "amputations and extensions" to our senses and bodies, shaping them in a new technical form. As appealing as this remaking of ourselves may seem, it really puts us in a "narcissistic hypnosis" that prevents us from seeing the real nature of the media. McLuhan also says that a characteristic of every medium is that its content is always another (previous) medium. For an example in the new millennium, the Internet is a medium whose content is various media which came before it—the printing press, radio and the moving image.
An often overlooked but constantly repeated point of McLuhan's is that moral judgement (for better or worse) of an individual using media is very difficult, because of the psychic effects media have on society and their users. Moreover, media and technology, for McLuhan, are not necessarily inherently "good" or "bad" but bring about great change in a society's way of life. Awareness of the changes is what McLuhan seemed to consider most important, so that, in his estimation, the only sure disaster would be a society not perceiving a technology's effects on their world, especially the chasms and tensions between generations.
The only possible way to discern the real "principles and lines of force" of a medium (or structure) is to stand aside from it and be detached from it. This is necessary to avoid the powerful ability of any medium to put the unwary into a "subliminal state of Narcissus trance", imposing "its own assumptions, bias, and values" on him. Instead, while in a detached position, one can predict and control the effects of the medium. This is difficult because "the spell can occur immediately upon contact, as in the first bars of a melody". One historical example of such detachment is Alexis de Tocqueville and the medium of typography; he was in such a position because he was highly literate. A historical example of the opposite, the embrace of technological assumptions, happened in the Western world, which, heavily influenced by literacy, took its principles of "uniform and continuous and sequential" for the actual meaning of "rational".
McLuhan argues that media are languages, with their own structures and systems of grammar, and that they can be studied as such. He believed that media have effects in that they continually shape and re-shape the ways in which individuals, societies, and cultures perceive and understand the world. In his view, the purpose of media studies is to make visible what is invisible: the effects of media technologies themselves, rather than simply the messages they convey. Media studies therefore, ideally, seeks to identify patterns within a medium and in its interactions with other media. Based on his studies in New Criticism, McLuhan argued that technologies are to words as the surrounding culture is to a poem: the former derive their meaning from the context formed by the latter. Like Harold Innis, McLuhan looked to the broader culture and society within which a medium conveys its messages to identify patterns of the medium's effects.
"Hot" and "cool" media
In the first part of Understanding Media, McLuhan also states that different media invite different degrees of participation on the part of a person who chooses to consume a medium. Some media, such as film, were "hot" - that is, they enhance one single sense, in this case vision, in such a manner that a person does not need to exert much effort in filling in the details of a movie image. McLuhan contrasted this with "cool" TV, which he claimed requires more effort on the part of the viewer to determine meaning, and comics, which due to their minimal presentation of visual detail require a high degree of effort to fill in details that the cartoonist may have intended to portray. A movie is thus said by McLuhan to be "hot", intensifying one single sense in "high definition" and demanding a viewer's attention, and a comic book to be "cool" and "low definition", requiring much more conscious participation by the reader to extract value.
"Any hot medium allows of less participation than a cool one, as a lecture makes for less participation than a seminar, and a book for less than a dialogue."
Hot media usually, but not always, provide complete involvement without considerable stimulus. For example, print occupies visual space, uses visual senses, but can immerse its reader. Hot media favour analytical precision, quantitative analysis and sequential ordering, as they are usually sequential, linear and logical. They emphasize one sense (for example, of sight or sound) over the others. For this reason, hot media also include radio, as well as film, the lecture and photography.
Cool media, on the other hand, are usually, but not always, those that provide little involvement with substantial stimulus. They require more active participation on the part of the user, including the perception of abstract patterning and simultaneous comprehension of all parts. Therefore, according to McLuhan cool media include television, as well as the seminar and cartoons. McLuhan describes the term "cool media" as emerging from jazz and popular music and, in this context, is used to mean "detached".
Critiques of Understanding Media
Some theorists have attacked McLuhan's definition and treatment of the word "medium" for being too simplistic. Umberto Eco, for instance, contends that McLuhan's medium conflates channels, codes, and messages under the overarching term of the medium, confusing the vehicle, internal code, and content of a given message in his framework.
In Media Manifestos, Régis Debray also takes issue with McLuhan's envisioning of the medium. Like Eco, he too is ill at ease with this reductionist approach, summarizing its ramifications as follows:
The list of objections could be and has been lengthened indefinitely: confusing technology itself with its use of the media makes of the media an abstract, undifferentiated force and produces its image in an imaginary "public" for mass consumption; the magical naivete of supposed causalities turns the media into a catch-all and contagious "mana"; apocalyptic millenarianism invents the figure of a homo mass-mediaticus without ties to historical and social context, and so on.
Furthermore, when Wired interviewed him in 1995, Debray stated that he views McLuhan "more as a poet than a historian, a master of intellectual collage rather than a systematic analyst.... McLuhan overemphasizes the technology behind cultural change at the expense of the usage that the messages and codes make of that technology."
Dwight Macdonald, in turn, reproached McLuhan for his focus on television and for his "aphoristic" style of prose, which he believes left Understanding Media filled with "contradictions, non-sequiturs, facts that are distorted and facts that are not facts, exaggerations, and chronic rhetorical vagueness".
Additionally, Brian Winston’s Misunderstanding Media, published in 1986, chides McLuhan for what he sees as his technologically deterministic stances. Raymond Williams and James W. Carey further this point of contention, claiming:
The work of McLuhan was a particular culmination of an aesthetic theory which became, negatively, a social theory ... It is an apparently sophisticated technological determinism which has the significant effect of indicating a social and cultural determinism ... If the medium - whether print or television – is the cause, of all other causes, all that men ordinarily see as history is at once reduced to effects. (Williams 1990, 126/7)
David Carr states that there has been a long line of "academics who have made a career out of deconstructing McLuhan’s effort to define the modern media ecosystem", whether it be due to what they see as McLuhan's ignorance toward socio-historical context or the style of his argument.
While some critics have taken issue with McLuhan's writing style and mode of argument, McLuhan himself urged readers to think of his work as "probes" or "mosaics" offering a toolkit approach to thinking about the media. His eclectic writing style has also been praised for its postmodern sensibilities and suitability for virtual space.
Exploring theories
McLuhan's theories about "the medium is the message" link culture and society. A recurrent topic is the contrast between oral cultures and print culture.
Each new form of media, according to the analysis of McLuhan, shapes messages differently, thereby requiring new filters to be engaged in the experience of viewing and listening to those messages.
McLuhan argues that as "sequence yields to the simultaneous, one is in the world of the structure and of configuration". The main example is the passage from mechanization (processes fragmented into sequences, lineal connections) to electric speed (faster up to simultaneity, creative configuration, structure, total field).
Howard Rheingold comments upon McLuhan's "the medium is the message" in relation to the convergence of technology, specifically the computer. In his book Tools for Thought, Rheingold explains the notion of the universal machine: the original conception of the computer. He suggests that eventually computers will operate not merely on information but on knowledge, in effect thinking. If in the future computers (the medium) are everywhere, then what becomes of McLuhan's message?
Historical examples
According to McLuhan, the French Revolution and American Revolution happened under the push of print whereas the preexistence of a strong oral culture in Britain prevented such an effect.
Microeconomics | Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the economy as a whole, which is studied in macroeconomics.
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results.
While microeconomics focuses on firms and individuals, macroeconomics focuses on the total of economic activity, dealing with the issues of growth, inflation, and unemployment—and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations—i.e., based upon basic assumptions about micro-level behavior.
Assumptions and definitions
Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).
Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive.
The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible since there is no guarantee that the resulting utility function would be differentiable.
Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of local non-satiation (LNS), there is no guarantee that a consumer could always raise their utility by adjusting consumption, and thus no guarantee that a rational individual would exhaust their budget. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed.
The utility maximization problem is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, economists use the utility maximization problem to explain not only what and how individuals choose, but also why individuals make the choices they do.
The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists: since the budget set is compact (closed and bounded) and the utility function is continuous, a maximum exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.
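To make the utility maximization problem concrete, the following is a minimal Python sketch under an assumed Cobb-Douglas utility function; the functional form U(x, y) = x^a * y^(1-a) and the numbers for prices and income are hypothetical choices for illustration, not taken from the text. For this form the Walrasian demands have the closed-form solution x* = a*m/p1 and y* = (1-a)*m/p2, which the sketch verifies with a brute-force search along the budget line.

    # Sketch of the utility maximization problem (UMP) for two goods.
    # Assumed Cobb-Douglas utility: U(x, y) = x**a * y**(1 - a); all numbers hypothetical.

    def utility(x, y, a):
        return x**a * y**(1 - a)

    def walrasian_demand(a, p1, p2, m):
        """Closed-form utility-maximizing bundle on the budget line p1*x + p2*y = m."""
        return a * m / p1, (1 - a) * m / p2

    a, p1, p2, m = 0.3, 2.0, 5.0, 100.0   # preference parameter, prices, income
    x_star, y_star = walrasian_demand(a, p1, p2, m)

    # Brute-force check: no affordable bundle that exhausts the budget should
    # yield higher utility than the closed-form solution.
    best_u, best_x = max(
        (utility(x, (m - p1 * x) / p2, a), x)
        for x in (i * 0.05 for i in range(1, 1000))   # x in (0, 50)
    )
    print(x_star, y_star)    # 15.0 14.0
    print(round(best_x, 2))  # 15.0: the grid optimum agrees with the formula

Under these assumed numbers the consumer spends the share a of income on the first good, the defining property of Cobb-Douglas demand; local non-satiation is reflected in restricting the search to bundles that spend all of m.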
The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as primitive. This model of microeconomic theory is referred to as revealed preference theory.
The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply relationship for a good. However, the theory works well in situations meeting these assumptions.
Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed.
This is studied in the field of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. In positive economics, claims of market failure carry limited implications on their own, unless the economist's normative beliefs are mixed with the theory.
The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.
Allocation of scarce resources
Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce, weighing the costs of labor, materials, and capital against potential profit margins. Consumers choose the goods and services they want that will maximize their happiness, taking into account their limited wealth.
The government can make these allocation decisions, or they can be made independently by consumers and firms. For example, in the former Soviet Union, the government played a part in telling car manufacturers which cars to produce and determining which consumers would gain access to a car.
History
Economists commonly consider themselves microeconomists or macroeconomists. The difference between microeconomics and macroeconomics likely was introduced in 1933 by the Norwegian economist Ragnar Frisch, the co-recipient of the first Nobel Memorial Prize in Economic Sciences in 1969. However, Frisch did not actually use the word "microeconomics", instead drawing distinctions between "micro-dynamic" and "macro-dynamic" analysis in a way similar to how the words "microeconomics" and "macroeconomics" are used today. The first known use of the term "microeconomics" in a published article was from Pieter de Wolff in 1941, who broadened the term "micro-dynamics" into "microeconomics".
Microeconomic theory
Consumer demand theory
Consumer demand theory relates preferences for the consumption of both goods and services to the consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption and the demand curve is one of the most closely studied relations in economics. It is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints.
Production theory
Production theory is the study of production, or the economic process of converting inputs into outputs. Production uses resources to create a good or service that is suitable for use, gift-giving in a gift economy, or exchange in a market economy. This can include manufacturing, storing, shipping, and packaging. Some economists define production broadly as all economic activity other than consumption. They see every commercial activity other than the final purchase as some form of production.
Cost-of-production theory of value
The cost-of-production theory of value states that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production (including labor, capital, or land) and taxation. Technology can be viewed either as a form of fixed capital (e.g. an industrial plant) or circulating capital (e.g. intermediate goods).
In the mathematical model for the cost of production, the short-run total cost is equal to fixed cost plus total variable cost. The fixed cost refers to the cost that is incurred regardless of how much the firm produces. The variable cost is a function of the quantity of an object being produced. The cost function can be used to characterize production through the duality theory in economics, developed mainly by Ronald Shephard (1953, 1970) and other scholars (Sickles & Zelenyuk, 2019, ch. 2).
Fixed and variable costs
Fixed cost (FC) – This cost does not change with output. It includes business expenses such as rent, salaries and utility bills.
Variable cost (VC) – This cost changes as output changes. This includes raw materials, delivery costs and production supplies.
Over a short time period (a few months), most costs are fixed costs, as the firm will have to pay for salaries, contracted shipment, and materials used to produce various goods. Over a longer time period (2–3 years), costs can become variable: firms can decide to reduce output, purchase fewer materials, and even sell some machinery. Over 10 years, most costs become variable, as workers can be laid off or new machinery can be bought to replace the old machinery.
Sunk costs – This is a fixed cost that has already been incurred and cannot be recovered. An example is R&D spending, as in the pharmaceutical industry: hundreds of millions of dollars are spent to achieve new drug breakthroughs, but this is challenging as it is increasingly harder to find new breakthroughs and to meet tighter regulation standards. Thus many projects are written off, leading to losses of millions of dollars.
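As a minimal illustration of these cost identities (short-run total cost equals fixed cost plus variable cost, and average cost falls as the fixed cost is spread over more output), the following Python sketch uses invented figures; the rent and per-unit material cost below are hypothetical assumptions, not values from any source.

    # Sketch of short-run costs: total cost = fixed cost + variable cost.
    # All figures are hypothetical.

    FIXED_COST = 10_000.0  # e.g. rent and salaries, incurred regardless of output

    def variable_cost(q):
        """Variable cost grows with output; here linearly at 25 per unit."""
        return 25.0 * q

    def total_cost(q):
        return FIXED_COST + variable_cost(q)

    def average_cost(q):
        return total_cost(q) / q

    for q in (100, 1_000, 10_000):
        # Average cost falls with scale as the fixed cost is spread more thinly.
        print(q, total_cost(q), round(average_cost(q), 2))

At 100 units the average cost is 125.0 per unit, while at 10,000 units it falls to 26.0, approaching the 25.0 variable cost per unit as the fixed cost is diluted.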
Opportunity cost
Opportunity cost is closely related to the idea of time constraints. One can do only one thing at a time, which means that, inevitably, one is always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing one may have done instead. Opportunity cost depends only on the value of the next-best alternative. It does not matter whether one has five alternatives or 5,000.
Opportunity costs can tell when not to do something as well as when to do something. For example, one may like waffles, but like chocolate even more. If someone offers only waffles, one would take them. But if offered waffles or chocolate, one would take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefits of eating the waffles, it makes no sense to choose waffles. Of course, if one chooses chocolate, they are still faced with the opportunity cost of giving up having waffles. But one is willing to do that because the waffle's opportunity cost is lower than the benefits of the chocolate. Opportunity costs are unavoidable constraints on behavior because one has to decide what is best and give up the next-best alternative.
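The logic of the next-best alternative can be sketched in a few lines of Python; the activity values below are hypothetical utilities invented for illustration.

    # Sketch: opportunity cost as the value of the next-best alternative.
    # The values assigned to each option are hypothetical.

    values = {"waffles": 5, "chocolate": 8, "fruit": 3}

    best = max(values, key=values.get)
    # The opportunity cost of the chosen option is the value of the runner-up.
    opportunity_cost = max(v for k, v in values.items() if k != best)

    print(best)              # chocolate
    print(opportunity_cost)  # 5: choosing chocolate forgoes the waffles

Adding further low-valued alternatives to the dictionary would leave the result unchanged, matching the point that opportunity cost depends only on the next-best alternative, not on how many alternatives there are.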
Price theory
Microeconomics is also known as price theory, to highlight the significance of prices in relation to buyers and sellers, as these agents determine prices through their individual actions. Price theory is a field of economics that uses the supply and demand framework to explain and predict human behavior. It is associated with the Chicago School of Economics. Price theory studies competitive equilibrium in markets to yield testable hypotheses that can be rejected.
Price theory is not the same as microeconomics. Strategic behavior, such as the interactions among sellers in a market with few sellers, is a significant part of microeconomics but is not emphasized in price theory. Price theorists focus on competition, believing it to be a reasonable description of most markets that leaves room to study additional aspects of tastes and technology. As a result, price theory tends to use less game theory than microeconomics does.
Price theory focuses on how agents respond to prices, but its framework can be applied to a wide variety of socioeconomic issues that might not seem to involve prices at first glance. Price theorists have influenced several other fields including developing public choice theory and law and economics. Price theory has been applied to issues previously thought of as outside the purview of economics such as criminal justice, marriage, and addiction.
Microeconomic models
Supply and demand
Supply and demand is an economic model of price determination in a perfectly competitive market. It concludes that in a perfectly competitive market with no externalities, per unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium.
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph relating price and quantity demanded. Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized ability of each individual consumer to rank different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases the ability to buy (the income effect). Other factors can also change demand; for example, an increase in income will shift the demand curve for a normal good outward relative to the origin. All other determinants of demand are taken as constant for a given analysis of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply curve can shift, say from a change in the price of a productive input or a technical improvement. The law of supply states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as the price of substitutes, the cost of production, the technology applied, and the various inputs of production, are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded: the intersection of the supply and demand curves. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded; this is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded; this pushes the price down. The model of supply and demand predicts that, for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand or in supply.
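For linear demand and supply schedules, this equilibrium can be computed directly. The following is a minimal sketch, assuming hypothetical coefficients for a demand curve Qd = a - bP and a supply curve Qs = c + dP (all names and numbers are illustrative only):

# Equilibrium of linear demand Qd = a - b*P and supply Qs = c + d*P.
# Setting Qd = Qs gives P* = (a - c) / (b + d). All coefficients are hypothetical.

def equilibrium(a, b, c, d):
    p_star = (a - c) / (b + d)   # price where quantity demanded equals quantity supplied
    q_star = a - b * p_star      # equilibrium quantity
    return p_star, q_star

p, q = equilibrium(a=100, b=2, c=10, d=3)
print(f"equilibrium price = {p:.2f}, quantity = {q:.2f}")  # 18.00, 64.00

# At a price below P*, quantity demanded exceeds quantity supplied (a shortage),
# which bids the price up, as described above:
low = p - 2
print("shortage:", (100 - 2 * low) - (10 + 3 * low))  # 10.0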
For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium.
On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and over-time and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market.
Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing.
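For a price-taking firm, this marginal rule can be made concrete numerically: expand output while the price (marginal revenue) exceeds the incremental cost of the next unit, and stop where marginal profit reaches zero. A minimal sketch, assuming a hypothetical cost function C(q) = 10 + 2q + 0.5q^2, so that marginal cost is MC(q) = 2 + q:

# Price taker: marginal revenue equals the market price.
# The cost function below is a hypothetical example with MC(q) = 2 + q.

price = 12.0

def cost(q):
    return 10 + 2 * q + 0.5 * q ** 2

q = 0
while price > cost(q + 1) - cost(q):  # produce the next unit only while it adds profit
    q += 1

print(f"optimal output = {q}")                # 10, where MC(q) = 2 + q = price
print(f"profit = {price * q - cost(q):.1f}")  # 40.0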
Other applications of demand and supply include the distribution of income among the factors of production, including labor and capital, through factor markets. In a competitive labor market for example the quantity of labor employed and the price of labor (the wage rate) depends on the demand for labor (from employers for production) and supply of labor (from potential workers). Labor economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labor income, labor mobility, and (un)employment, productivity through human capital, and related public-policy issues.
Demand-and-supply analysis is used to explain the behavior of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources.
Market structure
Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems.
Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning.
Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. Building codes are one example: in a purely competition-regulated market system, several horrific injuries or deaths might be required before companies began improving structural safety, because consumers may at first be neither concerned nor aware enough about safety issues to put pressure on companies, and companies would be motivated not to provide proper safety features given how doing so would cut into their profits.
The concept of "market type" is different from the concept of "market structure". Nevertheless, there are a variety of types of markets.
The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production, specifically the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm.
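Those relationships can be generated from any total cost function. Below is a minimal sketch, assuming a hypothetical cubic cost function (the textbook case producing U-shaped average cost curves); it illustrates the standard result that marginal cost crosses average variable cost at its minimum point:

# Cost curves from a hypothetical total cost function C(q) = 50 + 4q - 0.5q^2 + 0.05q^3.
# AFC = fixed cost / q, AVC = variable cost / q, ATC = total cost / q, MC = dC/dq.

FIXED = 50.0

def total_cost(q):
    return FIXED + 4 * q - 0.5 * q ** 2 + 0.05 * q ** 3

def marginal_cost(q):
    return 4 - 1.0 * q + 0.15 * q ** 2  # derivative of total_cost

for q in range(1, 11):
    c = total_cost(q)
    print(f"q={q:2d}  MC={marginal_cost(q):5.2f}  "
          f"AVC={(c - FIXED) / q:5.2f}  ATC={c / q:5.2f}  AFC={FIXED / q:5.2f}")

# At q = 5, MC = AVC = 2.75: marginal cost crosses average variable cost
# exactly at its minimum, the textbook relationship among these curves.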
Perfect competition
Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are "price takers": they do not have enough market power to profitably increase the price of their goods or services. A good example is a digital marketplace such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products being sold.
Imperfect competition
Imperfect competition is a type of market structure showing some but not all features of competitive markets. Under perfect competition, market power is not achievable, because the large number of producers creates intense competition; prices are therefore driven down to the level of marginal cost. In a monopoly, market power is achieved by a single firm, leading to prices higher than the marginal-cost level.
Between these two types of markets are firms that are neither perfectly competitive nor monopolistic. Firms such as Pepsi and Coke in the cola industry, and Sony, Nintendo, and Microsoft in the video game industry, dominate their respective markets. These firms are in imperfect competition.
Monopolistic competition
Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities.
Monopoly
A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service. Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are harmful, especially in industries where multiple firms would result in more costs than benefits (i.e., natural monopolies).
Natural monopoly: A monopoly in an industry where one producer can produce output at a lower cost than many small producers.
Oligopoly
An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition leading to higher prices for consumers and less overall market output. Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns.
Duopoly: A special case of an oligopoly, with only two firms. Game theory can elucidate behavior in duopolies and oligopolies.
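A minimal sketch of the kind of duopoly analysis game theory supports: a Cournot model in which two firms choose quantities, with a hypothetical linear inverse demand P = a - (q1 + q2) and a constant marginal cost c. Each firm repeatedly best-responds to its rival until quantities converge to the Nash equilibrium (all parameter values are illustrative assumptions):

# Cournot duopoly: inverse demand P = a - (q1 + q2), constant marginal cost c.
# Firm i maximizes (P - c) * q_i, giving the best response q_i = (a - c - q_j) / 2.
# Parameters are hypothetical; iterated best responses converge to the Nash equilibrium.

a, c = 120.0, 20.0

def best_response(q_other):
    return max(0.0, (a - c - q_other) / 2)

q1 = q2 = 0.0
for _ in range(50):
    q1 = best_response(q2)
    q2 = best_response(q1)

price = a - (q1 + q2)
print(f"q1 = {q1:.2f}, q2 = {q2:.2f}, price = {price:.2f}")
# Analytic equilibrium: q_i = (a - c) / 3 = 33.33, price = 53.33 -- above
# marginal cost (20, the competitive benchmark) but below the monopoly price (70).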
Monopsony
A monopsony is a market where there is only one buyer and many sellers.
Bilateral monopoly
A bilateral monopoly is a market consisting of both a monopoly (a single seller) and a monopsony (a single buyer).
Oligopsony
An oligopsony is a market where there are a few buyers and many sellers.
Game theory
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. The term "game" here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
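The workhorse solution concept in these applications is the Nash equilibrium: a strategy profile in which no player can gain by deviating unilaterally. Below is a minimal sketch that finds the pure-strategy equilibria of a two-player game, using a hypothetical pricing game with the structure of a prisoner's dilemma (all payoff numbers are illustrative assumptions):

# Pure-strategy Nash equilibria of a 2x2 game given as a payoff table.
# payoffs[i][j] = (row player's payoff, column player's payoff); numbers are hypothetical.

ACTIONS = ["high price", "low price"]
payoffs = [
    [(10, 10), (2, 14)],  # row firm sets a high price
    [(14, 2), (5, 5)],    # row firm sets a low price
]

def pure_nash(payoffs):
    equilibria = []
    for i in range(2):
        for j in range(2):
            row_ok = all(payoffs[i][j][0] >= payoffs[k][j][0] for k in range(2))
            col_ok = all(payoffs[i][j][1] >= payoffs[i][k][1] for k in range(2))
            if row_ok and col_ok:  # neither player profits from a unilateral deviation
                equilibria.append((ACTIONS[i], ACTIONS[j]))
    return equilibria

print(pure_nash(payoffs))
# [('low price', 'low price')]: both firms undercut even though both would be
# better off at ('high price', 'high price') -- the logic behind cartel instability.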
Information economics
Information economics is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics: it is easy to create but hard to trust; it is easy to spread but hard to control; and it influences many decisions. These special characteristics (as compared with those of other types of goods) complicate many standard economic theories. The economics of information has recently become of great interest to many, possibly due to the rise of information-based companies in the technology industry. From a game-theoretic approach, the usual assumption that agents have complete information can be relaxed to examine the consequences of incomplete information. This gives rise to many results that are applicable to real-life situations. For example, relaxing this assumption makes it possible to scrutinize the actions of agents in situations of uncertainty, and to understand more fully the impacts, both positive and negative, of agents seeking out or acquiring information.
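A classic illustration of what relaxing the complete-information assumption yields is adverse selection, in the spirit of Akerlof's "market for lemons". Below is a minimal sketch with hypothetical numbers: buyers cannot observe quality and so will pay only the expected value of the goods actually offered, which can drive the highest-quality goods out of the market:

# Adverse-selection sketch (hypothetical values, in the spirit of Akerlof's lemons model).
# Sellers know their own car's quality; buyers only know the distribution of cars
# offered for sale and pay the average quality of that pool.

seller_values = [1000, 3000, 5000, 7000, 9000]  # hypothetical qualities

offered = list(seller_values)
while offered:
    buyer_price = sum(offered) / len(offered)             # buyers pay expected quality
    remaining = [v for v in offered if v <= buyer_price]  # better cars are withdrawn
    if remaining == offered:
        break
    offered = remaining

print("cars still traded:", offered)  # [1000]
# Each round the price reflects average quality, the best remaining sellers exit,
# and the market unravels until only the lowest-quality goods trade.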
Applied
Applied microeconomics includes a range of specialized areas of study, many of which draw on methods from other fields.
Economic history examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science.
Education economics examines the organization of education provision and its implication for efficiency and equity, including the effects of education on productivity.
Financial economics examines topics such as the structure of optimal portfolios, the rate of return to capital, econometric analysis of security returns, and corporate financial behavior.
Health economics examines the organization of health care systems, including the role of the health care workforce and health insurance programs.
Industrial organization examines topics such as the entry and exit of firms, innovation, and the role of trademarks.
Law and economics applies microeconomic principles to the selection and enforcement of competing legal regimes and their relative efficiencies.
Political economy examines the role of political institutions in determining policy outcomes.
Public economics examines the design of government tax and expenditure policies and economic effects of these policies (e.g., social insurance programs).
Urban economics, which examines the challenges faced by cities, such as sprawl, air and water pollution, traffic congestion, and poverty, draws on the fields of urban geography and sociology.
Labor economics examines primarily labor markets, but comprises a large range of public policy issues such as immigration, minimum wages, or inequality.
See also
Macroeconomics
First-order approach
Critique of political economy
References
Further reading
Bouman, John: Principles of Microeconomics – free fully comprehensive Principles of Microeconomics and Macroeconomics texts. Columbia, Maryland, 2011
Colander, David. Microeconomics. McGraw-Hill Paperback, 7th ed.: 2008.
Eaton, B. Curtis; Eaton, Diane F.; and Douglas W. Allen. Microeconomics. Prentice Hall, 5th ed.: 2002.
Frank, Robert H.; Microeconomics and Behavior. McGraw-Hill/Irwin, 6th ed.: 2006.
Friedman, Milton. Price Theory. Aldine Transaction: 1976
Hagendorf, Klaus: Labour Values and the Theory of the Firm. Part I: The Competitive Firm. Paris: EURODOS; 2009.
Hicks, John R. Value and Capital. Clarendon Press. [1939] 1946, 2nd ed.
Hirshleifer, Jack; Glazer, Amihai; and Hirshleifer, David. Price Theory and Applications: Decisions, Markets, and Information. Cambridge University Press, 7th ed.: 2005.
Jaffe, Sonia; Minton, Robert; Mulligan, Casey B.; and Murphy, Kevin M.: Chicago Price Theory. Princeton University Press, 2019
Jehle, Geoffrey A.; and Philip J. Reny. Advanced Microeconomic Theory. Addison Wesley Paperback, 2nd ed.: 2000.
Katz, Michael L.; and Harvey S. Rosen. Microeconomics. McGraw-Hill/Irwin, 3rd ed.: 1997.
Kreps, David M. A Course in Microeconomic Theory. Princeton University Press: 1990
Landsburg, Steven. Price Theory and Applications. South-Western College Pub, 5th ed.: 2001.
Mankiw, N. Gregory. Principles of Microeconomics. South-Western Pub, 2nd ed.: 2000.
Mas-Colell, Andreu; Whinston, Michael D.; and Jerry R. Green. Microeconomic Theory. Oxford University Press, US: 1995.
McGuigan, James R.; Moyer, R. Charles; and Frederick H. Harris. Managerial Economics: Applications, Strategy and Tactics. South-Western Educational Publishing, 9th ed.: 2001.
Nicholson, Walter. Microeconomic Theory: Basic Principles and Extensions. South-Western College Pub, 8th ed.: 2001.
Perloff, Jeffrey M. Microeconomics. Pearson – Addison Wesley, 4th ed.: 2007.
Perloff, Jeffrey M. Microeconomics: Theory and Applications with Calculus. Pearson – Addison Wesley, 1st ed.: 2007
Pindyck, Robert S.; and Daniel L. Rubinfeld. Microeconomics. Prentice Hall, 7th ed.: 2008.
Ruffin, Roy J.; and Paul R. Gregory. Principles of Microeconomics. Addison Wesley, 7th ed.: 2000.
Varian, Hal R. (1987). "microeconomics," The New Palgrave: A Dictionary of Economics, v. 3, pp. 461–463.
Varian, Hal R. Intermediate Microeconomics: A Modern Approach. W. W. Norton & Company, 8th ed.: 2009.
Varian, Hal R. Microeconomic Analysis. W.W. Norton & Company, 3rd ed.: 1992.
The Economic Times (2023). "What is Microeconomics". https://economictimes.indiatimes.com/definition/microeconomics.
External links
X-Lab: A Collaborative Micro-Economics and Social Sciences Research Laboratory
Simulations in Microeconomics
A brief history of microeconomics
Cognitivism (psychology) | In psychology, cognitivism is a theoretical framework for understanding the mind that gained credence in the 1950s. The movement was a response to behaviorism, which cognitivists said neglected to explain cognition. Cognitive psychology derived its name from the Latin cognoscere, referring to knowing and information, thus cognitive psychology is an information-processing psychology derived in part from earlier traditions of the investigation of thought and problem solving.
Behaviorists acknowledged the existence of thinking but identified it as a behavior. Cognitivists argued that the way people think impacts their behavior and therefore cannot be a behavior in and of itself. Cognitivists later claimed that thinking is so essential to psychology that the study of thinking should become its own field. However, cognitivists typically presuppose a specific form of mental activity, of the kind advanced by computationalism.
Cognitivism has more recently been challenged by postcognitivism.
Cognitive development
The process of assimilating knowledge and expanding our intellectual horizons is termed cognitive development. We have a complex physiological structure that absorbs a variety of stimuli from the environment, stimuli being the interactions that are able to produce knowledge and skills. Parents impart knowledge informally in the home, while teachers impart it formally in school. Knowledge should be pursued with zest and zeal; if not, learning becomes a burden.
Attention
Attention is the first part of cognitive development. It pertains to a person's ability to focus and sustain concentration. Attention can also describe how focused an individual is and whether they give their full concentration to one thing. It is differentiated from other temperamental characteristics, like persistence and distractibility, in the sense that the latter modulate an individual's daily interaction with the environment, whereas attention involves behavior while performing specific tasks. Learning, for instance, takes place when the student gives attention to the teacher. Interest and effort closely relate to attention. Attention is an active process which involves numerous outside stimuli. The attention of an organism at any point in time involves three concentric circles: beyond awareness, margin, and focus. Individuals have a limited mental capacity; there are only so many things someone can focus on at one time.
A theory of cognitive development called information processing holds that memory and attention are the foundation of cognition. It is suggested that children's attention is initially selective and is based on situations that are important to their goals. This capacity increases as the child grows older, since they become more able to absorb the stimuli of tasks. Another conceptualization classifies attention into mental attention and perceptual attention. The former is described as the executive-driven attentional "brain energy" that activates task-relevant processes in the brain, while the latter is immediate or spontaneous attention driven by novel perceptual experiences.
Process of learning
Cognitive theory mainly stresses the acquisition of knowledge and growth of the mental structure. Cognitive theory tends to focus on conceptualizing the student's learning process: how information is received; how information is processed and organized into existing schema; how information is retrieved upon recall. In other words, cognitive theory seeks to explain the process of knowledge acquisition and the subsequent effects on the mental structures within the mind. Learning is not about the mechanics of what a learner does, but rather a process depending on what the learner already knows (existing information) and their method of acquiring new knowledge (how they integrate new information into their existing schemas). Knowledge acquisition is an activity consisting of internal codification of mental structures within the student's mind. Inherent to the theory, the student must be an active participant in their own learning process. Cognitive approaches mainly focus on the mental activities of the learner like mental planning, goal setting, and organizational strategies.
In cognitive theories, environmental factors and instructional components are not the only things that play an important role in learning. There are additional key elements, such as learning to code, transform, rehearse, store, and retrieve information. The learning process also includes the learner's thoughts, beliefs, attitudes, and values.
Role of memory
Memory plays a vital role in the learning process. Information is stored within memory in an organised, meaningful manner. Here, teachers and designers play different roles in the learning process. Teachers supposedly facilitate learning and the organization of information in an optimal way, whereas designers supposedly use advanced techniques (such as analogies, mnemonic devices, and hierarchical relationships) to help learners acquire new information to add to their prior knowledge. Forgetting is described as an inability to retrieve information from memory. Memory loss may be a mechanism for discarding situationally irrelevant information by assessing the relevance of newly acquired information.
Process of transfer
According to cognitive theory, if a learner knows how to implement knowledge in different contexts and conditions, then we can say that transfer has occurred. Understanding is composed of knowledge in the form of rules, concepts, and discriminations. Knowledge stored in memory is important, but the use of that knowledge is also important. Prior knowledge is used to identify similarities and differences between itself and novel information.
Types of learning explained in detail by this position
Cognitive theory mostly explains complex forms of learning in terms of reasoning, problem solving, and information processing. Emphasis must be placed on the fact that the goal of all the aforementioned viewpoints is the same: transferring knowledge to the student in the most efficient and effective manner possible. Simplification and standardization are two techniques used to enhance the effectiveness and efficiency of knowledge transfer: knowledge can be analysed, decomposed, and simplified into basic building blocks. In this respect there is a parallel with the behaviorist model of the knowledge-transfer environment. Cognitivists stress the importance of efficient processing strategies.
Basic principles of the cognitive theory and relevance to instructional design
A behaviorist uses feedback (reinforcement) to change the behavior in the desired direction, while the cognitivist uses the feedback for guiding and supporting the accurate mental connections.
For different reasons, learner task analyses are critical to both cognitivists and behaviorists. Cognitivists look at the learner's predisposition to learning (how does the learner activate, maintain, and direct their learning?). Additionally, cognitivists examine how to design instruction so that it can be assimilated (i.e., what are the learner's existing mental structures?). In contrast, behaviorists look to determine where the lesson should begin (i.e., at what level are learners performing successfully?) and which reinforcements are most effective (i.e., which consequences are most desired by the learner?).
There are some specific assumptions or principles that direct the instructional design: active involvement of the learner in the learning process, learner control, metacognitive training (e.g., self-planning, monitoring, and revising techniques), the use of hierarchical analyses to identify and illustrate prerequisite relationships (cognitive task analysis procedure), facilitating optimal processing of structuring, organizing and sequencing information (use of cognitive strategies such as outlining, summaries, synthesizers, advance organizers etc.), encouraging the students to make connections with previously learned material, and creating learning environments (recall of prerequisite skills; use of relevant examples, analogies).
Structuring instruction
Cognitive theories emphasize making knowledge meaningful and helping learners organize and relate new information to existing knowledge in memory. To be effective, instruction should be based on students' existing schemata or mental structures, and new information should be connected to existing knowledge in some meaningful way. Examples of cognitive strategies include the use of analogies and metaphors, framing, outlining, mnemonics, concept mapping, advance organizers, and so forth. Cognitive theory mainly emphasizes the major tasks of the teacher/designer, which include analyzing the learning experiences each learning situation offers, since these can affect the learning outcomes of different individuals, as well as:
Organizing and structuring new information to connect with the learner's previously acquired knowledge, abilities, and experiences.
Ensuring that the new information is effectively and efficiently assimilated or accommodated within the learner's cognitive structure.
Theoretical approach
Cognitivism has two major components, one methodological, the other theoretical. Methodologically, cognitivism adopts a positivist approach and holds that psychology can (in principle) be fully explained by use of the scientific method; there is speculation over whether this is true. This is also largely a reductionist goal, resting on the belief that individual components of mental function (the "cognitive architecture") can be identified and meaningfully understood. The theoretical component holds that cognition consists of discrete, internal mental states (representations or symbols) that can be manipulated using rules or algorithms.
Cognitivism became the dominant force in psychology in the late 20th century, replacing behaviorism as the most popular paradigm for understanding mental function. This shift was due to increasing criticism, towards the end of the 1950s, of simplistic learning models. Cognitive psychology is not a wholesale refutation of behaviorism, but rather an expansion that accepts that mental states exist. One of the most notable criticisms was Noam Chomsky's argument that language could not be acquired purely through conditioning, and must be at least partly explained by the existence of internal mental states.
The main issues that interest cognitive psychologists are the inner mechanisms of human thought and the processes of knowing. Cognitive psychologists have attempted to shed some light on the alleged mental structures that stand in a causal relationship to our physical actions.
Criticisms of psychological cognitivism
In the 1990s, various new theories emerged that challenged cognitivism and the idea that thought was best described as computation. Some of these new approaches, often influenced by phenomenological and postmodern philosophy, include situated cognition, distributed cognition, dynamicism and embodied cognition. Some thinkers working in the field of artificial life (for example Rodney Brooks) have also produced non-cognitivist models of cognition. On the other hand, much of early cognitive psychology, and the work of many currently active cognitive psychologists, does not treat cognitive processes as computational.
The idea that mental functions can be described as information processing models has been criticised by philosopher John Searle and mathematician Roger Penrose who both argue that computation has some inherent shortcomings which cannot capture the fundamentals of mental processes.
Penrose uses Gödel's incompleteness theorem (which states that there are mathematical truths which can never be proven in a sufficiently strong mathematical system; any sufficiently strong system of axioms will also be incomplete) and Turing's halting problem (which states that there are some things which are inherently non-computable) as evidence for his position.
Searle has developed two arguments. The first (well known through his Chinese room thought experiment) is the "syntax is not semantics" argument: a program is just syntax, while understanding requires semantics; therefore programs (and hence cognitivism) cannot explain understanding. Such an argument presupposes the controversial notion of a private language. The second, which Searle now prefers but which is less well known, is his "syntax is not physics" argument: nothing in the world is intrinsically a computer program except as applied, described, or interpreted by an observer. Either everything can be described as a computer, in which case a brain trivially can be, but then this explains no specific mental processes; or there is nothing intrinsic in a brain that makes it a computer (program). Many oppose these views and have criticized his arguments, creating significant disagreement. Both points, Searle claims, refute cognitivism.
Another argument against cognitivism is the problems of Ryle's Regress or the homunculus fallacy. Cognitivists have offered a number of arguments attempting to refute these attacks.
See also
References
Further reading
Costall, A. and Still, A. (eds) (1987) Cognitive Psychology in Question. Brighton: Harvester Press Ltd.
Searle, J. R. "Is the Brain a Digital Computer?" APA Presidential Address.
Wallace, B., Ross, A., Davies, J.B., and Anderson, T. (eds) (2007) The Mind, the Body and the World: Psychology after Cognitivism. London: Imprint Academic.
Intentional community | An intentional community is a voluntary residential community designed to foster a high degree of social cohesion and teamwork. Members typically unite around shared values, beliefs, or a common vision, which may be political, religious, spiritual, or simply focused on the practical benefits of cooperation and mutual support. While some groups emphasise shared ideologies, others are centred on enhancing social connections, sharing resources, and creating meaningful relationships.
Although intentional communities are sometimes described as alternative lifestyles or social experiments, some see them as a natural response to the isolation and fragmentation of modern housing, offering a return to the social bonds and collaborative spirit found in traditional village life.
The multitude of intentional communities includes collective households, cohousing communities, coliving, ecovillages, monasteries, survivalist retreats, kibbutzim, Hutterites, ashrams, and housing cooperatives.
History
Ashrams are likely the earliest intentional communities, founded around 1500 BCE, while Buddhist monasteries appeared around 500 BCE. Pythagoras founded an intellectual vegetarian commune in about 525 BCE in southern Italy. Hundreds of modern intentional communities were formed across Europe, North and South America, Australia, and New Zealand out of the intellectual ferment of utopianism. Intentional communities exhibit the utopian ambition to create a better, more sustainable world for living. Nevertheless, the term "utopian community" as a synonym for an intentional community may be considered pejorative, and many intentional communities do not consider themselves utopian. The alternative term "commune" is likewise considered non-neutral, or even linked to leftist politics or hippies.
Synonyms and definitions
Additional terms referring to an intentional community can be alternative lifestyle, intentional society, cooperative community, withdrawn community, enacted community, socialist colony, communistic society, collective settlement, communal society, commune, mutualistic community, communitarian experiment, experimental community, utopian experiment, practical utopia, and utopian society.
Variety
The purposes of intentional communities vary and may be political, spiritual, economic, or environmental. In addition to spiritual communities, secular communities also exist. One common practice, particularly in spiritual communities, is communal meals. Egalitarian values can be combined with other values. Benjamin Zablocki categorized communities this way:
Academic communities (see Living-Learning Communities)
Alternative-family communities (see Tenacious Unicorn Ranch)
Coliving communities
Cooperative communities
Countercultural communities
Egalitarian communities
Experimental communities
Political communities
Psychological communities (based on mystical or gestalt principles)
Rehabilitational communities (see Synanon)
Religious communities
Spiritual communities
Membership
Members of Christian intentional communities want to emulate the practices of the earliest believers. Using the biblical book of Acts (and, often, the Sermon on the Mount) as a model, members of these communities strive to demonstrate their faith in a corporate context, and to live out the teachings of the New Testament, practicing compassion and hospitality. Communities such as the Simple Way, the Bruderhof and Rutba House would fall into this category. Despite strict membership criteria, these communities are open to visitors and not reclusive to the extent of some other intentional communities.
A survey in the 1995 edition of the "Communities Directory", published by the Fellowship for Intentional Community (FIC), reported that 54 percent of the communities choosing to list themselves were rural, 28 percent were urban, 10 percent had both rural and urban sites, and 8 percent did not specify.
Governance
The most common form of governance in intentional communities is democratic (64 percent), with decisions made by some form of consensus decision-making or voting. A hierarchical or authoritarian structure governs 9 percent of communities, 11 percent are a combination of democratic and hierarchical structure, and 16 percent do not specify.
Core principles
The central characteristics of communes, or core principles that define communes, have been expressed in various forms over the years. The Suffolk-born radical John Goodwyn Barmby (1820–1881), subsequently a Unitarian minister, coined the term "communitarian" in 1840.
At the start of the 1970s, The New Communes author Ron E. Roberts classified communes as a subclass of a larger category of utopias. He listed three main characteristics:
First, egalitarianism – communes specifically rejected hierarchy or graduations of social status as being necessary to social order.
Second, human scale – members of some communes saw the scale of society as it was then organized as being too industrialized (or factory sized) and therefore unsympathetic to human dimensions.
Third, communes were consciously anti-bureaucratic.
Twenty-five years later, Dr. Bill Metcalf, in his edited book Shared Visions, Shared Lives, defined communes as having the following core principles:
the importance of the group as opposed to the nuclear family unit
a "common purse"
a collective household
group decision-making in general and intimate affairs
Sharing everyday life and facilities, a commune is an idealized form of family, being a new sort of "primary group" (generally with fewer than 20 people, although there are examples of much larger communes). Commune members have emotional bonds to the whole group rather than to any sub-group, and the commune is experienced with emotions that go beyond just social collectivity.
By region
With the simple definition of a commune as an intentional community with 100% income sharing, the online directory of the Fellowship for Intentional Community (FIC) lists 222 communes worldwide (28 January 2019). Some of these are religious institutions such as abbeys and monasteries. Others are based in anthroposophic philosophy, including Camphill villages that provide support for the education, employment, and daily lives of adults and children with developmental disabilities, mental health problems or other special needs.
Many cultures naturally practice communal or tribal living, and would not designate their way of life as a planned "commune" per se, though their living situation may have many characteristics of a commune.
Australia
In Australia, many intentional communities started with the hippie movement and those searching for social alternatives to the nuclear family. One of the oldest continuously running communities is the Moora Moora Co-operative Community, with about 47 members (October 2021). Located at the top of Mount Toolebewong, 65 km east of Melbourne, Victoria, at an altitude of 600–800 m, this community has been entirely off the electricity grid since its inception in 1974. Founding members still resident include Peter and Sandra Cock.
Germany
The first wave of utopian communities in Germany began during a period of rapid urbanization between 1890 and 1930. About 100 intentional communities were started, but data is unreliable. They often pursued nudism, vegetarian and organic agriculture, as well as various religious and political ideologies such as anabaptism, theosophy, anarchism, socialism and eugenics. Historically, German emigrants were also influential in the creation of intentional communities in other countries, such as the Bruderhof in the United States and the kibbutzim in Israel.
In the 1960s, there was a resurgence of communities calling themselves communes, starting with the Kommune 1 in Berlin, without knowledge of or influence by previous movements.
A large number of contemporary intentional communities define themselves as communes, and there is a network of political communes called "Kommuja" with about 40 member groups (May 2023).
In a German book on communes, Elisabeth Voß defines communes as communities which:
Live and work together
Have a communal economy, i.e., common finances and common property (land, buildings, means of production)
Have communal decision making – usually consensus decision making
Try to reduce hierarchy and hierarchical structures
Have communalization of housework, childcare and other communal tasks
Have equality between women and men
Have low ecological footprints through sharing and saving resources
Israel
Kibbutzim (singular: kibbutz) in Israel are examples of officially organized communes, the first of which were based on agriculture. Other Israeli communities are Kvutza, Yishuv Kehilati, Moshavim and Kfar No'ar. Today, there are dozens of urban communes growing in the cities of Israel, often called urban kibbutzim. The urban kibbutzim are smaller and more anarchist. Most of the urban communes in Israel emphasize social change, education, and local involvement in the cities where they live. Some of the urban communes have members who are graduates of zionist-socialist youth movements, like HaNoar HaOved VeHaLomed, HaMahanot HaOlim and Hashomer Hatsair.
Ireland
In 1831 John Vandeleur (a landlord) established a commune on his Ralahine Estate at Newmarket-on-Fergus, County Clare. Vandeleur asked Edward Thomas Craig, an English socialist, to formulate rules and regulations for the commune. It was set up with a population of 22 adult single men, 7 married women and their 7 husbands, 5 single women, 4 orphan boys and 5 children under the age of 9 years. No money was employed, only credit notes which could be used in the commune shop. All occupants were committed to a life with no alcohol, tobacco, snuff or gambling. All were required to work for 12 hours a day during the summer and from dawn to dusk in winter. The social experiment prospered for a time and 29 new members joined. However, in 1833 the experiment collapsed due to the gambling debts of John Vandeleur. The members of the commune met for the last time on 23 November 1833 and placed on record a declaration of "the contentment, peace and happiness they had experienced for two years under the arrangements introduced by Mr. Vandeleur and Mr. Craig and which through no fault of the Association was now at an end".
Russia
In imperial Russia, the vast majority of Russian peasants held their land in communal ownership within a mir community, which acted as a village government and a cooperative. The very widespread and influential pre-Soviet Russian tradition of monastic communities of both sexes could also be considered a form of communal living. After the end of communism in Russia, monastic communities have again become more common, populous and, to a lesser degree, more influential in Russian society. Various patterns of collective Russian behavior, such as toloka (толока), pomochi (помочи), and the artel (артель), are also based on communal ("мирские") traditions.
In the years immediately following the revolutions of 1917 Tolstoyan communities proliferated in Russia, but later they were eventually wiped out or stripped of their independence as collectivisation and ideological purges got under way in the late 1920s. Colonies, such as the Life and Labor Commune, relocated to Siberia to avoid being liquidated. Several Tolstoyan leaders, including Yakov Dragunovsky (1886-1937), were put on trial and then sent to the Gulag prison camps.
South Africa
In 1991, Afrikaners in South Africa founded the controversial Afrikaner-only town of Orania, with the goal of creating a stronghold for the Afrikaner minority group, the Afrikaans language and the Afrikaner culture. By 2022, the population was 2,500. The town was experiencing rapid growth and the population had climbed by 55% from 2018. They favour a model of strict Afrikaner self-sufficiency and have their own currency, bank, local government and only employ Afrikaners.
United Kingdom
A 19th century advocate and practitioner of communal living was the utopian socialist John Goodwyn Barmby, who founded a Communist Church before becoming a Unitarian minister.
The Simon Community in London is an example of social cooperation, made to ease homelessness within London. It provides food and religion and is staffed by homeless people and volunteers. Mildly nomadic, they run street "cafés" which distribute food to their known members and to the general public.
The Bruderhof has three locations in the UK. In Glandwr, near Crymych, Pembrokeshire, a co-op called Lammas Ecovillage focuses on planning and sustainable development. Granted planning permission by the Welsh Government in 2009, it has since created 9 holdings and is a central communal hub for its community. In Scotland, the Findhorn Foundation founded by Peter and Eileen Caddy and Dorothy Maclean in 1962 is prominent for its educational centre and experimental architectural community project based at The Park, in Moray, Scotland, near the village of Findhorn.
The Findhorn Ecovillage community at The Park, Findhorn, a village in Moray, Scotland, and at Cluny Hill in Forres, now houses more than 400 people.
Historic agricultural examples include the Diggers settlement on St George's Hill, Surrey during the English Civil War and the Clousden Hill Free Communist and Co-operative Colony near Newcastle upon Tyne during the 1890s.
United States
There is a long history of utopian communities in America that led to the rise in the communes of the hippie movement—the "back-to-the-land" ventures of the 1960s and 1970s. One commune that played a large role in the hippie movement was Kaliflower, a utopian living cooperative that existed in San Francisco between 1967 and 1973 built on values of free love and anti-capitalism.
Andrew Jacobs of The New York Times wrote that "after decades of contraction, the American commune movement has been expanding since the mid-1990s, spurred by the growth of settlements that seek to marry the utopian-minded commune of the 1960s with the American predilection for privacy and capital appreciation". The Fellowship for Intentional Community (FIC) is one of the main sources for listings of and more information about communes in the United States.
Although many American communes are short-lived, some have been in operation for over 50 years. The Bruderhof was established in the US in 1954, Twin Oaks in 1967 and Koinonia Farm in 1942. Twin Oaks is a rare example of a non-religious commune surviving for longer than 30 years.
See also
Affinity group
Anarchist Catalonia
Anarcho-communism
Art commune
Burning Man
Christian Community of Universal Brotherhood, Canadian Community Doukhobors (1900-1938)
Common land
Communal land
Commune (documentary), a 2005 documentary about Black Bear Ranch, an intentional community located in Siskiyou County, California
Commune of Paris
Community garden
Cooperatives
Counterculture of the 1960s
Diggers and Dreamers
Drop City
Egalitarian communities
Ejido, a form of Mexican land distribution resembling a commune
Equality colony
Fellowship for Intentional Community
Free State Project
Free Vermont
Great Leap Forward, a time period in the 1950s and 1960s when the Chinese government created such communes
Hramada, a Belarusian commune assembly
Hutterite, a Christian sect that lives in communal "colonies"
List of intentional communities
Obshchina, communes of the Russian Empire
People's commune, type of administrative level in China from 1958 – early 1980s
Phalanstère
Renaissance Community
Slab City, California
Squatting
Tolstoyans
Well-field system, a Chinese land distribution system with common lands controlled by a village
World Brotherhood Colonies
Notes
References
Sources
Curl, John (2007). Memories of Drop City, The First Hippie Commune of the 1960s and the Summer of Love: a memoir. iUniverse. Red-coral.net
Curl, John (2009). For All The People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America. PM Press.
Fitzgerald, George R. (1971). Communes Their Goals, Hopes, Problems. New York: Paulist Press.
Hall, John R. (1978). The Ways Out: Utopian Communal Groups in an Age of Babylon. London: Routledge & Kegan Paul.
Horrox, James. (2009). A Living Revolution: Anarchism in the Kibbutz Movement. Oakland: AK Press.
Hollenbach, Margaret (2004). Lost and Found: My Life in a Group Marriage Commune. University of New Mexico Press.
Kanter, Rosabeth Moss. (1972) Commitment and community: communes and utopias in sociological perspective. Cambridge, Massachusetts, Harvard University Press.
Kanter, Rosabeth Moss. (1973) Communes: creating and managing the collective life. New York, Harper & Row.
Lattin, Don. (2003, March 2) Twilight of Hippiedom. The San Francisco Chronicle. Retrieved March 16, 2008
Lauber, John. (1963, June). Hawthorne's Shaker Tales [Electronic version]. Nineteenth-Century Fiction, Vol. 18, 82–86.
Meunier, Rachel. (1994, December 17). Communal Living in the Late 60s and Early 70s. Retrieved March 16, 2008, from thefarm.org
Miller, Timothy (1997). "Assault on Eden: A Memoir of Communal Life in the Early '70s", Utopian Studies, Vol. 8.
Roberts, Ron E. (1971). The New Communes Coming Together in America. New Jersey: Prentice Hall inc.
Van Deusen, David. (2008) Green Mountain Communes: The Making of a Peoples’ Vermont, Catamount Tavern News Service.
Veysey, Laurence R. (1978) The Communal Experience: Anarchist and Mystical Communities in Twentieth Century America
Wild, Paul H. (1966 March). Teaching Utopia [Electronic version]. The English Journal, Vol. 55, No. 3, 335–37, 339.
Zablocki, Benjamin (1980 [1971]). The Joyful Community: An Account of the Bruderhof: A Communal Movement Now in Its Third Generation. University of Chicago Press, 1971, reissued 1980. (The 1980 edition of the Whole Earth Catalog called this book "the best and most useful book on communes that's been written".)
Zablocki, Benjamin (1980). Alienation and Charisma: A Study of Contemporary American Communes. The Free Press.
Further reading
Curl, John (2007). Memories of Drop City, the First Hippie Commune of the 1960s and the Summer of Love: a memoir. iUniverse.
Kanter, Rosabeth Moss (1972) Commitment and Community: communes and utopias in sociological perspective. Cambridge, Massachusetts: Harvard University Press.
McLaughlin, C. and Davidson, G. (1990) Builders of the Dawn: community lifestyles in a changing world. Book Publishing Company.
Lupton, Robert C. (1997) Return Flight: Community Development Through Reneighboring our Cities, Atlanta, Georgia:FCS Urban Ministries.
Moore, Charles E. Called to Community: The Life Jesus Wants for His People. Plough Publishing House, 2016.
"Intentional Community." Plough, Plough Publishing, www.plough.com/en/topics/community/intentional-community.
Mariani, Mike: The New Generation of Self-Created Utopias, The New York Times, January 16, 2020
External links
Federation of Egalitarian Communities
Intentional Communities Website
eurotopia European Directory of Communities and Ecovillages
Intentional Communities Wiki
List of Communes in the Communities Directory
Intentional Community For Media and Spirituality
Diggers & Dreamers UK directory & Journal
The Twitter Age Embraces Communal Living – slideshow by The New York Times
International Communes Desk
Communitarianism | Communitarianism is a philosophy that emphasizes the connection between the individual and the community. Its overriding philosophy is based on the belief that a person's social identity and personality are largely molded by community relationships, with a smaller degree of development being placed on individualism. Although the community might be a family, communitarianism usually is understood, in the wider, philosophical sense, as a collection of interactions, among a community of people in a given place (geographical location), or among a community who share an interest or who share a history. Communitarianism is often contrasted with individualism, and opposes laissez-faire policies that deprioritize the stability of the overall community.
Terminology
The philosophy of communitarianism originated in the 20th century, but the term "communitarian" was coined in 1841, by John Goodwyn Barmby, a leader of the British Chartist movement, who used it in referring to utopian socialists and other idealists who experimented with communal styles of life. However, it was not until the 1980s that the term "communitarianism" gained currency through association with the work of a small group of political philosophers. Their application of the label "communitarian" was controversial, even among communitarians, because, in the West, the term evokes associations with the ideologies of socialism and collectivism; so, public leaders—and some of the academics who champion this school of thought—usually avoid the term "communitarian", while still advocating and advancing the ideas of communitarianism.
The term is primarily used in two senses:
Philosophical communitarianism considers classical liberalism to be ontologically and epistemologically incoherent, and opposes it on those grounds. Unlike classical liberalism, which construes communities as originating from the voluntary acts of pre-community individuals, it emphasizes the role of the community in defining and shaping individuals. Communitarians believe that the value of community is not sufficiently recognized in liberal theories of justice.
Ideological communitarianism is characterized as a radical centrist ideology that is sometimes marked by socially conservative and economically interventionist policies. This usage was coined recently. When the term is capitalized, it usually refers to the Responsive Communitarian movement of Amitai Etzioni and other philosophers.
Czech and Slovak philosophers like Marek Hrubec, Lukáš Perný and Luboš Blaha extend communitarianism to social projects tied to the values and significance of community or collectivism and to various types of communism and socialism (Christian, scientific, or utopian), including:
Historical roots of collectivist projects from Plato, through François-Noël Babeuf, Pierre Joseph Proudhon, Mikhail Bakunin, Charles Fourier, Robert Owen to Karl Marx
Contemporary theoretical communitarianism (Michael J. Sandel, Michael Walzer, Alasdair MacIntyre, Charles Taylor), originating in the 1980s
Pro-liberal, pro-multicultural (Walzer, Taylor)
Anti-liberal, pro-national (Sandel, MacIntyre)
The vision of practical, self-sustaining communities as described by Thomas More (Utopia), Tommaso Campanella and practised by Christian Utopians (Jesuit Reduction) or utopian socialists like Charles Fourier (List of Fourierist Associations in the United States), Robert Owen (List of Owenite communities in the United States). This line includes various forms of cooperatives, self-help institutions, or communities (Hussite communities, The Diggers, Habans, Hutterites, Amish, Israeli kibbutz, Slavic community; examples include the Twelve Tribes communities, Tamera (Portugal), Marinaleda (Spain), the monastic state of Mount Athos and the Catholic Worker Movement).
Origins
While the term communitarian was coined only in the mid-nineteenth century, ideas that are communitarian in nature appeared much earlier. They are found in some classical socialist doctrines (e.g. writings about the early commune and about workers' solidarity), and further back in the New Testament. Communitarianism has been traced back to early monasticism.
A number of early sociologists had strongly communitarian elements in their work, such as Ferdinand Tönnies, in his comparison of Gemeinschaft (oppressive but nurturing communities) and Gesellschaft (liberating but impersonal societies), and Émile Durkheim, in his concerns about the integrating role of social values and the relations between the individual and society. Both authors warned of the dangers of anomie (normlessness) and alienation in modern societies composed of atomized individuals who had gained their liberty but lost their social moorings. Modern sociologists saw the rise of mass society and the decline of communal bonds and of respect for traditional values and authority in the United States from the 1960s onward. Among those who raised these issues were Robert Nisbet (Twilight of Authority), Robert N. Bellah (Habits of the Heart), and Alan Ehrenhalt (The Lost City: The Forgotten Virtues of Community in America). In his book Bowling Alone (2000), Robert Putnam documented the decline of "social capital" and stressed the importance of "bridging social capital," in which bonds of connectedness are formed across diverse social groups.
In the twentieth century communitarianism also began to be formulated as a philosophy by Dorothy Day and the Catholic Worker movement. In an early article the Catholic Worker clarified the dogma of the Mystical Body of Christ as the basis for the movement's communitarianism. Along similar lines, communitarianism is also related to the personalist philosophy of Emmanuel Mounier.
Responding to criticism that the term 'community' is too vague or cannot be defined, Amitai Etzioni, one of the leaders of the American communitarian movement, pointed out that communities can be defined with reasonable precision as having two characteristics: first, a web of affect-laden relationships among a group of individuals, relationships that often crisscross and reinforce one another (as opposed to one-on-one or chain-like individual relationships); and second, a measure of commitment to a set of shared values, norms, and meanings, and a shared history and identity – in short, a particular culture. Further, author David E. Pearson argued that "[t]o earn the appellation 'community,' it seems to me, groups must be able to exert moral suasion and extract a measure of compliance from their members. That is, communities are necessarily, indeed, by definition, coercive as well as moral, threatening their members with the stick of sanctions if they stray, offering them the carrot of certainty and stability if they don't."
What is specifically meant by "community" in the context of communitarianism can vary greatly between authors and periods. Historically, communities have been small and localized. However, as the reach of economic and technological forces extended, more expansive communities became necessary to provide effective normative and political guidance to these forces, prompting the rise of national communities in Europe in the 17th century. Since the late 20th century there has been some growing recognition that the scope of even these communities is too limited, as many challenges that people now face, such as the threat of nuclear war and that of global environmental degradation and economic crises, cannot be handled on a national basis. This has led to the quest for more encompassing communities, such as the European Union. Whether truly supra-national communities can be developed is far from clear.
More modern communities can take many different forms, but are often limited in scope and reach. For example, members of one residential community are often also members of other communities – such as work, ethnic, or religious ones. As a result, modern community members have multiple sources of attachments, and if one threatens to become overwhelming, individuals will often pull back and turn to another community for their attachments. Thus, communitarianism is the reaction of some intellectuals to the problems of Western society: an attempt to find flexible forms of balance between the individual and society, between the autonomy of the individual and the interests of the community, and between the common good and individual freedom, rights, and duties.
Academic communitarianism
Whereas the classical liberalism of the Enlightenment can be viewed as a reaction to centuries of authoritarianism, oppressive government, overbearing communities, and rigid dogma, modern communitarianism can be considered a reaction to excessive individualism, understood as an undue emphasis on individual rights, leading people to become selfish or egocentric.
The close relation between the individual and the community was discussed on a theoretical level by Michael Sandel and Charles Taylor, among other academic communitarians, in their criticisms of philosophical liberalism, especially the work of the American liberal theorist John Rawls and that of the German Enlightenment philosopher Immanuel Kant. They argued that contemporary liberalism failed to account for the complex set of social relations that all individuals in the modern world are a part of. On this view, liberalism is rooted in an untenable ontology that posits the existence of generic individuals and fails to account for social embeddedness. To the contrary, they argued, there are no generic individuals but rather only Germans or Russians, Berliners or Muscovites, or members of some other particularistic community. Because individual identity is partly constructed by culture and social relations, there is no coherent way of formulating individual rights or interests in abstraction from social contexts. Thus, according to these communitarians, there is no point in attempting to found a theory of justice on principles decided behind Rawls' veil of ignorance, because individuals cannot exist in such an abstracted state, even in principle.
Academic communitarians also contend that the nature of the political community is misunderstood by liberalism. Where liberal philosophers described the polity as a neutral framework of rules within which a multiplicity of commitments to moral values can coexist, academic communitarians argue that such a thin conception of political community is both empirically misleading and normatively dangerous. Good societies, these authors believe, rest on much more than neutral rules and procedures; they rely on a shared moral culture. Some academic communitarians argued even more strongly on behalf of such particularistic values, suggesting that these are the only kind of values that matter and that it is a philosophical error to posit any truly universal moral values.
In addition to Charles Taylor and Michael Sandel, other thinkers sometimes associated with academic communitarianism include Michael Walzer, Alasdair MacIntyre, Seyla Benhabib, Shlomo Avineri, and Patrick J. Deneen.
Social capital
Beginning in the late 20th century, many authors began to observe a deterioration in the social networks of the United States. In the book Bowling Alone, Robert Putnam observed that nearly every form of civic organization has undergone a drop in membership, exemplified by the fact that, while more people are bowling than in the 1950s, there are fewer bowling leagues.
This results in a decline in "social capital", described by Putnam as "the collective value of all 'social networks' and the inclinations that arise from these networks to do things for each other". According to Putnam and his followers, social capital is a key component to building and maintaining democracy.
Communitarians seek to bolster social capital and the institutions of civil society. The Responsive Communitarian Platform described it thus:
Many social goals require partnerships between public and private groups. Though the government should not seek to replace local communities, it may need to empower them by strategies of support, including revenue-sharing and technical assistance. There is a great need for study and experimentation with creative use of the structures of civil society, and public-private cooperation, especially where the delivery of health, educational and social services are concerned.
Positive rights
Important to some supporters of communitarian philosophy is the concept of positive rights, which are rights or guarantees to certain things. These may include state-subsidized education, state-subsidized housing, a safe and clean environment, universal health care, and even the right to a job with the concomitant obligation of the government or individuals to provide one. To this end, communitarians generally support social security programs, public works programs, and laws limiting such things as pollution.
A common objection is that by providing such rights, communitarians violate the negative rights of citizens, that is, rights not to have something done to oneself. For example, taxation to pay for such programs as described above dispossesses individuals of property. Proponents of positive rights respond by attributing the protection of negative rights to society rather than the government: individuals would not have any rights in the absence of societies (a central tenet of communitarianism) and thus have a responsibility to give something back to them. Some have viewed this as a negation of natural rights. However, what is or is not a "natural right" is a source of contention in modern politics, as well as historically; for example, whether universal health care, private property, or protection from polluters can be considered a birthright.
Alternatively, some agree that negative rights may be violated by a government action, but argue that it is justifiable if the positive rights protected outweigh the negative rights lost.
Still other communitarians question the very idea of natural rights and their place in a properly functioning community. They argue that claims of rights and entitlements instead create a society unable to form cultural institutions and grounded social norms based on shared values. Rather, the liberal claim to individual rights leads to a morality centered on individual emotivism, as ethical issues can no longer be solved by working through common understandings of the good. The worry here is that not only is society individualized, but so are moral claims.
Responsive communitarianism movement
In the early 1990s, in response to the perceived breakdown in the moral fabric of society engendered by excessive individualism, Amitai Etzioni and William A. Galston began to organize working meetings to think through communitarian approaches to key societal issues. This ultimately took the communitarian philosophy from a small academic group, introduced it into public life, and recast its philosophical content.
Deeming themselves "responsive communitarians" in order to distinguish the movement from authoritarian communitarians, Etzioni and Galston, along with a varied group of academics (including Mary Ann Glendon, Thomas A. Spragens, James Fishkin, Benjamin Barber, Hans Joas, Philip Selznick, and Robert N. Bellah, among others) drafted and published The Responsive Communitarian Platform based on their shared political principles, and the ideas in it were eventually elaborated in academic and popular books and periodicals, gaining thereby a measure of political currency in the West. Etzioni later formed the Communitarian Network to study and promote communitarian approaches to social issues and began publishing a quarterly journal, The Responsive Community.
The main thesis of responsive communitarianism is that people face two major sources of normativity: that of the common good and that of autonomy and rights, neither of which in principle should take precedence over the other. This can be contrasted with other political and social philosophies which derive their core assumptions from one overarching principle (such as liberty/autonomy for libertarianism). It further posits that a good society is based on a carefully crafted balance between liberty and social order, between individual rights and personal responsibility, and between pluralistic and socially established values.
Responsive communitarianism stresses the importance of society and its institutions above and beyond that of the state and the market, which are often the focus of other political philosophies. It also emphasizes the key role played by socialization, moral culture, and informal social controls rather than state coercion or market pressures. It provides an alternative to liberal individualism and a major counterpoint to authoritarian communitarianism by stressing that strong rights presume strong responsibilities and that one should not be neglected in the name of the other.
Following standing sociological positions, communitarians assume that the moral character of individuals tends to degrade over time unless that character is continually and communally reinforced. They contend that a major function of the community, as a building block of moral infrastructure, is to reinforce the character of its members through the community's "moral voice", defined as the informal sanction of others, built into a web of informal affect-laden relationships, which communities provide.
Influence
Responsive communitarians have been playing a considerable public role, presenting themselves as the founders of a different kind of environmental movement, one dedicated to shoring up society (as opposed to the state) rather than nature. Like environmentalism, communitarianism appeals to audiences across the political spectrum, although it has found greater acceptance with some groups than others.
Although communitarianism is a small philosophical school, it has had considerable influence on public dialogues and politics. There are strong similarities between communitarian thinking and the Third Way, the political thinking of centrist Democrats in the United States, and the Neue Mitte in Germany. Communitarianism played a key role in Tony Blair's remaking of the British socialist Labour Party into "New Labour" and a smaller role in President Bill Clinton's campaigns. Other politicians have echoed key communitarian themes, such as Hillary Clinton, who has long held that to raise a child takes not just parents, family, friends and neighbors, but a whole "village".
It has also been suggested that the compassionate conservatism espoused by George W. Bush during his 2000 presidential campaign was a form of conservative communitarian thinking, although he did not implement it in his policy program. Cited policies have included economic and rhetorical support for education, volunteerism, and community programs, as well as a social emphasis on promoting families, character education, traditional values, and faith-based projects.
President Barack Obama gave voice to communitarian ideas and ideals in his book The Audacity of Hope, and during the 2008 presidential election campaign he repeatedly called upon Americans to "ground our politics in the notion of a common good," for an "age of responsibility," and for foregoing identity politics in favor of community-wide unity building. However, for many in the West, the term communitarian conjures up authoritarian and collectivist associations, so many public leaders – and even several academics considered champions of this school – avoid the term while embracing and advancing its ideas.
Reflecting the dominance of liberal and conservative politics in the United States, no major party and few elected officials openly advocate communitarianism. Thus there is no consensus on individual policies, though some policies that most communitarians endorse have been enacted. Nonetheless, there is a small communitarian faction within the Democratic Party; prominent communitarians include Bob Casey Jr., Joe Donnelly, and Claire McCaskill. Many communitarian Democrats are part of the Blue Dog Coalition. It is quite possible that the United States' right-libertarian ideological underpinnings have prevented major communitarian factions from emerging.
Dana Milbank, writing in The Washington Post, remarked of modern communitarians, "There is still no such thing as a card-carrying communitarian, and therefore no consensus on policies. Some, such as John DiIulio and outside Bush adviser Marvin Olasky, favor religious solutions for communities, while others, like Etzioni and Galston, prefer secular approaches."
In August 2011, the right-libertarian Reason Magazine worked with the Rupe organization to survey 1,200 Americans by telephone. The Reason-Rupe poll found that "Americans cannot easily be bundled into either the 'liberal' or 'conservative' groups". Specifically, 28% expressed conservative views, 24% expressed libertarian views, 20% expressed communitarian views, and 28% expressed liberal views. The margin of error was ±3 percentage points.
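The reported ±3 figure is consistent with the standard large-sample margin of error for a proportion, assuming a 95% confidence level (z = 1.96), simple random sampling, and the worst case p = 0.5 with the reported n = 1200; these assumptions are a plausibility sketch rather than the poll's published methodology:

\[
\mathrm{MOE} = z \sqrt{\frac{p(1-p)}{n}} = 1.96 \sqrt{\frac{0.5 \times 0.5}{1200}} \approx 0.028 \approx 3\ \text{percentage points}
\]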
A similar Gallup survey in 2011 included possible centrist/moderate responses. That poll reported that 17% expressed conservative views, 22% expressed libertarian views, 20% expressed communitarian views, 17% expressed centrist views, and 24% expressed liberal views. The survey used the label "the bigger the better" to describe communitarianism.
The Pakistan Tehreek-e-Insaf party, founded and led by Imran Khan, is considered the first political party in the world to have declared communitarianism one of its official ideologies.
Comparison to other political philosophies
Early communitarians were charged with being, in effect, social conservatives. However, many contemporary communitarians, especially those who define themselves as responsive communitarians, fully realize and often stress that they do not seek to return to traditional communities, with their authoritarian power structure, rigid stratification, and discriminatory practices against minorities and women. Responsive communitarians seek to build communities based on open participation, dialogue, and truly shared values. Linda McClain, a critic of communitarians, recognizes this feature of the responsive communitarians, writing that some communitarians do "recognize the need for careful evaluation of what is good and bad about [any specific] tradition and the possibility of severing certain features . . . from others." And R. Bruce Douglass writes, "Unlike conservatives, communitarians are aware that the days when the issues we face as a society could be settled on the basis of the beliefs of a privileged segment of the population have long since passed."
One major way the communitarian position differs from the social conservative one is that although communitarianism's ideal "good society" reaches into the private realm, it seeks to cultivate only a limited set of core virtues through an organically developed set of values rather than having an expansive or holistically normative agenda given by the state. For example, American society favors being religious over being atheist, but is rather neutral with regard to which particular religion a person should follow. There are no state-prescribed dress codes, "correct" number of children to have, or places one is expected to live, etc. In short, a key defining characteristic of the ideal communitarian society is that, in contrast to a liberal state, it creates shared formulations of the good, but the scope of this good is much smaller than that advanced by authoritarian societies.
Criticism
Liberal theorists, such as Simon Caney, disagree that philosophical communitarianism has any interesting criticisms to make of liberalism. They reject the communitarian charges that liberalism neglects the value of community and holds an "atomized" or asocial view of the self.
According to Peter Sutch, the principal criticisms of communitarianism are:
that communitarianism leads necessarily to moral relativism;
that this relativism leads necessarily to a re-endorsement of the status quo in international politics; and
that such a position relies upon a discredited ontological argument that posits the foundational status of the community or state.
Other critics emphasize the close relation of communitarianism to neoliberalism and to new policies of dismantling welfare state institutions through the development of the third sector.
Opposition
Bruce Frohnen – author of The New Communitarians and the Crisis of Modern Liberalism (1996)
Charles Arthur Willard – author of Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy, University of Chicago Press, 1996.
List of communitarian political parties
American Solidarity Party (United States)
Australian Progressives (Australia)
Centre Party (Germany)
Christian Democratic Party (Norway)
Christian Democratic Union of Germany (Germany)
Christian Union (Netherlands)
Democratic Unionist Party (United Kingdom)
European Social Democratic Party (Moldova)
Fidesz (Hungary)
Finns Party (Finland)
Islamic Society Party (Afghanistan)
Law and Justice (Poland)
Liberal Democratic Party of Russia (Russia)
People's Party Our Slovakia (Slovakia)
Poland 2050 (Poland)
Prohibition Party (United States)
Social Democratic Party (Romania)
Social Democratic Party (United Kingdom)
Sovereign Poland (Poland)
United Russia (Russia)
Xiluva (South Africa)
Further reading
Amitai Etzioni, 1996, The New Golden Rule, Basic Books.
Charles Taylor, 1992, Sources of the Self, Cambridge: Harvard University Press.
Daniel Bell, 2000, East Meets West, Princeton: Princeton University Press.
David L. Kirp, 2001, Almost Home: America's Love-Hate Relationship with Community, Princeton University Press.
Gad Barzilai, 2003, Communities and Law: Politics and Cultures of Legal Identities, Ann Arbor: University of Michigan Press.
Judith Harris & Donald Alexander, 1991, "Beyond capitalism and socialism: The communitarian alternative," Environments, 21(2), 29–37. Retrieved from: http://hdl.handle.net/10613/2733.
Michael J. Sandel, 1998, Liberalism and the Limits of Justice, Cambridge: Cambridge University Press.
Sterling Harwood, 1996, "Against MacIntyre's Relativistic Communitarianism," in Sterling Harwood, ed., Business as Ethical and Business as Usual, Belmont, CA: Wadsworth Publishing Company, Chapter 3.
External links
Sourcewatch
"Communitarianism", Infed Encyclopedia.
Fareed Zakaria, The ABCs of Communitarianism. A devil's dictionary, Slate, July 26, 1996.
Robert Putnam, "Communitarianism", National Public Radio, February 5, 2001: "The term 'Third Way' was used to describe President Clinton's form of liberalism. Now 'Communitarianism' is being used in the same way to describe President Bush's form of conservatism. They're both an attempt to create a middle ground [...] an alternative to the liberal-conservative paradigm."
Civil Practices Network
Existential crisis
Existential crises are inner conflicts characterized by the impression that life lacks meaning and by confusion about one's personal identity. They are accompanied by anxiety and stress, often to such a degree that they disturb one's normal functioning in everyday life and lead to depression. This negative attitude towards life's meaning reflects characteristics of the philosophical movement of existentialism. The components of existential crises can be divided into emotional, cognitive, and behavioral aspects. Emotional components refer to feelings such as emotional pain, despair, helplessness, guilt, anxiety, or loneliness. Cognitive components encompass the problem of meaninglessness, the loss of personal values or spiritual faith, and thinking about death. Behavioral components include addictions as well as anti-social and compulsive behavior.
Existential crises may occur at different stages in life: the teenage crisis, the quarter-life crisis, the mid-life crisis, and the later-life crisis. Earlier crises tend to be forward-looking: the individual is anxious and confused about which path in life to follow regarding education, career, personal identity, and social relationships. Later crises tend to be backward-looking. Often triggered by the impression that one is past one's peak in life, they are usually characterized by guilt, regret, and a fear of death. If an earlier existential crisis was properly resolved, it is easier for the individual to resolve or avoid later crises. Not everyone experiences existential crises in their life.
The problem of meaninglessness plays a central role in all of these types. It can arise in relation to cosmic meaning, which concerns the meaning of life at large or why we are here. Another form concerns personal secular meaning, in which the individual tries to discover purpose and value mainly for their own life. Finding a source of meaning, such as altruism, dedication to a religious or political cause, or the development of one's potential, may resolve a crisis. Other approaches include adopting a new system of meaning, learning to accept meaninglessness, cognitive behavioral therapy, and the practice of social perspective-taking.
Negative consequences of existential crises include anxiety and bad relationships on the personal level as well as a high divorce rate and decreased productivity on the social level. Some questionnaires, such as the Purpose in Life Test, measure whether someone is currently undergoing an existential crisis. Outside its main use in psychology and psychotherapy, the term "existential crisis" refers to a threat to the existence of something.
Definition
In psychology and psychotherapy, the term "existential crisis" refers to a form of inner conflict. It is characterized by the impression that life lacks meaning and is accompanied by various negative experiences, such as stress, anxiety, despair, and depression. This often happens to such a degree that it disturbs one's normal functioning in everyday life. The inner nature of this conflict sets existential crises apart from other types of crises that are mainly due to outward circumstances, like social or financial crises. Outward circumstances may still play a role in triggering or exacerbating an existential crisis, but the core conflict happens on an inner level. The most common approach to resolving an existential crisis consists in addressing this inner conflict and finding new sources of meaning in life.
The core issue responsible for the inner conflict is the impression that the individual's desire to lead a meaningful life is thwarted by an apparent lack of meaning, an impression often intensified by confusion about what meaning really is and by constant self-questioning. In this sense, existential crises are crises of meaning. This is often understood through the lens of the philosophical movement known as existentialism. One important aspect of many forms of existentialism is that the individual seeks to live in a meaningful way but finds themselves in a meaningless and indifferent world. The exact term "existential crisis" is not commonly found in the traditional existentialist literature in philosophy. But various closely related technical terms are discussed, such as existential dread, existential vacuum, existential despair, existential neurosis, existential sickness, anxiety, and alienation.
Different authors focus in their definitions of existential crisis on different aspects. Some argue that existential crises are at their core crises of identity. On this view, they arise from a confusion about the question "Who am I?" and their goal is to achieve some form of clarity about oneself and one's position in the world. As identity crises, they involve intensive self-analysis, often in the form of exploring different ways of looking at oneself. They constitute a personal confrontation with certain key aspects of the human condition, like existence, death, freedom, and responsibility. In this sense, the person questions the very foundations of their life. Others emphasize the confrontation with human limitations, such as death and lack of control. Some stress the spiritual nature of existential crises by pointing out how outwardly successful people may still be severely affected by them if they lack the corresponding spiritual development.
The term "existential crisis" is most commonly used in the context of psychology and psychotherapy. But it can also be employed in a more literal sense as a crisis of existence to express that the existence of something is threatened. In this sense, a country, a company, or a social institution faces an existential crisis if political tensions, military threats , high debt, or social changes may have as a result that the corresponding entity ceases to exist.
Components
Existential crises are usually seen as complex phenomena that can be understood as consisting of various components. Some approaches distinguish three types of components belonging to the fields of emotion, cognition, and behavior. Emotional aspects correspond to what it feels like to have an existential crisis. It is usually associated with emotional pain, despair, helplessness, guilt, anxiety, and loneliness. On the cognitive side, the affected are often confronted with a loss of meaning and purpose together with the realization of one's own end. Behaviorally, existential crises may express themselves in addictions and anti-social behavior, sometimes paired with ritualistic behavior, loss of relationships, and degradation of one's health. While manifestations of these three components can usually be identified in every case of an existential crisis, there are often significant differences in how they manifest. Nonetheless, it has been suggested that these components can be used to give a more unified definition of existential crises.
Emotional
On the emotional level, existential crises are associated with unpleasant experiences, such as fear, anxiety, panic, and despair. They can be categorized as a form of emotional pain whereby people lose trust and hope. This pain often manifests in the form of despair and helplessness. The despair may be caused by being unable to find meaning in life, which is associated both with a lack of motivation and the absence of inner joy. The impression of helplessness arises from being unable to find a practical response to deal with the crisis and the associated despair. This helplessness concerns specifically a form of emotional vulnerability: the individual is not just subject to a wide range of negative emotions, but these emotions often seem to be outside the person's control. This feeling of vulnerability and lack of control can itself produce further negative impressions and may lead to a form of panic or a state of deep mourning.
But on the other hand, the affected often also have the impression that they are in some sense responsible for their predicament. This is the case, for example, if the loss of meaning is associated with bad choices in the past for which the individual feels guilty. But it can also take the form of a more abstract type of bad conscience as existential guilt. In this case, the agent carries a vague sense of guilt that is free-floating in the sense that it is not tied to any specific wrongdoing by the agent. Especially in existential crises in the later parts of one's life, this guilt is often accompanied by a fear of death. But just as in the case of guilt, this fear may also take a more abstract form as an unspecific anxiety associated with a sense of deficiency and meaninglessness.
As crises of identity, existential crises often lead to a disturbed sense of personal integrity. This can be provoked by the apparent meaninglessness of one's life together with a general lack of motivation. Central to the sense of personal integrity are close relationships with oneself, others, and the world. The absence of meaning usually has a negative impact on these relationships. As a lack of a clear purpose, it threatens one's personal integrity and can lead to insecurity, alienation, and self-abandonment. The negative impact on one's relationships with others is often experienced as a form of loneliness.
Depending on the person and the crisis they are suffering, some of these emotional aspects may be more or less pronounced. While they are all experienced as unpleasant, they often carry within them various positive potentials as well that can push the person in the direction of positive personal development. Through the experience of loneliness, for example, the person may achieve a better understanding of the substance and importance of relationships.
Cognitive
The main cognitive aspect of existential crises is the loss of meaning and purpose. In this context, the term "meaninglessness" refers to the general impression that there is no higher significance, direction, or purpose in our actions or in the world at large. It is associated with the question of why one is doing what one is doing and why one should continue. It is a central topic in existentialist psychotherapy, which has as one of its main goals to help the patient find a proper response to this meaninglessness. In Viktor Frankl's logotherapy, for example, the term existential vacuum is used to describe this state of mind. Many forms of existentialist psychotherapy aim to resolve existential crises by assisting the patient in rediscovering meaning in their life. Closely related to meaninglessness is the loss of personal values. This means that things that seemed valuable to the individual before, like the relation to a specific person or success in their career, may now appear insignificant or pointless to them. If the crisis is resolved, it can lead to the discovery of new values.
Another aspect of the cognitive component of many existential crises concerns the attitude to one's personal end, i.e. the realization that one will die one day. While this is not new information as an abstract insight, it takes on a more personal nature when one is confronted with it as a concrete reality one has to face. This aspect is of particular relevance for existential crises occurring later in life or when the crisis was triggered by the loss of a loved one or by the onset of a terminal disease. For many, the issue of their own death is associated with anxiety. But it has also been argued that the contemplation of one's death may act as a key to resolving an existential crisis. The reason for this is that the realization that one's time is limited can act as a source of meaning by making the remaining time more valuable and by making it easier to discern the bigger issues that matter in contrast to smaller everyday issues that can act as distractions. Important factors for dealing with imminent death include one's religious outlook, one's self-esteem, and social integration as well as one's future prospects.
Behavioral
Existential crises can have various effects on the individual's behavior. They often lead a person to isolate themselves and engage less in social interactions. For example, one's communication with one's housemates may be limited to very brief responses like a simple "yes" or "no" in order to avoid a more extended exchange, or the individual may reduce various forms of contact that are not strictly necessary. This can result in a long-term deterioration and loss of one's relationships. In some cases, existential crises may also express themselves in overtly anti-social behavior, like hostility or aggression. These negative impulses can also be directed at the person themselves, leading to self-injury and, in the worst case, suicide.
Addictive behavior is also seen in people going through an existential crisis. Some turn to drugs in order to lessen the impact of the negative experiences, whereas others hope that non-ordinary drug experiences will teach them to cope with the crisis. While this type of behavior can succeed in providing short-term relief from the effects of the existential crisis, it has been argued that it is usually maladaptive and fails in the long term. This way, the crisis may even be further exacerbated. For the affected, it is often difficult to distinguish the need for pleasure and power from the need for meaning, thereby leading them on a wrong track in their efforts to resolve the crisis. The addictions themselves or the stress associated with existential crises can result in various health problems, ranging from high blood pressure to long-term organ damage and an increased likelihood of cancer.
Existential crises may also be accompanied by ritualistic behavior. In some cases, this can have positive effects to help the affected transition to a new outlook on life. But it might also take the form of compulsive behavior that acts more as a distraction than as a step towards a solution. Another positive behavioral aspect concerns the tendency to seek therapy. This tendency reflects the awareness of the affected of the gravity of the problem and their desire to resolve it.
Types
Different types of existential crises are often distinguished based on the time in one's life when they occur. This approach rests on the idea that, depending on one's stage in life, individuals are faced with different issues connected to meaning and purpose. They lead to different types of crises if these issues are not properly resolved. The stages are usually tied to rough age groups, but this correspondence is not always accurate, since different people of the same age group may find themselves in different life situations and different stages of development. Being aware of these differences is central for properly assessing the issue at the core of a specific crisis and finding a corresponding response to resolve it.
The most well-known existential crisis is the mid-life crisis, and much research is directed specifically at this type. But researchers have also identified various other types of existential crises. There is no general agreement about their exact number and periodization. Because of this, the categorizations of different theorists do not always coincide, but they have significant overlaps. One categorization distinguishes between the early teenage crisis, the sophomore crisis, the adult crisis, the mid-life crisis, and the later-life crisis. Another focuses only on the sophomore crisis, the adult crisis, and the later-life crisis but defines them in wider terms. The sophomore crisis and the adult crisis are often treated together as forms of the quarter-life crisis.
There is wide agreement that the earlier crises tend to be more forward-looking and are characterized by anxiety and confusion about the path in life one wants to follow. The later crises, on the other hand, are more backward-looking, often in the form of guilt and regrets, while also concerned with the problem of one's own mortality.
These different crises can affect each other in various ways. For example, if an earlier crisis was not properly resolved, later crises may impose additional difficulties for the affected. But even if an earlier crisis was fully resolved, this does not guarantee that later crises will be successfully resolved or avoided altogether.
Another approach distinguishes existential crises based on their intensity. Some theorists use the terms existential vacuum and existential neurosis to refer to different degrees of existential crisis. On this view, an existential vacuum is a rather common phenomenon characterized by the frequent recurrence of subjective states like boredom, apathy, and emptiness. Some people experience this only in their free time but are otherwise not troubled by it. The term "Sunday neurosis" is often used in this context. An existential vacuum becomes an existential neurosis if it is paired with overt clinical neurotic symptoms, such as depression or alcoholism.
Teenage
The early teenage crisis involves the transition from childhood to adulthood and is centered around the issue of developing one's individuality and independence. This concerns specifically the relation to one's family and often leads to spending more time with one's peers instead. Various rebellious and anti-social behaviors sometimes seen in this developmental stage, like stealing or trespassing, may be interpreted as attempts to achieve independence. It can also give rise to a new type of conformity concerning, for example, how the teenager dresses or behaves. This conformity tends to be not in relation to one's family or public standards but to one's peer group or adored celebrities. But this may be seen as a temporary step to distance oneself from previously accepted standards, with later steps emphasizing one's independence also from one's peer group and celebrity influences. A central factor for resolving the early teenage crisis is that meaning and purpose are found in one's new identity, since independence without them can result in the feeling of being lost and may lead to depression. Another factor pertains to the role of the parents. By looking for signs of depression, they may become aware that a teenager is going through a crisis. Examples include changes in appetite, sleeping more or less than usual, a rapid drop in grades, social withdrawal and isolation, and increased irritability. If parents regularly talk to their teenagers and ask them questions, they are more likely to detect the presence of a crisis.
Quarter-life, sophomore, and adult
The term "quarter-life crisis" is often used to refer to existential crises occurring in early adulthood, i.e. roughly during the ages between 18 and 30. Some authors distinguish between two separate crises that may occur at this stage in life: the sophomore crisis and the adult crisis. The sophomore crisis affects primarily people in their late teenage years or their early 20s. It is also referred to as "sophomore slump", specifically when it affects students. It is the first time that serious questions about the meaning of life and one's role in the world are formulated. At this stage, these questions have a direct practical relation to one's future. They apply to what paths one wants to choose in life, like which career to focus on and how to form successful relationships. At the center of the sophomore crisis is the anxiety over one's future, i.e. how to lead one's life and how to best develop and employ one's abilities. Existential crisis often specifically affect high achievers who fear that they do not reach their highest potential since they lack a secure plan for the future. To solve them, it is necessary to find meaningful answers to these questions. Such answers may result in practical commitments and can inform later life decisions. Some people who have already made their career choices at a very early age may never experience a sophomore crisis. But such decisions can lead to problems later on since they are usually mainly informed by the outlook of one's social environment and less by the introspective insight into one's individual preferences. If there turns out to be a big discrepancy between the two, it can provoke a more severe form of the sophomore crisis later on. James Marcia defines this early commitment without sufficient exploration as identity foreclosure.
The adult crisis usually starts in the mid- to late 20s. The issues faced in it overlap to some extent with those in the sophomore crisis, but they tend to involve more complex questions of identity. As such, they also revolve around one's career and one's path in life. But they tend to take more details into account, like one's choice of religion, one's political outlook, or one's sexuality. Resolving the adult crisis means having a good idea of who one is as a person and being comfortable with this idea. It is usually associated with reaching full adulthood, having completed school, working full-time, having left one's home, and being financially independent. Being unable to resolve the adult crisis may result in disorientation, a lack of confidence in one's personal identity, and depression.
Mid-life
Among the different types of existential crises, the mid-life crisis is the one most widely discussed. It often sets in around the age of 40 and can be triggered by the impression that one's personal growth is obstructed. This may be combined with the sense that there is a significant distance between one's achievements and one's aspirations. In contrast to the earlier existential crises, it also involves a backward-looking component: previous choices in life are questioned and their meaning for one's achievements is assessed. This may lead to regrets and dissatisfaction with one's life choices on various topics, such as career, partner, children, social status, or missed opportunities. The tendency to look backward is often connected to the impression that one is past one's peak period in life.
Sometimes five intermediary stages are distinguished: accommodation, separation, liminality, reintegration, and individuation. In these stages, the individual first adapts to changed external demands, then addresses the distance between their innate motives and the external persona, next rejects their previously adaptive persona, later adopts their new persona, and lastly becomes aware of the external consequences associated with these changes.
Mid-life crises can be triggered by specific events such as losing a job, forced unemployment, extramarital affairs, separation, death of a loved one, or health problems. In this sense, the mid-life crisis can be understood as a period of transition or reevaluation in which the individual tries to adapt to their changed situation in life, both in response to the particular triggering event and to the more general changes that come with age.
Various symptoms are associated with mid-life crises, such as stress, boredom, self-doubt, compulsivity, changes in libido and sexual preferences, rumination, and insecurity. In public discourse, the mid-life crisis is primarily associated with men, often in direct relation to their career. But it affects women just as well. An additional factor for women is the limited time left in their reproductive period or the onset of menopause. Between 8 and 25 percent of Americans over the age of thirty-five have experienced a mid-life crisis.
Both the severity and the length of the mid-life crisis are often affected by whether and how well the earlier crises were resolved. People who managed to resolve earlier crises well tend to feel more fulfilled with their life choices, which is also reflected in how meaningful those choices appear when looking back on them. But this does not ensure that they still appear meaningful from one's current perspective.
Later-life
The later-life crisis often occurs around one's late 60s. It may be triggered by events such as retirement, the death of a loved one, serious illness, or imminent death. At its core is a backward-looking reflection on how one led one's life and the choices one made. This reflection is usually motivated by a desire to have lived a valuable and meaningful life paired with an uncertainty of one's success. A contemplation of one's past wrongdoings may also be motivated by a desire to find a way to make up for them while one still can. It can also express itself in a more theoretical form as trying to assess whether one's life made a positive impact on one's more immediate environment or the world at large. This is often associated with the desire to leave a positive and influential legacy behind.
Because of its backward-looking nature, there may be less one can do to truly resolve the crisis. This is true especially for people who arrive at a negative assessment of their life. An additional impeding factor in contrast to earlier crises is that individuals are often unable to find the energy and youthfulness necessary to make meaningful changes to their lives. Some suggest that developing an acceptance of the reality of death may help in the process. Other suggestions focus less on outright resolving the crisis but more on avoiding or minimizing its negative impact. Recommendations to this end include looking after one's physical, economic, and emotional well-being as well as developing and maintaining a social network of support. The best way to avoid the crisis as much as possible may be to ensure that one's earlier crises in life are resolved.
Meaninglessness
Most theorists see meaninglessness as the central issue around which existential crises revolve. In this sense, they may be understood as crises of meaning. The issue of meaning and meaninglessness concerns various closely related questions. Understood in the widest sense, it involves the global questions of the meaning of life in general, why we are here, or for what purpose we live. Answers to this question traditionally take the form of religious explanations, for example, that the world was created by God according to His purpose and that each thing is meaningful because it plays a role for this higher purpose. This is sometimes termed cosmic meaning in contrast to the secular personal meaning an individual seeks when asking in what way their particular life is meaningful or valuable. In this personal sense, it is often connected with a practical confusion about how one should live one's life or why one should continue doing what one does. This can express itself in the feeling that one has nothing to live for or to hope for. Sometimes this is even interpreted in the sense that there is no right and wrong or good and evil. While it may be more and more difficult in the contemporary secular world to find cosmic meaning, it has been argued that to resolve the problem of meaninglessness, it is sufficient for the individual to find a secular personal meaning to hold onto.
The issue of meaninglessness becomes a problem because humans seem to have a strong desire or need for meaning. This expresses itself both emotionally and practically since goals and ideals are needed to structure one's life. The other side of the problem is given in the fact that there seems to be no such meaning or that the world is at its bottom contingent and could have existed in a very different way or not at all. The world's contingency and indifference to human affairs are often referred to as the absurd in the existentialist literature. The problem can be summarized through the question "How does a being who needs meaning find meaning in a universe that has no meaning?". Various practitioners of existential psychotherapy have affirmed that the loss of meaning plays a role for the majority of people requiring psychotherapy and is the central issue for a significant number of them. But this loss has its most characteristic expression in existential crises.
Various factors affect whether life is experienced as meaningful, such as social relationships, religion, and thoughts about the past or future. Judgments of meaning are quite subjective. They are a form of global assessment since they take one's life as a whole into consideration. It is sometimes argued that the problem of a loss of meaning is particularly associated with modern society. This is often based on the idea that people tended to be more grounded in their immediate social environment, their profession, and their religion in premodern times.
Sources of meaning
It is usually held that humans have a need for meaning. This need may be satisfied by finding an accessible source of meaning. Religious faith can be a source of meaning and many studies demonstrate that it is associated with self-reported meaning in life. Another important source of meaning is due to one's social relationships. Lacking or losing a source of meaning, on the other hand, often leads to an existential crisis. In some cases, this change is clearly linked to a specific source of meaning that becomes inaccessible. For example, a religious person confronted with the vast extent of death and suffering may find their faith in a benevolent, omnipotent God shattered and thereby lose the ability to find meaning in life. For others, a concrete threat of imminent death, for example, due to the disruption of the social order, can have a similar effect. If the individual is unable to assimilate, reinterpret, or ignore this type of threatening information, the loss of their primary source of meaning may force them to reevaluate their system of meaning in life from the ground up. In this case, the person is entering an existential crisis, which can bring with it the need to question what other sources of meaning are accessible to them or whether there is meaning at all. Many different sources of meaning are discussed in the academic literature. Discovering such a source for oneself is often key to resolving an existential crisis. The sources discussed in the literature can be divided into altruism, dedication to a cause, creativity, hedonism, self-actualization, and finding the right attitude.
Altruism refers to the practice or attitude based on the desire to benefit others. Altruists aim to make the world a better place than they found it. This can happen in various ways. On a small scale, one may try to be kinder to the people in one's immediate social environment. It can include the effort to become aware of their problems and try to help them, directly or indirectly. But the altruistic attitude may also express itself in a less personal form towards strangers, for example, by donating money to charities. Effective altruism is an example of a contemporary movement promoting altruism and providing concrete advice on how to live altruistically. It has been argued that altruism can be a strong source of meaning in one's life. This is also reflected in the fact that altruists tend to enjoy higher levels of well-being as well as increased physical and mental health.
Dedicating oneself to a cause can act as a closely related source of meaning. In many cases, the two overlap when altruism is the primary motivation. But this is not always the case since the fascination with a cause may not be explicitly linked to the desire to benefit others. It consists in devoting oneself fully to producing something greater than oneself. A diverse set of causes can be followed this way, ranging from religious goals, political movements, or social institutions to scientific or philosophical ventures. Such causes provide meaning to one's life to the extent that one participates in the meaningfulness of the cause by working towards it and realizing it.
Creativity refers to the activity of creating something new and exciting. It can act as a source of meaning even if it is not obvious that the creation serves a specific purpose. This aspect is especially relevant in the field of art, where it is sometimes claimed that the work of art does not need an external justification since it is "its own excuse for being". It has been argued that for many great artists, their keener vision of the existential dilemma of the human condition was the cause of their creative efforts. These efforts in turn may have served them as a form of therapy. But creativity is not limited to art. It can be found and practiced in many different fields, both on a big and a small scale, such as in science, cooking, gardening, writing, regular work, or romantic relationships.
The hedonistic approach can also constitute a source of meaning. It is based on the idea that a life enjoyed to the fullest extent is meaningful even if it lacks any higher overarching purpose. For this perspective, it is important that hedonism is not understood in a vulgar sense, i.e. as the pursuit of sensory pleasures characterized by a disregard of the long-term consequences. While such a lifestyle may be satisfying in certain respects, a more refined form of hedonism that includes other forms of pleasure and considers their long-term consequences is more commonly recommended in the academic literature. This wider sense also includes more subtle pleasures such as looking at fine art or engaging in a stimulating intellectual conversation. In this way, life can be meaningful to the individual if it is seen as a gift evoking a sense of astonishment at its miracle and a general appreciation of it.
According to the perspective of self-actualization, each human carries within themselves a potential of what they may become. The purpose of life then is to develop oneself to realize this potential and successfully doing so increases the individual's well-being and sense of meaningfulness. In this sense, just like an acorn has the potential to become an oak, so an infant has the potential to become a fully actualized adult with various virtues and skills based on their inborn talents. The process of self-actualization is sometimes understood in terms of a hierarchy: certain lower potentials have to be actualized before the actualization of higher potentials becomes possible.
Most of the approaches mentioned so far have clear practical implications in that they affect how the individual interacts with the world. The attitudinal approach, on the other hand, identifies different sources of meaning based only on taking the right attitude towards life. This concerns specifically negative situations in which one is faced with a fate that one cannot change. In existential crises, this often expresses itself in the feeling of helplessness. The idea is that in such situations one can still find meaning based on taking a virtuous or admirable attitude towards one's suffering, for example, by remaining courageous.
Whether a certain source of meaning is accessible differs from person to person. It may also depend on the stage in life one finds oneself in, similar to how different stages are often associated with different types of existential crises. It has been argued, for example, that the concern with oneself and one's own well-being found in self-actualization and hedonism tends to be associated more with earlier stages in life. The concern with others or the world at large found in altruism and the dedication to a cause, on the other hand, is more likely found in later stages in life, for example, when an older generation aims to pass on their knowledge and improve the lives of a younger generation.
Consequences, clinical manifestation, and measurement
Going through an existential crisis is associated with a variety of consequences, both for the affected individual and their social environment. On the personal level, the immediate effects are usually negative since experiencing an existential crisis is connected to stress, anxiety, and the formation of bad relationships. This can lead all the way to depression if existential crises are not resolved. On the social level, they are associated with higher divorce rates and with an increased number of people being unable to make significant positive contributions to society, for example, due to a lack of drive resulting from depression. But if resolved properly, they can also have positive effects by pushing the affected to address the underlying issue. Individuals may thereby find new sources of meaning, develop as persons, and improve their way of life. In the sophomore crisis, for example, this can happen by planning ahead and thereby making more conscious choices in how to lead one's life.
Being aware of the symptoms and consequences of existential crises on the personal level is important for psychotherapists so they can arrive at an accurate diagnosis. But this is not always easy since the symptoms usually differ from person to person. In this sense, the lack of meaning at the core of existential crises can express itself in several different ways. It may lead some to become overly adventurous and zealous. In their attempt to wrest themselves free from meaninglessness, they are desperate to indiscriminately dedicate themselves to any cause. They might do so without much concern for the concrete content of the cause or for their personal safety. It has been argued that this type of behavior is present in some hardcore activists. This may be understood as a form of defense mechanism in which the individual engages fanatically in activities in response to a deep sense of purposelessness. It can also express itself in a related but less dramatic way as compulsive activity. This may take various forms, such as workaholism or the obsessive pursuit of prestige or material acquisitions. This is sometimes referred to as false centering or inauthenticity since the activity is pursued more as a distraction and less because it is in itself fulfilling to the agent. It can provide temporary alleviation by helping the individual drain their energy, thus distracting them from the threat of meaninglessness.
Another response consists in an overt declaration of nihilism characterized by a pervasive tendency to discredit activities purported by others to have meaning. Such an individual may, for example, dismiss altruism out of hand as a disingenuous form of selfishness or see all leaders as motivated by their lust for power rather than inspired by a grand vision. In some more extreme forms of crisis, the individual's behavior may show severe forms of aimlessness and apathy, often accompanied by depression. Being unable to find good reasons for making an effort, such a person remains inactive for extended periods of time, such as staying in bed all day. If they engage in a behavior, they may do so indiscriminately without much concern for what they are doing.
Indirect factors for determining the severity of an existential crisis include job satisfaction and the quality of one's relationships. For example, physical violence or constant fighting in a relationship may be interpreted as external signs of a serious existential crisis. Various empirical studies have shown that a lack of sense of meaning in life is associated with psychopathology. Having a positive sense of meaning, on the other hand, is associated with deeply held religious beliefs, having a clear life goal, and having dedicated oneself to a cause.
Measurement
Different suggestions have been made concerning how to measure whether someone has an existential crisis, to what degree it is present, and which approach to resolving it might be promising. These methods can help therapists and counselors to understand both whether their client is going through an existential crisis and, if so, how severe their crisis is. But they can also be used by theorists in order to identify how existential crises correlate with other phenomena, such as depression, gender, or poverty.
One way to assess this is through questionnaires focusing on topics like the meaning of life, such as the Purpose in Life Test and the Life Regard Index. The Purpose in Life Test is widely used and consists of 20 items rated on a seven-point scale, such as "In life I have: (1) no goals or aims at all ... (7) very clear goals and aims" or "With regard to death, I am (1) unprepared and frightened ... (7) prepared and unafraid".
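To illustrate how such a questionnaire yields a usable measure, the following minimal sketch sums the 20 item ratings into a single total (the possible range is 20 to 140). The function name, the example responses, and the interpretive cutoffs are illustrative assumptions for this sketch rather than the published scoring manual, though cutoffs near 92 and 112 are commonly cited for the Purpose in Life Test.

    # A minimal sketch of scoring a 20-item, seven-point questionnaire such
    # as the Purpose in Life Test. The cutoffs below are assumptions based
    # on commonly cited values, not an official scoring manual.
    def score_purpose_in_life(responses: list[int]) -> tuple[int, str]:
        """Sum 20 item ratings (each 1-7) and attach a rough interpretation."""
        if len(responses) != 20:
            raise ValueError("expected exactly 20 item responses")
        if any(not 1 <= r <= 7 for r in responses):
            raise ValueError("each response must be on the 1-7 scale")
        total = sum(responses)  # possible range: 20 (low) to 140 (high)
        if total < 92:          # illustrative cutoff: low sense of purpose
            band = "low sense of purpose"
        elif total <= 112:      # illustrative cutoff: indeterminate range
            band = "indeterminate"
        else:
            band = "clear sense of purpose"
        return total, band

    # Example: a respondent answering mostly 5s and 6s scores 108,
    # which falls in the indeterminate band under these cutoffs.
    print(score_purpose_in_life([5, 6, 5, 5, 6, 5, 6, 5, 5, 6,
                                 5, 5, 6, 5, 6, 5, 5, 6, 5, 6]))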
Resolution
Since existential crises can have a crippling effect on people, it is important to find ways to resolve them. Different forms of resolution have been proposed. The right approach often depends on the type of crisis experienced. Many approaches emphasize the importance of developing a new stage of intellectual functioning in order to resolve the inner conflict. But others focus more on external changes. For example, crises related to one's sexual identity and one's level of independence may be resolved by finding a partner matching one's character and preferences. Positive indicators of marital success include having similar interests, engaging in common activities, and having a similar level of education. Crises centering around one's professional path may also be approached more externally by finding the right type of career. In this respect, important factors include that the career matches both one's interests and one's skills to avoid a job that is unfulfilling, lacks engagement, or is overwhelming.
But the more common approach aims at changing one's intellectual functioning and inner attitude. Existential psychotherapists, for example, usually try to resolve existential crises by helping the patient to rediscover meaning in their life. Sometimes this takes the form of finding a spiritual or religious purpose in life, such as dedicating oneself to an ideal or discovering God. Other approaches focus less on the idea of discovering meaning and more on the idea of creating meaning. This is based on the idea that meaning is not something independent of the agent out there but something that has to be created and maintained. However, there are also types of existentialist psychotherapy that accept the idea that the world is meaningless and try to develop the best way of coping with this fact. The different approaches to resolving the issue of meaninglessness are sometimes divided into a leap of faith, the reasoned approach, and nihilism. Another classification categorizes possible resolutions as isolation, anchoring, distraction, and sublimation. Methods from cognitive behavior therapy have also been used to treat existential crises by bringing about a change in the individual's intellectual functioning.
Leap of faith, reasoned approach, and nihilism
Since existential crises circle around the idea of being unable to find meaning in life, various resolutions focus on specifically this aspect. Sometimes three different forms of this approach are distinguished. On the one hand, the individual may perform a leap of faith and affirm a new system of meaning without a previous in-depth understanding of how secure it is as a source of meaning. Another method consists in carefully considering all the relevant factors and thereby rebuilding and justifying a new system of meaning. A third approach goes against these two by denying that there is actual meaning. It consists in accepting the meaninglessness of life and learning how to deal with it without the illusion of meaning.
A leap of faith implies committing oneself to something one does not fully understand. In the case of existential crises, the commitment involves the faith that life is meaningful even though the believer lacks a reasoned justification. This leap is motivated by the strong desire that life be meaningful and is triggered as a response to the threat that the existential crisis poses to the fulfillment of this desire. For those for whom it is psychologically possible, this may be the fastest way to bypass an existential crisis. This option may be more available to people oriented toward intuitive processing and less to people who favor a more rational approach, since it has less need for thorough reflection and introspection. It has been argued that the meaning acquired through a leap of faith may be more robust than in other cases. One reason for this is that since the faith is not based on empirical evidence, it is also less vulnerable to empirical evidence against it. Another reason concerns the flexibility of intuition to selectively disregard threatening information and to focus instead on validating cues.
More rationally inclined persons tend to focus more on a careful evaluation of the sources of meaning based on solid justification through empirical evidence. If successful, this approach has the advantage of providing the individual with a concrete and realistic understanding of how their life is meaningful. It can also constitute a very robust source of meaning if it is based on solid empirical evidence and thorough understanding. The system of meaning arrived at may be very idiosyncratic by being based on the individual's values, preferences, and experiences. On a practical level, it often leads to a more efficient realization of this meaning since the individual can focus more exclusively on this factor. If someone determines that family life is their main source of meaning, for example, they may focus more intensely on this aspect and take a less involved stance towards other areas in life, such as success at work. In comparison to the leap of faith, this approach offers more room for personal growth due to the cognitive labor in the form of reflection and introspection involved in it and the self-knowledge resulting from this process. One of the drawbacks of this approach is that it can take a considerable amount of time to complete and rid oneself of the negative psychological consequences. If successful, the foundations arrived at this way may provide a solid basis to withstand future existential crises. But success is not certain and even after a prolonged search, the individual might still be unable to identify a significant source of meaning in their life.
If the search for meaning in either way fails, there is still another approach to resolving the issue of meaninglessness in existential crises: to find a way to accept that life is meaningless. This position is usually referred to as nihilism. One can distinguish a local and a global version of this approach, depending on whether the denial of meaningfulness is only directed at a certain area of life or at life as a whole. It becomes necessary if the individual arrives at the justifiable conclusion that life is, after all, meaningless. This conclusion may be intolerable initially, since humans seem to have a strong desire to lead a meaningful life, sometimes referred to as the will to meaning. Some theorists, such as Viktor Frankl, see this desire even as the primary motivation of all individuals. One difficulty with this negative stance towards meaning is that it seems to provide very little practical guidance in how to live one's life. So even if an individual has resolved their existential crises this way, they may still lack an answer to the question of what they should do with their life. Positive aspects of this stance include that it can lead to a heightened sense of freedom by being unbound from any predetermined purpose. It also exemplifies the virtue of truthfulness by being able to acknowledge an inconvenient truth instead of escaping into the convenient illusion of meaningfulness.
Isolation, anchoring, distraction, and sublimation
According to Peter Wessel Zapffe, life is essentially meaningless but this does not mean that we are automatically doomed to unresolvable existential crises. Instead, he identifies four ways of dealing with this fact without falling into an existential depression: isolation, anchoring, distraction, and sublimation. Isolation involves a dismissal of destructive thoughts and feelings from consciousness. Physicians and medical students, for example, may adopt a detached and technical stance in order to better deal with the tragic and disgusting aspects of their vocation. Anchoring involves a dedication to certain values and practical commitments that give the individual a sense of assurance. This often happens collectively, for example, through devotion to a common religion, but it can also happen individually. Distraction is a more temporary form of withdrawing one's attention from the meaninglessness of certain life situations that do not provide any significant contributions to the construction of our self. Sublimation is the rarest of these mechanisms. Its essential characteristic setting it apart from the other mechanisms is that it uses the pain of living and transforms it into a work of art or another creative expression.
Cognitive behavioral therapy and social perspective-taking
Some approaches from the field of cognitive behavioral therapy adjust and employ treatments for depression to resolve existential crises. One fundamental idea in cognitive behavioral therapy is that various psychological problems arise due to inaccurate core beliefs about oneself, such as beliefs that one is worthless, helpless, or incompetent. These problematic core beliefs may lie dormant for extended periods. But when activated by certain life events, they may express themselves in the form of recurrent negative and damaging thoughts. This can lead, among other things, to depression. Cognitive behavioral therapy then consists in raising the affected person's awareness of these toxic thought patterns and the underlying core beliefs while training them to change both. This can happen by focusing on one's immediate present, being goal-oriented, role-playing, or conducting behavioral experiments.
A closely related method employs the practice of social perspective-taking. Social perspective-taking involves the ability to assess one's situation and character from the point of view of a different individual. This enables people to step outside their own immediate perspective, take into consideration how others see them, and thus reach a more integrated perspective.
Unresolved crises
Existential crises sometimes pass even if the underlying issue is not resolved. This may happen, for example, if the issue is pushed into the background by other concerns and thus remains present only in a masked or dormant state. But even in this state, it may have unconscious effects on how people lead their lives, such as in their career choices. It can also increase the likelihood of suffering another existential crisis later in life and might make resolving these later crises more difficult. It has been argued that many existential crises in contemporary society are not resolved. The reason for this may be a lack of clear awareness of the nature, importance, and possible treatments of existential crises.
Cultural context
In the 19th century, Thomas Carlyle wrote of how the loss of faith in God results in an existential crisis which he called the "Centre of Indifference", wherein the world appears cold and unfeeling and the individual considers himself to be without worth. Søren Kierkegaard considered that angst and existential despair would appear when an inherited or borrowed world-view (often of a collective nature) proved unable to handle unexpected and extreme life-experiences. Friedrich Nietzsche extended his views to suggest that the death of God—the loss of collective faith in religion and traditional morality—created a more widespread existential crisis for the philosophically aware.
Existential crisis has indeed been seen as the inevitable accompaniment of modernism (1890–1945). Whereas Émile Durkheim saw individual crises as the by-product of social pathology and a (partial) lack of collective norms, others have seen existentialism as arising more broadly from the modernist crisis of the loss of meaning throughout the modern world (M. Hardt and K. Weeks, The Jameson Reader, 2000, p. 265). Its twin answers were either a religion revivified by the experience of anomie (as with Martin Buber), or an individualistic existentialism based on facing directly the absurd contingency of human fate within a meaningless and alien universe, as with Sartre and Camus.
Irvin Yalom, an emeritus professor of psychiatry at Stanford University, has made fundamental contributions to the field of existential psychotherapy. Rollo May is another of the founders of this approach.
Fredric Jameson has suggested that postmodernism, with its saturation of social space by a visual consumer culture, has replaced the modernist angst of the traditional subject, and with it the existential crisis of old, by a new social pathology of flattened affect and a fragmented subject.
Historical context
Existential crises are often seen as a phenomenon associated specifically with modern society. One important factor in this context is that various sources of meaning, such as religion or being grounded in one's local culture and immediate social environment, are less important in the contemporary context.
Another factor in modern society is that individuals are faced with a daunting number of decisions to make and alternatives to choose from, often without any clear guidelines on how to make these choices. The great difficulty of finding the best alternative, together with the importance of doing so, is a frequent cause of anxiety and may lead to an existential crisis. For example, it was very common for a long time in history for a son to simply follow his father's profession. In contrast to this, the modern schooling system presents students with different areas of study and interest, thereby opening a wide range of career opportunities to them. The problem brought about by this increased freedom is sometimes referred to as the agony of choice. The increased difficulty is described in Barry Schwartz's law, which links the costs, time, and energy needed to make a well-informed choice to the number of alternatives available.
See also
Absurdism
Why there is anything at all
Antinatalism
"Dark Night of the Soul"
Depersonalization
Duḥkha
Ego death
Limit situation
Scholarly approaches to mysticism
Positive disintegration
The Sickness unto Death
Spiritual crisis
References
Further reading
J. Watson, Caring Science as Sacred Science, 2005. Chapter 4: "Existential Crisis in Science and Human Sciences".
T. M. Cousineau, A. Seibring, M. T. Barnard, "Making meaning of infertility: Existential crisis or personal transformation?" (P-673), Fertility and Sterility, 2006.
Sanders, Marc, Existential Depression: How to Recognize and Cure Life-Related Sadness in Gifted People, 2013.
External links
Alan Watts on meaningless life, and its resolution
Crisis
Personal life
Philosophy of life
Popular psychology
Psychological concepts
Psychotherapy
Religion and mental health
Suffering
Adaptive behavior
Adaptive behavior is behavior that enables a person (usually used in the context of children) to cope in their environment with greatest success and least conflict with others. This is a term used in the areas of psychology and special education. Adaptive behavior relates to everyday skills or tasks that the "average" person is able to complete, similar to the term life skills.
Nonconstructive or disruptive social or personal behaviors can sometimes be used to achieve a constructive outcome. For example, a constant repetitive action could be re-focused on something that creates or builds something. In other words, the behavior can be adapted to something else.
In contrast, maladaptive behavior is a type of behavior that is often used to reduce one's anxiety, but the result is dysfunctional and non-productive coping. For example, avoiding situations because of unrealistic fears may initially reduce anxiety, but it is non-productive in alleviating the actual problem in the long term. Maladaptive behavior is frequently used as an indicator of abnormality or mental dysfunction, since its assessment is relatively free from subjectivity. However, many behaviors considered moral can be maladaptive, such as dissent or abstinence.
Adaptive behavior reflects an individual's social and practical competence to meet the demands of everyday living.
Behavioral patterns change throughout a person's development, life settings and social constructs, evolution of personal values, and the expectations of others. It is important to assess adaptive behavior in order to determine how well an individual functions in daily life: vocationally, socially and educationally.
Examples
A child born with cerebral palsy will most likely have a form of hemiparesis or hemiplegia (the weakening, or loss of use, of one side of the body). In order to adapt to one's environment, the child may use these limbs as helpers, in some cases even adapt the use of their mouth and teeth as a tool used for more than just eating or conversation.
Frustration from lack of the ability to verbalize one's own needs can lead to tantrums. In addition, it may lead to the use of signs or sign language to communicate needs.
Core problems
Limitations in self-care skills and social relationships, as well as behavioral excesses, are common characteristics of individuals with mental disabilities. Individuals with mental disabilities—who require extensive supports—are often taught basic self-care skills such as dressing, eating, and hygiene. Direct instruction and environmental supports, such as added prompts and simplified routines, are necessary to ensure that deficits in these adaptive areas do not limit one's quality of life.
Most children with milder forms of mental disabilities learn how to take care of their basic needs, but they often require training in self-management skills to achieve the levels of performance necessary for eventual independent living. Making and sustaining personal relationships present significant challenges for many persons with mental disabilities. Limited cognitive processing skills, poor language development, and unusual or inappropriate behaviors can seriously impede interactions with others. Teaching students with mental disabilities appropriate social and interpersonal skills is an important function of special education. Students with mental disabilities often exhibit more behavior problems than students who do not have similar disabilities, such as difficulty accepting criticism, limited self-control, and inappropriate behaviors. The greater the severity of the mental disabilities, generally the higher the incidence of behavioral problems.
Problems with assessing long-term and short-term adaptation
One problem with assessments of adaptive behavior is that a behavior that appears adaptive in the short run can be maladaptive in the long run and vice versa. For example, in the case of a group whose rules insist on drinking harmful amounts of alcohol, both abstinence and moderate drinking (moderate as defined by actual health effects, not by socially constructed rules) may seem maladaptive if assessments are strictly short term, but an assessment that focuses on long-term survival would instead find such behavior adaptive and find that it was obedience to the drinking rule that was maladaptive. Such differences between short-term and long-term effects, in the context of harmful consequences of short-term compliance with destructive rules, are argued by some researchers to show that assessments of adaptive behavior are not as unproblematic as is often assumed by psychiatry.
Adaptive behaviors in education
In education, adaptive behavior is defined as that which (1) meets the needs of the community of stakeholders (parents, teachers, peers, and later employers) and (2) meets the needs of the learner, now and in the future. Specifically, these behaviors include such things as effective speech, self-help, using money, cooking, and reading, for example.
Training in adaptive behavior is a key component of any educational program, but is critically important for children with special needs. The US Department of Education has allocated billions of dollars ($12.3 billion in 2008) for special education programs aimed at improving educational and early intervention outcomes for children with disabilities.
In 2001, the United States National Research Council published a comprehensive review of interventions for children and adults diagnosed with autism. The review indicates that interventions based on applied behavior analysis have been effective with these groups.
Adaptive behavior includes socially responsible and independent performance of daily activities. However, the specific activities and skills needed may differ from setting to setting. While a student is attending school, school and academic skills are adaptive. However, some of those same skills might be useless or maladaptive in a job setting, so the transition between school and job needs careful attention.
Specific skills
Adaptive behavior includes the age-appropriate behaviors necessary for people to live independently and to function safely and appropriately in daily life. Adaptive behaviors include life skills such as grooming, dressing, safety, food handling, working, money management, cleaning, making friends, social skills, and the personal responsibility expected of their age, social group and wealth group. Specifically relevant are community access skills and peer access and retention skills, and behaviors which act as barriers to such access. These are itemised below.
Community access skills
Bus riding
Independent walking
Coin summation
Ordering food in a restaurant
Vending machine use
Eating in public places
Pedestrian safety
Peer access and retention
Clothing selection skills
Appropriate mealtime behaviors
Toy play skills and playful activities
Oral hygiene and tooth brushing
Soccer play
Adaptive behaviors are considered to change due to a person's culture and surroundings. Professors have to delve into a student's technical and comprehension skills to measure how adaptive their behavior is.
Barriers to access to peers and communities
Diurnal bruxism
Controlling rumination and vomiting
Pica
Adaptive skills
Every human being must learn a set of skills that is beneficial for the environments and communities they live in. Adaptive skills are stepping stones toward accessing and benefiting from local or remote communities. This means that, in urban environments, to go to the movies, a child will have to learn to navigate through the town or take the bus, read the movie schedule, and pay for the movie. Adaptive skills allow for safer exploration because they provide the learner with an increased awareness of their surroundings and of changes in context that require new adaptive responses to meet the demands and dangers of that new context. Adaptive skills may generate more opportunities to engage in meaningful social interactions and acceptance. Adaptive skills are socially acceptable and desirable at any age and regardless of gender (with the exception of sex-specific biological differences such as menstrual care skills).
Learning adaptive skills
Adaptive skills encompass a range of daily situations and they usually start with a task analysis. The task analysis will reveal all the steps necessary to perform the task in the natural environment. The use of behavior-analytic procedures with children, adolescents, and adults has been documented under the guidance of behavior analysts and supervised behavioral technicians. The list of applications has a broad scope and is in continuous expansion as more research is carried out in applied behavior analysis (see Journal of Applied Behavior Analysis, The Analysis of Verbal Behavior).
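To make the idea of a task analysis concrete, the sketch below represents one as an ordered list of steps with per-step mastery, as might guide a forward-chaining teaching procedure. The particular steps, names, and data layout are hypothetical illustrations, not a published protocol.

    # A minimal sketch of a task analysis: an ordered list of steps with
    # per-step mastery, which could guide a forward-chaining procedure.
    # The steps and identifiers below are hypothetical.
    HANDWASHING_STEPS = [
        "turn on the tap",
        "wet both hands",
        "apply soap",
        "rub hands together for 20 seconds",
        "rinse off the soap",
        "turn off the tap",
        "dry hands with a towel",
    ]

    def next_step_to_teach(mastered: set[str], steps: list[str]) -> str | None:
        """Return the first step in the chain not yet mastered, or None."""
        for step in steps:
            if step not in mastered:
                return step
        return None  # every step mastered: the whole chain is complete

    # Example: a learner who has mastered the first two steps is next
    # taught "apply soap".
    print(next_step_to_teach({"turn on the tap", "wet both hands"},
                             HANDWASHING_STEPS))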
Practopoietic theory
According to practopoietic theory, the creation of adaptive behavior involves special, poietic interactions among different levels of system organization. These interactions are described on the basis of cybernetic theory, in particular the good regulator theorem. In practopoietic systems, lower levels of organization determine the properties of higher levels of organization, but not the other way around. This ensures that lower levels of organization (e.g., genes) always possess cybernetically more general knowledge than the higher levels of organization—knowledge at a higher level being a special case of the knowledge at the lower level. At the highest level of organization lies overt behavior. Cognitive operations lie in the middle of that hierarchy, above genes and below behavior. For behavior to be adaptive, at least three adaptive traverses are needed.
See also
Adaptive Behavior – journal
Character
Evolutionary mismatch
Vineland Social Maturity Scale
References
External links
BACB (Behavior Analyst Certification Board)
Human behavior
Behavioral concepts
Developmental psychology
Evolutionary psychology
Experiential learning
Experiential learning (ExL) is the process of learning through experience, and is more narrowly defined as "learning through reflection on doing". Hands-on learning can be a form of experiential learning, but does not necessarily involve students reflecting on their product. Experiential learning is distinct from rote or didactic learning, in which the learner plays a comparatively passive role. It is related to, but not synonymous with, other forms of active learning such as action learning, adventure learning, free-choice learning, cooperative learning, service-learning, and situated learning.
Experiential learning is often used synonymously with the term "experiential education", but while experiential education is a broader philosophy of education, experiential learning considers the individual learning process. As such, compared to experiential education, experiential learning is concerned with more concrete issues related to the learner and the learning context. Experiences "stick out" in the mind and assist with information retention.
The general concept of learning through experience is ancient. Around 350 BC, Aristotle wrote in the Nicomachean Ethics "for the things we have to learn before we can do them, we learn by doing them". But as an articulated educational approach, experiential learning is of much more recent origin. Beginning in the 1970s, David A. Kolb helped develop the modern theory of experiential learning, drawing heavily on the work of John Dewey, Kurt Lewin, and Jean Piaget.
Experiential learning has significant teaching advantages. Peter Senge, author of The Fifth Discipline (1990), states that teaching is of utmost importance for motivating people; learning has good effects only when learners have the desire to absorb the knowledge. Therefore, experiential learning requires giving learners direction.
Experiential learning entails a hands-on approach to learning that moves away from just the teacher at the front of the room imparting and transferring their knowledge to students. It makes learning an experience that moves beyond the classroom and strives to bring a more involved way of learning.
Kolb's experiential learning model
Experiential learning focuses on the learning process for the individual. One example of experiential learning is going to the zoo and learning through observation and interaction with the zoo environment, as opposed to reading about animals from a book. Thus, one makes discoveries and experiments with knowledge firsthand, instead of hearing or reading about others' experiences. Likewise, in business school, internship, and job-shadowing, opportunities in a student's field of interest can provide valuable experiential learning which contributes significantly to the student's overall understanding of the real-world environment.
A third example of experiential learning involves learning how to ride a bike, a process which can illustrate the four-step experiential learning model (ELM) as set forth by Kolb and outlined in Figure 1 below. Following this example, in the "concrete experience" stage, the learner physically interacts with the bike in the "here and now". This experience forms "the basis for observation and reflection" and the learner has the opportunity to consider what is working or failing (reflective observation), formulate a generalized theory or idea about riding a bike in general (abstract conceptualization) and to think about ways to improve on the next attempt made at riding (active experimentation). Every new attempt to ride is informed by a cyclical pattern of previous experience, thought and reflection.
Figure 1 – David Kolb's Experiential Learning Model (ELM)
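To make the cyclical structure of the model concrete, the following sketch steps through the four stages in order, with each full pass informing the next, mirroring the bike-riding example above. The stage names follow Kolb, but the code itself is only a loose illustrative analogy, not part of the model.

    # An illustrative analogy for Kolb's four-stage cycle: each pass
    # through the loop feeds what was learned into the next attempt.
    from itertools import cycle

    KOLB_STAGES = [
        "concrete experience",         # e.g. attempting to ride the bike
        "reflective observation",      # noticing what worked and what failed
        "abstract conceptualization",  # forming a general idea about balance
        "active experimentation",      # planning the next, adjusted attempt
    ]

    def run_learning_cycle(passes: int) -> None:
        stages = cycle(KOLB_STAGES)
        for step in range(passes * len(KOLB_STAGES)):
            print(f"pass {step // len(KOLB_STAGES) + 1}: {next(stages)}")

    run_learning_cycle(passes=2)  # two full passes through the cycle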
Elements
Experiential learning can occur without a teacher and relates solely to the meaning-making process of the individual's direct experience. However, though the gaining of knowledge is an inherent process that occurs naturally, a genuine learning experience requires certain elements. According to Kolb, knowledge is continuously gained through both personal and environmental experiences. Kolb states that in order to gain genuine knowledge from an experience, the learner must have four abilities:
The learner must be willing to be actively involved in the experience;
The learner must be able to reflect on the experience;
The learner must possess and use analytical skills to conceptualize the experience; and
The learner must possess decision making and problem solving skills in order to use the new ideas gained from the experience.
Implementation
Experiential learning requires self-initiative, an "intention to learn" and an "active phase of learning". Kolb's cycle of experiential learning can be used as a framework for considering the different stages involved. Jennifer A. Moon has elaborated on this cycle to argue that experiential learning is most effective when it involves: 1) a "reflective learning phase" 2) a phase of learning resulting from the actions inherent to experiential learning, and 3) "a further phase of learning from feedback". This process of learning can result in "changes in judgment, feeling or skills" for the individual and can provide direction for the "making of judgments as a guide to choice and action".
Most educators understand the important role experience plays in the learning process. The role of emotion and feelings in learning from experience has been recognised as an important part of experiential learning. While those factors may improve the likelihood of experiential learning occurring, it can occur without them. Rather, what is vital in experiential learning is that the individual is encouraged to directly involve themselves in the experience, and then to reflect on their experiences using analytic skills, in order that they gain a better understanding of the new knowledge and retain the information for a longer time.
Reflection is a crucial part of the experiential learning process, and like experiential learning itself, it can be facilitated or independent. Dewey wrote that "successive portions of reflective thought grow out of one another and support one another", creating a scaffold for further learning, and allowing for further experiences and reflection. This reinforces the fact that experiential learning and reflective learning are iterative processes, and the learning builds and develops with further reflection and experience. Facilitation of experiential learning and reflection is challenging, but "a skilled facilitator, asking the right questions and guiding reflective conversation before, during, and after an experience, can help open a gateway to powerful new thinking and learning". Jacobson and Ruddy, building on Kolb's four-stage Experiential Learning Model and Pfeiffer and Jones's five stage Experiential Learning Cycle, took these theoretical frameworks and created a simple, practical questioning model for facilitators to use in promoting critical reflection in experiential learning. Their "5 Questions" model is as follows:
Did you notice?
Why did that happen?
Does that happen in life?
Why does that happen?
How can you use that?
These questions are posed by the facilitator after an experience, and gradually lead the group towards a critical reflection on their experience, and an understanding of how they can apply the learning to their own life. Although the questions are simple, they allow a relatively inexperienced facilitator to apply the theories of Kolb, Pfeiffer, and Jones, and deepen the learning of the group.
While it is the learner's experience that is most important to the learning process, it is also important not to forget the wealth of experience a good facilitator also brings to the situation. However, while a facilitator, or "teacher", may improve the likelihood of experiential learning occurring, a facilitator is not essential to experiential learning. Rather, the mechanism of experiential learning is the learner's reflection on experiences using analytic skills. This can occur without the presence of a facilitator, meaning that experiential learning is not defined by the presence of a facilitator. Yet, by considering experiential learning in developing course or program content, it provides an opportunity to develop a framework for adapting varying teaching/learning techniques into the classroom.
In schools
Experiential learning is supported in different school organizational models and learning environments.
Hyper Island is a global, constructivist school originally from Sweden, with a range of school and executive education programs grounded in experience-based learning, and with reflection taught as key skill to learn for life.
THINK Global School is a four-year traveling high school that holds classes in a new country each term. Students engage in experiential learning through activities such as workshops, cultural exchanges, museum tours, and nature expeditions.
The Dawson School in Boulder, Colorado, devotes two weeks of each school year to experiential learning, with students visiting surrounding states to engage in community service, visit museums and scientific institutions, and engage in activities such as mountain biking, backpacking, and canoeing.
In the ELENA project, the follow-up to the "animals live" project, experiential learning with living animals is being developed. Together with project partners from Romania, Hungary, and Georgia, the Bavarian Academy of Nature Conservation and Landscape Management in Germany brings living animals into the lessons of European schools. The aim is to introduce children to the context of biological diversity and to support them in developing ecologically oriented values.
Loving High School in Loving, New Mexico, offers career and technical education opportunities for students, including internships for those interested in science, STEM majors, or architecture. The school maintains strong connections with local businesses, which helps students get used to working in such environments.
The Work Experience Builders project connects work to learning by helping students gain real-world work experience and experiential knowledge within a mentored, project-based learning environment.
Chicago Public Schools operates eight early college STEM high schools through its Early College STEM School Initiative. The eight high schools offer four-years of computer science classes to every student. Additionally, students are able to earn college credits from local community colleges. Each school partners with a technology company which offers students internships and mentors from the company to expose students to jobs in STEM fields.
Robert H. Smith School of Business offers select undergraduate students a year-round advanced course whereby students conduct financial analyses and security trades to manage real investment dollars in the Lemma Senbet Fund.
Nonprofits such as Out Teach, Life Lab, Nature Explore, and the National Wildlife Federation, provide training for teachers on how to use outdoor spaces for experiential learning.
Many European schools take part in intercultural educational programs, such as the European Youth Parliament, which uses experience-based learning to promote intercultural understanding among young students, through indoor and outdoor activities, discussions and debates.
In business education
As higher education continues to adapt to new expectations from students, experiential learning in business and accounting programs has become more important. For example, Clark & White (2010) point out that "a quality university business education program must include an experiential learning component". With reference to this study, employers note that graduating students need to build skills in "professionalism" – which can be taught via experiential learning. Students value this learning as much as industry. Phan & Ninh (2024) also highlight that experiential learning through company field trips plays a critical role in enhancing students' understanding of CSR and encourages them to share CSR-related content with others.
Learning styles also impact business education in the classroom. Kolb positions four learning styles, Diverger, Assimilator, Accommodator and Converger, atop the Experiential Learning Model, using the four experiential learning stages to carve out "four quadrants", one for each learning style. An individual's dominant learning style can be identified by taking Kolb's Learning Style Inventory (LSI). More recent researchers have argued that learning styles are a neuromyth, and that categorising learners according to styles is unhelpful and inaccurate.
Robert Loo (2002) undertook a meta-analysis of 8 studies which revealed that Kolb's learning styles were not equally distributed among business majors in the sample. More specifically, results indicated that there appears to be a high proportion of assimilators and a lower proportion of accommodators than expected for business majors. Not surprisingly, within the accounting sub-sample there was a higher proportion of convergers and a lower proportion of accommodators. Similarly, in the finance sub-sample, a higher proportion of assimilators and lower proportion of divergers was apparent. Within the marketing sub-sample there was an equal distribution of styles. This would provide some evidence to suggest that while it is useful for educators to be aware of common learning styles within business and accounting programs, they should be encouraging students to use all four learning styles appropriately and students should use a wide range of learning methods.
Professional education applications, also known as management training or organizational development, apply experiential learning techniques in training employees at all levels within the business and professional environment. Interactive, role-play based customer service training is often used in large retail chains. Training board games simulating business and professional situations such as the Beer Distribution Game used to teach supply chain management, and the Friday Night at the ER game used to teach systems thinking, are used in business training efforts.
In business
Experiential business learning is the process of learning and developing business skills through the medium of shared experience. The main point of difference between this and academic learning is more “real-life” experience for the recipient.
This may include, for example, learning gained from a network of business leaders sharing best practice, or individuals being mentored or coached by a person who has faced similar challenges and issues, or simply listening to an expert or thought leader in current business thinking.
Providers of this type of experiential business learning often include membership organisations who offer product offerings such as peer group learning, professional business networking, expert/speaker sessions, mentoring and/or coaching.
Comparisons
Experiential learning is most easily compared with academic learning, the process of acquiring information through the study of a subject without the necessity for direct experience. While the dimensions of experiential learning are analysis, initiative, and immersion, the dimensions of academic learning are constructive learning and reproductive learning. Though both methods aim to instill new knowledge in the learner, academic learning does so through more abstract, classroom-based techniques, whereas experiential learning actively involves the learner in a concrete experience.
Benefits
Experience of the real world: For example, students who major in chemistry may have chances to interact with the chemical environment. Learners who desire to become businesspeople will have the opportunity to experience a managerial role.
Improved on-the-job performance: For example, municipal bus drivers trained via high-fidelity simulation training (instead of just classroom training) showed significant decreases in accidents and fuel consumption.
Opportunities for creativity: There is always more than one solution for a problem in the real world. Students will have a better chance to learn that lesson when they get to interact with real-life experiences.
See also
People
Jean Marc Gaspard Itard
Édouard Séguin
Johann Heinrich Pestalozzi
Friedrich Fröbel
Emile Jaques-Dalcroze - Swiss musician, composer and pedagogue
Subjects
4-H
Active learning
Apprenticeship
Context-based learning
Contextual learning
Dual education system
Learning by doing
Learning by teaching
Trial and error
Vocational education
References
Alternative education
Learning methods
Experiential learning schools
DIKW pyramid
The DIKW pyramid, also known variously as the DIKW hierarchy, wisdom hierarchy, knowledge hierarchy, information hierarchy, information pyramid, and the data pyramid, refers to a class of models representing purported structural or functional relationships between data, information, knowledge, and wisdom. It claims that deep understanding of a subject emerges through four qualitative stages: data, information, knowledge, and wisdom.
Not all versions of the DIKW model reference all four components (earlier versions not including data, later versions omitting or downplaying wisdom) and some include additional components. In addition to a hierarchy and a pyramid, the DIKW model has also been characterized as a chain, framework, and continuum.
History
Danny P. Wallace, a professor of library and information science, explained that the origin of the DIKW pyramid is uncertain:
The presentation of the relationships among data, information, knowledge, and sometimes wisdom in a hierarchical arrangement has been part of the language of information science for many years. Although it is uncertain when and by whom those relationships were first presented, the ubiquity of the notion of a hierarchy is embedded in the use of the acronym DIKW as a shorthand representation for the data-to-information-to-knowledge-to-wisdom transformation.
Many authors think that the idea of the DIKW relationship originated from two lines in the poem "Choruses", by T. S. Eliot, that appeared in the pageant play The Rock, in 1934:
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
Knowledge, intelligence, and wisdom
In 1927, Clarence W. Barron addressed his employees at Dow Jones & Company on the hierarchy: "Knowledge, Intelligence and Wisdom".
Data, information, knowledge
In 1955, English-American economist and educator Kenneth Boulding presented a variation on the hierarchy consisting of "signals, messages, information, and knowledge". However, "[t]he first author to distinguish among data, information, and knowledge and to also employ the term 'knowledge management' may have been American educator Nicholas L. Henry", in a 1974 journal article.
Data, information, knowledge, wisdom
Other early versions (prior to 1982) of the hierarchy that refer to a data tier include those of Chinese-American geographer Yi-Fu Tuan and sociologist-historian Daniel Bell. In 1980, Irish-born engineer Mike Cooley invoked the same hierarchy in his critique of automation and computerization, in his book Architect or Bee?: The Human/Technology Relationship.
Thereafter, in 1987, Czechoslovakia-born educator Milan Zeleny mapped the elements of the hierarchy to knowledge forms: know-nothing, know-what, know-how, and know-why. Zeleny "has frequently been credited with proposing the [representation of DIKW as a pyramid]... although he actually made no reference to any such graphical model."
The hierarchy appears again in a 1988 address to the International Society for General Systems Research, by American organizational theorist Russell Ackoff, published in 1989. Subsequent authors and textbooks cite Ackoff's as the "original articulation" of the hierarchy or otherwise credit Ackoff with its proposal. Ackoff's version of the model includes an understanding tier (as Adler had before him), interposed between knowledge and wisdom. Although Ackoff did not present the hierarchy graphically, he has also been credited with its representation as a pyramid.
In 1989, Bell Labs veteran Robert W. Lucky wrote about the four-tier "information hierarchy" in the form of a pyramid in his book Silicon Dreams. In the same year as Ackoff presented his address, information scientist Anthony Debons and colleagues introduced an extended hierarchy, with "events", "symbols", and "rules and formulations" tiers ahead of data. In 1994 Nathan Shedroff presented the DIKW hierarchy in an information design context.
Jennifer Rowley noted in 2007 that there was "little reference to wisdom" in discussion of the DIKW in recently published college textbooks, and does not include wisdom in her own definitions following that research. Meanwhile, Chaim Zins' extensive analysis of the conceptualizations of data, information, and knowledge, in his 2007 research study, makes no explicit commentary on wisdom, although some of the citations included by Zins do make mention of the term.
Description
The DIKW model "is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy". Reviews of textbooks and a survey of scholars in relevant fields indicate that there is not a consensus as to definitions used in the model, and even less "in the description of the processes that transform elements lower in the hierarchy into those above them".
This has led Zins to suggest that the data–information–knowledge components of DIKW refer to a class of no less than five models, as a function of whether data, information, and knowledge are each conceived of as subjective, objective (what Zins terms, "universal" or "collective") or both. In Zins' usage, subjective and objective "are not related to arbitrariness and truthfulness, which are usually attached to the concepts of subjective knowledge and objective knowledge". Information science, Zins argues, studies data and information, but not knowledge, as knowledge is an internal (subjective) rather than an external (universal–collective) phenomenon.
Data
In the context of DIKW, data is conceived of as symbols or signs, representing stimuli or signals, that are "of no use until ... in a usable (that is, relevant) form". Zeleny characterized this non-usable characteristic of data as "know-nothing".
In some cases, data is understood to refer not only to symbols, but also to signals or stimuli referred to by said symbols—what Zins terms subjective data. Where universal data, for Zins, are "the product of observation" (italics in original), subjective data are the observations. This distinction is often obscured in definitions of data in terms of "facts".
Data as fact
Rowley, following her study of DIKW definitions given in textbooks, characterizes data "as being discrete, objective facts or observations, which are unorganized and unprocessed and therefore have no meaning or value because of lack of context and interpretation." In Henry's early formulation of the hierarchy, data was simply defined as "merely raw facts", while two recent texts define data as "chunks of facts about the state of the world" and "material facts", respectively. Cleveland does not include an explicit data tier, but defines information as "the sum total of ... facts and ideas".
Insofar as facts have as a fundamental property that they are true, have objective reality, or otherwise can be verified, such definitions would preclude false, meaningless, and nonsensical data from the DIKW model, such that the principle of garbage in, garbage out would not be accounted for under DIKW.
Data as signal
In the subjective domain, data are conceived of as "sensory stimuli, which we perceive through our senses", or "signal readings", including "sensor and/or sensory readings of light, sound, smell, taste, and touch". Others have argued that what Zins calls subjective data actually count as a "signal" tier (as had Boulding), which precedes data in the DIKW chain.
American information scientist Glynn Harmon defined data as "one or more kinds of energy waves or particles (light, heat, sound, force, electromagnetic) selected by a conscious organism or intelligent agent on the basis of a preexisting frame or inferential mechanism in the organism or agent."
The meaning of sensory stimuli may also be thought of as subjective data:
Information is the meaning of these sensory stimuli (i.e., the empirical perception). For example, the noises that I hear are data. The meaning of these noises (e.g., a running car engine) is information. Still, there is another alternative as to how to define these two concepts—which seems even better. Data are sense stimuli, or their meaning (i.e., the empirical perception). Accordingly, in the example above, the loud noises, as well as the perception of a running car engine, are data. (Italics added. Bold in original.)
Subjective data, if understood in this way, would be comparable to knowledge by acquaintance, in that it is based on direct experience of stimuli. However, unlike knowledge by acquaintance, as described by Bertrand Russell and others, the subjective domain is "not related to ... truthfulness".
Whether Zins' alternate definition would hold would be a function of whether "the running of a car engine" is understood as an objective fact or as a contextual interpretation.
Data as symbol
Whether the DIKW definition of data is deemed to include Zins's subjective data (with or without meaning), data is consistently defined to include "symbols", or "sets of signs that represent empirical stimuli or perceptions", of "a property of an object, an event or of their environment". Data, in this sense, are "recorded (captured or stored) symbols", including "words (text and/or verbal), numbers, diagrams, and images (still and/or video), which are the building blocks of communication", the purpose of which "is to record activities or situations, to attempt to capture the true picture or real event," such that "all data are historical, unless used for illustrative purposes, such as forecasting."
Boulding's version of DIKW explicitly named the level below the information tier message, distinguishing it from an underlying signal tier. Debons and colleagues reverse this relationship, identifying an explicit symbol tier as one of several levels underlying data.
Zins determined that, for most of those surveyed, data "are characterized as phenomena in the universal domain". "Apparently," clarifies Zins, "it is more useful to relate to the data, information, and knowledge as sets of signs rather than as meaning and its building blocks".
Information
In the context of DIKW, information meets the definition for knowledge by description ("information is contained in descriptions"), and is differentiated from data in that it is "useful". "Information is inferred from data", in the process of answering interrogative questions (e.g., "who", "what", "where", "how many", "when"), thereby making the data useful for "decisions and/or action". "Classically," states a 2007 text, "information is defined as data that are endowed with meaning and purpose."
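As a loose illustration of the claim that information is inferred from data by answering interrogative questions, the sketch below organizes raw readings (data) so that they answer a "what/when" question. The record layout, the example values, and the function name are assumptions made for this example.

    # A minimal sketch of the data-to-information step in DIKW: raw,
    # unorganized readings become information once organized to answer
    # a question such as "what was the average temperature on a day?".
    raw_data = [  # data: isolated symbols, (ISO timestamp, degrees Celsius)
        ("2024-06-01T09:00", 18.2),
        ("2024-06-01T15:00", 24.7),
        ("2024-06-02T09:00", 17.5),
        ("2024-06-02T15:00", 23.1),
    ]

    def average_temperature_on(day: str, readings) -> float:
        """Answer a 'what/when' question: the mean reading for one day."""
        values = [t for stamp, t in readings if stamp.startswith(day)]
        return sum(values) / len(values)

    # The bare tuples above are data; the contextualized answer below
    # ("the average temperature on 1 June 2024 was 21.45 degrees") is
    # information in the DIKW sense.
    print(average_temperature_on("2024-06-01", raw_data))  # -> 21.45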
Structural vis-à-vis functional
Rowley, following her review of how DIKW is presented in textbooks, describes information as "organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant." Note that this definition contrasts with Rowley's characterization of Ackoff's definitions, wherein "[t]he difference between data and information is structural, not functional."
In his formulation of the hierarchy, Henry defined information as "data that changes us", this being a functional, rather than structural, distinction between data and information. Meanwhile, Cleveland, who did not refer to a data level in his version of DIKW, described information as "the sum total of all the facts and ideas that are available to be known by somebody at a given moment in time".
American educator Bob Boiko is more obscure, defining information only as "matter-of-fact".
Symbolic vis-à-vis subjective
Information may be conceived of in DIKW models as: universal, existing as symbols and signs; subjective, the meaning to which symbols attach; or both. Examples of information as both symbol and meaning include:
American information scientist Anthony Debons's characterization of information as representing "a state of awareness (consciousness) and the physical manifestations they form", such that "[i]nformation, as a phenomenon, represents both a process and a product; a cognitive/affective state, and the physical counterpart (product of) the cognitive/affective state."
Danish information scientist Hanne Albrechtsen's description of information as "related to meaning or human intention", either as "the contents of databases, the web, etc." (italics added) or "the meaning of statements as they are intended by the speaker/writer and understood/misunderstood by the listener/reader."
Zeleny formerly described information as "know-what", but has since refined this to differentiate between "what to have or to possess" (information) and "what to do, act or carry out" (wisdom). To this conceptualization of information, he also adds "why is", as distinct from "why do" (another aspect of wisdom). Zeleny further argues that there is no such thing as explicit knowledge, but rather that knowledge, once made explicit in symbolic form, becomes information.
Knowledge
The knowledge component of DIKW is generally agreed to be an elusive concept which is difficult to define. The DIKW definition of knowledge differs from that used by epistemology. The DIKW view is that "knowledge is defined with reference to information." Definitions may refer to information having been processed, organized or structured in some way, or else as being applied or put into action.
Zins has suggested that knowledge, being subjective rather than universal, is not the subject of study in information science, and that it is often defined in propositional terms, while Zeleny has asserted that to capture knowledge in symbolic form is to make it into information, i.e., that "All knowledge is tacit".
"One of the most frequently quoted definitions" of knowledge captures some of the various ways in which it has been defined by others:
Knowledge is a fluid mix of framed experience, values, contextual information, expert insight and grounded intuition that provides an environment and framework for evaluating and incorporating new experiences and information. It originates and is applied in the minds of knowers. In organizations it often becomes embedded not only in documents and repositories but also in organizational routines, processes, practices and norms.
Knowledge as processed
Mirroring the description of information as "organized or structured data", knowledge is sometimes described as:
"synthesis of multiple sources of information over time"
"organization and processing to convey understanding, experience [and] accumulated learning"
"a mix of contextual information, values, experience and rules"
One of Boulding's definitions for knowledge had been "a mental structure" and Cleveland described knowledge as "the result of somebody applying the refiner's fire to [information], selecting and organizing what is useful to somebody". A 2007 text describes knowledge as "information connected in relationships".
Knowledge as procedural
Zeleny defines knowledge as "know-how" (i.e., procedural knowledge), and also "know-who" and "know-when", each gained through "practical experience". "Knowledge ... brings forth from the background of experience a coherent and self-consistent set of coordinated actions." Further, implicitly holding information as descriptive, Zeleny declares that "Knowledge is action, not a description of action."
Ackoff, likewise, described knowledge as the "application of data and information", which "answers 'how' questions", that is, "know-how".
Meanwhile, textbooks discussing DIKW have been found to describe knowledge variously in terms of experience, skill, expertise or capability:
"study and experience"
"a mix of contextual information, expert opinion, skills and experience"
"information combined with understanding and capability"
"perception, skills, training, common sense and experience".
Businessmen James Chisholm and Greg Warman characterize knowledge simply as "doing things right".
Knowledge as propositional
Knowledge is sometimes described as "belief structuring" and "internalization with reference to cognitive frameworks". One definition given by Boulding for knowledge was "the subjective 'perception of the world and one's place in it'", while Zeleny said that knowledge "should refer to an observer's distinction of 'objects' (wholes, unities)".
Zins, likewise, found that knowledge is described in propositional terms, as justifiable beliefs (subjective domain, akin to tacit knowledge), and sometimes also as signs that represent such beliefs (universal/collective domain, akin to explicit knowledge). Zeleny has rejected the idea of explicit knowledge (as in Zins' universal knowledge), arguing that once made symbolic, knowledge becomes information. Boiko appears to echo this sentiment, in his claim that "knowledge and wisdom can be information".
In the subjective domain:
Knowledge is a thought in the individual's mind, which is characterized by the individual's justifiable belief that it is true. It can be empirical and non-empirical, as in the case of logical and mathematical knowledge (e.g., "every triangle has three sides"), religious knowledge (e.g., "God exists"), philosophical knowledge (e.g., "Cogito ergo sum"), and the like. Note that knowledge is the content of a thought in the individual's mind, which is characterized by the individual's justifiable belief that it is true, while "knowing" is a state of mind which is characterized by the three conditions: (1) the individual believe[s] that it is true, (2) S/he can justify it, and (3) It is true, or it [appears] to be true. (Italics added. Bold in original.)
The distinction here between subjective knowledge and subjective information is that subjective knowledge is characterized by justifiable belief, where subjective information is a type of knowledge concerning the meaning of data.
Boiko implied that knowledge is open to both rational discourse and justification when he defined knowledge as "a matter of dispute".
Wisdom
Although commonly included as a level in DIKW, "there is limited reference to wisdom" in discussions of the model. Boiko appears to have dismissed wisdom, characterizing it as "non-material".
Ackoff refers to understanding as an "appreciation of 'why'", and wisdom as "evaluated understanding", where understanding is posited as a discrete layer between knowledge and wisdom. Adler had previously also included an understanding tier, while other authors have depicted understanding as a dimension in relation to which DIKW is plotted.
Cleveland described wisdom simply as "integrated knowledge—information made super-useful". Other authors have characterized wisdom as "knowing the right things to do" and "the ability to make sound judgments and decisions apparently without thought".
Wisdom involves using knowledge for the greater good. Because of this, wisdom is deeper and more uniquely human. It requires a sense of good and bad, right and wrong, ethical and unethical.
Zeleny described wisdom as "know-why", but later refined his definitions, so as to differentiate "why do" (wisdom) from "why is" (information), and expanding his definition to include a form of know-what ("what to do, act or carry out"). According to Nikhil Sharma, Zeleny has argued for a tier to the model beyond wisdom, termed "enlightenment".
Representations
Graphical representation
DIKW is a hierarchical model often depicted as a pyramid, with data at its base and wisdom at its apex. In this regard it is similar to Maslow's hierarchy of needs, in that each level of the hierarchy is argued to be an essential precursor to the levels above. Unlike Maslow's hierarchy, which describes relationships of priority (lower levels are focused on first), DIKW describes purported structural or functional relationships (lower levels comprise the material of higher levels). Both Zeleny and Ackoff have been credited with originating the pyramid representation, although neither used a pyramid to present their ideas.
DIKW has also been represented as a two-dimensional chart or as one or more flow diagrams. In such cases, the relationships between the elements may be presented as less hierarchical, with feedback loops and control relationships.
Debons and colleagues may have been the first to "present the hierarchy graphically".
Throughout the years many adaptations of the DIKW pyramid have been produced. One evolving adaptation, in use by knowledge managers in the United States Department of Defense, attempts to show the progression from data to information to knowledge and finally to wisdom in support of effective decisions, as well as the activities involved in ultimately creating shared understanding throughout the organization and managing decision risk.
Computational representation
Intelligent decision support systems attempt to improve decision-making by introducing new technologies and methods from the domain of modeling and simulation in general, and in particular from the domain of intelligent software agents, in the context of agent-based modeling.
The following example describes a military decision support system, but the architecture and underlying conceptual idea are transferable to other application domains:
The value chain starts with data quality, describing the data within the underlying command and control systems.
Information quality tracks the completeness, correctness, currency, consistency and precision of the data items and information statements available.
Knowledge quality deals with procedural knowledge and information embedded in the command and control system such as templates for adversary forces, assumptions about entities such as ranges and weapons, and doctrinal assumptions, often coded as rules.
Awareness quality measures the degree to which the information and knowledge embedded within the command and control system are used. Awareness is explicitly placed in the cognitive domain.
Through the introduction of a common operational picture, data are put into context, which turns data into information. The next step, enabled by service-oriented web-based infrastructures (but not yet operationally used), is the use of models and simulations for decision support. Simulation systems are the prototype for procedural knowledge, which is the basis for knowledge quality. Finally, by using intelligent software agents to continually observe the battle sphere, apply models and simulations to analyse what is going on, monitor the execution of a plan, and perform all the tasks necessary to make the decision maker aware of what is going on, command and control systems could even support situational awareness, the level in the value chain traditionally limited to purely cognitive methods.
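To make the tiered value chain concrete, the following Python fragment is a minimal sketch, not a reconstruction of any actual command and control system: the class names (DataItem, Information, Knowledge), the attribute names, and the example rule with its threshold are all hypothetical, chosen only to illustrate how each tier wraps the one below it and adds context.

from dataclasses import dataclass, field

@dataclass
class DataItem:
    """A raw symbol or signal reading, e.g. a sensor value (data tier)."""
    value: float
    source: str  # hypothetical sensor identifier

@dataclass
class Information:
    """Data put into context by answering interrogative questions
    (what, where, when), as the information tier requires."""
    item: DataItem
    what: str
    where: str
    when: str

@dataclass
class Knowledge:
    """Procedural knowledge, which the text notes is often coded as rules."""
    rules: list = field(default_factory=list)

    def assess(self, info: Information) -> str:
        # Apply each rule in turn; the first rule returning a
        # non-None interpretation supplies the assessment.
        for rule in self.rules:
            result = rule(info)
            if result is not None:
                return result
        return "no interpretation"

# Usage: a doctrinal assumption coded as a rule (hypothetical threshold).
def fast_mover(info: Information):
    return "fast mover" if info.item.value > 200 else None

knowledge = Knowledge(rules=[fast_mover])
reading = DataItem(value=250.0, source="radar_07")
in_context = Information(reading, "vehicle speed", "sector B", "12:00Z")
print(knowledge.assess(in_context))  # -> fast mover

An awareness tier would, as the text notes, sit in the cognitive domain: software agents might invoke assess continually and surface the results to a decision maker, but this sketch stops at knowledge quality.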
Criticisms
Rafael Capurro, a philosopher based in Germany, argues that data is an abstraction, information refers to "the act of communicating meaning", and knowledge "is the event of meaning selection of a (psychic/social) system from its 'world' on the basis of communication". As such, any impression of a logical hierarchy between these concepts "is a fairytale".
One objection offered by Zins is that, while knowledge may be an exclusively cognitive phenomenon, the difficulty in pointing to a given fact as being distinctively information or knowledge, but not both, makes the DIKW model unworkable.
[I]s Albert Einstein's famous equation "E = mc²" (which is printed on my computer screen, and is definitely separated from any human mind) information or knowledge? Is "2 + 2 = 4" information or knowledge?
Alternatively, information and knowledge might be seen as synonyms. In answer to these criticisms, Zins argues that, subjectivist and empiricist philosophy aside, "the three fundamental concepts of data, information, and knowledge and the relations among them, as they are perceived by leading scholars in the information science academic community", have meanings open to distinct definitions. Rowley echoes this point in arguing that, where definitions of knowledge may disagree, "[t]hese various perspectives all take as their point of departure the relationship between data, information and knowledge."
American philosophers John Dewey and Arthur Bentley, in their 1949 book Knowing and the Known, argued that "knowledge" was "a vague word", and presented a complex alternative to DIKW including some nineteen "terminological guide-posts".
Information processing theory argues that the physical world is made of information itself. Under this definition, data is either made up of or synonymous with physical information. It is unclear, however, whether information as it is conceived in the DIKW model would be considered derivative from physical-information/data or synonymous with physical information. In the former case, the DIKW model is open to the fallacy of equivocation. In the latter, the data tier of the DIKW model is preempted by an assertion of neutral monism.
Educator Martin Frické has published an article critiquing the DIKW hierarchy, in which he argues that the model is based on "dated and unsatisfactory philosophical positions of operationalism and inductivism", that information and knowledge are both weak knowledge, and that wisdom is the "possession and use of wide practical knowledge".
David Weinberger argues that although the DIKW pyramid appears to be a logical and straightforward progression, this is incorrect. "What looks like a logical progression is actually a desperate cry for help." He points out that there is a discontinuity between Data and Information (which are stored in computers) and Knowledge and Wisdom (which are human endeavours). This suggests that the DIKW pyramid is too simplistic in representing how these concepts interact. "...Knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used."
See also
, a similar graphic in the field of psychology
Inverted pyramid (journalism), a metaphor used by journalists and writers to prioritise and structure the most newsworthy info and important details over general info
Autoethnography
Autoethnography is a form of ethnographic research in which a researcher connects personal experiences to wider cultural, political, and social meanings and understandings. It is considered a form of qualitative and/or arts-based research.
Autoethnography has been used across various disciplines, including anthropology, arts education, communication studies, education, educational administration, English literature, ethnic studies, gender studies, history, human resource development, marketing, music therapy, nursing, organizational behavior, paramedicine, performance studies, physiotherapy, psychology, social work, sociology, and theology and religious studies.
Definitions
Historically, researchers have had trouble reaching a consensus regarding the definition of autoethnography. Whereas some scholars situate autoethnography within the family of narrative methods, others place it within the ethnographic tradition. However, it generally refers to research that involves critical observation of an individual's lived experiences and connects those experiences to broader cultural, political, and social concepts.
Autoethnography can refer to research in which a researcher reflexively studies a group they belong to or their subjective experience. In the 1970s, autoethnography was more narrowly defined as "insider ethnography," referring to studies of the (culture of) a group of which the researcher is a member.
According to Adams et al., autoethnography
uses a researcher's personal experience to describe and critique cultural beliefs, practices, and experiences;
acknowledges and values a researcher's relationships with others;
uses deep and careful self-reflection—typically referred to as "reflexivity"—to name and interrogate the intersections between self and society, the particular and the general, the personal and the political;
shows people in the process of figuring out what to do, how to live, and the meaning of their struggles;
balances intellectual and methodological rigor, emotion, and creativity;
strives for social justice and to make life better.
Bochner and Ellis have also defined autoethnography as "an autobiographical genre of writing and research that displays multiple layers of consciousness, connecting the personal to the cultural." They further indicate that autoethnography is typically written in first-person and can "appear in a variety of forms," such as "short stories, poetry, fiction, novels, photographic essays, personal essays, journals, fragmented and layered writing, and social science prose."
History
Mid-1800s
Anthropologists began conducting ethnographic research in the mid-1800s to study the cultures of people they deemed "exotic" and/or "primitive." Typically, these early ethnographers aimed merely to observe and write "objective" accounts of these groups to provide others a better understanding of various cultures. They also "recognized and wrestled with questions of how to render textual accounts that would provide clear, accurate, rich descriptions of cultural practices of others" and "were concerned with offering valid, reliable, and objective interpretations in their writings."
Early- to mid-1900s
In the early to mid 1900s, it became clear that observation and fieldwork interfered with the cultural groups' natural and typical behaviors. Additionally, researchers realized the role they play in analyzing others' behaviors. As such, "serious questions arose about the possibility and legitimacy of offering purely objective accounts of cultural practices, traditions, symbols, meanings, premises, rituals, rules, and other social engagements."
To help combat potential issues of validity, ethnographers began using what Gilbert Ryle refers to as thick description: a description of human social behavior in which the writer-researcher describes the behavior and provides "commentary on, context for, and interpretation of these behaviors into the text." By doing so, the researcher aims to "evoke a cultural scene vividly, in detail, and with care," so readers can understand and attempt to interpret the scene for themselves, much like in more traditional research methods.
A few ethnographers, especially those related to the Chicago school, began incorporating aspects of autoethnography into their work, such as narrated life histories. While they created more lifelike representations of their subjects than their predecessors, these researchers often "romanticized the subject" by creating narratives with "the three stages of the classic morality tale: being in a state of grace, being seduced by evil and falling from grace, and finally achieving redemption through suffering." Such researchers include Robert Park, Nels Anderson, Everett Hughes, and Fred Davis.
During this time period, new theoretical constructs, such as feminism, began to emerge, and with them qualitative research grew. However, researchers were trying to "fit the classical traditional model of internal and external validity to constructionist and interactionist conceptions of the research act."
1970s
With the growth of qualitative research from the mid-1900s, "a few scholars were urging thicker descriptions, giving more attention to concrete details of everyday life, renouncing the ethics and artificiality of experimental studies, and complaining about the obscurity of jargon and technical language, ... but social scientists, for the most part, weren't all that concerned about the researcher's location in the text, the capacity of language to accurately represent reality, or the need for researcher reflexivity."
The term autoethnography was first used in 1975, when Heider connected individuals' personal experiences to larger, cultural beliefs and traditions. In Heider's case, the individual self referred to the people he was studying rather than himself. Because the people he studied were providing their personal accounts and experiences, Heider considered the work autoethnographic.
Later in the 1970s, researchers began more clearly stating their positionality and indicating how their mere presence altered the behaviors of the groups they studied. Further, researchers distinguished between people who researched groups of which they were a part (i.e., cultural insiders) and those who researched groups of which they were not a part (i.e., cultural outsiders). At this point, the term autoethnography began to refer to forms of ethnography in which the researcher is a cultural insider.
Walter Goldschmidt proposed that all ethnography is, in some way, autobiographical, because "ethnographic representations privilege personal beliefs, perspectives, and observations." As an anthropologist, David Hayano was interested in the role that an individual's own identity had in their research. Unlike more traditional research methods, Hayano believed there was value in a researcher "conducting and writing ethnographies of their own people."
While researchers recognized the part they played in understanding a group of people, none focused explicitly on the "inclusion and importance of personal experience in research."
1980s
More generally in the 1980s, researchers began questioning and critiquing the role of the researcher, especially in the social sciences. Multiple researchers aimed to make "research and writing more reflexive and called into question the issues of gender, class, and race." As a result of these concerns, researchers purposefully inserted themselves as characters in the ethnographic narrative as a way of navigating the problem of researcher interference. Additionally, some of the predominant ways of understanding truth were eroded, and "[i]ssues such as validity, reliability, and objectivity ... were once again problematic. Pattern and interpretive theories, as opposed to causal linear theories, were now more common as writers continued to challenge older models of truth and meaning."
In addition to, and perhaps because of, the above, researchers became interested in the importance of culture and storytelling as they gradually became more engaged with the personal aspects of ethnographic practices.
In 1988, John Van Maanen noted three predominant ways ethnographers write about culture:
Realist Tales, in which the researcher uses a "dispassionate, third-person voice" and attempts to provide an "accurate" and "objective" account of the group studied without providing much researcher response
Confessional Tales, which include the researchers' "highly personalized styles" and responses to the observed data
Impressionist Tales, in which the researcher uses first-person to craft a "tightly focused, vibrant, exact, but necessarily imaginative rendering of fieldwork"
At the end of the 1980s, scholars began to apply the term autoethnography to work that used confessional and impressionist forms as they recognized that "the richness of cultural lives and life practices of others cannot be fully captured or evoked in purely objective or descriptive language."
1990s to present
In the early- to mid-1990s, researchers aimed to address the concerns raised in the previous decades regarding questions of legitimacy and reliability of ethnographic approaches. One way to do that was to directly place oneself into the research narrative, noting the positionality of the researcher. Here, the researcher could either insert themselves into the research narrative and/or increase participants' involvement in the research project, such as through participatory action research.
Autoethnography became more popular in the 1990s for ethnographers who aimed to use "personal experience and reflexivity to examine cultural experiences." Series such as Ethnographic Alternatives and the first Handbook of Qualitative Research were published to better explain the importance of autoethnographic use, and key texts focused specifically on autoethnography were published, including Carolyn Ellis's Investigating Subjectivity, Final Negotiations, The Ethnographic I, and Revision, as well as Art Bochner's Coming to Narrative. In 2013, Tony Adams, Stacy Holman Jones, and Carolyn Ellis co-edited the first edition of the Handbook of Autoethnography. They published Autoethnography in 2015 and the second edition of the Handbook of Autoethnography in 2022. In 2020, Adams and Andrew Herrmann started the Journal of Autoethnography with the University of California Press. In 2021, Marlen Harrison started The Autoethnographer, a Literary & Arts Magazine.
In the 2000s, major conferences began to regularly accept autoethnographic work, starting primarily with the International Congress of Qualitative Inquiry (2005). Other conferences that foreground autoethnographic research include the International Symposium on Autoethnography and Narrative (formerly Doing Autoethnography), the International Conference of Autoethnography (formerly British Autoethnography), and Critical Autoethnography.
Today, ethnographers typically use a "kind of hybrid form of confessional-impressionist tale" that includes "performative, poetic, impressionistic, symbolic, and lyrical language" while also "focusing closely on the self-data inherent in confessional writing."
Epistemological and theoretical basis
Autoethnography differs from ethnography in that autoethnography embraces and foregrounds the researcher's subjectivity rather than attempting to limit it, as in empirical research. As Carolyn Ellis explains, "autoethnography overlaps art and science; it is part auto or self and part ethno or culture." Importantly, it is also "something different from both of them, greater than its parts." In other words, as Ellingson and Ellis put it, "whether we call a work an autoethnography or an ethnography depends as much on the claims made by authors as anything else."
In embracing personal thoughts, feelings, stories, and observations as a way of understanding the social context they are studying, autoethnographers are also shedding light on their total interaction with that setting by making their every emotion and thought visible to the reader. This is much the opposite of theory-driven, hypothesis-testing research methods that are based on the positivist epistemology. In this sense, Ellingson and Ellis see autoethnography as a social constructionist project that rejects the deep-rooted binary oppositions between the researcher and the researched, objectivity and subjectivity, process and product, self and others, art and science, and the personal and the political.
Autoethnographers, therefore, tend to reject the concept of social research as an objective and neutral knowledge produced by scientific methods, which can be characterized and achieved by detachment of the researcher from the researched. Autoethnography, in this regard, is a critical "response to the alienating effects on both researchers and audiences of impersonal, passionless, abstract claims of truth generated by such research practices and clothed in exclusionary scientific discourse." Deborah Reed-Danahay (1997) also argues that autoethnography is a postmodernist construct:
The concept of autoethnography...synthesizes both a postmodern ethnography, in which the realist conventions and objective observer position of standard ethnography have been called into question, and a postmodern autobiography, in which the notion of the coherent, individual self has been similarly called into question. The term has a double sense - referring either to the ethnography of one's own group or to autobiographical writing that has ethnographic interest. Thus, either a self- (auto-) ethnography or an autobiographical (auto-) ethnography can be signaled by "autoethnography".
Process
As a method, autoethnography combines characteristics of autobiography and ethnography.
To form the autobiographical aspects of the autoethnography, the author will write retroactively and selectively about past experiences. Unlike other forms of research, the author typically did not live through such experiences solely to create a publishable document; rather, the experiences are assembled using hindsight. Additionally, authors may conduct formal or informal interviews and/or consult relevant texts (e.g., diaries or photographs) to help with recall. The experiences are tied together using literary elements "to create evocative and specific representations of the culture/cultural experience and to give audiences a sense of how being there in the experience feels."
Ethnography, on the other hand, involves observing and writing about culture. During the first stage, researchers will observe and interview individuals of the selected cultural group and take detailed fieldnotes. Ethnographers discover their findings through induction. That is, ethnographers don't go into the field looking for specific answers; rather, their observations, writing, and fieldnotes yield the findings. Such findings are conveyed to others through thick description so that readers may come to their own conclusions regarding the situation described.
Autoethnography uses aspects of autobiography (e.g., personal experiences and recall) and ethnography (e.g., interviews, observations, and fieldnotes) to create vivid descriptions that connect the personal to the cultural.
Types of autoethnography
Because autoethnography is a broad and ambiguous "category that encompasses a wide array of practices," autoethnographies "vary in their emphasis on the writing and research process (graphy), on culture (ethnos), and on self (auto)." More recently, autoethnography has been separated into two distinct subtypes: analytic and evocative. According to Ellingson and Ellis, "Analytic autoethnographers focus on developing theoretical explanations of broader social phenomena, whereas evocative autoethnographers focus on narrative presentations that open up conversations and evoke emotional responses." Scholars also discuss visual autoethnography, which incorporates imagery along with written analysis.
Analytic autoethnography
Analytic autoethnography focuses on "developing theoretical explanations of broader social phenomena" and aligns with more traditional forms of research that value "generalization, distanced analysis, and theory-building."
This form has five key features:
complete member researcher (CMR) status
analytic reflexivity
narrative visibility of the researcher's self
dialogue with informants beyond the self
commitment to theoretical analysis
First, in all forms of autoethnography, the researcher must be a member of the cultural group they study and thus have CMR status. This cultural group may be loosely connected without knowledge of one another (e.g., people with disabilities) or tightly connected (e.g., members of a small church). CMR status helps the researcher "approximate the emotional stance of the people they study," thereby addressing some criticisms of ethnography. Like the evocative autoethnographer, the analytic autoethnographer "is personally engaged in a social group, setting, or culture as a full member and active participant." However, the analytic autoethnographer "retains a distinct and highly visible identity as a self-aware scholar and social actor within the ethnographic text."
Two CMR status types are recognized: opportunistic and convert. Opportunistic CMRs exist as part of the cultural group they aim to study prior to deciding to research the group. To receive this insider status, the researcher "may be born into a group, thrown into a group by chance circumstance (e.g., illness), or have acquired intimate familiarity through occupational, recreational, or lifestyle participation." Conversely, convert CMRs "begin with a purely data-oriented research interest in the setting but become converted to complete immersion and membership during the course of the research." Here, a researcher will opt to study a cultural group, then become ingrained into that culture throughout the research process.
Second, when conducting analytic autoethnography, the researcher must utilize analytic reflexivity. That is, they must express their "awareness of their necessary connection to the research situation and hence their effects upon it," making themselves "visible, active, and reflexively engaged in the text."
Thirdly and similarly, the researcher should be visibly present throughout the narrative and "should illustrate analytic insights through recounting their own experiences and thoughts as well as those of others." Beyond this, analytic autoethnographers "should openly discuss changes in their beliefs and relationships over the course of fieldwork, thus vividly revealing themselves as people grappling with issues relevant to membership and participation in fluid rather than static social worlds."
Conversely, the fourth concept aims to prevent the text from "author saturation," which centers the author more than the culture being observed. While "analytic autoethnography is grounded in self-experience," it should "[reach] beyond it as well," perhaps including interviews with and/or observations of others who are members of the culture studied. This connection to the culture moves the autoethnography beyond a mere autobiography or memoir.
Lastly, analytic autoethnography should commit to an analytic agenda. That is, the analytic autoethnography should not merely "document personal experience," "provide an 'insider's perspective,'" or "evoke emotional resonance with the reader." Rather, it should "use empirical data to gain insight into some broader set of social phenomena than those provided by the data themselves."
Although Leon Anderson conceptualized analytic autoethnography alongside evocative autoethnography, he critiques the false dichotomy between the two in his chapter, "I Learn by Going: Autoethnographic Modes of Inquiry" (co-authored with Bonnie Glass-Coffin), the lead chapter in the first edition of the Handbook of Autoethnography.
Evocative autoethnography
Evocative autoethnography "focus[es] on narrative presentations that open up conversations and evoke emotional responses." According to Bochner and Ellis, the goal is for the readers to see themselves in the autoethnographer so they transform private troubles into public plight, making it powerful, comforting, dangerous, and culturally essential. Accounts are presented like novels or biographies and thus, fracture the boundaries that normally separate literature from social science.
Symbiotic autoethnography
Symbiotic Autoethnography (Beattie, 2022) offers a way of reconciling the differences between various types of autoethnography through an innovative symbiotic approach. The author uses the concept of 'symbiosis' in its broader sense to denote close interdependence and interrelation between its suggested seven attributes: temporality, researcher's omnipresence, evocative storytelling, interpretative analysis, political (transformative) focus, reflexivity, and polyvocality.
Auto-ethnographic Design
Auto-ethnographic design is a materially-oriented practice that ties design research to expression. According to Schouwenberg and Kaethler, "There is a break here between the autoethnographic tradition and how it is taken up in design, where for the 'graphy' the act of reporting and reflection is replaced by creative production; design activates the knowledge component by directly engaging and altering the very world it seeks to make sense of". In contrast to other forms of design, auto-ethnographic designs are deeply personal and tend towards the artistic, using materiality as a way of understanding the self and communicating it. The hyphen that separates auto and ethnography represents the materiality that is needed to understand the self. It is critiqued for being excessively navel-gazing.
Minor Literature Autoethnography
Minor Literature Autoethnography (MLA) draws on the concept of 'minor literature' as developed by Deleuze and Guattari, which refers to the use of a major language from a minoritarian perspective to challenge dominant cultural narratives. According to De Jong this type of autoethnography focuses on the experiences of marginalized groups and individuals who use the language of the majority to articulate their unique cultural positions and create new forms of expression. By doing so, minor literature autoethnography aims to reveal and critique power structures and give voice to perspectives that are often silenced or overlooked.
Goals of autoethnography
Adams, Ellis, and Jones recognize two primary purposes for practicing autoethnographic research. Given the complicated history of ethnography, "autoethnographers speak against, or provide alternatives to, dominant, taken-for-granted, and harmful cultural scripts, stories, and stereotypes" and "offer accounts of personal experience to complement, or fill gaps in, existing research." As with other forms of qualitative research, autoethnographic "accounts may show how the desire for, and practice of, generalization in research can mask important nuances of cultural issues."
In addition to providing nuanced accounts of cultural phenomena, Adams, Ellis, and Jones argue that the goal of autoethnography "is to articulate insider knowledge of cultural experience." Underlying this argument is the assumption that "the writer can inform readers about aspects of cultural life that other researchers may not be able to know." Importantly, "[i]nsider knowledge does not suggest that an autoethnographer can articulate more truthful or more accurate knowledge as compared to outsiders, but rather that as authors we can tell our stories in novel ways when compared to how others may be able to tell them."
Uses of autoethnography
Autoethnography is utilized across a variety of disciplines and can be presented in many forms, including but not limited to "short stories, poetry, fiction, novels, photographic essays, personal essays, journals, fragmented and layered writing, and social science prose."
Symbolic interactionists are particularly interested in autoethnography, and examples can be found in a number of scholarly journals, such as Qualitative Inquiry, the Journal of the Society for the Study of Symbolic Interactionism, the Journal of Contemporary Ethnography, and the Journal of Humanistic Ethnography.
In performance studies, autoethnography acknowledges the researcher and the audience as having equal weight. Portraying the performed "self" through writing then becomes an aim to create an embodied experience for the writer and the reader. This area acknowledges the inward and outward experience of ethnography in experiencing the subjectivity of the author. Audience members may experience the work of ethnography through reading/hearing/feeling (inward) and then react to it (outward), perhaps through emotion. Ethnography and performance work together to invoke emotion in the reader.
Autoethnography is also used in film as a variant of the standard documentary film. It differs from the traditional documentary film in that its subject is the filmmaker. An autoethnographical film typically relates the life experiences and thoughts, views, and beliefs of the filmmaker, and as such, it is often considered to be rife with bias and image manipulation. Unlike other documentaries, autoethnographies do not usually make a claim of objectivity.
Storyteller/narrator
In different academic disciplines (particularly communication studies and performance studies), the term autoethnography itself is contested and is sometimes used interchangeably with, or referred to as, personal narrative or autobiography. Autoethnographic methods include journaling, looking at archival records (whether institutional or personal), interviewing one's own self, and using writing to generate self-cultural understandings. Reporting an autoethnography might take the form of a traditional journal article or scholarly book, be performed on stage, or appear in the popular press. Autoethnography can include direct (and participant) observation of daily behavior; unearthing of local beliefs and perceptions and recording of life history (e.g. kinship, education, etc.); and in-depth interviewing: "The analysis of data involves interpretation on the part of the researcher" (Hammersley in Genzuk). However, rather than a portrait of the Other (person, group, culture), the difference is that the researcher is constructing a portrait of the self.
Autoethnography can also be "associated with narrative inquiry and autobiography" in that it foregrounds experience and story as a meaning-making enterprise. Maréchal argues that "narrative inquiry can provoke identification, feelings, emotions, and dialogue." Furthermore, the increased focus on incorporating autoethnography and narrative inquiry into qualitative research indicates a growing concern for how the style of academic writing informs the types of claims made. As Laurel Richardson articulates, "I consider writing as a method of inquiry, a way of finding out about a topic...form and content are inseparable." For many researchers, experimenting with alternative forms of writing and reporting, including autoethnography, personal narrative, performative writing, layered accounts, and writing stories, provides a way to create multiple layered accounts of a research study, offering not only the opportunity to make new and provocative claims but also the ability to do so in a compelling manner. Ellis (2004) says that autoethnographers advocate "the conventions of literary writing and expression" in that "autoethnographic forms feature concrete action, emotion, embodiment, self-consciousness, and introspection portrayed in dialogue, scenes, characterization, and plot" (p. xix).
According to Bochner and Ellis (2006), an autoethnographer is "first and foremost a communicator and a storyteller." In other words, autoethnography "depicts people struggling to overcome adversity" and shows "people in the process of figuring out what to do, how to live, and the meaning of their struggles" (p. 111). Therefore, according to them, autoethnography is "ethical practice" and a "gift" that has a caregiving function (p. 111). In essence, autoethnography is a story that re-enacts an experience through which people find meaning and, through that meaning, are able to be okay with that experience.
In Dr. Mayukh Dewan's opinion, this can be a problem because many readers may see autoethnographers as too self-indulgent, but readers have to realise that the stories and experiences shared are not solely the researchers' own; they also represent the group being autoethnographically represented.
In this storytelling process, the researcher seeks to make meaning of a disorienting experience. A life example in which autoethnography could be applied is the death of a family member or someone close. In this painful experience, people often wonder how they will go about living without this person and what it will be like. In this scenario, especially in religious homes, one often asks "Why, God?", thinking that an answer as to why the person died would allow them to go about living. Others, wanting to offer up an explanation to make the person feel better, generally say things such as "At least they are in a better place" or "God wanted him/her home." People who are never really given an explanation as to why generally fall back on the reason that "it was their time to go", and through this approximate "explanation" find themselves able to move on and keep living. Over time, when looking back at the experience of someone close dying, one may find that through this hardship they became a stronger, more independent person, or that they grew closer to other family members. With these realizations, the person has actually made sense of and come to terms with the tragic experience that occurred. And through this process, autoethnography is performed.
Evaluation
The main critique of autoethnography — and qualitative research in general — comes from the traditional social science methods that emphasize the objectivity of social research. In this critique, qualitative researchers are often called "journalists, or soft scientists," and their work, including autoethnography, is "termed unscientific, or only exploratory, or entirely personal and full of bias." Many quantitative researchers regard the materials produced by narrative as "the means by which a narrating subject, autonomous and independent...can achieve authenticity...This represents an almost total failure to use narrative to achieve serious social analysis."
According to Maréchal, the early criticism of autobiographical methods in anthropology was about "their validity on grounds of being unrepresentative and lacking objectivity." She also points out that evocative and emotional genres of autoethnography have been criticized by mostly analytic proponents for their "lack of ethnographic relevance as a result of being too personal." As she writes, they are criticized "for being biased, navel-gazing, self-absorbed, or emotionally incontinent, and for hijacking traditional ethnographic purposes and scholarly contribution."
The reluctance to accept narrative work as serious extends far beyond the realm of academia. In 1994, Arlene Croce refused to evaluate or even attend Bill T. Jones' Still/Here performance. She echoed a quantitative stance towards narrative research by explaining:
I can't review someone I feel sorry or hopeless about...I'm forced to feel sorry because of the way they present themselves as: dissed blacks, abused women, or disenfranchised homosexuals - as performers, in short, who make victimhood victim art.
Croce illustrates what Adams, Jones, and Ellis refer to as "illusory boundaries and borders between scholarship and criticism." These "borders" are seen to hide or take away from the idea that autoethnographic evaluation and criticism present another personal story about the experience of an experience. Or as Craig Gingrich-Philbrook wrote, "any evaluation of autoethnography...is simply another story from a highly situated, privileged, empowered subject about something he or she experienced."
Rethinking traditional criteria
In her book's tenth chapter, titled "Evaluating and Publishing Autoethnography" (pp. 252–255), Ellis (2004) discusses how to evaluate an autoethnographic project, based on other authors' ideas about evaluating alternative modes of qualitative research. (See the special section in Qualitative Inquiry on "Assessing Alternative Modes of Qualitative and Ethnographic Research: How Do We Judge? Who Judges?") She presents several criteria for "good autoethnography," and indicates how these ideas resonate with each other.
First, Ellis mentions Richardson who described five factors she uses when reviewing personal narrative papers that includes analysis of both evaluative and constructive validity techniques. The criteria are:
(a) Substantive contribution. Does the piece contribute to our understanding of social life?
(b) Aesthetic merit. Does this piece succeed aesthetically? Is the text artistically shaped, satisfyingly complex, and not boring?
(c) Reflexivity. How did the author come to write this text? How has the author's subjectivity been both a producer and a product of this text?
(d) Impactfulness. Does this affect me emotionally and/or intellectually? Does it generate new questions or move me to action?
(e) Expresses a reality. Does this text embody a fleshed out sense of lived experience?
Autoethnographic manuscripts might include dramatic recall, unusual phrasing, and strong metaphors to invite the reader to "relive" events with the author. These guidelines may provide a framework for directing investigators and reviewers alike.
Further, Ellis suggests how Richardson's criteria mesh with criteria mentioned by Bochner, who describes what makes him understand and feel with a story (Bochner, 2000, pp. 264–266). He looks for concrete details (similar to Richardson's expression of lived experience), structurally complex narratives (Richardson's aesthetic merit), the author's attempt to dig under the superficial to get to vulnerability and honesty (Richardson's reflexivity), a standard of ethical self-consciousness (Richardson's substantive contribution), and a moving story (Richardson's impact) (Ellis, 2004, pp. 253–254).
In 2015, Adams, Jones, and Ellis collaborated to bring about a similar list of Goals for Assessing Autoethnography. The list encompasses descriptive, prescriptive, practical, and theoretical goals for evaluating autoethnographic work (2015, pp. 102–104).
Make contributions to knowledge
Value the personal and experiential
Demonstrate the power, craft, and responsibilities of stories and storytelling
Take a relationally responsible approach to research practice and representation
Contributions to knowledge
Adams, Jones, and Ellis define the first goal of autoethnography as a conscious effort to "extend existing knowledge and research while recognizing that knowledge is both situated and contested." As Adams explains in his critique of his work Narrating the Closet,
I knew I had to contribute to knowledge about coming out by saying something new about the experience...I also needed a new angle toward coming out; my experience, alone, of coming out was not sufficient to justify a narrative.
With the critic's general decree of narrative as narcissism, Adams, Jones, and Ellis use the first goal of assessing autoethnography to explain the importance of striving to combine personal experience and existing theory while remaining mindful of the "insider insight that autoethnography offers researchers, participants, and readers/audiences." Ellis' Maternal Connections can be considered a successful incorporation of the first goal in that she "questions the idea of care-giving as a burden, instead portraying caregiving as a loving and meaning-making relationship."
Value the personal and experiential
Adams, Jones, and Ellis define the second goal for assessing autoethnography with four elements which include featuring the perspective of the self in context and culture, exploring experience as a means of insight about social life, embracing the risks of presenting vulnerable selves in research, and using emotions and bodily experience as means and modes of understanding. This goal fully recognizes and commends the "I" in academic writing and calls for analysis of the subjective experience. In Jones' Lost and Found essay she writes,
I convey the sadness and the joy I feel about my relationships with my adopted child, the child I chose not to adopt, and my grandmother. I focus on the emotions and bodily experiences of both losing and memorializing my grandmother.
The careful and deliberate incorporation of auto (the "I," the self) into research is considered one of the most crucial aspects of the autoethnography process. The exploration of the ethics and care of presenting vulnerable selves is addressed at length by Adams in A Review of Narrative Ethics.
Stories and storytelling
Autoethnography showcases stories as the means in which sensemaking and researcher reflexivity create descriptions and critiques of culture. Adams, Jones, and Ellis write:
Reflexivity includes both acknowledging and critiquing our place and privilege in society and using the stories we tell to break long-held silences on power, relationships, cultural taboos, and forgotten and/or suppressed experiences.
A focus is placed on a writer's ability to develop writing and representation skills alongside other analytic abilities. Adams switches between first-person and second-person narration in Living (In) the Closet: The Time of Being Closeted as a way to "bring readers into my story, inviting them to live my experiences alongside me, feeling how I felt and suggesting how they might, under similar circumstances, act as I did." Similarly, Ellis in Maternal Connections chose to steer away from including references to the research literature or theory, instead opting to "call on sensory details, movements, emotions, dialogue, and scene setting to convey an experience of taking care of a parent."
The examples included above are by no means exhaustive. Autoethnographers continue to explore different narrative structures, as seen in Andrew Herrmann's use of layered accounts, Ellis' use of haibun, and the use of autoethnographic film by Rebecca Long and Anne Harris.
Addressing veracity and the art of story telling in his 2019 autoethnographic monograph Going All City: Struggle and Survival in LA's Graffiti Subculture, Stefano Bloch writes "I do rely on artful rendering, but not artistic license."
Relationally responsible approach
Among the concepts in qualitative research is "relational responsibility." Researchers should work to make research relationships as collaborative, committed, and reciprocal as possible while taking care to safeguard identities and privacy of participants. Included under this concept is the accessibility of the work to a variety of readers which allows for the "opportunity to engage and improve the lives of our selves, participants, and readers/audiences."
Autoethnographers struggle with relational responsibility as in Adams' critique of his work on coming out and recognizing:
...how others can perceive my ideas as relationally irresponsible concessions to homophobic others and to insidious heteronormative cultural structures; by not being aggressively critical, my work does not do enough to engage and improve the lives of others.
In the critique, he also questions how relationally irresponsible he was for including several brief conversations in his work without consent and exploiting others' experiences for his own benefit. Similar sentiments are echoed throughout Adams, Jones, and Ellis' critiques of their own writing.
From "validity" to "truth"
As an idea that emerged from the tradition of social constructionism and the interpretive paradigm, autoethnography challenges the traditional social scientific methodology that emphasizes criteria for quality in social research developed in terms of validity. Carolyn Ellis writes: "In autoethnographic work, I look at validity in terms of what happens to readers as well as to research participants and researchers. To me, validity means that our work seeks verisimilitude; it evokes in readers a feeling that the experience described is lifelike, believable, and possible. You also can judge validity by whether it helps readers communicate with others different from themselves or offers a way to improve the lives of participants and readers, or even your own." In this sense, Ellis emphasizes the "narrative truth" of autoethnographic writings.
I believe you should try to construct the story as close to the experience as you can remember it, especially in the initial version. If you do, it will help you work through the meaning and purpose of the story. But it's not so important that narratives represent lives accurately – only, as Art (Arthur Bochner) argues, "that narrators believe they are doing so" (Bochner, 2002, p. 86). Art believes that we can judge one narrative interpretation of events against another, but we cannot measure a narrative against the events themselves because the meaning of the events comes clear only in their narrative expression.
Instead, Ellis suggests judging autoethnographic writings on the usefulness of the story, rather than only on accuracy. She quotes Art Bochner, who argues
that the real questions are what narratives do, what consequences they have, to what uses they can be put. Narrative is the way we remember the past, turn life into language, and disclose to ourselves and others the truth of our experiences. In moving from concern with the inner veridicality to outer pragmatics of evaluating stories, Plummer [2001, p. 401] also looks at uses, functions, and roles of stories, and adds that they "need to have rhetorical power enhanced by aesthetic delight" (Ellis, 2004, pp. 126–127).
Similarly,
Laurel Richardson [1997, p. 92] uses the metaphor of a crystal to deconstruct traditional validity. A crystal has an infinite number of shapes, dimensions and angles. It acts as a prism and changes shape, but still has structure. Another writer, Patti Lather [1993, p. 674], proposes counter-practices of authority that rupture validity as a "regime of truth" and lead to a critical political agenda [cf. Olesen, 2000, p. 231]. She mentions the four subtypes [pp. 685–686]: "ironic validity, concerning the problems of representation; paralogical validity, which honors differences and uncertainties; rhizomatic validity, which seeks out multiplicity; and voluptuous validity, which seeks out ethics through practices of engagement and self-reflexivity" (Ellis, 2004, pp. 124–125).
From "generalizability" to "resonance"
With regard to the term "generalizability," Ellis points out that autoethnographic research seeks generalizability not just from the respondents but also from the readers. Ellis says:
I would argue that a story's generalizability is always being tested – not in the traditional way through random samples of respondents, but by readers as they determine if a story speaks to them about their experience or about the lives of others they know. Readers provide theoretical validation by comparing their lives to ours, by thinking about how our lives are similar and different and the reasons why. Some stories inform readers about unfamiliar people or lives. We can ask, after Stake [1994], "does the story have 'naturalistic generalization'?" meaning that it brings "felt" news from one world to another and provides opportunities for the reader to have vicarious experience of the things told. The focus of generalizability moves from respondents to readers. (Ellis, 2004, p. 195)
This generalizability, achieved through the resonance of readers' lives and "lived experience" (Richardson, 1997) in autoethnographic work, intends to open up rather than close down conversation (Ellis, 2004, p. 22).
Benefits and concerns
Denzin's criterion is whether the work has the possibility to change the world and make it a better place (Denzin, 2000, p. 256). This position fits with Clough, who argues that good autoethnographic writing should motivate cultural criticism. Autoethnographic writing should be closely aligned with theoretical reflection, says Clough, so that it can serve as a vehicle for thinking "new sociological subjects" and forming "new parameters of the social" (Clough, 2000, p. 290). Though Richardson and Bochner are less overtly political than Denzin and Clough, they indicate that good personal narratives should contribute to positive social change and move us to action (Bochner, 2000, p. 271).
In addition to helping the researcher make sense of his or her individual experience, autoethnographies are political in nature, as they engage their readers in political issues and often ask them to consider things, or to do things, differently. Chang argues that autoethnography offers a research method friendly to researchers and readers because autoethnographic texts are engaging and enable researchers to gain a cultural understanding of self in relation to others, on which cross-cultural coalitions can be built between self and others.
Also, autoethnography as a genre frees us to move beyond traditional methods of writing, promoting narrative and poetic forms, displays of artifacts, photographs, drawings, and live performances (Cons, p. 449). Denzin says autoethnography must be literary, present cultural and political issues, and articulate a politics of hope. The literary criteria he mentions are covered in what Richardson advocates: aesthetic value. Ellis elaborates this idea, describing good autoethnographic writing as that in which, through plot, dramatic tension, coherence, and verisimilitude, the author shows rather than tells, develops characters and scenes fully, and paints vivid sensory experiences.
While advocating autoethnography for its value, some researchers argue that there are also several concerns about autoethnography. Chang warns autoethnographers of pitfalls that they should avoid in doing autoethnography: (1) excessive focus on self in isolation from others; (2) overemphasis on narration rather than analysis and cultural interpretation; (3) exclusive reliance on personal memory and recall as a data source; (4) negligence of ethical standards regarding others in self-narratives; and (5) inappropriate application of the label autoethnography. Also, some qualitative researchers have expressed their concerns about the worth and validity of autoethnography. Robert Krizek (2003) contributed a chapter titled "Ethnography as the Excavation of Personal Narrative" (pp. 141–152) to the book Expressions of Ethnography, in which he expresses concern about the possibility for autoethnography to devolve into narcissism. Krizek goes on to suggest that autoethnography, no matter how personal, should always connect to some larger element of life.
One of the main advantages of personal narratives is that they give us access to learners' private worlds and provide rich data (Pavlenko, 2002, 2007). Another advantage is the ease of access to data, since the researcher calls on his or her own experiences as the source from which to investigate a particular phenomenon. It is this advantage that also entails a limitation as, by restricting analysis to a personal narrative, the research is also limited in its conclusions. However, Bochner and Ellis (1996) consider that this limitation on the self is not valid, since, "If culture circulates through all of us, how can autoethnography be free of connection to a world beyond the self?"
Criticisms and concerns
Similar to other forms of qualitative and art-based research, autoethnography has faced many criticisms. As Sparkes stated, "The emergence of autoethnography and narratives of self…has not been trouble-free, and their status as proper research remains problematic."
The most recurrent criticism of autoethnography is of its strong emphasis on self, which is at the core of the resistance to accepting autoethnography as a valuable research method. Thus, autoethnographies have been criticised for being self-indulgent, narcissistic, introspective and individualised.
Another criticism is of the reality personal narratives or autoethnographies represent. As Geoffrey Walford states, "If people wish to write fiction, they have every right to do so, but not every right to call it research." This criticism originates from a statement by Ellis and Bochner (2000), conceiving autoethnography as a narrative that "is always a story about the past and not the past itself." To this, Walford asserts that "the aim of research is surely to reduce the distortion as much as possible." Walford's concerns are focused on how much of the accounts presented as autoethnographies represent real conversations or events as they happened and how much they are just inventions of the authors.
Evaluation
Several critiques exist regarding the evaluation of autoethnographical works grounded in the interpretive paradigm.
From within qualitative research, some researchers have posited that autoethnographers, along with others, fail to meet positivist standards of validity and reliability. Schwandt, for instance, argues that some social researchers have "come to equate being rational in social science with being procedural and criteriological." Building on quantitative foundations, Lincoln and Guba translate quantitative indicators into qualitative quality indicators, namely: credibility (parallels internal validity), transferability (parallels external validity), dependability (parallels reliability), and confirmability (parallels objectivity and seeks to critically examine whether the researcher has acted in good faith during the course of the research). Smith, and Smith and Heshusius, critique these qualitative translations and warn that the claim of compatibility (between qualitative and quantitative criteria) cannot be sustained, and by making such claims, researchers are in effect closing down the conversation. Smith points out that "the assumptions of interpretive inquiry are incompatible with the desire for foundational criteria. How we are to work out this problem, one way or another, would seem to merit serious attention."
Secondly, other researchers question the need for specific criteria itself. Bochner and Clough are both concerned that too much emphasis on criteria will move us back to methodological policing and will take us away from a focus on imagination, ethical issues in autoethnographic work, and creating better ways of living. On this view, the autoethnographer internally judges the work's quality; evidence is tacit, individualistic, and subjective (see Ellis & Bochner, 2003). Practice-based quality is based in the lived research experience itself rather than in its formal evidencing per se. Bochner says:
Self-narratives... are not so much academic as they are existential, reflecting a desire to grasp or seize the possibilities of meaning, which is what gives life its imaginative and poetic qualities... a poetic social science does not beg the question of how to separate good narrativization from bad... [but] the good ones help the reader or listener to understand and feel the phenomena under scrutiny.
Finally, in addition to this anti-criteria stance of some researchers, some scholars have suggested that the criteria used to judge autoethnography should not necessarily be the same as traditional criteria used to judge other qualitative research investigations (Garratt & Hodkinson, 1999). They argue that autoethnography has been received with a significant degree of academic suspicion because it contravenes certain qualitative research traditions. The controversy surrounding autoethnography is in part related to the problematic exclusive use of the self to produce research (Denzin & Lincoln, 1994). This use of self as the only data source in autoethnography has been questioned (see, for example, Denzin & Lincoln, 1994; Sparkes, 2000; Beattie, 2022). Accordingly, autoethnographies have been criticized for being too self-indulgent and narcissistic. Sparkes (2000) suggested that autoethnography is at the boundaries of academic research because such accounts do not sit comfortably with traditional criteria used to judge qualitative inquiries.
Holt associates this problem with the two crucial issues in "the fourth moment of qualitative research" that Denzin and Lincoln (2000) presented: the dual crises of representation and legitimation. The crisis of representation refers to writing practices (i.e., how researchers write and represent the social world). Additionally, verification issues relating to methods and representation are (re)considered as problematic (Marcus & Fischer, 1986). The crisis of legitimation questions traditional criteria used for evaluating and interpreting qualitative research, involving a rethinking of terms such as validity, reliability, and objectivity. Holt says:
Much like the autoethnographic texts themselves, the boundaries of research and their maintenance are socially constructed (Sparkes, 2000). In justifying autoethnography as proper research... ethnographers have acted autobiographically before, but in the past they may not have been aware of doing so, and taken their genre for granted (Coffey, 1999). Autoethnographies may leave reviewers in a perilous position.... the reviewers were not sure if the account was proper research (because of the style of representation), and the verification criteria they wished to judge this research by appeared to be inappropriate. Whereas the use of autoethnographic methods may be increasing, knowledge of how to evaluate and provide feedback to improve such accounts appears to be lagging. As reviewers begin to develop ways in which to judge autoethnography, they must resist the temptation to "seek universal foundational criteria lest one form of dogma simply replaces another" (Sparkes, 2002b, p. 223). However, criteria for evaluating personal writing have barely begun to develop.
Notable autoethnographers
Leon Anderson
Liana Beattie
Arthur P. Bochner
Jesse Cornplanter
Kimberly Dark
Norman K. Denzin
Carolyn Ellis
Maaike de Jong
Peter Pitseolak
Ernest Spybuck
Aleksandr Solzhenitsyn
Johnny Saldana
See also
Layered account
References
Beattie, L. (2022). Symbiotic Autoethnography: Moving Beyond the Boundaries of Qualitative Methodologies. London: Bloomsbury Publishing.
Additional references
Ellis, C. (2001). With Mother/With Child: A True Story. Qualitative Inquiry, 7 (5), 598–616.
Ellis, C. (2009). Revision: Autoethnographic Reflections on Life and Work. Walnut Creek, CA: Left Coast Press.
Herrmann, A. F., & Di Fate, K. (Eds.) (2014). The new ethnography: Goodall, Trujillo, and the necessity of storytelling. Storytelling Self Society: An Interdisciplinary Journal of Storytelling Studies, 10.
Hodges, N. (2015). The Chemical Life. Health Communication, 30, 627–634.
Hodges, N. (2015). The American Dental Dream. Health Communication, 30, 943–950.
Holman Jones, S. (2005). Autoethnography: Making the personal political. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 763–791). Thousand Oaks, CA: Sage Publications.
Holman Jones, S., Adams, T. & Ellis, C. (2013). Handbook of Autoethnography. Walnut Creek CA: Left Coast Press
Krizek, R. (2003). Ethnography as the Excavation of Personal Narrative. In R. P. Clair (Ed.), Expressions of ethnography: novel approaches to qualitative methods (pp. 141–152). New York: SUNY Press.
Plummer, K. (2001). The call of life stories in ethnographic research. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, and L. Lofland (Eds.), Handbook of Ethnography (pp. 395–406). London: Sage.
Richardson, L. (1997). Fields of play: Constructing an academic life. New Brunswick, N. J.: Rutgers University Press.
Richardson, L. (2007). Writing: A method of inquiry. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 923–948). Thousand Oaks, CA: Sage Publications.
Stake, R. E. (1994). Case studies. In N. K. Denzin & Y. S. Lincoln. (Eds.) Handbook of Qualitative Research, (2nd ed., pp. 236–247). Thousand Oaks, CA: Sage Publications.
Ethnography
Communicative language teaching
Communicative language teaching (CLT), or the communicative approach (CA), is an approach to language teaching that emphasizes interaction as both the means and the ultimate goal of study.
Learners in environments that use CLT learn and practice the target language through communication: through interaction with one another and the instructor, through the study of "authentic texts" (those written in the target language for purposes other than language learning), and through the use of the language both in class and outside of class.
Learners converse about personal experiences with partners, and instructors teach topics outside of the realm of traditional grammar to promote language skills in all types of situations. That method also claims to encourage learners to incorporate their personal experiences into their language learning environment and to focus on the learning experience, in addition to the learning of the target language.
According to CLT, the goal of language education is the ability to communicate in the target language. This is in contrast to previous views in which grammatical competence was commonly given top priority.
CLT also positions the teacher as a facilitator, rather than an instructor. Furthermore, the approach is a non-methodical system that does not use a textbook series to teach the target language but works on developing sound oral and verbal skills prior to reading and writing.
Background
Societal influences
The rise of CLT in the 1970s and the early 1980s was partly in response to the lack of success with traditional language teaching methods and partly by the increase in demand for language learning. In Europe, the advent of the European Common Market, an economic predecessor to the European Union, led to migration in Europe and an increased number of people who needed to learn a foreign language for work or personal reasons. Meanwhile, more children were given the opportunity to learn foreign languages in school, as the number of secondary schools offering languages rose worldwide as part of a general trend of curriculum-broadening and modernization, with foreign-language study no longer confined to the elite academies. In Britain, the introduction of comprehensive schools, which offered foreign-language study to all children, rather than to the select few of the elite grammar schools, greatly increased the demand for language learning.
The increased demand included many learners who struggled with traditional methods such as grammar translation, which involves the direct translation of sentence after sentence as a way to learn the language. Those methods assumed that students aimed to master the target language and were willing to study for years before expecting to use the language in real life. However, those assumptions were challenged by adult learners, who were busy with work, and by schoolchildren who were less academically gifted and so could not devote years to learning before they could use the language. Educators realized that to motivate those students an approach with a more immediate reward was necessary, and they began to use CLT, an approach that emphasizes communicative ability and yielded better results.
Academic influences
Already in the late 19th century, the American educator John Dewey was writing about learning by doing, and later wrote that learning should be based on the learner's interests and experiences. In 1963, American psychologist David Ausubel released his book The Psychology of Meaningful Verbal Learning, calling for a holistic approach to teaching learners through meaningful material. American educator Clifford Prator published a paper in 1965 calling for teachers to turn from an emphasis on manipulation (drills) towards communication, where learners were free to choose their own words. In 1966, the sociolinguist Dell Hymes posited the concept of communicative competence, considerably broadening Noam Chomsky's syntactic concept of competence. Also in 1966, American psychologist Jerome Bruner wrote that learners construct their own understanding of the world based on their experiences and prior knowledge, and that teachers should provide scaffolding to promote this. Bruner appears to have been influenced by Lev Vygotsky, a Russian psychologist whose zone of proximal development is a similar concept.
Later, in the 1970s, the British linguist M.A.K. Halliday studied how language functions are expressed through grammar.
The development of communicative language teaching was bolstered by these academic ideas. Before the growth of communicative language teaching, the primary method of language teaching was situational language teaching, a method that was much more clinical in nature and relied less on direct communication. In Britain, applied linguists began to doubt the efficacy of situational language teaching, partly in response to Chomsky's insights into the nature of language. Chomsky had shown that the structural theories of language then prevalent could not explain the variety that is found in real communication. In addition, applied linguists like Christopher Candlin and Henry Widdowson observed that the current model of language learning was ineffective in classrooms. They saw a need for students to develop communicative skill and functional competence in addition to mastering language structures.
In 1966, the linguist and anthropologist Dell Hymes developed the concept of communicative competence, which redefined what it meant to "know" a language. In addition to speakers having mastery over the structural elements of language, they must also be able to use those structural elements appropriately in a variety of speech domains. That can be neatly summed up by Hymes's statement: "There are rules of use without which the rules of grammar would be useless." The idea of communicative competence stemmed from Chomsky's concept of the linguistic competence of an ideal native speaker. Hymes did not make a concrete formulation of communicative competence, but subsequent authors, notably Michael Canale, have tied the concept to language teaching. Canale and Swain (1980) defined communicative competence in terms of three components: grammatical competence, sociolinguistic competence, and strategic competence. Canale (1983) refined the model by adding discourse competence, which contains the concepts of cohesion and coherence.
An influential development in the history of communicative language teaching was the work of the Council of Europe in creating new language syllabi. When communicative language teaching had effectively replaced situational language teaching as the standard among leading linguists, the Council of Europe made an effort to once again bolster the growth of the new method, which led it to create a new language syllabus. Education was a high priority for the Council of Europe, which set out to provide a syllabus that would meet the needs of European immigrants. Among the studies that it used in designing the course was one by a British linguist, D. A. Wilkins, that defined language using "notions" and "functions," rather than more traditional categories of grammar and vocabulary. The new syllabus reinforced the idea that language could not be adequately explained by grammar and syntax but instead relied on real interaction.
In the mid-1990s, the Dogme 95 manifesto influenced language teaching through the Dogme language teaching movement. It proposed that published materials stifle the communicative approach. As such, the aim of the Dogme approach to language teaching is to focus on real conversations about practical subjects in which communication is the engine of learning. The idea behind the Dogme approach is that communication can lead to explanation, which leads to further learning. That approach is the antithesis of situational language teaching, which emphasizes learning by text and prioritizes grammar over communication.
A survey of communicative competence by Bachman (1990) divides competency into the broad headings of "organizational competence," which includes both grammatical and discourse (or textual) competence, and "pragmatic competence," which includes both sociolinguistic and "illocutionary" competence. Strategic competence is associated with the interlocutors' ability in using communication strategies.
Classroom activities
CLT teachers choose classroom activities based on what they believe will be most effective for students developing communicative abilities in the target language (TL). Oral activities are popular among CLT teachers compared to grammar drills or reading and writing activities, because they include active conversation and creative, unpredicted responses from students. Activities vary based on the level of language class they are used in. They promote collaboration, fluency, and comfort in the TL. The six activities listed and explained below are commonly used in CLT classrooms.
Role-play
Role-play is an oral activity usually done in pairs, whose main goal is to develop students' communicative abilities in a certain setting.
Example:
The instructor sets the scene: where is the conversation taking place? (E.g., in a café, in a park, etc.)
The instructor defines the goal of the students' conversation. (E.g., the speaker is asking for directions, the speaker is ordering coffee, the speaker is talking about a movie they recently saw, etc.)
The students converse in pairs for a designated amount of time.
This activity gives students the chance to improve their communication skills in the TL in a low-pressure situation. Most students are more comfortable speaking in pairs rather than in front of the entire class.
Instructors need to be aware of the differences between a conversation and an utterance. Students may use the same utterances repeatedly when doing this activity and not actually have a creative conversation. If instructors do not regulate what kinds of conversations students are having, then the students might not be truly improving their communication skills.
Interviews
An interview is an oral activity done in pairs, whose main goal is to develop students' interpersonal skills in the TL.
Example:
The instructor gives each student the same set of questions to ask a partner.
Students take turns asking and answering the questions in pairs.
This activity, since it is highly structured, allows the instructor to more closely monitor students' responses. It can zero in on one specific aspect of grammar or vocabulary, while still being a primarily communicative activity and giving the students communicative benefits.
This is an activity that should be used primarily in the lower levels of language classes, because it will be most beneficial to lower-level speakers. Higher-level speakers should be having unpredictable conversations in the TL, where neither the questions nor the answers are scripted or expected. If this activity were used with higher-level speakers, it would not have many benefits.
Group work
Group work is a collaborative activity whose purpose is to foster communication in the TL, in a larger group setting.
Example:
Students are assigned to a group of no more than six people.
Students are assigned a specific role within the group. (E.g., member A, member B, etc.)
The instructor gives each group the same task to complete.
Each member of the group takes a designated amount of time to work on the part of the task to which they are assigned.
The members of the group discuss the information they have found with each other and put it all together to complete the task.
Students can feel overwhelmed in language classes, but this activity can take away from that feeling. Students are asked to focus on one piece of information only, which increases their comprehension of that information. Better comprehension leads to better communication with the rest of the group, which improves students' communicative abilities in the TL.
Instructors should be sure to monitor that each student is contributing equally to the group effort. It takes a good instructor to design the activity well, so that students will contribute equally, and benefit equally from the activity.
Information gap
Information gap is a collaborative activity, whose purpose is for students to effectively obtain information that was previously unknown to them, in the TL.
Example:
The class is paired up. One partner in each pair is Partner A, and the other is Partner B.
All the students that are Partner A are given a sheet of paper with a time-table on it. The time-table is filled in half-way, but some of the boxes are empty.
All the students that are Partner B are given a sheet of paper with a time-table on it. The boxes that are empty on Partner A's time-table are filled in on Partner B's. There are also empty boxes on Partner B's time-table, but they are filled in on Partner A's.
The partners must work together to ask about and supply each other with the information they are both missing, to complete each other's time-tables.
Completing information gap activities improves students' abilities to communicate about unknown information in the TL. These abilities are directly applicable to many real-world conversations, where the goal is to find out some new piece of information, or simply to exchange information.
Instructors should not overlook the fact that their students need to be prepared to communicate effectively for this activity. They need to know certain vocabulary words, certain structures of grammar, etc. If the students have not been well prepared for the task at hand, then they will not communicate effectively.
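For instructors who prepare such worksheets electronically, the construction of the two complementary time-tables can be sketched in a few lines of code. The following Python fragment is purely illustrative and not drawn from the CLT literature; the time-table contents and all names are invented for the example:

```python
import random

# Illustrative only: split one completed time-table into two
# complementary worksheets so that each partner holds exactly the
# entries the other is missing. The time-table contents are invented.
timetable = {
    "Mon 9:00": "grammar review",
    "Mon 11:00": "listening lab",
    "Tue 9:00": "conversation practice",
    "Tue 11:00": "vocabulary quiz",
}

slots = list(timetable)
random.shuffle(slots)
a_slots = set(slots[: len(slots) // 2])   # Partner A sees these entries

worksheet_a = {s: timetable[s] if s in a_slots else "" for s in timetable}
worksheet_b = {s: "" if s in a_slots else timetable[s] for s in timetable}

for name, sheet in [("Partner A", worksheet_a), ("Partner B", worksheet_b)]:
    print(name)
    for slot, entry in sheet.items():
        print(f"  {slot}: {entry or '???'}")
```

Because each blank on one worksheet is filled on the other, neither partner can complete the task without asking the other questions in the target language, which is the point of the activity.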
Opinion sharing
Opinion sharing is a content-based activity, whose purpose is to engage students' conversational skills, while talking about something they care about.
Example:
The instructor introduces a topic and asks students to contemplate their opinions about it. (E.g., dating, school dress codes, global warming)
The students talk in pairs or small groups, debating their opinions on the topic.
Opinion sharing is a great way to get more introverted students to open up and share their opinions. If a student has a strong opinion about a certain topic, then they will speak up and share.
Respect is key with this activity. If a student does not feel like their opinion is respected by the instructor or their peers, then they will not feel comfortable sharing, and they will not receive the communicative benefits of this activity.
Scavenger hunt
A scavenger hunt is a mingling activity that promotes open interaction between students.
Example:
The instructor gives students a sheet with instructions on it. (e.g. Find someone who has a birthday in the same month as yours.)
Students go around the classroom asking and answering questions about each other.
The students try to find all of the answers they need to complete the scavenger hunt.
In doing this activity, students have the opportunity to speak with a number of classmates, while still being in a low-pressure situation, and talking to only one person at a time. After learning more about each other, and getting to share about themselves, students will feel more comfortable talking and sharing during other communicative activities.
Since this activity is not as structured as some of the others, it is important for instructors to add structure. If certain vocabulary should be used in students' conversations, or a certain grammar is necessary to complete the activity, then instructors should incorporate that into the scavenger hunt.
Criticism
Although CLT has been extremely influential in the field of language teaching, it is not universally accepted and has been subject to significant critique.
In his critique of CLT, Michael Swan addresses both the theoretical and practical problems with CLT. He mentions that CLT is not an altogether cohesive subject but one in which theoretical understandings (by linguists) and practical understandings (by language teachers) differ greatly. Criticism of the theory of CLT includes that it makes broad claims regarding the usefulness of CLT while citing little data, it uses a large amount of confusing vocabulary, and it assumes knowledge that is predominately not language-specific (such as the ability to make educated guesses) to be language-specific. Swan suggests that those theoretical issues lead to confusion in the application of CLT techniques.
Where confusion in the application of CLT techniques is readily apparent is in classroom settings. Swan suggests that CLT techniques often suggest prioritizing the "function" of a language (what one can do with the language knowledge one has) over the "structure" of a language (the grammatical systems of the language). That priority can leave learners with serious gaps in their knowledge of the formal aspects of their target language. Swan also suggests that in CLT techniques, the languages that a student might already know are not valued or employed in instructional techniques.
Further critique of CLT techniques in classroom teaching can be attributed to Elaine Ridge. One of her criticisms of CLT is that it falsely implies that there is a general consensus regarding the definition of "communicative competence," which CLT claims to facilitate. Because there is no such agreement, students may be seen to be in possession of "communicative competence" without being able to make full or even adequate use of the language. That individuals are proficient in a language does not necessarily entail that they can make full use of it, which can limit their potential with the language, especially if it is an endangered language. That criticism largely has to do with the fact that CLT is often highly praised and popular even though it may not necessarily be the best method of language teaching.
Ridge also notes that CLT has nonspecific requirements of its teachers, as there is no completely standard definition of what CLT is, which is especially true for the teaching of grammar, the formal rules governing the standardized version of the language in question. Some critics of CLT suggest that the method does not put enough emphasis on the teaching of grammar and instead allows students to produce utterances, despite being grammatically incorrect, as long as the interlocutor can get some meaning from them.
Stephen Bax's critique of CLT has to do with the context of its implementation. Bax asserts that many researchers associate the use of CLT techniques with modernity and therefore see the absence of CLT techniques as a lack of modernism. In that way, those researchers consider teachers or school systems that fail to use CLT techniques outdated and suggest that their students learn the target language "in spite of" the absence of CLT techniques, as if CLT were the only way to learn a language and everyone who fails to implement its techniques were ignorant and could not teach the target language.
See also
English as an additional language
Grammar–translation method
Language education
Language exchange
Learning by teaching (LdL)
Notional-functional syllabus
Task-based language learning
Teaching English as a foreign language
Target language (translation)
References
Further reading
Færch, C., & Kasper, G. (1983). Strategies in interlanguage communication. London: Longman.
Language-teaching methodology
ADDIE Model
ADDIE is an instructional systems design (ISD) framework that many instructional designers and training developers use to develop courses. The name is an acronym for the five phases it defines for building training and performance support tools:
Analysis
Design
Development
Implementation
Evaluation
Most current ISD models are variations of the ADDIE process. Other models include the Dick and Carey and Kemp ISD models. Rapid prototyping is another common alternative.
Instructional theories are important in instructional materials design. These include behaviorism, constructivism, social learning, and cognitivism.
History
Florida State University initially developed the ADDIE framework in 1975 to explain, “...the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job and which can also be applied to any interservice curriculum development activity.” The model originally contained several steps under its five original phases (analyze, design, develop, implement, and evaluate). The idea was to complete each phase before moving to the next. Subsequent practitioners revised the steps, and eventually the model became more dynamic and interactive than the original hierarchical version. By the mid-1980s, the version familiar today appeared.
The origin of the label itself is obscure, but the underlying ISD concepts come from a model developed for the U.S. armed forces in the mid 1970s. As Branson (1978) recounts, the Center for Educational Technology at Florida State University worked with a branch of the U.S. Army to develop a model, which evolved into the Interservice Procedures for Instructional Systems Development (IPISD), intended for the Army, Navy, Air Force, and Marine Corps. Branson provides a graphic overview of the IPISD, which shows five top-level headings: analyze, design, develop, implement, and control. Virtually all subsequent historical reviews of ID reference this model but, notably, users do not refer to it by the ADDIC acronym. The authors and users refer only to IPISD. Hence, it is clearly not the source of the ADDIE acronym.
Phases of ADDIE (Analysis, Design, Development, Implementation and Evaluation)
Analysis phase
The analysis phase clarifies the instructional problems and objectives, and identifies the learning environment and learner's existing knowledge and skills. Questions the analysis phase addresses include:
Who are the learners and what are their characteristics?
What is the desired new behavior?
What types of learning constraints exist?
What are the delivery options?
What are the pedagogical considerations?
What adult learning theory considerations apply?
What is the timeline for project completion?
The process of asking these questions is often part of a needs analysis. During the needs analysis, instructional designers (IDs) will determine constraints and resources in order to fine-tune their plan of action.
Design phase
The design phase deals with learning objectives, assessment instruments, exercises, content, subject matter analysis, lesson planning, and media selection. The design phase should be systematic and specific. Systematic means a logical, orderly method that identifies, develops, and evaluates a set of planned strategies for attaining project goals. Specific means the team must execute each element of the instructional design plan with attention to detail. The design phase may involve writing a design document/design proposal or concept and structure note to aid final development.
Development phase
In the development phase, instructional designers and developers create and assemble the content assets described in the design phase. If e-learning is involved, programmers develop or integrate technologies. Designers create storyboards. Testers debug materials and procedures. The team reviews and revises the project according to feedback. After completing the development of the course material, it is imperative that the designers conduct a pilot test; this can be carried out by involving key stakeholders and rehearsing the course material.
Implementation phase
The implementation phase develops procedures for training facilitators and learners. Training facilitators cover the course curriculum, learning outcomes, method of delivery, and testing procedures. Preparation for learners includes training them on new tools (software or hardware) and student registration. Implementation includes evaluation of the design.
Evaluation phase
The evaluation phase consists of two aspects: formative and summative. Formative evaluation is present in each stage of the ADDIE process, while summative evaluation is conducted on finished instructional programs or products. Donald Kirkpatrick's Four Levels of Learning Evaluation are often utilized during this phase of the ADDIE process.
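The relationship between the phases and the two kinds of evaluation can be illustrated schematically. The following Python sketch is only a toy illustration of the process shape described above, not an official specification of ADDIE; the function names and the pass/fail logic are invented for the example:

```python
# A toy sketch of the ADDIE flow: a formative check follows each phase
# (looping back for revision when it fails), and a summative evaluation
# runs once on the finished product. All names here are invented.

PHASES = ["analysis", "design", "development", "implementation"]

def run_phase(phase, project):
    """Placeholder for the real work of a phase."""
    project[phase] = f"{phase} deliverables"
    return project

def formative_evaluation(phase, project):
    """Placeholder formative check: are this phase's outputs usable?"""
    return phase in project

def summative_evaluation(project):
    """Placeholder summative evaluation of the finished program."""
    return all(phase in project for phase in PHASES)

def addie():
    project = {}
    i = 0
    while i < len(PHASES):
        project = run_phase(PHASES[i], project)
        if formative_evaluation(PHASES[i], project):
            i += 1                      # move on to the next phase
        else:
            i = max(i - 1, 0)           # revise: step back one phase
    print("summative evaluation passed:", summative_evaluation(project))

addie()
```

The loop structure reflects the point above that formative evaluation is present at every stage, while the single summative call reflects evaluation of the finished instructional product.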
Other versions
Some institutions have modified the ADDIE model to meet specific needs. For example, the United States Navy created a version they call PADDIE+M. The P phase is the planning phase, which develops project goals, project objectives, budget, and schedules. The M phase is the maintenance phase, which implements life cycle maintenance with continuous improvement methods. This model is gaining acceptance in the United States government as a more complete model of ADDIE. Some organizations have adopted the PADDIE model without the M phase. Pavlis Korres (2010), in her instructional model (ESG Framework), has proposed an expanded version of ADDIE, named ADDIE+M, where M = Maintenance of the Learning Community Network after the end of a course. The Maintenance of the Learning Community Network is a modern educational process that supports the continuous educational development of its members with social media and web tools.
See also
Educational technology
Instructional technology
Instructional design
Design-based learning
Information technology
References
Further reading
Pedagogy
Instructional design models
Phenomenography
Phenomenography is a qualitative research methodology, within the interpretivist paradigm, that investigates the qualitatively different ways in which people experience something or think about something. It is an approach to educational research which appeared in publications in the early 1980s. It initially emerged from an empirical rather than a theoretical or philosophical basis.
While being an established methodological approach in education for several decades, phenomenography has now been applied rather extensively in a range of diverse disciplines such as environmental management, computer programming, workplace competence, and internationalization practices.
Overview
Phenomenography's ontological assumptions are subjectivist: the world exists, and different people construct it in different ways, from a non-dualist viewpoint (viz., there is only one world, one that is ours, and one that people experience in many different ways). Phenomenography's research object has the character of knowledge; therefore its ontological assumptions are also epistemological assumptions.
Its emphasis is on description. Its data collection methods typically include semi-structured interviews with a small, purposive sample of subjects, with the researcher "working toward an articulation of the interviewee’s reflections on experience that is as complete as possible". Description is important because our knowledge of the world is a matter of meaning and of the qualitative similarities and differences in meaning as it is experienced by different people.
A phenomenographic data analysis sorts qualitatively distinct perceptions which emerge from the data collected into specific "categories of description." The set of these categories is sometimes referred to as an "outcome space." These categories (and the underlying structure) become the phenomenographic essence of the phenomenon. They are the primary outcomes and are the most important result of phenomenographic research. Phenomenographic categories are logically related to one another, typically by way of hierarchically inclusive relationships, although linear and branched relationships can also occur. That which varies between different categories of description is known as the "dimensions of variation."
The process of phenomenographic analysis is strongly iterative and comparative. It involves continual sorting and resorting of data and ongoing comparisons between the data and the developing categories of description, as well as between the categories themselves.
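The shape of this iterative, comparative loop can be illustrated schematically. The following Python sketch is only a toy illustration: real phenomenographic analysis is an interpretive human process, and the "similarity" judgment here is a crude word-overlap placeholder standing in for the researcher's reading of the data (all names and the heuristic are invented for the example):

```python
# Toy sketch of the *shape* of the iterative sorting and resorting
# described above. The similar() heuristic is a placeholder for the
# researcher's interpretive judgment; all names are invented.

def similar(excerpt, category, threshold=0.5):
    """Placeholder judgment: does this excerpt express the same
    conception as the excerpts already in the category?"""
    words = set(excerpt.split())
    cat_words = set(" ".join(category).split())
    return len(words & cat_words) / len(words) >= threshold

def sort_into_categories(excerpts, max_passes=10):
    categories = []
    for _ in range(max_passes):            # resort until stable
        changed = False
        for excerpt in excerpts:
            home = next((c for c in categories if excerpt in c), None)
            target = next((c for c in categories if similar(excerpt, c)), None)
            if target is None:             # no fit: open a new category
                target = []
                categories.append(target)
            if home is not target:
                if home is not None:
                    home.remove(excerpt)
                target.append(excerpt)
                changed = True
        categories = [c for c in categories if c]   # drop emptied ones
        if not changed:                    # stable "outcome space"
            break
    return categories

print(sort_into_categories([
    "learning is memorising facts",
    "learning is memorising definitions",
    "learning is seeing the world differently",
]))
```

The repeated passes, the reassignment of excerpts between categories, and the stopping condition mirror the continual sorting, resorting, and comparison that the methodology describes; the resulting set of categories corresponds to the "outcome space."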
A phenomenographic analysis seeks a "description, analysis, and understanding of . . . experiences". The focus is on variation: variation in both the perceptions of the phenomenon, as experienced by the actor, and in the "ways of seeing something" as experienced and described by the researcher. This is described as phenomenography's "theory of variation." Phenomenography allows researchers to use their own experiences as data for phenomenographic analysis; it aims for a collective analysis of individual experiences.
Emphasis on description
Phenomenographic studies usually involve contextual groups of people, and data collection involves individual descriptions of understanding, often through interviews. Analysis is whole-group oriented, since all data are analysed together with the aim of identifying possible conceptions of experience related to the phenomenon under investigation, rather than individual experiences. There is an emphasis on detailed analysis of description, which follows from an assumption that conceptions are formed both from the results of human action and from the conditions for it. Clarification of understanding and experience depends upon the meaning of the conceptions themselves. The object of phenomenographic study is not the phenomenon per se but the relationship between the actors and the phenomenon.
Distinguished from phenomenology
Phenomenography is not phenomenology. Phenomenographers adopt an empirical orientation and they investigate the experiences of others. The focus of interpretive phenomenology is upon the essence of the phenomenon, whereas the focus of phenomenography is upon the essence of the experiences and the subsequent perceptions of the phenomenon.
See also
Ference Marton
Antipositivism
References
Qualitative research
Educational research
Character education
Character education is an umbrella term loosely used to describe the teaching of children and adults in a manner that will help them develop variously as moral, civic, good, mannered, behaved, non-bullying, healthy, critical, successful, traditional, compliant or socially acceptable beings. Concepts that now and in the past have fallen under this term include social and emotional learning, moral reasoning and cognitive development, life skills education, health education, violence prevention, critical thinking, ethical reasoning, and conflict resolution and mediation. Many of these are now considered failed programs, i.e. "religious education", "moral development", "values clarification".
Today, there are dozens of character education programs in, and vying for adoption by, schools and businesses. Some are commercial, some non-profit and many are uniquely devised by states, districts and schools, themselves. A common approach of these programs is to provide a list of principles, pillars, values or virtues, which are memorized or around which themed activities are planned. It is commonly claimed that the values included in any particular list are universally recognized. However, there is no agreement among the competing programs on core values (e.g., honesty, stewardship, kindness, generosity, courage, freedom, justice, equality, and respect) or even how many to list. There is also no common or standard means for assessing, implementing or evaluating programs.
Terminology
"Character" is one of those overarching concepts that is the subject of disciplines from philosophy to theology, from psychology to sociology—with many competing and conflicting theories. Thomas Lickona defines character education as "the deliberate effort to develop virtues that are good for the individual and good for society." More recently, psychologist Robert McGrath has proposed that character education is less focused on social skill acquisition and more on constructing a moral identity within a life narrative.
Character as it relates to character education most often refers to how 'good' a person is. In other words, a person who exhibits personal qualities like those a society considers desirable might be considered to have good character—and developing such personal qualities is often seen as a purpose of education. However, the various proponents of character education are far from agreement as to what "good" is, or what qualities are desirable. Compounding this problem is that there is no scientific definition of character. Because such a concept blends personality and behavioral components, scientists have long since abandoned use of the term "character" and, instead, use the term psychological motivators to measure the behavioral predispositions of individuals. With no clinically defined meaning, there is virtually no way to measure if an individual has a deficit of character, or if a school program can improve it.
The various terms in the lists of values that character education programs propose—even those few found in common among some programs—suffer from vague definitions. This makes the need and effectiveness of character education problematic to measure.
In-school programs
There is no common practice in schools in relation to the formation of pupils' character or values education. This is partly due to the many competing programs and the lack of standards in character education, but also because of how and by whom the programs are executed.
Programs are generally of four varieties: cheerleading, praise and reward, define and drill, and forced formality. They may be used alone or in combination.
1) Cheerleading involves multicolored posters, banners, and bulletin boards featuring a value or virtue of the month; lively morning public-address announcements; occasional motivational assemblies; and possibly a high-profile event such as a fund-raiser for a good cause.
2) The praise-and-reward approach seeks to make virtue into habit using "positive reinforcement". Elements include "catching students being good" and praising them, or giving them chits that can be exchanged for privileges or prizes. In this approach, all too often, the real significance of the students' actions is lost, as the reward or award becomes the primary focus.
3) Define-and-drill calls on students to memorize a list of values and the definition of each. Students' simple memorization of definitions seems to be equated with their development of the far more complex capacity for making moral decisions.
4) Forced-formality focuses on strict, uniform compliance with specific rules of conduct (e.g., walking in lines, arms at one's sides) or formal forms of address ("yes sir," "no ma'am") or other procedures deemed to promote order or respect of adults.
"These four approaches aim for quick behavioral results, rather than helping students better understand and commit to the values that are core to our society, or helping them develop the skills for putting those values into action in life's complex situations."
Generally, the most common practitioners of character education in the United States are school counselors, although there is a growing tendency to include other professionals in schools and the wider community. Depending on the program, the means of implementation may be by teachers and/or any other adults (faculty, bus drivers, cafeteria workers, maintenance staff, etc.); by storytelling, which can be through books and media; or by embedding into the classroom curriculum. There are many theories about means, but no comparative data and no consensus in the industry as to what, if any, approach may be effective.
History
It has been said that, "character education is as old as education itself". Indeed, the attempt to understand and develop character extends into prehistory.
Understanding character
Psychic arts
Since very early times, people have tried to access or "read" the pre-disposition (character) of self and others. Being able to predict and even manipulate human behavior, motivations, and reactions would bestow obvious advantages. Pre-scientific character assessment techniques have included, among others: anthropometry, astrology, palmistry, and metoposcopy. These approaches have been scientifically discredited although they continue to be widely practiced.
Race character
The concept of inherited "race character" has long been used to characterize desirable versus undesirable qualities in members of groups as a whole along national, tribal, ethnic, religious and even class lines. Race character is predominantly used as a justification for the denigration and subsequent persecution of minority groups, most infamously, justifying European persecution of Native Americans, the concept of slavery, and the Nazis' persecution of Jews. Though race character continues to be used as a justification for persecution of minorities worldwide, it has been scientifically discredited and is not overtly a component of modern character education in western societies.
Generational character
Particularly in modern liberal republics, social and economic change is rapid and can result in cognitive stress for older generations as each succeeding generation expands on and exhibits its own modes of expressing the freedoms such societies enjoy.
America is a prime example. With few traditions, each generation exhibits attitudes and behaviors that conservative segments of preceding generations uneasily assimilate. Individual incidents can also produce a moral panic. Cries about loss of morals in the succeeding generation, overwhelmingly unsubstantiated, and calls for remediation have been constant in America since before its founding. (It should be expected that—in a free country that supports children's rights—this trend will continue apace.)
Developing character
Eastern philosophy
Eastern philosophy views the nature of man as initially quiet and calm, but when affected by the external world, it develops desires. When the desires are not properly controlled and the conscious mind is distracted by the material world, we lose our true selves and the principle of reason in Nature is destroyed. From this arise rebellion, disobedience, cunning and deceit, and general immorality. This is the way of chaos. Confucianism stands with Taoism as two of the great religious/philosophical systems of China.
A hallmark of the philosophy of Confucius is his emphasis on tradition and study. He disparages those who have faith in natural understanding or intuition and argues for long and careful study. Study, for Confucius, means finding a good teacher, one who is familiar with the ways of the past and the practices of the ancients, and imitating his words and deeds. The result is a heavy scheme of obligations and intricate duties throughout all of one's many social roles. Confucius is said to have sung his sayings and accompanied himself on a 'qin' (a kind of zither). According to Confucius, musical training is the most effective method for molding the moral character of man and keeping society in order. He said: "Let a man be stimulated by poetry, established by the rules of propriety, perfected by music."
The theme of Taoism is one of harmony with nature. Zhuangzi was a central figure in Taoist philosophy. He wrote that people develop different moral attitudes from different natural upbringings, each feeling that his own views are obvious and natural, yet all are blinded by this socialization to their true nature. To Zhuangzi, pre-social desires are relatively few and easy to satisfy, but socialization creates a plethora of desires for "social goods" such as status, reputation, and pride. These conventional values, because of their comparative nature, create attitudes of resentment and anger, inciting competition and then violence. The way to social order is for people to eliminate these socialized ambitions through open-minded receptivity to all kinds of voices—particularly those who have run afoul of human authority or seem least authoritative. Each has insights. Indeed, in Taoist moral philosophy, perfection may well look like its opposite to us. One theme of Zhuangzi's that links Taoism to the Zen branch of Buddhism is the concept of flow, of losing oneself in activity, particularly the absorption in skilled execution of a highly cultivated way. His most famous example concerns a butcher who carves beef with the focus and absorption of a virtuoso dancer in an elegantly choreographed performance. The height of human satisfaction comes in achieving and exercising such skills with the focus and commitment that gets us "outside ourselves" and into such an intimate connection with our inborn nature.
Western philosophy
The early Greek philosophers felt that happiness requires virtue and hence that a happy person must have virtuous traits of character.
Socrates identifies happiness with pleasure and explains the various virtues as instrumental means to pleasure. He teaches, however, that pleasure is to be understood in an overarching sense wherein fleeing battle is a momentary pleasure that detracts from the greater pleasure of acting bravely.
Plato wrote that to be virtuous, we must both understand what contributes to our overall good and have our spirited and appetitive desires educated properly and guided by the rational part of the soul. The path he prescribes is that a potentially virtuous person should learn when young to love and take pleasure in virtuous actions, but he must wait until late in life to develop the understanding of why what he loves is good. An obvious problem is that this reasoning is circular.
Aristotle is perhaps, even today, the most influential of all the early Western philosophers. His view is often summarized as 'moderation in all things'. For example, courage is worthy, for too little of it makes one defenseless. But too much courage can result in foolhardiness in the face of danger. To be clear, Aristotle emphasizes that the moderate state is not an arithmetic mean, but one relative to the situation: sometimes the mean course is to be angry at, say, injustice or mistreatment, at other times anger is wholly inappropriate. Additionally, because people are different, the mean for one person may be bravery, but for another it is recklessness.
For Aristotle, the key to finding this balance is to enjoy and recognize the value of developing one's rational powers, and then using this recognition to determine which actions are appropriate in which circumstances.
The views of nineteenth-century philosophers were heavily indebted to these early Greeks. Two of them, Karl Marx and John Stuart Mill, had a major influence on approaches to developing character.
Karl Marx applies Aristotle's conclusions in his understanding of work as a place where workers should be able to express their rational powers. But workers subject to capitalist values are characterized primarily by material self-interest. This makes them distrustful of others, viewing them primarily as competitors. Given these attitudes, workers become prone to a number of vices, including selfishness, cowardice, and intemperance.
To correct these conditions, he proposes that workers perform tasks that are interesting and mentally challenging, and that each worker help decide how, and to what ends, their work should be directed. Marx believes that this, coupled with democratic conditions in the workplace, reduces competitive feelings among workers, so that they want to exhibit traditional virtues like generosity and trustfulness and to avoid traditional vices such as cowardice, stinginess, and self-indulgence.
John Stuart Mill, like Marx, also highly regarded development of the rational mind. He argued that seriously unequal societies, by preventing individuals from developing their deliberative powers, affect individuals' character in unhealthy ways and impede their ability to live virtuous lives. In particular, Mill argued that societies that have systematically subordinated women have harmed men and women, and advised that the place of women in families and in societies be reconsidered.
Contemporary views
Because women and men today may not be well-positioned to fully develop the capacities Aristotle and others considered central to virtuous character, the development of moral character continues to be a central issue not only in ethics, but also in feminist philosophy, political philosophy, philosophy of education, and philosophy of literature. Because moral character requires communities where citizens can fully realize their human powers and ties of friendship, there are hard questions about how educational, economic, political, and social institutions should be structured to make that development possible.
Situationism
Impressed by scientific experiments in social psychology, "situationist" philosophers argue that character traits are not stable or consistent and cannot be used to explain why people act as they do. Experimental data shows that much of human behavior is attributable to seemingly trivial features of the situations in which people find themselves. In a typical experiment, seminary students agreed to give a talk on the importance of helping those in need. On the way to the building where their talks were to be given, they encountered a confederate slumped over and groaning. Ironically, those who were told they were already late were much less likely to help than those who were told they had time to spare.
Perhaps most damning to the traditional view of character are the results of the experiments conducted by Stanley Milgram in the 1960s and Philip G. Zimbardo in 1971. In the first of these experiments, the great majority of subjects, when politely though firmly requested by an experimenter, were willing to administer what they thought were increasingly severe electric shocks to a screaming "victim." In the second, the infamous Stanford prison experiment, the planned two-week investigation into the psychology of prison life had to be ended after only six days because the college students who were assigned to act as guards became sadistic and those who were the "prisoners" became depressed and showed signs of extreme stress. These and other experiments are taken to show that if humans do have noble tendencies, they are narrow, "local" traits that are not unified with other traits into a wider behavioral pattern of being.
History of character education in U.S. schools
The colonial period
As common schools spread throughout the colonies, the moral education of children was taken for granted. Formal education had a distinctly moral and religious emphasis. In the Christian tradition, it is believed that humans are flawed at birth (original sin), requiring salvation through religious means: teaching, guidance, and supernatural rituals. In America, originally heavily populated by Protestant immigrants, this belief created an a priori assumption that humans are morally deficient by nature and that preemptive measures by home, church, and school are needed to develop children into acceptable members of society.
Character education in school in the United States began with the circulation of the New England Primer. Besides rudimentary instruction in reading, it was filled with Biblical quotes, prayers, catechisms and religiously charged moral exhortations. Typical is this short verse from the 1777 edition:
Good children must,
Fear God all day, Love Christ alway,
Parents obey, In secret pray,
No false thing say, Mind little play,
By no sin stray, Make no delay,
In doing good.
Nineteenth century
As the young republic took shape, schooling was promoted for both secular and moral reasons. By the time of the nineteenth century, however, religion became a problem in the schools. In the United States, the overwhelmingly dominant religion was Protestantism. While not as prominent as during the Puritan era, the King James Bible was, nevertheless, a staple of U.S. public schools. Yet, as waves of immigrants from Ireland, Germany, and Italy came to the country from the mid-nineteenth century forward, they reacted to the Protestant tone and orthodoxy of the schools. Concerned that their children would be weaned from their faith, Catholics developed their own school system. Later in the twentieth century, other religious groups, such as Jews, Muslims, and even various Protestant denominations, formed their own schools. Each group desired, and continues to desire, that its moral education be rooted in its respective faith or code.
Horace Mann, the nineteenth-century champion of the common schools, strongly advocated for moral education. He and his followers were worried by the widespread drunkenness, crime, and poverty during the Jacksonian period they lived in. No less troubling were the waves of immigrants flooding into cities, unprepared for urban life and particularly unprepared to participate in democratic civic life.
The most successful textbooks during the nineteenth and early twentieth centuries were the famed McGuffey Readers, which fostered virtues such as thrift, honesty, piety, punctuality, and industry. McGuffey was a theologically conservative teacher who attempted to give schools a curriculum that would instill Presbyterian Calvinist beliefs and manners in their students.
Mid-twentieth century
During the late nineteenth and twentieth centuries, intellectual leaders and writers were deeply influenced by the ideas of the English naturalist Charles Darwin, the German political philosopher Karl Marx, the Austrian neurologist and founder of psychoanalysis Sigmund Freud, and by a growing strict interpretation of the separation of church and state doctrine. This trend increased after World War II and was further intensified by what appeared to be changes in the nation's moral consensus in the late 1960s. Educators and others became wary of using the schools for moral education. More and more, this was seen to be the province of the family and the church.
Still, due to a perceived view of academic and moral decline, educators continued to receive mandates to address the moral concerns of students, which they did using primarily two approaches: values clarification and cognitive developmental moral education.
Values clarification. Values change over time in response to changing life experiences. Recognizing these changes and understanding how they affect one's actions and behaviors is the goal of the values clarification process. Values clarification does not tell students what values they should have; it simply provides the means to discover what their values are. This approach, although widely practiced, came under strong criticism for, among other things, promoting moral relativism among students.
Cognitive-developmental theory of moral education and development sprang from the work of the Swiss psychologist Jean Piaget and was further developed by Lawrence Kohlberg. Kohlberg rejected the focus on values and virtues, not only due to the lack of consensus on what virtues are to be taught, but also because of the complex nature of practicing such virtues. For example, people often make different decisions yet hold the same basic moral values. Kohlberg believed a better approach to affecting moral behavior should focus on stages of moral development. These stages are critical, as they consider the way a person organizes their understanding of virtues, rules, and norms, and integrates these into a moral choice.
Character education movement of the 1980s
The impetus and energy behind the return of a more didactic character education to American schools did not come from within the educational community. It continues to be fueled by desire from conservative and religious segments of the population for traditionally orderly schools where conformity to "standards" of behavior and good habits is stressed. State and national politicians, as well as local school districts, lobbied by character education organizations, have responded by supporting this sentiment. During his presidency, Bill Clinton hosted five conferences on character education. President George W. Bush expanded on the programs of the previous administration and made character education a major focus of his educational reform agenda.
21st century developments
Grit is defined as perseverance and commitment to long-term goals. It is a character attribute associated with University of Pennsylvania professor Angela Duckworth, who wrote about her research in a best-selling book and promoted it in a widely watched TED Talk. Initially lauded as a breakthrough discovery of the "key character ingredient" for success and performance, grit soon came under wide criticism: like other character interventions, it is suspect as a character construct, and where attempts have been made to implement it in school programs, it shows no more than a weak effect, if any. Moreover, Duckworth misinterpreted the original data. Additionally, the grit construct ignores the socio-economic prerequisites necessary to deploy it.
Modern scientific approaches
Today, the sciences of social psychology, neuropsychology and evolutionary psychology have taken new approaches to the understanding of human social behavior.
Personality and social psychology is a scientific method used by health professionals for researching personal and social motivators in and between the individual and society, as well as applying them to the problems people have in the context of society. Personality and social psychologists study how people think about, influence, and relate to one another. By exploring forces within the person (such as traits, attitudes, and goals) as well as forces within the situation (such as social norms and incentives), they seek to provide insight into issues as wide-ranging as prejudice, romantic attraction, persuasion, friendship, helping, aggression, conformity, and group interaction.
Neuropsychology addresses how brain regions associated with emotional processing are involved in moral cognition by studying the biological mechanisms that underlie human choices and behavior. Like social psychology, it seeks to determine, not how we should, but how we do behave—though neurologically. For instance, what happens in the brain when we favor one response over another, or when it is difficult to make any decision? Studies of clinical populations, including patients with VMPC (ventromedial prefrontal cortex) damage, reveal an association between impairments in emotional processing and impairments in moral judgement and behavior. These and other studies conclude that not only are emotions engaged during moral cognition, but that emotions, particularly those mediated by VMPC, are in fact critical for morality.
Other neurological research is documenting how much the unconscious mind is involved in decision making. According to cognitive neuroscientists, we are conscious of only about 5 percent of our cognitive activity, so most of our decisions, actions, emotions, and behavior depend on the 95 percent of brain activity that goes beyond our conscious awareness. These studies show that actions come from preconscious brain activity patterns and not from people consciously thinking about what they are going to do. A 2011 study conducted by Itzhak Fried found that individual neurons fire 2 seconds before a reported "will" to act (long before EEG activity predicted such a response). This was accomplished with the help of volunteer epilepsy patients, who needed electrodes implanted deep in their brain for evaluation and treatment anyway. Similarly to these tests, a 2013 study found that the choice to sum or subtract can be predicted before the subject reports it.
Evolutionary psychology, a new science, emerged in the 1990s to focus on explaining human behavior against the backdrop of Darwinian processes. This science considers how the biological forces of genetics and neurotransmission in the brain influence unconscious and conscious strategies, and proposes that these features of biology have developed through evolutionary processes. In this view, the cognitive programs of the human brain are adaptations. They exist because this behavior in our ancestors enabled them to survive and reproduce, passing these same traits to their descendants, thereby equipping us with solutions to problems that our ancestors faced during our species' evolutionary history. Ethical topics addressed include altruistic behaviors, deceptive or harmful behaviors, an innate sense of fairness or unfairness, feelings of kindness or love, self-sacrifice, feelings related to competitiveness and moral punishment or retribution, and moral "cheating" or hypocrisy.
Issues and controversies
Scientific studies
The largest federal study to date, a 2010 report released under the auspices of the U.S. Department of Education, found that the vast majority of character education programs have failed to prove their effectiveness, producing no improvements in student behavior or academic performance. Previous and current research on the subject fails to find a single peer-reviewed study demonstrating any scientifically validated need for, or result from, character education programs. Typically, support is attested by referring to "correlations" (e.g., grades, number of disciplinary referrals, subjective opinion, etc.).
Functional and ideological problems
1) An assumption that "character" is deficient in some or all children
2) Lack of agreement on what constitutes effectiveness
3) Lack of evidence that it does what it claims
4) A conflict between what good character is and the way that character education proposes to teach it
5) Differing standards in methods and objectives, and differing standards for assessing need and evaluating results, though some attempts at standardization have been made
6) Supportive "studies" that overwhelmingly rely on subjective feedback (usually surveys) from vested participants
7) Programs instituted towards ideological and/or religious ends
8) The pervasive problem of confusing morality with social conformity
9) There are few if any common goals among character education programs. The dissension in the lists of values among character education programs itself constitutes a major criticism of the claim that there is anything in character education that is either fundamental or universally relevant to students or society.
10) It might be said that there is agreement insofar as certain values do not find inclusion on lists of core values. Not found, even though they are fundamental to the success of modern democratic societies, are such noted values as independence, inventiveness, curiosity, critical thinking, skepticism, and even moderation. "Take chances, make mistakes, get messy!", the famous saying of Ms. Frizzle on the much-celebrated TV show The Magic School Bus, embodies values that would be antithetical to those found on today's character education lists.
See also
Lawrence Kohlberg's stages of moral development
Journal of Moral Education
Moral character
Moral development
Moral emotions
Moral enhancement
Moral psychology
Moral reasoning
Social cognitive theory of morality
Values education
References
Notes
Further reading
Arthur, J. (2003). Education with Character, New York: Routledge Falmer
United States educational programs
Education policy

Education policy consists of the principles and policy decisions that influence the field of education, as well as the collection of laws and rules that govern the operation of education systems. Education governance may be shared between the local, state, and federal government at varying levels. Some analysts see education policy in terms of social engineering.
Education takes place in many forms for many purposes through many institutions. Examples of such educational institutions may include early childhood education centers, kindergarten to 12th grade schools, two- and four-year colleges or universities, graduate and professional education institutes, adult-education establishments, and job-training schemes. The educational goals of these institutions influence education policy. Furthermore, these education policies can affect the education people engage in at all ages.
Examples of areas subject to debate in education policy, specifically from the field of schools, include school size, class size, school choice, school privatization, police in schools, tracking, teacher selection, education and certification, teacher pay, teaching methods, curricular content, graduation requirements, school-infrastructure investment, and the values that schools are expected to uphold and model.
Issues in education policy also address problems within higher education. The Pell Institute analyzes the barriers experienced by teachers and students within community colleges and universities. These issues involve undocumented students, sex education, and federal grant aid.
Education policy analysis is the scholarly study of education policy. It seeks to answer questions about the purpose of education, the objectives (societal and personal) that it is designed to attain, the methods for attaining them and the tools for measuring their success or failure. Research intended to inform education policy is carried out in a wide variety of institutions and in many academic disciplines. For example, researchers are affiliated with schools and departments of education, public policy, psychology, economics, sociology, and human development. Additionally, sociology, political science, economics, and law are all disciplines that can be used to better understand how education systems function, what their impacts are, and how policies might be changed for different conditions. Education policy is sometimes considered a sub-field of social policy and public policy. Examples of education policy analysis may be found in such academic journals as Education Policy Analysis Archives and in university-policy centers such as the National Education Policy Center housed at the University of Colorado Boulder.
Education reform in the United States
Over the past 30 years, policymakers at the state and federal levels of government have steadily increased their involvement in US schools. According to the Tenth Amendment to the United States Constitution, state governments have the main authority on education. State governments spend most of their budgets funding schools, whereas only a small portion of the federal budget is allocated to education. The federal government advances its role by building on state and local education policies. Over time, the role of the federal government grew through federal education policies that affected the funding and evaluation of education. For example, the National Defense Education Act (NDEA) was established in 1958 to increase federal funding to schools, and the National Assessment of Educational Progress was created to track and compare student performance in academic subjects across the states. Moreover, the United States Department of Education was created in 1979.
Education reform is currently seen as a "tangled web" due to the nature of education authority. Some education policies are defined at the federal, state, or local level, and in most cases these authorities overlap one another. This manner of authority has led many to believe there is an inefficiency within education governance. Compared to other OECD countries, educational governance in the US is more decentralized, and most of its autonomy is found at the state and district levels. The reason for this is that US citizens put an emphasis on individual rights and fear federal government overreach. A recent report by the National Center on Education and the Economy argues that the education system is neither coherent nor likely to see improvement, given this structure.
A critical race theory analysis of the history of education reform in the United States reveals the influence of systemic racism on educational policy. Historically, educational policy changes have resulted via progress from protest, and such progress met with pushback.
In the state of Texas during the 84th Legislature, several education reform bills were filed and sponsored by education reform groups, such as Texans for Education Reform. Lawmakers want to create more involvement at the local level and more transparency in public schools. These groups are pressured and opposed by teachers' unions, which say that accountability and transparency policies target educators and attempt to hold them responsible for the education system.
Teacher policy
Teacher policy is education policy that addresses the preparation, recruitment, and retention of teachers. A teacher policy is guided by the same overall vision and essential characteristics as the wider education policy: it should be strategic, holistic, feasible, sustainable, and context-sensitive. Overall objectives and major challenges to be addressed, the funding to achieve these objectives, the demographic parameters of the learner population and the human resources required to achieve universally accessible quality education should all be addressed in a comprehensive teacher policy.
Nine key dimensions
Nine key dimensions are considered crucial to any comprehensive teacher policy: Teacher Recruitment and Retention, Teacher Education (Initial and Continuing), Deployment, Career Structures/Paths, Teacher Employment and Working Conditions, Teacher Reward and Remuneration, Teacher Standards, Teacher Accountability, and School Governance.
Teacher Recruitment and Retention
An effective education system must have a reliable way to attract, recruit, and retain outstanding educators. There has been a growing demand for teachers, but the supply continues to diminish and many of them leave the profession. This development is a threat to the "academic and economic welfare of students". It affects learning and drains taxpayers' money. The federal and state governments, along with the districts, must invest in complete human capital systems; this is the best approach to preparing and retaining committed and capable mentors for the long term. A reasonable talent-management strategy for the education sector must focus on the recruitment, development, and retention of intelligent and efficient teachers.
Teacher Education (Initial and Continuing)
Teachers need to go back to school periodically to become better educators. Good mentors can become outstanding by going further than textbooks. This is the logic behind continuing education. Technology in the form of web-based workshops and lectures will be helpful. School administrators and district officials must push their teachers to make use of available resources and opportunities to continue the learning process. Conferences with workshops are also valuable because these activities provide teachers with tools for integration of technology in the classrooms and Continuing Professional Development Units in boosting their careers. School administrators must ensure that teachers are not only competent in classroom management but also in protecting students from harm such as bullying.
Gender equality
Quality and timely data and evidence are key factors for policy-making, planning, and delivery to advance gender equality in and through education. They can help countries to identify and analyse gendered patterns and trends, and better plan and target resources to address gender inequalities. They can also help to identify and inform interventions that influence participation, learning and empowerment, from early childhood to tertiary education and beyond.
Though the SDG 4 monitoring framework is a step forward in the policy process, a complete monitoring framework for gender equality in and through education should include indicators that consider:
Social and gender norms;
Values and attitudes (many of which can be influenced by education);
Education laws and policies, as well as legislation and policies outside of the education system;
Resource distribution, and teaching and learning practices and environments.
Efforts are also needed to track disparities in informal and non-formal learning contexts with a lifelong learning approach, and to ensure that data are collected on the most excluded.
See also
Education policy in Brazil
Higher education policy
Sources
References
Information on education policy, OECD - Contains indicators and information about education policy in OECD countries.
External links
OECD's Education GPS: a review of education policy analysis and statistics.
Social dynamics

Social dynamics (or sociodynamics) is the study of the behavior of groups and of the interactions of individual group members, aiming to understand the emergence of complex social behaviors among microorganisms, plants and animals, including humans. It is related to sociobiology but also draws from physics and complex system sciences.
In the last century, sociodynamics was viewed as part of psychology, as shown in the work "Sociodynamics: an integrative theorem of power, authority, interfluence and love". In the 1990s, social dynamics began to be viewed as a separate scientific discipline. An important paper in this respect is "The Laws of Sociodynamics".
Then, starting in the 2000s, sociodynamics took off as a discipline of its own; many papers were released in the field during this decade.
Overview
The field of social dynamics brings together ideas from economics, sociology, social psychology, and other disciplines, and is a sub-field of complex adaptive systems or complexity science. The fundamental assumption of the field is that individuals are influenced by one another's behavior. The field is closely related to system dynamics. Like system dynamics, social dynamics is concerned with changes over time and emphasizes the role of feedbacks. However, in social dynamics individual choices and interactions are typically viewed as the source of aggregate level behavior, while system dynamics posits that the structure of feedbacks and accumulations are responsible for system level dynamics. Research in the field typically takes a behavioral approach, assuming that individuals are boundedly rational and act on local information. Mathematical and computational modeling are important tools for studying social dynamics. This field grew out of work done in the 1940s by game theorists such as Duncan & Luce, and even earlier works by mathematician Armand Borel. Because social dynamics focuses on individual level behavior, and recognizes the importance of heterogeneity across individuals, strict analytic results are often impossible. Instead, approximation techniques, such as mean-field approximations from statistical physics, or computer simulations are used to understand the behaviors of the system. In contrast to more traditional approaches in economics, scholars of social dynamics are often interested in non-equilibrium, or dynamic, behavior. That is, behavior that changes over time.
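As a concrete illustration of this modeling style, the following minimal Python sketch simulates a voter-style imitation process in a fully mixed population; the model, parameter names, and values are illustrative assumptions rather than a method prescribed by any work cited here.

import random

# Each agent holds a binary opinion; at every step one agent observes
# a randomly chosen other agent and copies that opinion with some
# probability. Aggregate outcomes (drift toward consensus or lasting
# disagreement) emerge from these purely local interactions.
N = 100          # number of agents (illustrative)
P_IMITATE = 0.8  # probability of copying the observed agent (illustrative)
STEPS = 2000

random.seed(42)
opinions = [random.choice([0, 1]) for _ in range(N)]

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)   # agent i observes agent j
    if random.random() < P_IMITATE:
        opinions[i] = opinions[j]       # boundedly rational, local update

print("Share holding opinion 1:", sum(opinions) / N)

A mean-field treatment of the same toy model would track only the expected share of agents holding opinion 1 rather than each individual, which is the kind of approximation from statistical physics mentioned above.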
Topics
Social networks
Diffusion of technologies and information
Cooperation
Social norms
See also
Complex adaptive system
Complexity science
Collective intelligence
Dynamical systems
Jay Wright Forrester
Group dynamics
Operations research
Population dynamics
System dynamics
Social psychology
Societal collapse
Sociobiology
Sociocultural evolution
Notes
References
Weidlich, W. (1997) "Sociodynamics applied to the evolution of urban and regional structures". Discrete Dynamics in Nature and Society, Vol. 1, pp. 85–98.
Further reading
External links
Introduction to Social Macrodynamics
Club of Rome report, quote: "We must also keep in mind the presence of social delays--the delays necessary to allow society to absorb or to prepare for a change. Most delays, physical or social reduce the stability of the world system and increase the likelihood of the overshoot mode"
Northwestern Institute on Complex Systems—Institute with research focusing on complexity and social dynamics.
Center for the Study of Complex Systems, University of Michigan—Center with research focusing on complexity and social dynamics.
social-dynamics.org—Blog on Social Dynamics from Kellogg School of Management Social Dynamics Scholar
https://archive.today/20020305021324/http://139.142.203.66/pub/www/Journal/vol3/iss2/art4/
http://arquivo.pt/wayback/20090628232019/http://www-rcf.usc.edu/~read/connectionism_preface2.html
"Historical Dynamics in a Time of Crisis: Late Byzantium, 1204–1453" (discussion of social dynamics from the point of view of historical studies)
Systems theory
Social systems
Dimensions of globalization

Manfred Steger, professor of Global Studies at the University of Hawaii at Manoa, argues that globalization has four main dimensions: economic, political, cultural, and ecological, with ideological aspects of each category. David Held's book Global Transformations is organized around the same dimensions, though the ecological is not listed in the title. This set of categories relates to the four-domain approach of circles of social life and Circles of Sustainability.
Steger compares the current study of globalization to the ancient Buddhist parable of blind scholars and their first encounter with an elephant. Similar to the blind scholars, some globalization scholars are too focused on compacting globalization into a singular process and clashes over “which aspect of social life constitutes its primary domain” prevail.
Dimensions
Economic
Economic globalization is the intensification and stretching of economic interrelations around the globe.
It encompasses such things as the emergence of a new global economic order, the internationalization of trade and finance, the changing power of transnational corporations, and the enhanced role of international economic institutions.
Political
Political globalization is the intensification and expansion of political interrelations around the globe. Aspects of political globalization include the modern-nation state system and its changing place in today's world, the role of global governance, and the direction of our global political systems.
Cultural
Cultural globalization is the intensification and expansion of cultural flows across the globe. Culture is a very broad concept and has many facets, but in the discussion on globalization, Steger means it to refer to “the symbolic construction, articulation, and dissemination of meaning.” Topics under this heading include discussion about the development of a global culture, or lack thereof, the role of the media in shaping our identities and desires, and the globalization of languages.
Ecological
Topics of ecological globalization include population growth, access to food, worldwide reduction in biodiversity, the gap between rich and poor as well as between the global North and global South, human-induced climate change, and global environmental degradation.
Ideologies
According to Steger, there are three main types of globalisms (ideologies that endow the concept of globalization with particular values and meanings): market globalism, justice globalism, and religious globalisms. Steger defines them as follows:
Market globalism seeks to endow ‘globalization’ with free-market norms and neoliberal meanings.
Justice globalism constructs an alternative vision of globalization based on egalitarian ideals of global solidarity and distributive justice.
Religious globalisms struggle against both market globalism and justice globalism as they seek to mobilize religious values and beliefs that are thought to be under severe attack by the forces of secularism and consumerism.
These ideologies of globalization (or globalisms) then relate to broader imaginaries and ontologies.
See also
Cultural globalization
Globalism
Globalization
References
Notes
Globalization
Scholarship of teaching and learning

The scholarship of teaching and learning (SOTL or SoTL) is often defined as systematic inquiry into student learning which advances the practice of teaching in higher education by making inquiry findings public. Building on this definition, Peter Felten identified 5 principles for good practice in SOTL: (1) inquiry focused on student learning, (2) grounded in context, (3) methodologically sound, (4) conducted in partnership with students, (5) appropriately public.
SOTL necessarily builds on many past traditions in higher education, including classroom and program assessment, action research, the reflective practice movement, peer review of teaching, traditional educational research, and faculty development efforts to enhance teaching and learning. As such, SOTL encompasses aspects of professional development or faculty development, such as how teachers can not only improve their expertise in their fields, but also develop their pedagogical expertise, i.e., how to better teach novice students in the field or enable their learning. It also encompasses the study and implementation of more modern teaching methods, such as active learning, cooperative learning, problem based learning, and others. SOTL scholars come from various backgrounds, such as those in educational psychology and other education related fields, as well as specialists in various disciplines who are interested in improving teaching and learning in their respective fields. Some scholars are educational researchers or consultants affiliated with teaching and learning centers at universities.
Inquiry methods in SOTL include reflection and analysis, interviews and focus groups, questionnaires and surveys, content analysis of text, secondary analysis of existing data, quasi-experiments (comparison of two sections of the same course), observational research, and case studies, among others. As with all scholarly study, evidence depends not only upon the methods chosen but the relevant disciplinary standards. Dissemination for impact among scholarly teachers may be local within the academic department, college or university, or may be in published, peer-reviewed form. A few journals exclusively publish SOTL outputs, and numerous disciplinary publications disseminate such inquiry outputs (e.g., J. Chem. Educ., J. Natural Resour. Life Sci. Educ., Research in the Teaching of English, College English, J. Economic Education), as well as a number of core SoTL journals and newsletters.
Related frameworks
Related to SoTL are Discipline-Based Educational Research (DBER) and Decoding the Disciplines. DBER differs from the more general SoTL concept in that it is closely linked to specific subject areas, such as physics or mathematics. This is often reflected in very subject-specific questions, and actors in this research area also often have a subject background rather than a pedagogical one.
Closely related to SoTL is also the Decoding the Disciplines approach, which aims more at making the tacit knowledge of experts explicit and helping students master mental actions.
Signature pedagogies
Signature pedagogies are ways of learning in specific disciplines. Examples of signature pedagogies include medical residents making rounds in hospitals or pre-service teachers doing a classroom-based practicum as part of their teacher training. The notion of signature pedagogies has expanded in recent years, as scholars have examined their use in e-learning, for example. Some scholars contend that SoTL itself is a signature pedagogy of higher education.
4M Framework
It has been suggested that the role of SoTL is evolving, but there remains a need to demonstrate the impact of efforts to promote SoTL within higher education. The 4M framework is used in SoTL to understand complex problems relating to teaching and learning. The framework grew out of systems theory and has been adapted for use in educational settings. The framework includes four levels through which complex problems can be studied: micro (individual), meso (departmental), macro (institutional), and mega. Changes at the meso-level and beyond can have the most impact over time. The framework has been proposed as a means to engage in strategic planning and institutional reporting of SoTL activities.
Professional societies
The International Society for Exploring Teaching and Learning (ISETL) has as its purpose "to encourage the study of instruction and principles of learning in order to implement practical, effective methods of teaching and learning; promote the application, development, and evaluation of such methods; and foster the scholarship of teaching and learning among practicing post-secondary educators." They hold a yearly conference in varying locations. Their 50th annual conference was to be held in Charlotte, NC in 2019.
The International Society for the Scholarship of Teaching & Learning (ISSOTL) was founded in 2004 by a committee of 67 scholars from several countries and serves faculty members, staff, and students who care about teaching and learning as serious intellectual work. ISSOTL has held annual conferences since 2004, attended by scholars from about a dozen nations. The conferences sites include Bloomington, Indiana USA (2004); Vancouver, British Columbia, Canada (2005); Washington, DC, USA (2006); Sydney, Australia (2007); Edmonton, Alberta, Canada (2008); Bloomington, Indiana, USA (2009); Liverpool, UK (2010); Milwaukee, Wisconsin, USA (2011); Hamilton, Ontario, Canada (2012); Raleigh, North Carolina, USA (2013); Quebec City, Quebec, Canada (2014).
There are also stand-alone conferences that have a long-standing commitment to SOTL. The Lilly Conferences are a series of conferences that occur multiple times a year and provide "opportunities for the presentation of the scholarship of teaching and learning." Additionally, The SoTL Commons Conference is an international conference that has been held since 2007 at the Georgia Southern University Center for Teaching Excellence (CTE).
Criticism and limitations of SoTL
Some writers have been critical of SoTL as lacking focus and definition, with a lack of clarity on the differences between SoTL and educational research undertaken in tertiary education. It is also argued that SoTL has become too broad in definition and is conflated with non-evidence-based teaching interventions and innovations. Macfarlane claims SoTL damages the reputation of educational research, reinforcing a long-standing notion that educational research is of lower status compared to discipline-based research.
List of journals focusing on SOTL topics
The Canadian Journal for the Scholarship of Teaching and Learning
College Teaching
International Journal for Academic Development
International Journal for the Scholarship of Teaching and Learning (IJ-SOTL)
International Journal for Students as Partners (IJSaP)
International Journal of Teaching and Learning in Higher Education (IJTLHE)
Journal of Effective Teaching in Higher Education (Formerly Journal of Effective Teaching)
Journal of the Scholarship of Teaching and Learning
Journal on Excellence in College Teaching
Teaching & Learning Inquiry (TLI)
See also
Education research
Education science
National Survey of Student Engagement
Pedagogy
References
Bibliography
Bass, R. 1999. "The scholarship of teaching: What is the problem?" Creative Thinking about Learning and Teaching 1(1). online
Boyer, E. L. (1990). Scholarship Reconsidered: Priorities of the Professoriate (PDF). Carnegie Foundation for the Advancement of Teaching. http://www.hadinur.com/paper/BoyerScholarshipReconsidered.pdf
Huber, M.T., and P. Hutchings. 2005. "Surveying the scholarship of teaching and learning", Chap. 1, The Advancement of Learning: Building the Teaching Commons,
Hutchings, P. 2000. "Approaching the scholarship of teaching and learning" (Introduction to Opening Lines: Approaches to the Scholarship of Teaching and Learning). online
Kreber, C. 2002. "Teaching excellence, teaching expertise, and the scholarship of teaching" Innovative Higher Educ. 27:5–23.
McKinney, K. 2004. "The scholarship of teaching and learning: Past lessons, current challenges, and future visions." To Improve the Academy 22:3–19.
Shulman, L.S. 1999. "Taking learning seriously" Change July/August 1999:11–17.
External links
National Forum for the Enhancement of Teaching and Learning in Higher Education (Ireland)
Lilly Conferences on College and University Teaching
The International Society for the Scholarship of Teaching & Learning (ISSOTL) Annual Conference
The Society for Teaching and Learning in Higher Education
Teaching
Education by method
Reading comprehension

Reading comprehension is the ability to process written text, understand its meaning, and integrate it with what the reader already knows. Reading comprehension relies on two abilities that are connected to each other: word reading and language comprehension. Comprehension specifically is a "creative, multifaceted process" that is dependent upon four language skills: phonology, syntax, semantics, and pragmatics.
Some of the fundamental skills required in efficient reading comprehension are the ability to:
know the meaning of words,
understand the meaning of a word from a discourse context,
follow the organization of a passage and to identify antecedents and references in it,
draw inferences from a passage about its contents,
identify the main thought of a passage,
ask questions about the text,
answer questions asked in a passage,
visualize the text,
recall prior knowledge connected to text,
recognize confusion or attention problems,
recognize the literary devices or propositional structures used in a passage and determine its tone,
understand the situational mood (agents, objects, temporal and spatial reference points, causal and intentional inflections, etc.) conveyed for assertions, questioning, commanding, refraining, etc., and
determine the writer's purpose, intent, and point of view, and draw inferences about the writer (discourse-semantics).
Comprehension skills that can be applied as well as taught to all reading situations include:
Summarizing
Sequencing
Inferencing
Comparing and contrasting
Drawing conclusions
Self-questioning
Problem-solving
Relating background knowledge
Distinguishing between fact and opinion
Finding the main idea, important facts, and supporting details.
There are many reading strategies to use in improving reading comprehension and inference-making; these include improving one's vocabulary, critical text analysis (intertextuality, actual events vs. narration of events, etc.), and practicing deep reading.
The ability to comprehend text is influenced by the readers' skills and their ability to process information. If word recognition is difficult, students tend to use too much of their processing capacity to read individual words which interferes with their ability to comprehend what is read.
Overview
Some people learn comprehension skills through education or instruction and others learn through direct experiences. Proficient reading depends on the ability to recognize words quickly and effortlessly. It is also determined by an individual's cognitive development, which is "the construction of thought processes".
There are specific characteristics that determine how successfully an individual will comprehend text, including prior knowledge about the subject, well-developed language, and the ability to make inferences through methodical questioning and comprehension monitoring. Questions like "Why is this important?" and "Do I need to read the entire text?" are examples of passage questioning.
Instruction for comprehension strategy often involves initially aiding the students by social and imitation learning, wherein teachers explain genre styles and model both top-down and bottom-up strategies, and familiarize students with a required complexity of text comprehension. After the contiguity interface, the second stage involves the gradual release of responsibility wherein over time teachers give students individual responsibility for using the learned strategies independently with remedial instruction as required and this helps in error management.
The final stage involves leading the students to a self-regulated learning state; with more and more practice and assessment, this leads to overlearning, and the learned skills become reflexive, or "second nature". The teacher, as reading instructor, is a role model of a reader for students, demonstrating what it means to be an effective reader and the rewards of being one.
Reading comprehension levels
Reading comprehension involves two levels of processing, shallow (low-level) processing and deep (high-level) processing.
Deep processing involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words. Shallow processing involves structural and phonemic recognition, the processing of sentence and word structure, i.e. first-order logic, and their associated sounds. This theory was first identified by Fergus I. M. Craik and Robert S. Lockhart.
Comprehension levels are observed through neuroimaging techniques like functional magnetic resonance imaging (fMRI). fMRI is used to determine the specific neural pathways of activation across two conditions: narrative-level comprehension and sentence-level comprehension. Images showed that there was less brain region activation during sentence-level comprehension, suggesting a shared reliance with comprehension pathways. The scans also showed enhanced temporal activation during narrative-level tests, indicating that this approach activates situation and spatial processing.
In general, neuroimaging studies have found that reading involves three overlapping neural systems: networks active in visual, orthography-phonology (angular gyrus), and semantic functions (anterior temporal lobe with Broca's and Wernicke's areas). However, these neural networks are not discrete, meaning these areas have several other functions as well. The Broca's area involved in executive functions helps the reader to vary depth of reading comprehension and textual engagement in accordance with reading goals.
The role of vocabulary
Reading comprehension and vocabulary are inextricably linked together. The ability to decode or identify and pronounce words is self-evidently important, but knowing what the words mean has a major and direct effect on knowing what any specific passage means while skimming reading material. It has been shown that students with a smaller vocabulary than other students comprehend less of what they read. It has also been suggested that, to improve comprehension, it is good practice to work on word groups and complex vocabulary, such as homonyms (words that have multiple meanings) and words with figurative meanings, like idioms, similes, collocations, and metaphors.
Andrew Biemiller argues that teachers should give out topic-related words and phrases before reading a book to students; such teaching includes topic-related word groups, synonyms of words, and their meaning in context. He further says teachers should familiarize students with sentence structures in which these words commonly occur. According to Biemiller, this intensive approach gives students opportunities to explore the topic beyond its discourse: freedom of conceptual expansion. However, there is no evidence to suggest the primacy of this approach. Incidental morphemic analysis of words (prefixes, suffixes, and roots) has also been considered to improve understanding of vocabulary, though it has proved an unreliable strategy for improving comprehension and is no longer used to teach students.
Vocabulary is important as it is what connects a reader to the text, while helping develop background knowledge, their own ideas, communicating, and learning new concepts. Vocabulary has been described as "the glue that holds stories, ideas, and content together...making comprehension accessible". This greatly reflects the important role that vocabulary plays. Especially when studying various pieces of literature, it is important to have this background vocabulary, otherwise readers will become lost rather quickly. Because of this, teachers focus a great deal of attention to vocabulary programs and implementing them into their weekly lesson plans.
History
Initially most comprehension teaching was based on imparting selected techniques for each genre that, when taken together, would allow students to be strategic readers. However, from the 1930s onward, testing of the various methods never seemed to win support in empirical research. One such strategy for improving reading comprehension is the technique called SQ3R, introduced by Francis Pleasant Robinson in his 1946 book Effective Study.
Between 1969 and 2000, a number of "strategies" were devised for teaching students to employ self-guided methods for improving reading comprehension. In 1969 Anthony V. Manzo designed, and found empirical support for, the ReQuest, or Reciprocal Questioning Procedure, a traditional teacher-centered approach distinguished by its sharing of "cognitive secrets". It was the first method to convert a fundamental theory such as social learning into teaching methods through the use of cognitive modeling between teachers and students.
Since the turn of the 20th century, comprehension lessons usually consist of students answering teacher's questions or writing responses to questions of their own, or from prompts of the teacher. This detached whole group version only helped students individually to respond to portions of the text (content area reading), and improve their writing skills. In the last quarter of the 20th century, evidence accumulated that academic reading test methods were more successful in assessing rather than imparting comprehension or giving a realistic insight. Instead of using the prior response registering method, research studies have concluded that an effective way to teach comprehension is to teach novice readers a bank of "practical reading strategies" or tools to interpret and analyze various categories and styles of text.
Common Core State Standards (CCSS) have been implemented in hopes that students test scores would improve. Some of the goals of CCSS are directly related to students and their reading comprehension skills, with them being concerned with students learning and noticing key ideas and details, considering the structure of the text, looking at how the ideas are integrated, and reading texts with varying difficulties and complexity.
Reading strategies
There are a variety of strategies used to teach reading. Strategies are key to helping with reading comprehension. They vary according to the challenges involved, such as new concepts, unfamiliar vocabulary, and long and complex sentences. Trying to deal with all of these challenges at the same time may be unrealistic. Strategies should also fit the ability, aptitude, and age level of the learner. Some of the strategies teachers use are reading aloud, group work, and additional reading exercises.
Reciprocal teaching
In the 1980s, Annemarie Sullivan Palincsar and Ann L. Brown developed a technique called reciprocal teaching that taught students to predict, summarize, clarify, and ask questions for sections of a text. The use of strategies like summarizing after each paragraph has come to be seen as effective for building students' comprehension. The idea is that students will develop stronger reading comprehension skills on their own if the teacher gives them explicit mental tools for unpacking text.
Instructional conversations
"Instructional conversations", or comprehension through discussion, create higher-level thinking opportunities for students by promoting critical and aesthetic thinking about the text. According to Vivian Thayer, class discussions help students to generate ideas and new questions. (Goldenberg, p. 317).
Dr. Neil Postman has said, "All our knowledge results from questions, which is another way of saying that question-asking is our most important intellectual tool" (Response to Intervention). There are several types of questions that a teacher should focus on: remembering; testing understanding; application or solving; inviting synthesis or creating; and evaluation and judging. Teachers should model these types of questions through "think-alouds" before, during, and after reading a text. When a student can relate a passage to an experience, another book, or other facts about the world, they are "making a connection". Making connections helps students understand the author's purpose and the fiction or non-fiction story.
Text factors
There are factors that, once discerned, make it easier for the reader to understand the written text. One such factor is the genre, like folktales, historical fiction, biographies, or poetry. Each genre has its own characteristics for text structure that, once understood, help the reader comprehend it. A story is composed of a plot, characters, setting, point of view, and theme. Informational books provide real-world knowledge for students and have unique features such as headings, maps, vocabulary, and an index. Poems are written in different forms; the most commonly used are rhymed verse, haikus, free verse, and narratives. Poetry uses devices such as alliteration, repetition, rhyme, metaphors, and similes. "When children are familiar with genres, organizational patterns, and text features in books they're reading, they're better able to create those text factors in their own writing." Another factor is arranging the text according to perceptual span, with a text display suited to the age level of the reader.
Non-verbal imagery
Non-verbal imagery refers to media that utilize schemata to make planned or unplanned connections, commonly used within a context such as a passage, an experience, or one's imagination. Some notable examples are emoticons, cropped and uncropped images, and emojis, which are images used to elicit humor and aid comprehension.
Visualization
Visualization is a "mental image" created in a person's mind while reading text. This "brings words to life" and helps improve reading comprehension. Asking sensory questions will help students become better visualizers.
Students can practice visualizing before seeing the picture of what they are reading by imagining what they "see, hear, smell, taste, or feel" when they are reading a page of a picture book aloud. They can share their visualizations, then check their level of detail against the illustrations.
Partner reading
Partner reading is a strategy created for reading pairs. The teacher chooses two appropriate books for the students to read. First, the pupils and their partners must read their own book. Once they have completed this, they are given the opportunity to write down their own comprehension questions for their partner. The students swap books, read them out loud to one another and ask one another questions about the book they have read.
There are different levels of this strategy:
1) Lower-level readers, who need extra help recording the strategies.
2) Average readers, who still need some help.
3) Proficient readers, who require no help.
Students at a very advanced level are a few years ahead of the other students.
This strategy:
Provides a model of fluent reading and helps students learn decoding skills by offering positive feedback.
Provides direct opportunities for a teacher to circulate in the class, observe students, and offer individual remediation.
Multiple reading strategies
There are a wide range of reading strategies suggested by reading programs and educators. Effective reading strategies may differ for second language learners, as opposed to native speakers. The National Reading Panel identified positive effects only for a subset, particularly summarizing, asking questions, answering questions, comprehension monitoring, graphic organizers, and cooperative learning. The Panel also emphasized that a combination of strategies, as used in Reciprocal Teaching, can be effective. The use of effective comprehension strategies that provide specific instructions for developing and retaining comprehension skills, with intermittent feedback, has been found to improve reading comprehension across all ages, specifically those affected by mental disabilities.
Reading different types of texts requires the use of different reading strategies and approaches. Making reading an active, observable process can be very beneficial to struggling readers. A good reader interacts with the text in order to develop an understanding of the information before them. Some good reader strategies are predicting, connecting, inferring, summarizing, analyzing and critiquing. There are many resources and activities educators and instructors of reading can use to help with reading strategies in specific content areas and disciplines. Some examples are graphic organizers, talking to the text, anticipation guides, double entry journals, interactive reading and note taking guides, chunking, and summarizing.
Applying methods that build overt phonemic awareness, with intermittent practice, has also been found to improve reading at early ages, particularly for those affected by mental disabilities.
The importance of interest
A common finding among researchers is the importance of readers, and specifically students, being interested in what they are reading. Students report that they are more likely to finish books they have chosen themselves. They are also more likely to remember what they read when they are interested, as interest causes them to pay attention to the minute details.
Reading strategies
There are various reading strategies that help readers recognize what they are learning, understand themselves better as readers, and identify what information they have comprehended. These strategies also activate the habits that good readers use when reading and understanding a text.
Think-Alouds
When reading a passage, it is good to vocalize what one is reading, as well as the mental processes occurring while reading. This can take many different forms: asking oneself questions about the reading or the text, making connections with prior knowledge or previously read texts, noticing when one struggles, and rereading what needs to be reread. These tasks help readers think about their reading and whether they have understood it fully, which helps them notice what changes or tactics might need to be considered.
Know, Want to know, Learned
Know, Want to know, and Learned (KWL) is often used by teachers and their students, but it is a useful tactic for all readers when taking stock of their own knowledge. The reader first reviews the knowledge they already have, then thinks about what they want to know or the knowledge they want to gain, and finally, after reading, thinks about what they have learned. This allows readers to reflect on their prior knowledge and to recognize what knowledge they have gained and comprehended from their reading.
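As a rough illustration of the three-column structure, a KWL chart can be modeled as a simple record with one list per column. The class, field names, and sample entries below are hypothetical, chosen only for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class KWLChart:
    """A minimal sketch of a Know / Want-to-know / Learned chart."""
    topic: str
    know: list = field(default_factory=list)     # prior knowledge
    want: list = field(default_factory=list)     # questions before reading
    learned: list = field(default_factory=list)  # insights after reading

chart = KWLChart(topic="volcanoes")
chart.know.append("Volcanoes erupt lava.")
chart.want.append("Why do some volcanoes go dormant?")
# After reading, the reader records what was learned:
chart.learned.append("Dormancy is tied to the magma supply.")
print(chart)
```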
Comprehension strategies
Research studies on reading and comprehension have shown that highly proficient, effective readers utilize a number of different strategies to comprehend various types of texts, strategies that can also be used by less proficient readers in order to improve their comprehension. These include:
Making Inferences: In everyday terms, we refer to this as "reading between the lines". It involves connecting various parts of texts that are not directly linked in order to form a sensible conclusion. As a form of informed assumption, the reader speculates about what connections lie within the texts and also makes predictions about what might occur next.
Planning and Monitoring: This strategy centers on the reader's mental awareness and their ability to control their comprehension by way of awareness. By previewing text (via outlines, table of contents, etc.), one can establish a goal for reading: "What do I need to get out of this?" Readers use context clues and other evaluation strategies to clarify texts and ideas, and thus monitor their level of understanding.
Asking Questions: To solidify one's understanding of passages of texts, readers inquire and develop their own opinion of the author's writing, character motivations, relationships, etc. This strategy involves allowing oneself to be completely objective in order to find various meanings within the text.
Self-Monitoring: Asking oneself questions about one's reading strategies, such as whether one is getting confused or having trouble paying attention.
Determining Importance: Pinpointing the important ideas and messages within the text. Readers are taught to identify direct and indirect ideas and to summarize the relevance of each.
Visualizing: With this sensory-driven strategy, readers form mental and visual images of the contents of text. Being able to connect visually allows for a better understanding of the text through emotional responses.
Synthesizing: This method involves marrying multiple ideas from various texts in order to draw conclusions and make comparisons across different texts; with the reader's goal being to understand how they all fit together.
Making Connections: A cognitive approach also referred to as "reading beyond the lines", which involves:
(A) finding a personal connection to reading, such as personal experience, previously read texts, etc., to help establish a deeper understanding of the context of the text, or (B) thinking about implications that have no immediate connection with the theme of the text.
Assessment
There are informal and formal assessments to monitor an individual's comprehension ability and use of comprehension strategies. Informal assessments are generally conducted through observation and the use of tools like story boards, word sorts, and interactive writing. Many teachers use formative assessments to determine whether a student has mastered the content of the lesson. Formative assessments can be verbal, as in a "Think-Pair-Share" or "Partner Share", or take forms such as a "ticket out the door" or a digital summarizer. Formal assessments are district or state assessments that evaluate all students on important skills and concepts. Summative assessments are typically given at the end of a unit to measure a student's learning.
Running records
A popular assessment undertaken in numerous primary schools around the world is the running record. Running records are a helpful tool for reading comprehension: they assist teachers in analyzing specific patterns in student behaviors and in planning appropriate instruction. By conducting running records, teachers gain an overview of students' reading abilities and learning over a period of time.
To conduct a running record properly, the teacher sits beside the student and keeps the environment as relaxed as possible so the student does not feel pressured or intimidated. It is best if the running record assessment is conducted during reading, to avoid distractions. Alternatively, an education assistant can conduct the running record in a separate room while the teacher teaches or supervises the class. The assessor quietly observes the student's reading and records it using a specific code that most teachers understand. Once the student has finished reading, the assessor asks them to retell the story as best they can and then asks comprehension questions to test their understanding of the book. At the end of the assessment, the assessor adds up the running record score (a scoring sketch follows the overview below) and files the assessment sheet away. After the assessment, the teacher plans strategies that will improve the student's ability to read and understand the text.
Overview of the steps taken when conducting a Running Record assessment:
Select the text
Introduce the text
Take a running record
Ask for retelling of the story
Ask comprehension questions
Check fluency
Analyze the record
Plan strategies to improve students' reading/understanding ability
File results away.
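The running record "score" mentioned above is conventionally derived from the number of words read, the errors, and the self-corrections. The sketch below uses the accuracy-rate and self-correction-ratio formulas commonly associated with running records; the exact conventions and the helper name are assumptions here, since the steps above do not specify them.

```python
def running_record_scores(total_words, errors, self_corrections):
    """Compute two common running record metrics (assumed conventions):
    accuracy rate (%)     = (total_words - errors) / total_words * 100
    self-correction ratio = 1 : (errors + self_corrections) / self_corrections
    """
    accuracy = (total_words - errors) / total_words * 100
    sc_ratio = ((errors + self_corrections) / self_corrections
                if self_corrections else None)
    return accuracy, sc_ratio

accuracy, sc = running_record_scores(total_words=100, errors=5, self_corrections=2)
print(f"Accuracy: {accuracy:.0f}%")           # 95%
print(f"Self-correction ratio: 1:{sc:.1f}")   # 1:3.5
```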
Difficult or complex content
Reading difficult texts
Some texts, such as those in philosophy, literature, or scientific research, may appear more difficult to read because of the prior knowledge they assume, the tradition from which they come, or their tone, such as criticism or parody. The philosopher Jacques Derrida explained his opinion about complicated texts: "In order to unfold what is implicit in so many discourses, one would have each time to make a pedagogical outlay that is just not reasonable to expect from every book. Here the responsibility has to be shared out, mediated; the reading has to do its work and the work has to make its reader." Other philosophers, however, believe that if you have something to say, you should be able to make the message readable to a wide audience.
Hyperlinks
Embedded hyperlinks in documents or Internet pages have been found to make different demands on the reader than traditional text. Authors such as Nicholas Carr and psychologists such as Maryanne Wolf contend that the internet may have a negative impact on attention and reading comprehension. Some studies report increased demands of reading hyperlinked text in terms of cognitive load, the amount of information actively maintained in one's mind (see also working memory). One study showed that going from about 5 hyperlinks per page to about 11 per page reduced college students' understanding (assessed by multiple choice tests) of articles about alternative energy. This can be attributed to the decision-making process (deciding whether to click on the link) required by each hyperlink, which may reduce comprehension of the surrounding text.
On the other hand, other studies have shown that if a short summary of the link's content is provided when the mouse pointer hovers over it, then comprehension of the text is improved. "Navigation hints" about which links are most relevant improved comprehension. Finally, the background knowledge of the reader can partially determine the effect hyperlinks have on comprehension. In a study of reading comprehension with subjects who were familiar or unfamiliar with art history, texts which were hyperlinked to one another hierarchically were easier for novices to understand than texts which were hyperlinked semantically. In contrast, those already familiar with the topic understood the content equally well with both types of organization.
In interpreting these results, it may be useful to note that the studies mentioned were all performed in closed content environments, not on the open internet. That is, the texts used linked only to a predetermined set of other texts that was offline. Furthermore, the participants were explicitly instructed to read on a certain topic in a limited amount of time. Reading text on the internet may not have these constraints.
Professional development
The National Reading Panel noted that comprehension strategy instruction is difficult for many teachers as well as for students, particularly because they were not taught this way and because it is a demanding task. They suggested that professional development can increase teachers' and students' willingness to use reading strategies, but admitted that much remains to be done in this area.
The directed listening and thinking activity is another technique available to teachers to aid students in reading comprehension, although it can be difficult for students who are new to it. There is often some debate about the relationship between reading fluency and reading comprehension; there is evidence of a direct correlation, with fluency and comprehension together leading to better understanding of written material across all ages. The National Assessment of Educational Progress assessed U.S. students' reading performance at grade 12 in both the public and private school populations and found that only 37 percent of students had proficient skills. While 72 percent of students performed at or above the basic level, 28 percent were below it.
See also
Balanced literacy
Baseball Study
Directed listening and thinking activity
English as a second or foreign language
Fluency
Levels-of-processing
Phonics
Readability
Reading
Reading for special needs
Simple view of reading
SQ3R
Synthetic phonics
Whole language
Notes
References
Sources
Further reading
External links
Info, Tips, and Strategies for PTE Read Aloud, Express English Language Training Center
English Reading Comprehension Skills, Andrews University
SQ3R Reading Strategy And How to Apply It, ProductiveFish
Vocabulary Instruction and Reading comprehension – From the ERIC Clearinghouse on Reading English and Communication.
ReadWorks.org | The Solution to Reading Comprehension
Education in the United States
Learning to read
Comprehension
MECE principle
The MECE principle (mutually exclusive and collectively exhaustive) is a grouping principle for separating a set of items into subsets that are mutually exclusive (ME) and collectively exhaustive (CE). It was developed in the late 1960s by Barbara Minto at McKinsey & Company and underlies her Minto Pyramid Principle. While Minto takes credit for the principle, she has said in an interview with McKinsey that the idea of MECE goes back as far as Aristotle.
The MECE principle has been used in the business mapping process wherein the optimum arrangement of information is exhaustive and does not double count at any level of the hierarchy. Examples of MECE arrangements include categorizing people by year of birth (assuming all years are known), apartments by their building number, letters by postmark, and dice rolls. A non-MECE example would be categorization by nationality, because nationalities are neither mutually exclusive (some people have dual nationality) nor collectively exhaustive (some people have none).
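Because the definition is essentially set-theoretic, a proposed grouping can be checked mechanically. The following Python sketch is illustrative only: the helper `is_mece` and the sample data are hypothetical, not from any standard library, and simply test the two conditions against the birth-year and nationality examples above.

```python
from itertools import combinations

def is_mece(universe, subsets):
    """Return True if `subsets` are mutually exclusive (no element
    appears in two of them) and collectively exhaustive (their union
    covers every element of `universe`)."""
    exclusive = all(a.isdisjoint(b) for a, b in combinations(subsets, 2))
    exhaustive = set().union(*subsets) == set(universe)
    return exclusive and exhaustive

# Grouping people by birth year is MECE: each person falls in exactly one bucket.
people = {"Ann": 1990, "Bo": 1990, "Cy": 2001}
by_year = [{name for name, year in people.items() if year == y}
           for y in set(people.values())]
print(is_mece(people, by_year))  # True

# Grouping by nationality is typically not MECE: dual nationals break
# exclusivity, and stateless people break exhaustiveness.
```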
Common uses
Strategy consultants use MECE problem structuring to break down client problems into logical, clean buckets of analysis that they can then hand out as work streams to consulting staff on the project.
Similarly, MECE can be used in technical problem solving and communication. In some technical projects, like Six Sigma projects, the most effective method of communication is not the same as the problem solving process. In Six Sigma, the DMAIC process is used, but executive audiences looking for a summary or overview may not be interested in the details. By reorganizing the information using MECE and the related storytelling framework, the point of the topic can be addressed quickly and supported with appropriate detail. The aim is more effective communication.
Criticisms
The MECE concept has been criticized for not being exhaustive in practice, as it does not exclude superfluous or extraneous items.
Also, MECE thinking can be too limiting as mutual exclusiveness is not necessarily desirable. For instance, while it may be desirable to classify the answers to a question in a MECE framework so as to consider all of them exactly once, forcing the answers themselves to be MECE can be unnecessarily limiting.
Another attribute of MECE thinking is that, by definition, it precludes redundancies. However, there are cases where redundancies are desirable or even necessary.
Acronym pronunciation
There is some debate regarding the pronunciation of the acronym MECE. Although many pronounce it "mee-see", the author insisted that it should be pronounced "meese".
See also
Proof by cases or case analysis
Partition of a set for a mathematical treatment
Work breakdown structure for application in project management
Algebraic data type in programming, which makes it possible to define analogous structures
Carroll diagram in logic, which divides a set into partitions of attributes
References
Types of groupings
Chronemics
Chronemics is an anthropological, philosophical, and linguistic subdiscipline that describes how time is perceived, coded, and communicated across a given culture. It is one of several subcategories to emerge from the study of nonverbal communication. According to the Encyclopedia of Special Education, "Chronemics includes time orientation, understanding and organisation, the use of and reaction to time pressures, the innate and learned awareness of time, by physically wearing or not wearing a watch, arriving, starting, and ending late or on time." A person's perception of time, and the values they place on it, plays a considerable role in their communication process. The use of time can affect lifestyles, personal relationships, and work life. Across cultures, people usually have different time perceptions, and this can result in conflicts between individuals. Time perceptions include punctuality, interactions, and willingness to wait.
Definition
Chronemics is the study of the use of time in nonverbal communication, though it carries implications for verbal communication as well. Time perceptions include punctuality, willingness to wait, and interactions. The use of time can affect lifestyles, daily agendas, speed of speech, movements, and how long people are willing to listen.
Fernando Poyatos, Professor Emeritus at the University of New Brunswick, coined the term "chronemics" in 1972. Thomas J. Bruneau (1940-2012), Professor Emeritus at Radford University who taught at the University of Guam in his early career and whose scholarship focused on silence, empathy, and intercultural communication, identified the parameters of this field of study in the late 1970s. Bruneau defined chronemics and specified the functions of time in human interactions as follows:
Time can be used as an indicator of status. For example, in most companies the boss can interrupt progress to hold an impromptu meeting in the middle of the work day, yet the average worker would have to make an appointment to see the boss.
The way in which different cultures perceive time can influence communication as well.
Monochronic time
A monochronic time system means that things are done one at a time and time is segmented into small precise units. Under this system, time is scheduled, arranged, and managed.
The United States considers itself a monochronic society. This perception came about during the Industrial Revolution. Many Americans think of time as a precious resource not to be wasted or taken lightly. As communication scholar Edward T. Hall wrote regarding the American viewpoint of time in the business world, "the schedule is sacred." Hall says that for monochronic cultures, such as the American culture, "time is tangible" and viewed as a commodity where "time is money" or "time is wasted." John Ivers, a professor of cultural paradigms, agrees with Edward Hall in stating, "In the market sense, monochronic people consume time." The result of this perspective is that monochronic cultures place a paramount value on schedules, tasks, and "getting the job done."
Monochronic time orientation is very prominent in North European cultures, German-speaking countries, and the Scandinavian countries. For example, a businessperson from the USA with a scheduled meeting may grow frustrated while waiting an hour for their partner to arrive: a monochronic-time-oriented individual running up against a polychronic-time-oriented one. Interestingly, even though America is seen as one of the most monochronic countries, it "has subcultures that may lean more to one side or the other of the monochronic-polychronic divide" within the states themselves. One can see this by comparing the southern states to the northern ones. John Ivers points this out by comparing waiters in northern and southern restaurants. Waiters from the north are "to the point": they will "engage in little" conversation and there is usually "no small talk." They try to be as efficient as possible, while those in the south work towards "establishing a nice, friendly, micro-relationship" with the customer. Southern waiters are still considerate of time, but it is not the most important goal in the south.
The culture of African Americans might also be seen as polychronic. (See CP Time.)
Polychronic time
A polychronic time system means several things can be done at once. In polychronic time systems, a wider view of time is exhibited, and time is perceived in large fluid sections.
Examples of polychronic cultures are Latin American, African, Arab, South Asian, Mediterranean, and Native American cultures. These cultures' view of time can be connected to "natural rhythms, the earth, and the seasons"; the analogy holds because natural events can occur spontaneously and sporadically, just as polychronic-time-oriented people and cultures operate. One illustrative scenario is an Inuit worker in a factory in Alaska where superiors blow a whistle to signal break times; the Inuit workers dislike that method because they traditionally determine their times by natural events, such as the sea tides, when they occur and how long they last. In polychronic cultures, "time spent with others" is considered a "task" and of importance to one's daily regimen.
Polychronic cultures are much less focused on the preciseness of accounting for time and more on tradition and relationships rather than on tasks. Polychronic societies have no problem being late for an appointment if they are deeply focused on some work or in a meeting that ran past schedule, because the concept of time is fluid and can easily expand or contract as need be. As a result, polychronic cultures have a much less formal perception of time. They are not ruled by precise calendars and schedules.
Measuring polychronicity
Bluedorn, Kaufman, and Lane concluded that "developing an understanding of the monochronic/polychronic continuum will not only result in a better self-management but will also allow more rewarding job performances and relationships with people from different cultures and traditions." Researchers have found that a person's polychronicity plays an important role in productivity and individual well-being. Researchers have developed the following questionnaires to measure polychronicity:
Inventory of Polychronic Values (IPV), developed by Bluedorn et al., which is a 10-item scale designed to assess "the extent to which people in a culture prefer to be engaged in two or more tasks or events simultaneously and believe their preference is the best way to do things."
Polychronic Attitude Index (PAI), developed by Kaufman-Scarborough & Lindquist in 1991, which is a 4-item scale measuring individual preference for polychronicity, in the following statements (a scoring sketch follows the list):
"I do not like to juggle several activities at the same time".
"People should not try to do many things at once".
"When I sit down at my desk, I work on one project at a time".
"I am comfortable doing several things at the same time".
Predictable patterns between cultures with differing time systems
Cross-cultural perspectives on time
Conflicting attitudes between monochronic and polychronic perceptions of time can interfere with cross-cultural relations, and as a result, challenges can occur within an otherwise assimilated culture. One example in the United States is the Hawaiian culture, which employs two time systems: Haole time and Hawaiian time.
According to Ashley Fulmer and Brandon Crosby, "as intercultural interactions increasingly become the norm rather than the exception, the ability of individuals, groups, and organizations to manage time effectively in cross-cultural settings is critical to the success of these interactions".
Time orientations
The way an individual perceives time and the role time plays in their lives is a learned perspective. As discussed by Alexander Gonzalez and Phillip Zimbardo, "every child learns a time perspective that is appropriate to the values and needs of his society" (Guerrero, DeVito & Hecht, 1999, p. 227).
There are four basic psychological time orientations:
Past
Time-line
Present
Future
Each orientation affects the structure, content, and urgency of communication (Burgoon, 1989). People with a past orientation have a hard time developing the notion of elapsed time, and often confuse present and past happenings as one and the same. People with time-line cognitivity are often detail-oriented and think of everything in linear terms; they also often have difficulty comprehending multiple events at the same time. Individuals with a present orientation are mostly characterized as pleasure seekers who live for the moment and have very low risk aversion. Individuals with a future orientation are often thought of as highly goal-oriented and focused on the broad picture.
The use of time as a communicative channel can be a powerful, yet subtle, force in face-to-face interactions. Some of the more recognizable types of interaction that use time are:
Regulating interaction: This is shown to aid in the orderly transition of conversational turn-taking. When the speaker is opening the floor for a response, they will pause; however, when no response is desired, the speaker will talk at a faster pace with minimal pause (Capella, 1985).
Expressing intimacy: As relationships become more intimate, certain changes are made to accommodate the new relationship status. Some of the changes that are made include lengthening the time spent on mutual gazes, increasing the amount of time spent doing tasks for or with the other person, and planning for the future by making plans to spend more time together (Patterson, 1990).
Affect management: The onset of powerful emotions can cause a stronger affect, ranging from joy to sorrow or even to embarrassment. Some of the behaviors associated with negative affect include decreased time of gaze and awkwardly long pauses during conversations. When this happens, it is common for individuals to try to decrease any negative affect and subsequently strengthen positive affect (Edelman & Iwawaki, 1987).
Evoking emotion: Time can be used to evoke emotions in an interpersonal relationship by communicating the value of the relationship. For example, when someone with whom you have a close relationship is late, you may not take it personally, especially if that is characteristic of them. However, when meeting with a total stranger, disrespect for the value of your time may be taken personally, and could even cause you to display negative emotions if and when they do arrive for the meeting.
Facilitating service and task goals: Professional settings can sometimes give rise to interpersonal relations which are quite different from other "normal" interactions. For example, the societal norms that dictate minimal touch between strangers are clearly altered if one member of the dyad is a doctor, and the environment is that of a hospital examination room.
Time orientation and consumers
Time orientation has also revealed insights into how people react to advertising. Martin, Gnoth and Strong (2009) found that future-oriented consumers react most favorably to ads that feature a product to be released in the distant future and that highlight primary product attributes. In contrast, present-oriented consumers prefer near-future ads that highlight secondary product attributes. Consumer attitudes were mediated by the perceived usefulness of the attribute information.
Culture and diplomacy
Cultural roots
Just as monochronic and polychronic cultures have different time perspectives, understanding the time orientation of a culture is critical to handling diplomatic situations successfully. Americans consider themselves future-oriented. Hall indicates that for Americans "tomorrow is more important" and that they "are oriented almost entirely toward the future" (Cohen, 2004, p. 35). This future-focused orientation accounts for at least some of the concern Americans have with "addressing immediate issues and moving on to new challenges" (Cohen, 2004, p. 35).
On the other hand, many polychronic cultures have a past-orientation toward time.
These time perspectives are the seeds for communication clashes in diplomatic situations. Trade negotiators have observed that American negotiators are generally more anxious for agreement because "they are always in a hurry" and basically "problem solving oriented." In other words, they place a high value on resolving an issue quickly, calling to mind the American catchphrase "some solution is better than no solution" (Cohen, 2004, p. 114). Similar observations have been made of Japanese-American relations. Noting the difference in time perceptions between the two countries, former ambassador to Tokyo Mike Mansfield commented, "We're too fast, they're too slow" (Cohen, 2004, p. 118).
Influence on global affairs
Different perceptions of time across cultures can influence global communication. When writing about time perspective, Gonzalez and Zimbardo comment that "There is no more powerful, pervasive influence on how individuals think and cultures interact than our different perspectives on time—the way we learn how we mentally partition time into past, present and future."
Depending upon where an individual is from, their perception of time might be that "the clock rules the day" or that "we'll get there when we get there."
Improving prospects for success in the global community requires understanding cultural differences, traditions and communication styles.
The monochronic-oriented approach to negotiations is direct, linear and rooted in the characteristics that illustrate low context tendencies. The low context culture approaches diplomacy in a lawyerly, dispassionate fashion with a clear idea of acceptable outcomes and a plan for reaching them. Draft arguments would be prepared elaborating positions. A monochronic culture, more concerned with time, deadlines and schedules, tends to grow impatient and want to rush to "close the deal."
More polychronic-oriented cultures come to diplomatic situations with no particular importance placed on time. Chronemics is one of the channels of nonverbal communication that a high-context, polychronic negotiator prefers over verbal communication. The polychronic approach to negotiations emphasizes building trust between participants, forming coalitions, and finding consensus. High-context, polychronic negotiators may become emotionally charged about a subject, thereby obscuring an otherwise obvious solution.
Control of time in power relationships
Time has a definite relationship to power. Though power most often refers to the ability to influence people, power is also related to dominance and status.
For example, in the workplace, those in a leadership or management position treat time and – by virtue of position – have their time treated differently from those in a lower-status position. Anderson and Bowman have identified three specific examples of how chronemics and power converge in the workplace: waiting time, talk time, and work time.
Waiting time
Researchers Insel and Lindgren write that the act of making an individual of a lower stature wait is a sign of dominance. They note that one who "is in the position to cause another to wait has power over him. To be kept waiting is to imply that one's time is less valuable than that of the one who imposes the wait."
Talk time
There is a direct correlation between the power of an individual in an organization and conversation. This includes the length of a conversation, turn-taking, and who initiates and ends a conversation. Extensive research indicates that those with more power in an organization will speak more often and for greater lengths of time. Meetings between superiors and subordinates provide an opportunity to illustrate this concept: a superior – regardless of whether or not they are running the actual meeting – leads discussions, asks questions, and has the ability to speak for longer periods of time without interruption. Likewise, research shows that turn-taking is also influenced by power. Social psychologist Nancy Henley notes that "Subordinates are expected to yield to superiors and there is a cultural expectation that a subordinate will not interrupt a superior". The length of a response follows the same pattern: while the superior can speak for as long as they want, the responses of the subordinate are shorter in length. Albert Mehrabian noted that deviation from this pattern led to negative perceptions of the subordinate by the superior. Beginning and ending a communication interaction in the workplace is also controlled by the higher-status individual, as are the time and duration of the conversation.
Work time
The time of high-status individuals is perceived as valuable, and they control their own time. A subordinate with less power, on the other hand, has their time controlled by a higher-status individual and is in less control of their own time, making them likely to report their time to a higher authority. Such practices are more associated with those in non-supervisory roles or in blue-collar rather than white-collar professions. Instead, as power and status in an organization increase, the flexibility of the work schedule also increases. For instance, while administrative professionals might keep a 9-to-5 work schedule, their superiors may keep less structured hours. This does not mean that the superior works less; they may work longer, but the structure of their work environment is not strictly dictated by the traditional workday. Instead, as Koehler and associates note, "individuals who spend more time, especially spare time, to meetings, to committees, and to developing contacts, are more likely to be influential decision makers".
A specific example of the way power is expressed through work time is scheduling. As Yakura and others have noted in research shared by Ballard and Seibold, "scheduling reflects the extent to which the sequencing and duration of planned activities and events are formalized" (Ballard and Seibold, p. 6). Higher-status individuals have very precise and formal schedules, indicating that their stature requires specific blocks of time for specific meetings, projects, and appointments. Lower-status individuals, however, may have less formalized schedules. Finally, the schedule and appointment calendar of the higher-status individual will take precedence in determining where and when a specific event or appointment occurs, and its importance.
Associated theories
Expectancy violations theory
Developed by Judee Burgoon, expectancy violations theory (EVT) sees communication as the exchange of information that is high in relational content and can be used to violate the expectations of another, which will be perceived either positively or negatively depending on the liking between the two people.
When our expectations are violated, we will respond in specific ways. If an act is unexpected and is assigned favorable interpretation, and it is evaluated positively, it will produce more favorable outcomes than an expected act with the same interpretation and evaluation.
Interpersonal adaptation theory
The interpersonal adaptation theory (IAT), founded by Judee Burgoon, states that adaptation in interaction is responsive to the needs, expectations, and desires of communicators and affects how communicators position themselves in relation to one another and adapt to one another's communication. For example, they may match each other's behavior, synchronize the timing of behavior, or behave in dissimilar ways. It is also important to note that individuals bring to interactions certain requirements that reflect basic human needs, expectations about behavior based on social norms, and desires for interaction based on goals and personal preferences (Burgoon, Stern & Dillman, 1995).
See also
African time
Paul Virilio
Philosophy of space and time
Johannes Fabian
References
Adler, R.B., Rosenfeld, L.B., & Towne, N. (1995). Interplay (6th ed.). Fort Worth: Harcourt Brace College.
Ballard, D & Seibold, D., Communication-related organizational structures and work group temporal differences: the effects of coordination method, technology type, and feedback cycle on members' construals and enactments of time. Communication Monographs, Vol. 71, No. 1, March 2004, pp. 1–27
Buller D.B., & Burgoon, J.K. (1996). Interpersonal deception theory. Communication Theory, 6, 203–242.
Buller, D.B., Burgoon, J.K., & Woodall, W.G. (1996). Nonverbal communications: The unspoken dialogue (2nd ed.). New York: McGraw-Hill.
Burgoon, J.K., Stern, L.A., & Dillman, L. (1995). Interpersonal adaptation: Dyadic interaction patterns. Massachusetts: Cambridge University Press.
Capella, J. N. (1985). Controlling the floor in conversation. In A. Siegman and S. Feldstein (Eds.), Multichannel integrations of nonverbal behavior, (pp. 69–103). Hillsdale, NJ: Erlbaum
Cohen, R. (2004). Negotiating across cultures: International communication in an interdependent world (rev. ed.). Washington, DC: United States Institute of Peace.
Edelman, R.J., and Iwawaki, S. (1987). Self-reported expression and the consequences of embarrassment in the United Kingdom and Japan. Psychologia, 30, 205–216.
Griffin, E. (2000). A first look at communication theory (4th ed). Boston, MA: McGraw Hill.
Gonzalez, A., & Zimbardo, P. (1985). Time in perspective. Psychology Today Magazine, 20–26.
Hall, E.T. & Hall, M. R. (1990). Understanding cultural differences: Germans, French, and Americans. Boston, MA: Intercultural Press.
Hall, J.A., & Kapp, M.L. (1992). Nonverbal communication in human interaction (3rd ed.). New York: Holt Rinehart and Winston, Inc.
Knapp, M. L. & Miller, G.R. (1985). Handbook of Interpersonal Communication. Beverly Hills: Sage Publications.
Koester, J., & Lustig, M.W. (2003). Intercultural competence (4th ed.). New York: Pearson Education, Inc.
Patterson, M.L. (1990). Functions of non-verbal behavior in social interaction. In H. Giles & W.P. Robinson (Eds.), Handbook of Language and Social Psychology. Chichester, G.B.: Wiley.
West, R., & Turner, L. H. (2000). Introducing communication theory: Analysis and application. Mountain View, CA: Mayfield.
Wood, J. T. (1997). Communication theories in action: An introduction. Belmont, CA: Wadsworth.
Ivers, J. J. (2017). For Deep Thinkers Only. John J. Ivers
Further reading
Bluedorn, A.C. (2002). The human organization of time: Temporal realities and experience. Stanford, CA: Stanford University Press.
Hugg, A. (2002, February 4). Universal language. Retrieved May 10, 2007.
Osborne, H. (2006, January/February). In other words…actions can speak as clearly as words. Retrieved May 12, 2007 from Website: http://www.healthliteracy.com/article.asp?PageID=3763
Wessel, R. (2003, January 9). Is there time to slow down?. Retrieved May 10, 2007 from Website: http://www.csmonitor.com/2003/0109/p13s01-sten.html
A sonnet on the topic by the editor of the 11th edition (1910) of the Encyclopædia Britannica.
External links
Nonverbal communication
Social constructionism
Time
Schema therapy
Schema therapy was developed by Jeffrey E. Young for use in treatment of personality disorders and chronic DSM Axis I disorders, such as when patients fail to respond or relapse after having been through other therapies (for example, traditional cognitive behavioral therapy). Schema therapy is an integrative psychotherapy combining theory and techniques from previously existing therapies, including cognitive behavioral therapy, psychoanalytic object relations theory, attachment theory, and Gestalt therapy.
Introduction
Four main theoretical concepts in schema therapy are early maladaptive schemas (or simply schemas), coping styles, modes, and basic emotional needs:
In cognitive psychology, a schema is an organized pattern of thought and behavior. It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information. In schema therapy, a schema specifically refers to an early maladaptive schema, defined as a pervasive self-defeating or dysfunctional theme or pattern of memories, emotions, and physical sensations, developed during childhood or adolescence and elaborated throughout one's lifetime. Often they have the form of a belief about the self or the world. For instance, a person with an Abandonment schema could be hypersensitive (have an "emotional button" or "trigger") about their perceived value to others, which in turn could make them feel sad and panicky in their interpersonal relationships.
Coping styles are a person's behavioral responses to schemas. There are three potential coping styles. In "avoidance" the person tries to avoid situations that activate the schema. In "surrender" the person gives into the schema, doesn't try to fight against it, and changes their behavior in expectation that the feared outcome is inevitable. In "counterattack", also called "overcompensation", the person puts extra work into not allowing the schema's feared outcome to happen. These maladaptive coping styles (overcompensation, avoidance, or surrender) very often wind up reinforcing the schemas. Continuing the Abandonment example: having imagined a threat of abandonment in a relationship and feeling sad and panicky, a person using an avoidance coping style might then behave in ways to limit the closeness in the relationship to try to protect themself from being abandoned. The resulting loneliness or even actual loss of the relationship could easily reinforce the person's Abandonment schema. Another example can be given for the Defectiveness schema: A person using an avoidance coping style might avoid situations that make them feel defective, or might try to numb the feeling with addictions or distractions. A person using a surrender coping style might tolerate unfair criticism without defending themself. A person using the counterattack/overcompensation coping style might put extra effort into being superhuman.
Modes are mind states that cluster schemas and coping styles into a temporary "way of being" that a person can shift into occasionally or more frequently. For example, a Vulnerable Child mode might be a state of mind encompassing schemas of Abandonment, Defectiveness, Mistrust/Abuse and a coping style of surrendering (to the schemas).
If a patient's basic emotional needs are not met in childhood, then schemas, coping styles, and modes can develop. Some basic needs that have been identified are: connection, mutuality, reciprocity, flow, and autonomy. For example, a child with unmet needs around connection—perhaps due to parental loss to death, divorce, or addiction—might develop an Abandonment schema.
The goal of schema therapy is to help patients meet their basic emotional needs by helping the patient learn how to:
heal schemas by diminishing the intensity of emotional memories comprising the schema and the intensity of bodily sensations, and by changing the cognitive patterns connected to the schema;
replace maladaptive coping styles and responses with adaptive patterns of behavior.
Techniques used in schema therapy include limited reparenting and Gestalt therapy psychodrama techniques such as imagery re-scripting and empty chair dialogues (see Techniques in schema therapy, below).
Early maladaptive schemas
Early maladaptive schemas are self-defeating emotional and cognitive patterns established from childhood and repeated throughout life. They may be made up of emotional memories of past hurt, tragedy, fear, abuse, neglect, unmet safety needs, abandonment, or lack of normal human affection in general. Early maladaptive schemas can also include bodily sensations associated with such emotional memories. Early maladaptive schemas can have different levels of severity and pervasiveness: the more severe the schema, the more intense the negative emotion when the schema is triggered and the longer it lasts; the more pervasive the schema, the greater the number of situations that trigger it.
Schema domains
Schema domains are five broad categories of unmet needs into which are grouped the 18 early maladaptive schemas identified by Young and colleagues (see the sketch after this list):
Disconnection/Rejection includes 5 schemas:
Abandonment/Instability
Mistrust/Abuse
Emotional Deprivation
Defectiveness/Shame
Social Isolation/Alienation
Impaired Autonomy and/or Performance includes 4 schemas:
Dependence/Incompetence
Vulnerability to Harm or Illness
Enmeshment/Undeveloped Self
Failure
Impaired Limits includes 2 schemas:
Entitlement/Grandiosity
Insufficient Self-Control and/or Self-Discipline
Other-Directedness includes 3 schemas:
Subjugation
Self-Sacrifice
Approval-Seeking/Recognition-Seeking
Overvigilance/Inhibition includes 4 schemas:
Negativity/Pessimism
Emotional Inhibition
Unrelenting Standards/Hypercriticalness
Punitiveness
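Since the grouping above is a simple two-level taxonomy, it can be transcribed directly into a lookup structure. In the sketch below the domain and schema names are taken verbatim from the list, while the dictionary layout and the helper function are hypothetical.

```python
# The five domains and their schemas, transcribed from the list above.
SCHEMA_DOMAINS = {
    "Disconnection/Rejection": [
        "Abandonment/Instability", "Mistrust/Abuse", "Emotional Deprivation",
        "Defectiveness/Shame", "Social Isolation/Alienation"],
    "Impaired Autonomy and/or Performance": [
        "Dependence/Incompetence", "Vulnerability to Harm or Illness",
        "Enmeshment/Undeveloped Self", "Failure"],
    "Impaired Limits": [
        "Entitlement/Grandiosity",
        "Insufficient Self-Control and/or Self-Discipline"],
    "Other-Directedness": [
        "Subjugation", "Self-Sacrifice",
        "Approval-Seeking/Recognition-Seeking"],
    "Overvigilance/Inhibition": [
        "Negativity/Pessimism", "Emotional Inhibition",
        "Unrelenting Standards/Hypercriticalness", "Punitiveness"],
}

def domain_of(schema):
    """Return the domain that groups a given schema, or None if unknown."""
    for domain, schemas in SCHEMA_DOMAINS.items():
        if schema in schemas:
            return domain
    return None

assert sum(len(s) for s in SCHEMA_DOMAINS.values()) == 18
print(domain_of("Failure"))  # Impaired Autonomy and/or Performance
```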
A primary and a higher-order factor analysis of data from a large clinical sample and a smaller non-clinical population has also been reported. The higher-order factor analysis indicated four schema domains – Emotional Dysregulation, Disconnection, Impaired Autonomy/Underdeveloped Self, and Excessive Responsibility/Overcontrol – that overlap with the five domains (listed above) proposed earlier by Young and colleagues. The primary factor analysis indicated that the Emotional Inhibition schema could be split into Emotional Constriction and Fear of Losing Control, and the Punitiveness schema could be split into Punitiveness (Self) and Punitiveness (Other).
Schema modes
Schema modes are momentary mind states which every human being experiences at one time or another. A schema mode consists of a cluster of schemas and coping styles. Life situations that a person finds disturbing or offensive, or arouse bad memories, are referred to as "triggers" that tend to activate schema modes. In psychologically healthy persons, schema modes are mild, flexible mind states that are easily pacified by the rest of their personality. In patients with personality disorders, schema modes are more severe, rigid mind states that may seem split off from the rest of their personality.
Identified schema modes
Early work identified 10 schema modes, which were further described in later research and grouped into four categories. The four categories are: Child modes, Dysfunctional Coping modes, Dysfunctional Parent modes, and the Healthy Adult mode. The four Child modes are: Vulnerable Child, Angry Child, Impulsive/Undisciplined Child, and Happy Child. The three Dysfunctional Coping modes are: Compliant Surrenderer, Detached Protector, and Overcompensator. The two Dysfunctional Parent modes are: Punitive Parent and Demanding Parent.
Vulnerable Child is the mode in which a patient may feel defective in some way, thrown aside, unloved, obviously alone, or may be in a "me against the world" mindset. The patient may feel as though peers, friends, family, and even the entire world have abandoned them. Behaviors of patients in Vulnerable Child mode may include (but are not limited to) falling into major depression, pessimism, feeling unwanted, feeling unworthy of love, and perceiving personality traits as irredeemable flaws. Rarely, a patient's self-perceived flaws may be intentionally withheld on the inside; when this occurs, instead of showing one's true self, the patient may appear to others as "egotistical", "attention-seeking", selfish, distant, and may exhibit behaviors unlike their true nature. The patient might create a narcissistic alter-ego/persona in order to escape or hide the insecurity from others. Due to fear of rejection, of feeling disconnected from their true self and poor self-image, these patients, who truly desire companionship/affection, may instead end up pushing others away.
Angry Child is fueled mainly by feelings of victimization or bitterness, leading towards negativity, pessimism, jealousy, and rage. While experiencing this schema mode, a patient may have urges to yell, scream, throw/break things, or possibly even injure themself or harm others. The Angry Child schema mode is enraged, anxious, frustrated, self-doubting, feels unsupported in ideas and vulnerable.
Impulsive Child is the mode where anything goes. Behaviors of the Impulsive Child schema mode may include reckless driving, substance abuse, cutting oneself, suicidal thoughts, gambling, or fits of rage, such as punching a wall when "triggered" or laying blame of circumstantial difficulties upon innocent people. Unsafe sex, rash decisions to run away from a situation without resolution, tantrums perceived by peers as infantile, and so forth are a mere few of the behaviors which a patient in this schema mode might display. Impulsive Child is the rebellious and careless schema mode.
Happy Child occurs when one feels like their needs are being met. When people experience the Happy Child mode, they feel safe, loved, and content. They experience a joyful sense of wonder and playfulness about the world. This mode is healthy as it represents the absence of activation of maladaptive schemas. While healthy adults spend most of their time in the Healthy Adult mode, they also cultivate their Happy Child to balance the demands of life with a sense of lightheartedness.
Compliant Surrenderer is a coping mode where one experiences the schema that triggered it as true. This in turn leads to feelings such as helplessness, sadness, guilt, or anger about the situation. People in this mode often believe it is pointless to challenge their schema, and that it must simply be accepted. They also often adopt an interpersonally passive and dependent style, seeking to please people in their lives, to minimize conflict, and therefore avoid further harm or abuse.
Detached Protector is based in escape. Patients in Detached Protector schema mode withdraw, dissociate, alienate, or hide in some way. This may be triggered by numerous stress factors or feelings of being overwhelmed. When a patient with insufficient skills is in a situation involving excessive demands, it can trigger a Detached Protector response mode. Stated simply, patients become numb in order to protect themselves from the harm or stress of what they fear is to come, or to protect themselves from fear of the unknown in general.
Overcompensator is marked by attempts to fight off schemas in a way that is rigid and extreme. It often involves aggressiveness, rebelliousness, violating the rights of other people, and an attempt to dominate them. In this mode, a person who feels emotionally deprived demands affection from others, while a person who believes others cannot be trusted will try to preemptively hurt them before they do. It may also involve obsessiveness in an excessive attempt to control the environment, or forced behaviors, such as extreme forgiveness for someone with a Punitiveness schema.
Punitive Parent is identified by beliefs of a patient that they should be harshly punished, perhaps due to feeling "defective", or making a simple mistake. The patient may feel that they should be punished for even existing. Sadness, anger, impatience, and judgment are directed to the patient and from the patient. The Punitive Parent has great difficulty in forgiving themself even under average circumstances in which anyone could fall short of their standards. The Punitive Parent does not wish to allow for human error or imperfection, thus punishment is what this mode seeks.
Demanding Parent is associated with a strong sense of pressure to achieve. When experiencing this mode, people often feel like their performance is inadequate, no matter how well they do or how much effort they make. Common beliefs also involve the idea that rest, fun, and relaxation are not acceptable and that one's attention should remain focused on achieving more. It is important to note that while this mode is often accompanied by Punitive Parent, this is not always the case. Clients with the Demanding Parent mode feel pressure and dissatisfaction with their achievements, but not necessarily guilt, shame or feelings of worthlessness.
Healthy Adult is the mode that schema therapy aims to help a patient achieve as the long-lasting state of well-being. The Healthy Adult is comfortable making decisions, is a problem-solver, thinks before acting, is appropriately ambitious, sets limits and boundaries, nurtures self and others, forms healthy relationships, takes on all responsibility, sees things through, and enjoys/partakes in enjoyable adult activities and interests with boundaries enforced, takes care of their physical health, and values themself. In this schema mode the patient focuses on the present day with hope and strives toward the best tomorrow possible. The Healthy Adult forgives the past, no longer sees themself as a victim (but as a survivor), and expresses all emotions in ways which are healthy and cause no harm.
Techniques in schema therapy
Treatment plans in schema therapy generally encompass three basic classes of techniques: cognitive, experiential, and behavioral (in addition to the basic healing components of the therapeutic relationship). Cognitive strategies expand on standard cognitive behavioral therapy techniques such as listing pros and cons of a schema, testing the validity of a schema, or conducting a dialogue between the "schema side" and the "healthy side". Experiential and emotion focused strategies expand on standard Gestalt therapy psychodrama and imagery techniques. Behavioral pattern-breaking strategies expand on standard behavior therapy techniques, such as role playing an interaction and then assigning the interaction as homework. One of the most central techniques in schema therapy is the use of the therapeutic relationship, specifically through a process called "limited reparenting".
Specific techniques often used in schema therapy include flash cards with important therapeutic messages, created in session and used by the patient between sessions, and the schema diary—a template or workbook that is filled out by the patient between sessions and that records the patient's progress in relation to all the theoretical concepts in schema therapy.
Schema therapy and psychoanalysis
From an integrative psychotherapy perspective, limited reparenting and the experiential techniques, particularly around changing modes, could be seen as actively changing what psychoanalysis has described as object relations. Historically, mainstream psychoanalysis tended to reject active techniques—such as Fritz Perls' Gestalt therapy work or Franz Alexander's "corrective emotional experience"—but contemporary relational psychoanalysis (led by analysts such as Lewis Aron, and building on the ideas of earlier unorthodox analysts such as Sándor Ferenczi) is more open to active techniques. It is notable that in a head-to-head comparison of a psychoanalytic object relations treatment (Otto F. Kernberg's transference focused psychotherapy) and schema therapy, the latter has been demonstrated to be more effective in treating Borderline Personality Disorder.
Outcome studies on schema therapy
Schema therapy vs transference focused psychotherapy outcomes
Dutch investigators, including Josephine Giesen-Bloo and Arnoud Arntz (the project leader), compared schema therapy (also known as schema focused therapy or SFT) with transference focused psychotherapy (TFP) in the treatment of borderline personality disorder. 86 patients were recruited from four mental health institutes in the Netherlands. Patients in the study received two sessions per week of SFT or TFP for three years. After three years, full recovery was achieved in 45% of the patients in the SFT condition, and in 24% of those receiving TFP. One year later, the percentage fully recovered increased to 52% in the SFT condition and 29% in the TFP condition, with 70% of the patients in the SFT group achieving "clinically significant and relevant improvement". Moreover, the dropout rate was only 27% for SFT, compared with 50% for TFP.
Patients began to feel and function significantly better after the first year, with improvement occurring more rapidly in the SFT group. There was continuing improvement in subsequent years. Thus investigators concluded that both treatments had positive effects, with schema therapy clearly more successful.
Less intensive outpatient, individual schema therapy
Dutch investigators, including Marjon Nadort and Arnoud Arntz, assessed the effectiveness of schema therapy in the treatment of borderline personality disorder when utilized in regular mental health care settings. A total of 62 patients were treated in eight mental health centers located in the Netherlands. The treatment was less intensive along a number of dimensions, including a shift from twice-weekly to once-weekly sessions during the second year. Despite this, there was no lessening of effectiveness: recovery rates were at least as high, and dropout rates similarly low.
Pilot study of group schema therapy for borderline personality disorder
Investigators Joan Farrell, Ida Shaw and Michael Webber at the Indiana University School of Medicine Center for BPD Treatment & Research tested the effectiveness of adding an eight-month, 30-session schema therapy group to treatment-as-usual (TAU) for borderline personality disorder (BPD) with 32 patients. The dropout rate was 0% for the patients who received group schema therapy in addition to TAU and 25% for those who received TAU alone. At the end of treatment, 94% of the patients who received group schema therapy in addition to TAU, compared to 16% of the patients receiving TAU alone, no longer met BPD diagnostic criteria. The schema therapy group treatment led to significant reductions in symptoms and global improvement in functioning. The large positive treatment effects found in the group schema therapy study suggest that the group modality may augment or catalyze the active ingredients of the treatment for BPD patients. As of 2014, a collaborative randomized controlled trial is under way at 14 sites in six countries to further explore this interaction between groups and schema therapy.
See also
Cognitive therapy
Dynamic-maturational model of attachment and adaptation
Personal construct theory
Schema (psychology)
Notes
References
Further reading
Professional literature
Self-help literature
Psychotherapy by type
Cognitive behavioral therapy
Cognitive therapy
Borderline personality disorder
Posthumanism
Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is an idea in continental philosophy and critical theory responding to the presence of anthropocentrism in 21st-century thought. Posthumanization comprises "those processes by which a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."
It encompasses a wide variety of branches, including:
Antihumanism: a branch of theory that is critical of traditional humanism and traditional ideas about the human condition, vitality and agency.
Cultural posthumanism: A branch of cultural theory critical of the foundational assumptions of humanism and its legacy. It examines and questions the historical notions of "human" and "human nature", often challenging typical notions of human subjectivity and embodiment, and strives to move beyond "archaic" concepts of "human nature" to develop ones which constantly adapt to contemporary technoscientific knowledge.
Philosophical posthumanism: A philosophical direction that draws on cultural posthumanism. This strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
Posthuman condition: The deconstruction of the human condition by critical theorists.
Existential posthumanism: A branch that embraces posthumanism as a praxis of existence. Its sources are drawn from non-dualistic global philosophies, such as Advaita Vedanta, Taoism and Zen Buddhism, the philosophies of Yoga, continental existentialism, native epistemologies and Sufism, among others. It examines and challenges hegemonic notions of being "human" by delving into the history of embodied practices of being human and, thus, expanding the reflection on human nature.
Posthuman transhumanism: A transhuman ideology and movement which, drawing from posthumanist philosophy, seeks to develop and make available technologies that enable immortality and greatly enhance human intellectual, physical, and psychological capacities in order to achieve a "posthuman future".
AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers and theorists, including Nick Land, promote the view that humans should embrace and accept their eventual demise as a consequence of a technological singularity. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
Voluntary human extinction: Seeks a "posthuman future" that in this case is a future without humans.
Philosophical posthumanism
Philosopher Theodore Schatzki suggests there are two varieties of posthumanism of the philosophical kind:
One, which he calls "objectivism", tries to counter the overemphasis of the subjective, or intersubjective, that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things, because "Humans and nonhumans, it [objectivism] proclaims, codetermine one another", and also claims "independence of (some) objects from human activity and conceptualization".
A second posthumanist agenda is "the prioritization of practices over individuals (or individual subjects)", which, he says, constitute the individual.
There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it "posthumanism", he made an immanent critique of humanism, and then constructed a philosophy that presupposed neither humanist, nor scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created", Dooyeweerd wrote, "and the nature even of our selfhood". Both human and nonhuman alike function subject to a common law-side, which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.
Emergence of philosophical posthumanism
Ihab Hassan, theorist in the academic study of literature, once stated: "Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism." This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.
Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term posthumanism.
Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent rights to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.
Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with philosophy of the Enlightenment period. Posthumanistic views have also been found in the works of Shakespeare. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond that of the contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish "anthropological universals" that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.
Although Nietzsche's philosophy has been characterized as posthumanist, Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological and technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.
Contemporary posthuman discourse
Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as "technological posthumanism", visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in a contemporary society is thought to complicate this relationship.
Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).
While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently humanly or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.
Technological versus non-technological
Posthumanism can be divided into non-technological and technological forms.
Non-technological posthumanism
While posthumanization has links with the scholarly methodologies of posthumanism, it is a distinct phenomenon. The rise of explicit posthumanism as a scholarly approach is relatively recent, occurring since the late 1970s; however, some of the processes of posthumanization that it studies are ancient. For example, the dynamics of non-technological posthumanization have existed historically in all societies in which animals were incorporated into families as household pets or in which ghosts, monsters, angels, or semidivine heroes were considered to play some role in the world.
Such non-technological posthumanization has been manifested not only in mythological and literary works but also in the construction of temples, cemeteries, zoos, or other physical structures that were considered to be inhabited or used by quasi- or para-human beings who were not natural, living, biological human beings but who nevertheless played some role within a given society, to the extent that, according to philosopher Francesca Ferrando: "the notion of spirituality dramatically broadens our understanding of the posthuman, allowing us to investigate not only technical technologies (robotics, cybernetics, biotechnology, nanotechnology, among others), but also, technologies of existence."
Technological posthumanism
Some forms of technological posthumanization involve efforts to directly alter the social, psychological, or physical structures and behaviors of the human being through the development and application of technologies relating to genetic engineering or neurocybernetic augmentation; such forms of posthumanization are studied, e.g., by cyborg theory. Other forms of technological posthumanization indirectly "posthumanize" human society through the deployment of social robots or attempts to develop artificial general intelligences, sentient networks, or other entities that can collaborate and interact with human beings as members of posthumanized societies.
The dynamics of technological posthumanization have long been an important element of science fiction; genres such as cyberpunk take them as a central focus. In recent decades, technological posthumanization has also become the subject of increasing attention by scholars and policymakers. The expanding and accelerating forces of technological posthumanization have generated diverse and conflicting responses, with some researchers viewing the processes of posthumanization as opening the door to a more meaningful and advanced transhumanist future for humanity, while other bioconservative critiques warn that such processes may lead to a fragmentation of human society, loss of meaning, and subjugation to the forces of technology.
Common features
Processes of technological and non-technological posthumanization both tend to result in a partial "de-anthropocentrization" of human society, as its circle of membership is expanded to include other types of entities and the position of human beings is decentered. A common theme of posthumanist study is the way in which processes of posthumanization challenge or blur simple binaries, such as those of "human versus non-human", "natural versus artificial", "alive versus non-alive", and "biological versus mechanical".
Relationship with transhumanism
Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.
Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism". Transhumanist thought suggests that humans are not yet posthuman, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)". These contrasting views on the importance of human beings are the main distinctions between the two subjects.
Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture".
Criticism
Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that "the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history".
However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of James's criticism), in part because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell.
While many modern leaders of thought accept the nature of the ideologies described by posthumanism, some are more skeptical of the term. Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term "companion species", referring to nonhuman entities with which humans coexist.
Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon, Aimé Césaire, Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or displacement" which posthumanists invite. In other words, given that race in general and blackness in particular constitute the very terms through which human-nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.
See also
Bioconservatism
Cyborg anthropology
Posthuman
Superhuman
Technological change
Technological transitions
Transhumanism
References
Works cited
Critical theory
Ontology
Philosophical theories
Philosophical schools and traditions
Postmodernism
Social practice
Social practice is a theory within psychology that seeks to determine the link between practice and context within social situations. Emphasized as a commitment to change, social practice occurs in two forms: activity and inquiry. Most often applied within the context of human development, social practice involves knowledge production and the theorization and analysis of both institutional and intervention practices.
Background in psychology
Through research, Sylvia Scribner sought to understand and create a decent life for all people regardless of geographical position, race, gender, and social class. Using anthropological field research and psychological experimentation, Scribner tried to dig deeper into human mental functioning and its creation through social practice in different societal and cultural settings. She therefore aimed to enact social reform and community development through an ethical orientation that accounts for the interaction of historical and societal conditions of different institutional settings with human social and mental functioning and development.
As activity
Social practice involves engagement with communities of interest by creating a practitioner-community relationship wherein there remains a focus on the skills, knowledge, and understanding of people in their private, family, community, and working lives. In this approach to social practice, activity is used for social change without the agenda of research. Activity theory suggests the use of a system of participants that work toward an object or goal that brings about some form of change or transformation in the community.
As inquiry
Within research, social practice aims to integrate the individual with his or her surrounding environment while assessing how context and culture relate to common actions and practices of the individual. Just as social practice is an activity itself, inquiry focuses on how social activity occurs and identifies its main causes and outcomes. It has been argued that research be developed as a specific theory of social practice through which research purposes are defined not by philosophical paradigms but by researchers' commitments to specific forms of social action.
Areas of interest
Education
In education, social practice refers to the use of adult-child interaction for observation in order to propose intentions and gauge the reactions of others. Under social practice, literacy is seen as a key dimension of community regeneration and a part of the wider lifelong learning agenda. In particular, literacy is considered to be an area of instruction for the introduction of social practice through social language and social identity. According to social practice in education, literacy and numeracy are complex capabilities rather than a simple set of basic skills. Furthermore, adult learners are more likely to develop and retain knowledge, skills, and understanding if they see them as relevant to their own problems and challenges. Social practice perspectives focus on local literacies and how literacy practices are affected by settings and groups interacting around print.
Literature
As literature is repeatedly studied in education and critiqued in discourse, many believe that it should be a field of social practice, as it evokes emotion and discussion of social interactions and social conditions. Those who hold that literature may be construed as a form of social practice maintain that literature and society are essentially related to each other. As such, they attempt to define specific sociological practices of literature and share expressions of literature as works comprising text, institution, and individual. Overall, literature becomes a realm of social exchange through fiction, poetry, politics, and history.
Art
Social practice is also considered a medium for making art. Social practice art came about in response to increasing pressure within art education to work collaboratively through social and participatory formats, driven by artists' desires and art viewers' increasing media sophistication. "Social practice art" is a term for artwork that uses social engagement as a primary medium, and is also referred to by a range of different names: socially engaged art, community art, new-genre public art, participatory art, interventionist art, and collaborative art.
Artists working in the medium of social practice develop projects by inviting collaboration with individuals, communities, institutions, or a combination of these, creating participatory art that exists both within and outside of the traditional gallery and museum system. Artists working in social practice art co-create their work with a specific audience or propose critical interventions within existing social systems that inspire debate or catalyze social exchange. Social practice art work focuses on the interaction between the audience, social systems, and the artist through topics such as aesthetics, ethics, collaboration, persona, media strategies, and social activism. The social interaction component inspires, drives, or, in some instances, completes the project. Although projects may incorporate traditional studio media, they are realized in a variety of visual or social forms (depending on variable contexts and participant demographics) such as performance, social activism, or mobilizing communities towards a common goal.
References
The arts
Human development
Social psychology concepts
Generalization
A generalization is a form of abstraction whereby common properties of specific instances are formulated as general concepts or claims. Generalizations posit the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements (thus creating a conceptual model). As such, they are the essential basis of all valid deductive inferences (particularly in logic, mathematics and science), where the process of verification is necessary to determine whether a generalization holds true for any given situation.
Generalization can also be used to refer to the process of identifying the parts of a whole, as belonging to the whole. The parts, which might be unrelated when left on their own, may be brought together as a group, hence belonging to the whole by establishing a common relation between them.
However, the parts cannot be generalized into a whole—until a common relation is established among all parts. This does not mean that the parts are unrelated, only that no common relation has been established yet for the generalization.
The concept of generalization has broad application in many connected disciplines, and might sometimes have a more specific meaning in a specialized context (e.g. generalization in psychology, generalization in learning).
In general, given two related concepts A and B, A is a "generalization" of B (equiv., B is a special case of A) if and only if both of the following hold:
Every instance of concept B is also an instance of concept A.
There are instances of concept A which are not instances of concept B.
For example, the concept animal is a generalization of the concept bird, since every bird is an animal, but not all animals are birds (dogs, for instance). For more, see Specialisation (biology).
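The two conditions above say, in set terms, that the instances of B form a proper subset of the instances of A. A minimal sketch in Python makes the check concrete; the toy instance sets below are illustrative assumptions, not a real taxonomy:

```python
# Toy model: a concept is represented by the set of its instances.
animals = {"sparrow", "eagle", "dog", "salmon"}
birds = {"sparrow", "eagle"}

def is_generalization(a: set, b: set) -> bool:
    """A generalizes B iff every instance of B is an instance of A
    and A has at least one instance that B lacks (proper subset)."""
    return b < a  # Python's proper-subset operator on sets

print(is_generalization(animals, birds))  # True: animal generalizes bird
print(is_generalization(birds, animals))  # False
```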
Hypernym and hyponym
The connection of generalization to specialization (or particularization) is reflected in the contrasting words hypernym and hyponym. A hypernym as a generic stands for a class or group of equally ranked items, such as the term tree which stands for equally ranked items such as peach and oak, and the term ship which stands for equally ranked items such as cruiser and steamer. In contrast, a hyponym is one of the items included in the generic, such as peach and oak which are included in tree, and cruiser and steamer which are included in ship. A hypernym is superordinate to a hyponym, and a hyponym is subordinate to a hypernym.
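In the same toy spirit (the mini-lexicon below is a hypothetical illustration, not a real lexical database), the hypernym–hyponym relation can be modeled as a mapping from each generic term to its equally ranked items:

```python
# Hypothetical mini-lexicon: each hypernym maps to a set of its hyponyms.
hyponyms = {
    "tree": {"peach", "oak"},
    "ship": {"cruiser", "steamer"},
}

def is_hypernym_of(generic: str, specific: str) -> bool:
    """True iff `specific` is recorded as a hyponym of `generic`."""
    return specific in hyponyms.get(generic, set())

print(is_hypernym_of("tree", "oak"))    # True: tree is superordinate to oak
print(is_hypernym_of("ship", "peach"))  # False: peach is not a kind of ship
```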
Examples
Biological generalization
An animal is a generalization of a mammal, a bird, a fish, an amphibian and a reptile.
Cartographic generalization of geo-spatial data
Generalization has a long history in cartography as an art of creating maps for different scale and purpose. Cartographic generalization is the process of selecting and representing information of a map in a way that adapts to the scale of the display medium of the map. In this way, every map has, to some extent, been generalized to match the criteria of display. This includes small cartographic scale maps, which cannot convey every detail of the real world. As a result, cartographers must decide and then adjust the content within their maps, to create a suitable and useful map that conveys the geospatial information within their representation of the world.
Generalization is meant to be context-specific. That is to say, correctly generalized maps are those that emphasize the most important map elements while still representing the world in the most faithful and recognizable way. The level of detail and importance of what remains on the map must outweigh the insignificance of the items that were generalized, so as to preserve the distinguishing characteristics of what makes the map useful and important.
Mathematical generalizations
In mathematics, one commonly says that a concept or a result B is a generalization of A if A is defined or proved before B (historically or conceptually) and A is a special case of B.
The complex numbers are a generalization of the real numbers, which are a generalization of the rational numbers, which are a generalization of the integers, which are a generalization of the natural numbers.
A polygon is a generalization of a 3-sided triangle, a 4-sided quadrilateral, and so on to n sides.
A hypercube is a generalization of a 2-dimensional square, a 3-dimensional cube, and so on to n dimensions.
A quadric, such as a hypersphere, ellipsoid, paraboloid, or hyperboloid, is a generalization of a conic section to higher dimensions.
A Taylor series is a generalization of a Maclaurin series.
The binomial formula is a generalization of the formula for (x + y)^2.
A ring is a generalization of a field.
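As a worked instance of the binomial item in the list above, the square-of-a-sum identity is recovered as the case n = 2 of the binomial formula:

```latex
(x+y)^2 = x^2 + 2xy + y^2
% is the special case n = 2 of
(x+y)^n = \sum_{k=0}^{n} \binom{n}{k}\, x^{n-k}\, y^{k}
```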
See also
Categorical imperative (ethical generalization)
Ceteris paribus
External validity (scientific studies)
Faulty generalization
Generic (disambiguation)
Critical thinking
Generic antecedent
Hasty generalization
Inheritance (object-oriented programming)
Mutatis mutandis
-onym
Ramer–Douglas–Peucker algorithm
Semantic compression
Inventor's paradox
References
Generalizations
Critical thinking skills
Inductive reasoning
Bricolage
In the arts, bricolage (French for "DIY" or "do-it-yourself projects") is the construction or creation of a work from a diverse range of things that happen to be available, or a work constructed using mixed media.
The term bricolage has also been used in many other fields, including anthropology, philosophy, critical theory, education, computer software, public health, and business.
Origin
Bricolage is a French loanword that means the process of improvisation in a human endeavor. The word is derived from the French verb bricoler ("to tinker"), with the English term DIY ("Do-it-yourself") being the closest equivalent of the contemporary French usage. In both languages, bricolage also denotes any works or products of DIY endeavors.
The arts
Visual art
In art, bricolage is a technique or creative mode, where works are constructed from various materials available or on hand, and is often seen as a characteristic of postmodern art practice. It has been likened to the concept of curating and has also been described as the remixture, reconstruction, and reuse of separate materials or artifacts to produce new meanings and insights.
Architecture
Bricolage also denotes the jumbled effect produced by the close proximity of buildings from different periods and in different architectural styles.
It is also a term admiringly applied to the architectural work of Le Corbusier by Colin Rowe and Fred Koetter in their book Collage City; they called him "a fox in hedgehog disguise", commenting on his wily approach to assembling ideas from found objects of the history of architecture, in contrast to Frank Lloyd Wright, whom they called a "hedgehog" for being overly focused on a narrow concept.
Academics
Anthropology
In anthropology, the term has been used in several ways. Most notably, Claude Lévi-Strauss invoked the concept of bricolage to refer to the process that leads to the creation of mythical thought, which "expresses itself by means of a heterogeneous repertoire which, even if extensive, is nevertheless limited. It has to use this repertoire, however, whatever the task in hand because it has nothing else at its disposal". Later, Hervé Varenne and Jill Koyama used the term when explaining the processual aspect of culture, i.e., education.
Literature
In literature, bricolage is affected by intertextuality, the shaping of a text's meanings by reference to other texts.
Cultural studies
In cultural studies, bricolage is used to mean the processes by which people acquire objects from across social divisions to create new cultural identities. In particular, it is a feature of subcultures such as the punk movement. Here, objects that possess one meaning (or no meaning) in the dominant culture are acquired and given a new, often subversive meaning. For example, the safety pin became a form of decoration in punk culture.
Social psychology
The term "psychological bricolage" is used to explain the mental processes through which an individual develops novel solutions to problems by making use of previously unrelated knowledge or ideas they already possess.
The term, introduced by Jeffrey Sanchez-Burks, Matthew J. Karlesky and Fiona Lee of the University of Michigan in The Oxford Handbook of Creativity, Innovation, and Entrepreneurship, draws from two separate disciplines. The first, "social bricolage," was introduced by cultural anthropologist Claude Lévi-Strauss in 1962. Lévi-Strauss was interested in how societies create novel solutions by using resources that already exist in the collective social consciousness. The second, "creative cognition," is an intra-psychic approach to studying how individuals retrieve and recombine knowledge in new ways. Psychological bricolage, therefore, refers to the cognitive processes that enable individuals to retrieve and recombine previously unrelated knowledge they already possess. Psychological bricolage is an intra-individual process akin to Karl E. Weick's notion of bricolage in organizations, which is in turn akin to Lévi-Strauss' notion of bricolage in societies.
Philosophy
In his book The Savage Mind (1962, English translation 1966), French anthropologist Claude Lévi-Strauss used "bricolage" to describe the characteristic patterns of mythological thought. In his description it is opposed to the engineers' creative thinking, which proceeds from goals to means. Mythical thought, according to Lévi-Strauss, attempts to re-use available materials in order to solve new problems.
Jacques Derrida extends this notion to any discourse. "If one calls bricolage the necessity of borrowing one's concept from the text of a heritage which is more or less coherent or ruined, it must be said that every discourse is bricoleur."
Gilles Deleuze and Félix Guattari, in their 1972 book Anti-Oedipus, identify bricolage as the characteristic mode of production of the schizophrenic producer.
Education
In the discussion of constructionism, Seymour Papert discusses two styles of solving problems. In contrast to the analytical style, he describes bricolage as a way to learn and solve problems by trying, testing, and playing around.
Joe L. Kincheloe and Shirley R. Steinberg have used the term bricolage in educational research to denote the use of multiperspectival research methods. In Kincheloe's conception of the research bricolage, diverse theoretical traditions are employed in a broader critical theoretical/critical pedagogical context to lay the foundation for a transformative mode of multimethodological inquiry. Using these multiple frameworks and methodologies, researchers are empowered to produce more rigorous and praxiological insights into socio-political and educational phenomena.
Kincheloe and Steinberg theorize a critical multilogical epistemology and critical connected ontology to ground the research bricolage. These philosophical notions provide the research bricolage with a sophisticated understanding of the complexity of knowledge production and the interrelated complexity of both researcher positionality and phenomena in the world. Such complexity demands a more rigorous mode of research that is capable of dealing with the complications of socio-educational experience. Such a critical form of rigor avoids the reductionism of many monological, mimetic research orientations (see Kincheloe, 2001, 2005; Kincheloe & Berry, 2004; Steinberg, 2015; Kincheloe, McLaren, & Steinberg, 2012).
Information technology
Information systems
In information systems, bricolage is used by Claudio Ciborra to describe the way in which strategic information systems (SIS) can be built in order to maintain successful competitive advantage over a longer period of time than standard SIS. By valuing tinkering and allowing SIS to evolve from the bottom-up, rather than implementing it from the top-down, the firm will end up with something that is deeply rooted in the organisational culture that is specific to that firm and is much less easily imitated.
Internet
In her book Life on the Screen (1995), Sherry Turkle discusses the concept of bricolage as it applies to problem solving in code projects and workspace productivity. She advocates the "bricoleur style" of programming as a valid and underexamined alternative to what she describes as the conventional structured "planner" approach. In this style of coding, the programmer works without an exhaustive preliminary specification, opting instead for a step-by-step growth and re-evaluation process. In her essay "Epistemological Pluralism", Turkle writes: "The bricoleur resembles the painter who stands back between brushstrokes, looks at the canvas, and only after this contemplation, decides what to do next."
Visual arts
The visual arts is a field in which individuals often integrate a variety of knowledge sets in order to produce inventive work. To reach this stage, artists read print materials across a wide array of disciplines, as well as information from their own social identities. For instance, the artist Shirin Neshat has integrated her identities as an Iranian exile and a woman in order to make complex, creative and critical bodies of work. This willingness to integrate diverse knowledge sets enables artists with multiple identities to fully leverage their knowledge sets. This is demonstrated by Jeffrey Sanchez-Burks, Chi-Ying Chen and Fiona Lee, who found that individuals exhibited greater levels of innovation in tasks related to their cultural identities when they successfully integrated those identities.
Business
Karl Weick identifies the following requirements for successful bricolage in organizations.
Intimate knowledge of resources
Careful observation and listening
Trusting one's ideas
Self-correcting structures, with feedback
Glenn Gosnell of V & E Limited defines the formal term "Bricoleurologist" as indicating expertise and experience in Bricoleurology, i.e., devising and implementing elegant solutions to immediate problems and issues. Those skilled in the art and practice of AMA (Alternate Means of Accomplishment), the efficient and effective reconstitution of resources, can be assigned the title "Bricoleurologist" by a company or institution.
In popular culture
Fashion
In his essay "Subculture: The Meaning of Style", Dick Hebdige discusses how an individual can be identified as a bricoleur when they "appropriated another range of commodities by placing them in a symbolic ensemble which served to erase or subvert their original straight meanings". The fashion industry uses bricolage-like styles by incorporating items typically utilized for other purposes.
Television
MacGyver is a television series in which the protagonist is the paragon of a bricoleur, creating solutions for the problem to be solved out of immediately available found objects.
See also
Collage
Détournement
Do it yourself
Intrapreneurial Bricolage
Jugaad
Jury rig
Kludge
Maker culture
Syncretism
Pastiche
References
External links
Digital humanities
Philosophy of technology
Postmodernism
Psychogeography
Artistic techniques
Do it yourself
Improvisation
Technology and Livelihood Education
Technology and Livelihood Education (TLE) is one of the learning areas of the Secondary Education Curriculum used in Philippine secondary schools. As a subject in high school, its component areas are: Home Economics, Agri-Fishery Arts, Industrial Arts, and Information and Communication Technology.
TLE is also referred to as CP-TLE for Career Pathways in Technology and Livelihood Education. The 2010 Secondary Education Curriculum allocates 240 minutes per week for CP-TLE, which is equivalent to 1.2 units. However, CP-TLE is required to include practical work experience in the community, which may extend beyond its specified school hours.
Curriculum
The Technical-Vocational Education-based TLE is focused on technical skills development in any area. Five common competencies, based on the training regulations of the Technical Education and Skills Development Authority (TESDA), are covered in the exploratory phase (Grades 7 and 8): mensuration and calculation, technical drafting, use of tools and equipment, maintenance of tools and equipment, and occupational health and safety. The specialization phase covers Grades 9 to 12.
The Entrepreneurship Education-based TLE is focused on the learning of some livelihood skills every quarter, so that the student may be equipped to start a small household enterprise with family members. It covers three domains: Personal Entrepreneurial Competencies, Market and Environment, and Process and Delivery. The five common competencies from TESDA are integrated in the Process and Delivery domain.
Expansion
The 2010 Secondary Education Curriculum expanded the CP-TLE to include additional special curricular programs. This makes a total of six programs: Special Program in the Arts (SPA), Special Program in Sports (SPS); Science and Technology, Engineering, and Mathematics Program (STEM Program, previously called ESEP), Special Program in Journalism (SPJ), Technical-Vocational-Livelihood Education (TVE), and Special Program in Foreign Language (SPFL).
References
Education in the Philippines
Critical mathematics pedagogy
Critical mathematics pedagogy is an approach to mathematics education that includes a practical and philosophical commitment to liberation. Approaches that involve critical mathematics pedagogy give special attention to the social, political, cultural and economic contexts of oppression, as they can be understood through mathematics. They also analyze the role that mathematics plays in producing and maintaining potentially oppressive social, political, cultural or economic structures. Finally, critical mathematics pedagogy demands that critique is connected to action promoting more just and equitable social, political or economic reform.
Critical mathematics pedagogy builds on critical theory developed in the post-Marxist Frankfurt School, as well as critical pedagogy developed out of critical theory by Brazilian educator and educational theorist Paulo Freire. Definitions of critical mathematics pedagogy and critical mathematics education differ among those who practice it and write about it in their work. The focus of critical mathematics pedagogy shifts between three core tenets, but always includes some attention to all three: (1) analysis of injustice and inequitable relations of power made possible through mathematics, (2) critiques of the ways in which mathematics is used to structure and maintain power, and (3) critiques toward plans of action for change and the use of mathematics to reveal and oppose injustices, as well as imagine proposals for more equitable and just relations.
Core concepts and foundations
Critical theory and critical mathematics
Those who build their critical mathematics pedagogy with close relations to critical theory, focus on the analysis of mathematics as having "formatting power" that shapes the way we understand and organize the world. The assumption underlying critical mathematics pedagogy that comes from critical theory is the notion that mathematics is not neutral. According to critical mathematics, neither mathematics itself nor the teaching or learning of mathematics can be value-neutral, or free of interpretation. The critical mathematics group (est. 1990), one of the first groups of teachers and researchers to convene around the work of critical mathematics, state that mathematics is (1) knowledge constructed by humans, (2) the set of knowledges constructed by all groups of humans, not only the Eurocentric knowledge traditionally included in academic texts and (3) a human enterprise in which understanding results from action in social, cultural, political and economic context.
Marilyn Frankenstein, the first educator to coin the term critical mathematics pedagogy in the United States in her 1983 article "Critical Mathematics Pedagogy: An Application of Paulo Freire's Epistemology," illustrates one way in which mathematics is not neutral using the example of the world map. She explains that in order to represent a three-dimensional object on a two dimensional surface, such as is necessary when mapping the earth, map-makers must make decisions about which types of distortions to allow. For example, the most traditionally accepted and commonly used world map is the Mercator map which enlarges the size of Europe and shrinks the size of Africa - a side-effect of the way it works (to assist navigation). This representation can be read to suggest that certain parts of the world are larger, and therefore more important or more powerful than others via the (inaccurate) size comparison presented in the map.
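The distortion Frankenstein describes can be stated precisely. On the Mercator projection the local scale factor grows with latitude, so high-latitude regions such as Europe are inflated relative to equatorial regions such as most of Africa:

```latex
% Mercator mapping of longitude \lambda and latitude \varphi (globe radius R):
x = R\,\lambda, \qquad y = R \ln \tan\!\left(\tfrac{\pi}{4} + \tfrac{\varphi}{2}\right)
% The local scale factor is k(\varphi) = \sec\varphi, hence
k(0^\circ) = 1, \qquad k(60^\circ) = 2
% i.e., at 60 degrees of latitude, lengths are doubled and areas roughly quadrupled.
```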
Ole Skovsmose's first publication on critical mathematics pedagogy in Europe coincided with Marilyn Frankenstein's in the United States. It refers to "mathemacy" which would parallel critical literacy for mathematics. He explains that "mathematics colonizes part of reality and reorders it." Therefore, "the goal of mathematics education should be to understand the formatting power of mathematics and to empower people to examine this formatting power so they will not be controlled by it." According to him, mathemacy would consist of three components (1) mathematical knowing, or the skills developed in traditional mathematics classrooms, (2) technological knowing, or the ability to build models with mathematics and (3) reflective knowing, or competency in evaluating applications of mathematics. It is specifically the third component that makes this approach to mathematical literacy a critical one.
Bülent Avcı, through classroom-based participatory action research in his recent book, Critical Mathematics Education: Can Democratic Education Survive under Neoliberal Regime?, re-conceptualizes Critical Mathematics Education as a bottom-up response to top-down, market-driven implementations and neoliberal hegemony in education. In this context, Avcı offers rich ethnographic data to redefine concepts such as dialogic pedagogy, collaborative learning, and inquiry-based mathematics education in order to promote justice-based critical citizenship and participatory democracy, distinguishing these concepts from neoliberal pedagogy. Avcı simultaneously draws on the ideas of Paulo Freire and Jürgen Habermas to develop a unique approach to Critical Mathematics Education.
Critical pedagogy and critical mathematics pedagogy
Those who build their critical mathematics pedagogy out of critical pedagogy focus on empowerment of the learners as experts and actors for change in their own world. Critical mathematics pedagogy demands that students and teachers use mathematics to understand "relations of power, resource inequalities between different social groups and explicit discrimination" in order to take action for change. Paulo Freire (1921–1997), Brazilian educator and educational theorist, commonly regarded as the originator of critical pedagogy, suggests that most teaching happens in a "banking" model where teachers hold the information and students are assumed to be passive receptacles for that knowledge. Freire's alternative to the banking method is a "problem-posing" model of education. Through this model students and teachers participate together in a mutually humanizing process of dialogue. With the support of their teacher, students examine problems from their own lives and work collaboratively to generate solutions. One goal of critical pedagogy, according to Freire, is to develop critical consciousness or conscientização (Portuguese). Both teachers and students are expected to challenge their own "well-established ways of thinking that frequently limit their own potential" and that of others. They are especially expected to challenge those ways of thinking that might reproduce instead of challenge oppressive ways of thinking and being. This commitment to learning and critique for the purpose of action for change is also known as praxis, the intersection of theory and practice, another core tenet of the critical pedagogy of Paulo Freire.
Marilyn Frankenstein argues that "most current uses of mathematics support hegemonic ideologies." In particular, she focuses on the mathematical science of statistics, which supports the unquestioned acceptance of uncertain conclusions. She argues that the use of the banking model in mathematics education (memorization and procedural focus) produces "math anxiety" in many people, especially and disproportionately those in non-dominant groups (women, people of color, lower income students). This math anxiety then leads people to "not probe the mathematical mystifications" that drive industrial society.
Eric (Rico) Gutstein applies Freire's notion of the inherent connection between "reading the word and the world" to mathematical literacy. He suggests that teaching mathematics for social justice involves both reading the world with mathematics, or more explicitly, "using mathematics to understand relations of power, resource inequalities between different social groups and explicit discrimination," as well as writing the world with mathematics, or developing the tools of social agency in young people for acting in their own worlds. Mathematical literacy according to Gutstein must include both the capacity to "read the mathematical world," necessary for traditional academic and economic success, as well as the capacity to "read the world with mathematics," meaning the use of mathematics to understand and interrogate potentially problematic or unjust structures in their own lives.
Critical mathematics pedagogy in action
Because critical mathematics pedagogy is designed to be responsive to the lives of the students in a given classroom and their local context, there is no set curriculum. Some educators re-use lessons or units from year to year that may apply to multiple groups of students, while other educators develop projects that respond directly to the concerns of a particular group of students, building a project together around a problem the students have posed. Precisely for this reason it is pertinent to consider a few examples of what critical mathematics pedagogy might look like in action.
William Tate, critical race theorist and promoter of culturally relevant teaching, describes the work of one teacher who brought together many of the core components of critical mathematics pedagogy. This teacher elicited concerns from her students about their own neighborhood and lives, and found out that one concern was the prevalence of liquor stores in the neighborhood. Students were being harassed on their way to and from school, having to step over or walk past drunk individuals, making them feel uncomfortable and unsafe. This teacher led her students through the process of in-depth research to better understand the distribution of liquor licenses and the reasons behind the concentration in their neighborhood. The class then met with local journalists to discuss the use of different types of graphic for representing statistics to the general public. The class then considered and determined which graphics and statistical representations (decimals, fractions, percents) might be the strongest for communicating their findings. Finally, the students used their research to produce a policy solution which they presented to the local community council. The work of this group of students and their teacher succeeded in leading to the closing of two of the nearby liquor stores in the neighborhood.
Ole Skovsmose describes a classroom in Denmark in which students learned about the use of algorithms for distribution of welfare support to families by attempting to create their own algorithms. The class worked in groups, where each group came up with a family profile to serve under the supervision of the instructor. Groups then were given a budget for welfare distributions to families and had to come up with how to distribute the money among all the families in their "town" made up of all the created family profiles. The task led them to develop ways of categorizing people in families by age, and families type, by income amount and type, by labor and employment, by possible productivity to society, and more. Some groups distributed the money without building a distribution algorithm, using trial and error and attempting to balance the distribution by more intuitive means. Others built algorithms, working backwards, attempting to break down the distribution using percentages. Many groups were surprised to find that their algorithms did not function comprehensively, and did not fully distribute the amount they were budgeted, and that the outcomes by group were vastly different. Perhaps more importantly, students gained an awareness of the choices and decision making that goes into how policies such as welfare for families are complex and human-created, not simply existing structures. This project is an example of the way in which critical mathematics pedagogy can reveal the role that humans play in mathematizing the world. It is different from Tate's example because it does not explicitly include an action component.
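Skovsmose does not reproduce the students' algorithms, so the following Python fragment is only a minimal sketch of the kind of rule the groups were reaching for; the family profiles, the need heuristic, and the budget figure are all hypothetical:

```python
# Hypothetical "town": each family profile carries the attributes the
# students used when categorizing families (household size, income).
budget = 100_000.0
families = {
    "family_a": {"members": 4, "income": 18_000},
    "family_b": {"members": 2, "income": 30_000},
    "family_c": {"members": 5, "income": 12_000},
}

def need_score(profile):
    # Crude heuristic: larger households with lower incomes score higher.
    return profile["members"] / max(profile["income"], 1)

total_need = sum(need_score(p) for p in families.values())
allocations = {name: budget * need_score(p) / total_need
               for name, p in families.items()}

for name, amount in sorted(allocations.items()):
    print(f"{name}: {amount:,.2f}")
print(f"allocated {sum(allocations.values()):,.2f} of {budget:,.2f}")
```

A proportional rule like this one distributes the whole budget by construction, whereas hand-built percentage schemes can leave money unallocated or overspent, which is exactly the surprise the students encountered, along with the value-laden choice of what counts as "need".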
Shelly M. Jones teaches mathematics education at Central Connecticut State University. Her classes address culturally relevant mathematics, in which she presents cognitively demanding mathematics skills from a cultural perspective.
For a collection of sample lessons that address mathematics teaching through a critical lens see the book, Rethinking Mathematics: Teaching Social Justice by the Numbers (Eds. Gutstein and Peterson, 2005).
Related concepts
Other work in the field of mathematics education that often overlaps at least in part with critical mathematics pedagogy includes the work of ethnomathematics, culturally relevant teaching in mathematics, and work for educational equity in mathematics.
The concept of ethnomathematics was introduced by D'Ambrosio in 1978, in response to the reliance on Eurocentric models for academic mathematics teaching to the exclusion of other cultural models. The goal of work in ethnomathematics is to de-center mathematics as a European-dominated discipline by contributing research and teaching that highlights the contributions of many different cultures to mathematics as a discipline, and by validating a wide range of mathematical practices. Ethnomathematics work notices, recognizes, reclaims, and celebrates the ways in which non-European communities and cultures have, throughout their histories and today, created, used, and innovated with mathematics. It differs from critical mathematics pedagogy in that its focus is on the cultural and social aspects of mathematics, whereas critical mathematics work also includes an explicit focus on politics and power structures. Though differences exist, those who work in either field often publish in similar venues, and both consider their work mathematics for social justice.
Culturally relevant teaching in mathematics was developed initially to support the success of African-American students, frequently poorly served by the American public school system which has a long history of educational inequality. The liquor store example provided above is shared by Tate as an example of culturally relevant teaching, but might likewise be seen to embody the tenets of critical pedagogy. He cites six core practices of the teacher from the example that make her work culturally relevant: (1) communication between students, teacher, and outside entities, (2) cooperative group work, (3) investigative research throughout the learning process, (4) questioning content, people, and institutions, (5) open-ended problem solving connected to student realities, and (6) social action. While the practices listed by Tate resonate profoundly with those of critical mathematics pedagogy, the difference (if there is any) is in the goals of the two approaches. The focus of culturally relevant teaching is on the empowerment and liberation of a cultural or racial group, whereas the goals of critical pedagogy include empowerment and liberation of individuals as well as groups, in the face of any form of oppression, not only cultural or racial oppression.
The notion of educational equity in mathematics education promotes the provision of high quality mathematics education to all groups and individuals in an attempt to narrow achievement gaps, for example gaps related to race and gender. This approach does not include a critical approach to mathematics itself, or the notion that mathematics education should include the learning of mathematics for the purpose of being able to analyze and change structures of power and injustice in the world. The National Council of Teachers of Mathematics, the world's largest mathematics education organization, has placed equity as one of its top priorities. However, critical mathematics educators suggest that the NCTM standards "fail to define equity in applicable terms for classroom teachers, and it overemphasized the economic aspects of equity."
Challenges and critiques
Logistically, implementation of critical pedagogy is a challenge because there is and can be no "how-to recipe." If the curriculum must be built out of students’ lives then it will necessarily change each year and with each group of students.
Critiques are widespread, suggesting that mathematics is unbiased, not bound to culture, society, or politics, and therefore should not be politicized in the classroom. Critics argue that this politicization distracts from achievement and risks holding students back, particularly the very students it purports to support.
References
Bibliography
Avcı, B. (2018). Critical Mathematics Education: Can Democratic Mathematics Education Survive under Neoliberal Regime? Boston: Brill-Sense.
Frankenstein, M. (1983). Critical mathematics education: An application of Paulo Freire's epistemology. The Journal of Education, 165(4), 315–339.
Powell, A. (2012). The historical development of criticalmathematics education. In A. Wager & D. Stinson (Eds.), Teaching Mathematics for Social Justice: Conversations with Educators. Reston, VA: National Council of Teachers of Mathematics.
Skovsmose, O. (1994). Towards a critical mathematics education. Educational Studies in Mathematics, 27(1), 35–57. http://doi.org/10.1007/BF01284527
Stinson, D., & Wager, A. (2012). A sojourn into the empowering uncertainties of teaching and learning mathematics for social change. In A. Wager & D. Stinson (Eds.), Teaching Mathematics for Social Justice: Conversations with Educators. Reston, VA: National Council of Teachers of Mathematics.
Tate, W. F. (1995). Returning to the root: A culturally relevant approach to mathematics pedagogy. Theory into Practice, 34(3), 166–173.
Tutak, F. A., Bondy, E., & Adams, T. L. (2011). Critical pedagogy for critical mathematics education. International Journal of Mathematics Education in Science and Technology, 42(1), 65–74. http://doi.org/10.1080/0020739X.2010.510221
Mathematics education
Eudaimonia
Eudaimonia, sometimes anglicized as Eudaemonia, Eudemonia or Eudimonia, is a Greek word literally translating to the state or condition of good spirit, and which is commonly translated as happiness or welfare.
In the works of Aristotle, eudaimonia was the term for the highest human good in older Greek tradition. It is the aim of practical philosophy (prudence), including ethics and political philosophy, to consider and experience what this state really is and how it can be achieved. It is thus a central concept in Aristotelian ethics and subsequent Hellenistic philosophy, along with the terms aretē (most often translated as virtue or excellence) and phronesis ('practical or ethical wisdom').
Discussion of the links between ēthikē aretē (virtue of character) and eudaimonia (happiness) is one of the central concerns of ancient ethics, and a subject of disagreement. As a result, there are many varieties of eudaimonism.
Definition and etymology
In terms of its etymology, eudaimonia is an abstract noun derived from the words eû (good, well) and daímōn (spirit or deity).
Semantically speaking, the word δαίμων derives from the same root as the Ancient Greek verb δαίομαι ("to divide"), allowing the concept of eudaimonia to be thought of as an "activity linked with dividing or dispensing, in a good way".
Definitions, a dictionary of Greek philosophical terms attributed to Plato himself but believed by modern scholars to have been written by his immediate followers in the Academy, provides the following definition of the word eudaimonia: "The good composed of all goods; an ability which suffices for living well; perfection in respect of virtue; resources sufficient for a living creature."
In his Nicomachean Ethics (1095a15–22), Aristotle says that everyone agrees that eudaimonia is the highest good for humans, but that there is substantial disagreement on what sort of life counts as doing and living well; i.e. eudaimon:
Verbally there is a very general agreement; for both the general run of men and people of superior refinement say that it is [eudaimonia], and identify living well and faring well with being happy; but with regard to what [eudaimonia] is they differ, and the many do not give the same account as the wise. For the former think it is some plain and obvious thing like pleasure, wealth or honour... [1095a17]
So, as Aristotle points out, saying that a eudaimonic life is a life that is objectively desirable and involves living well is not saying very much. Everyone wants to be eudaimonic; and everyone agrees that being eudaimonic is related to faring well and to an individual's well-being. The really difficult question is to specify just what sort of activities enable one to live well. Aristotle presents various popular conceptions of the best life for human beings. The candidates that he mentions are (1) a life of pleasure, (2) a life of political activity, and (3) a philosophical life.
Eudaimonia and areté
One important move in Greek philosophy to answer the question of how to achieve eudaimonia is to bring in another important concept in ancient philosophy, aretē ('virtue'). Aristotle says that the eudaimonic life is one of "virtuous activity in accordance with reason" [1097b22–1098a20]; even Epicurus, who argues that the eudaimonic life is the life of pleasure, maintains that the life of pleasure coincides with the life of virtue. So, the ancient ethical theorists tend to agree that virtue is closely bound up with happiness (areté is bound up with eudaimonia). However, they disagree on the way in which this is so. A major difference between Aristotle and the Stoics, for instance, is that the Stoics believed moral virtue was in and of itself sufficient for happiness (eudaimonia). For the Stoics, one does not need external goods, like physical beauty, in order to have virtue and therefore happiness.
One problem with the English translation of areté as virtue is that we are inclined to understand virtue in a moral sense, which is not always what the ancients had in mind. For Aristotle, areté pertains to all sorts of qualities we would not regard as relevant to ethics, for example, physical beauty. So it is important to bear in mind that the sense of virtue operative in ancient ethics is not exclusively moral and includes more than states such as wisdom, courage, and compassion. The sense of virtue which areté connotes would include saying something like "speed is a virtue in a horse," or "height is a virtue in a basketball player." Doing anything well requires virtue, and each characteristic activity (such as carpentry, flute playing, etc.) has its own set of virtues. The alternative translation excellence (a desirable quality) might be helpful in conveying this general meaning of the term. The moral virtues are simply a subset of the general sense in which a human being is capable of functioning well or excellently.
Eudaimonia and happiness
Eudaimonia implies a positive and divine state of being that humanity is able to strive toward and possibly reach. A literal view of eudaimonia means achieving a state of being similar to a benevolent deity, or being protected and looked after by a benevolent deity. As this would be considered the most positive state to be in, the word is often translated as happiness although incorporating the divine nature of the word extends the meaning to also include the concepts of being fortunate, or blessed. Despite this etymology, however, discussions of eudaimonia in ancient Greek ethics are often conducted independently of any supernatural significance.
In his Nicomachean Ethics (1095a15–22) Aristotle says that eudaimonia means 'doing and living well'. It is significant that synonyms for eudaimonia are living well and doing well. In the standard English translation, this would be to say that, "happiness is doing well and living well." The word happiness does not entirely capture the meaning of the Greek word. One important difference is that happiness often connotes being or tending to be in a certain pleasant state of mind. For example, when one says that someone is "a very happy person", one usually means that they seem subjectively contented with the way things are going in their life. They mean to imply that they feel good about the way things are going for them. In contrast, Aristotle suggests that eudaimonia is a more encompassing notion than feeling happy since events that do not contribute to one's experience of feeling happy may affect one's eudaimonia.
Eudaimonia depends on all the things that would make us happy if we knew of their existence, but quite independently of whether we do know about them. Ascribing eudaimonia to a person, then, may include ascribing such things as being virtuous, being loved and having good friends. But these are all objective judgments about someone's life: they concern whether a person is really being virtuous, really being loved, and really having fine friends. This implies that a person who has evil sons and daughters will not be judged to be eudaimonic even if he or she does not know that they are evil and feels pleased and contented with the way they have turned out (happy). Conversely, being loved by your children would not count towards your happiness if you did not know that they loved you (and perhaps thought that they did not), but it would count towards your eudaimonia. So, eudaimonia corresponds to the idea of having an objectively good or desirable life, to some extent independently of whether one knows that certain things exist or not. It includes conscious experiences of well-being, success, and failure, but also a whole lot more. (See Aristotle's discussion: Nicomachean Ethics, book 1.10–1.11.)
Because of this discrepancy between the meanings of eudaimonia and happiness, some alternative translations have been proposed. W.D. Ross suggests 'well-being' and John Cooper proposes flourishing. These translations may avoid some of the misleading associations carried by "happiness" although each tends to raise some problems of its own. In some modern texts therefore, the other alternative is to leave the term in an English form of the original Greek, as eudaimonia.
Classical views on eudaimonia and aretē
Socrates
What is known of Socrates' philosophy is almost entirely derived from Plato's writings. Scholars typically divide Plato's works into three periods: the early, middle, and late periods. They tend to agree also that Plato's earliest works quite faithfully represent the teachings of Socrates and that Plato's own views, which go beyond those of Socrates, appear for the first time in the middle works such as the Phaedo and the Republic.
As with all ancient ethical thinkers, Socrates thought that all human beings wanted eudaimonia more than anything else (see Plato, Apology 30b, Euthydemus 280d–282d, Meno 87d–89a). However, Socrates adopted a quite radical form of eudaimonism (see above): he seems to have thought that virtue is both necessary and sufficient for eudaimonia. Socrates is convinced that virtues such as self-control, courage, justice, piety, wisdom and related qualities of mind and soul are absolutely crucial if a person is to lead a good and happy (eudaimon) life. Virtues guarantee a happy life (eudaimonia). For example, in the Meno, with respect to wisdom, he says: "everything the soul endeavours or endures under the guidance of wisdom ends in happiness" (Meno 88c).
In the Apology, Socrates clearly presents his disagreement with those who think that the eudaimon life is the life of honour or pleasure, when he chastises the Athenians for caring more for riches and honour than the state of their souls:
Good Sir, you are an Athenian, a citizen of the greatest city with the greatest reputation for both wisdom and power; are you not ashamed of your eagerness to possess as much wealth, reputation, and honors as possible, while you do not care for nor give thought to wisdom or truth or the best possible state of your soul? (29e) ... [I]t does not seem like human nature for me to have neglected all my own affairs and to have tolerated this neglect for so many years while I was always concerned with you, approaching each one of you like a father or an elder brother to persuade you to care for virtue. (31a–b; italics added)
It emerges a bit further on that this concern for one's soul, that one's soul might be in the best possible state, amounts to acquiring moral virtue. So Socrates' pointing out that the Athenians should care for their souls means that they should care for their virtue, rather than pursuing honour or riches. Virtues are states of the soul. When a soul has been properly cared for and perfected, it possesses the virtues. Moreover, according to Socrates, this state of the soul, moral virtue, is the most important good. The health of the soul is incomparably more important for eudaimonia than (e.g.) wealth and political power. Someone with a virtuous soul is better off than someone who is wealthy and honoured but whose soul is corrupted by unjust actions. This view is confirmed in the Crito, where Socrates gets Crito to agree that the perfection of the soul, virtue, is the most important good:
And is life worth living for us with that part of us corrupted that unjust action harms and just action benefits? Or do we think that part of us, whatever it is, that is concerned with justice and injustice, is inferior to the body? Not at all. It is much more valuable...? Much more... (47e–48a)
Here, Socrates argues that life is not worth living if the soul is ruined by wrongdoing. In summary, Socrates seems to think that virtue is both necessary and sufficient for eudaimonia. A person who is not virtuous cannot be happy, and a person with virtue cannot fail to be happy. We shall see later on that Stoic ethics takes its cue from this Socratic insight.
Plato
Plato's great work of the middle period, the Republic, is devoted to answering a challenge made by the sophist Thrasymachus, that conventional morality, particularly the virtue of justice, actually prevents the strong man from achieving eudaimonia. Thrasymachus's views are restatements of a position which Plato discusses earlier on in his writings, in the Gorgias, through the mouthpiece of Callicles. The basic argument presented by Thrasymachus and Callicles is that justice (being just) hinders or prevents the achievement of eudaimonia because conventional morality requires that we control ourselves and hence live with un-satiated desires. This idea is vividly illustrated in book 2 of the Republic when Glaucon, taking up Thrasymachus' challenge, recounts a myth of the magical ring of Gyges. According to the myth, Gyges becomes king of Lydia when he stumbles upon a magical ring, which, when he turns it a particular way, makes him invisible, so that he can satisfy any desire he wishes without fear of punishment. When he discovers the power of the ring he kills the king, marries his wife and takes over the throne. The thrust of Glaucon's challenge is that no one would be just if he could escape the retribution he would normally encounter for fulfilling his desires at whim. But if eudaimonia is to be achieved through the satisfaction of desire, whereas being just or acting justly requires suppression of desire, then it is not in the interests of the strong man to act according to the dictates of conventional morality. (This general line of argument reoccurs much later in the philosophy of Nietzsche.) Throughout the rest of the Republic, Plato aims to refute this claim by showing that the virtue of justice is necessary for eudaimonia.
The argument of the Republic is lengthy and complex. In brief, Plato argues that virtues are states of the soul, and that the just person is someone whose soul is ordered and harmonious, with all its parts functioning properly to the person's benefit. In contrast, Plato argues that the unjust man's soul, without the virtues, is chaotic and at war with itself, so that even if he were able to satisfy most of his desires, his lack of inner harmony and unity thwart any chance he has of achieving eudaimonia. Plato's ethical theory is eudaimonistic because it maintains that eudaimonia depends on virtue. On Plato's version of the relationship, virtue is depicted as the most crucial and the dominant constituent of eudaimonia.
Aristotle
Aristotle's account is articulated in the Nicomachean Ethics and the Eudemian Ethics. In outline, for Aristotle, eudaimonia involves activity, exhibiting virtue (aretē, sometimes translated as excellence) in accordance with reason. This conception of eudaimonia derives from Aristotle's essentialist understanding of human nature, the view that reason (logos, sometimes translated as rationality) is unique to human beings and that the ideal function or work (ergon) of a human being is the fullest or most perfect exercise of reason. Basically, well-being (eudaimonia) is gained by proper development of one's highest and most human capabilities, as human beings are "the rational animal". It follows that eudaimonia for a human being is the attainment of excellence (areté) in reason.
According to Aristotle, eudaimonia actually requires activity, action, so that it is not sufficient for a person to possess a squandered ability or disposition. Eudaimonia requires not only good character but rational activity. Aristotle clearly maintains that to live in accordance with reason is thereby to achieve excellence. Moreover, he claims this excellence cannot be isolated, so competencies appropriate to related functions are also required. For example, if being a truly outstanding scientist requires impressive math skills, one might say "doing mathematics well is necessary to be a first-rate scientist". From this it follows that eudaimonia, living well, consists in activities exercising the rational part of the psyche in accordance with the virtues or excellency of reason [1097b22–1098a20]; that is, being fully engaged in the intellectually stimulating and fulfilling work at which one achieves well-earned success. The rest of the Nicomachean Ethics is devoted to filling out the claim that the best life for a human being is the life of excellence in accordance with reason. Since reason for Aristotle is not only theoretical but practical as well, he spends quite a bit of time discussing excellence of character, which enables a person to exercise his practical reason (i.e., reason relating to action) successfully.
Aristotle's ethical theory is eudaimonist because it maintains that eudaimonia depends on virtue. However, it is Aristotle's explicit view that virtue is necessary but not sufficient for eudaimonia. While emphasizing the importance of the rational aspect of the psyche, he does not ignore the importance of other goods such as friends, wealth, and power in a life that is eudaimonic. He doubts the likelihood of being eudaimonic if one lacks certain external goods such as good birth, good children, and beauty. So, a person who is hideously ugly or has "lost children or good friends through death" (1099b5–6), or who is isolated, is unlikely to be eudaimon. In this way, "dumb luck" (chance) can preempt one's attainment of eudaimonia.
Pyrrho
Pyrrho was the founder of Pyrrhonism. A summary of his approach to eudaimonia was preserved by Eusebius, quoting Aristocles of Messene, quoting Timon of Phlius, in what is known as the "Aristocles passage".
"Whoever wants eudaimonia must consider these three questions: First, how are pragmata (ethical matters, affairs, topics) by nature? Secondly, what attitude should we adopt towards them? Thirdly, what will be the outcome for those who have this attitude?" Pyrrho's answer is that "As for pragmata they are all adiaphora (undifferentiated by a logical differentia), astathmēta (unstable, unbalanced, not measurable), and anepikrita (unjudged, unfixed, undecidable). Therefore, neither our sense-perceptions nor our doxai (views, theories, beliefs) tell us the truth or lie; so we certainly should not rely on them. Rather, we should be adoxastoi (without views), aklineis (uninclined toward this side or that), and akradantoi (unwavering in our refusal to choose), saying about every single one that it no more is than it is not or it both is and is not or it neither is nor is not."
With respect to aretē, the Pyrrhonist philosopher Sextus Empiricus said:
If one defines a system as an attachment to a number of dogmas that agree with one another and with appearances, and defines a dogma as an assent to something non-evident, we shall say that the Pyrrhonist does not have a system. But if one says that a system is a way of life that, in accordance with appearances, follows a certain rationale, where that rationale shows how it is possible to seem to live rightly ("rightly" being taken, not as referring only to aretē, but in a more ordinary sense) and tends to produce the disposition to suspend judgment, then we say that he does have a system.
Epicurus
Epicurus' ethical theory is hedonistic. His views were very influential for the founders and best proponents of utilitarianism, Jeremy Bentham and John Stuart Mill. Hedonism is the view that pleasure is the only intrinsic good and that pain is the only intrinsic bad. An object, experience or state of affairs is intrinsically valuable if it is good simply because of what it is. Intrinsic value is to be contrasted with instrumental value. An object, experience or state of affairs is instrumentally valuable if it serves as a means to what is intrinsically valuable. To see this, consider the following example. Suppose a person spends their days and nights in an office, working at not entirely pleasant activities for the purpose of receiving money. Someone asks them "why do you want the money?", and they answer: "So, I can buy an apartment overlooking the ocean, and a red sports car." This answer expresses the point that money is instrumentally valuable because its value lies in what one obtains by means of it—in this case, the money is a means to getting an apartment and a sports car, and its value is dependent on the price of these commodities.
Epicurus identifies the good life with the life of pleasure. He understands eudaimonia as a more or less continuous experience of pleasure and, also, freedom from pain and distress. But Epicurus does not advocate that one pursue any and every pleasure. Rather, he recommends a policy whereby pleasures are maximized "in the long run". In other words, Epicurus claims that some pleasures are not worth having because they lead to greater pains, and some pains are worthwhile when they lead to greater pleasures. The best strategy for attaining a maximal amount of pleasure overall is not to seek instant gratification but to work out a sensible long term policy.
Ancient Greek ethics is eudaimonist because it links virtue and eudaimonia, where eudaimonia refers to an individual's well-being. Epicurus' doctrine can be considered eudaimonist since Epicurus argues that a life of pleasure will coincide with a life of virtue. He believes that we do and ought to seek virtue because virtue brings pleasure. Epicurus' basic doctrine is that a life of virtue is the life that generates the most pleasure, and it is for this reason that we ought to be virtuous. This thesis—the eudaimon life is the pleasurable life—is not a tautology as "eudaimonia is the good life" would be: rather, it is the substantive and controversial claim that a life of pleasure and absence of pain is what eudaimonia consists in.
One important difference between Epicurus' eudaimonism and that of Plato and Aristotle is that for the latter virtue is a constituent of eudaimonia, whereas Epicurus makes virtue a means to happiness. To see this difference, consider Aristotle's theory. Aristotle maintains that eudaimonia is what everyone wants (and Epicurus would agree). He also thinks that eudaimonia is best achieved by a life of virtuous activity in accordance with reason. The virtuous person takes pleasure in doing the right thing as a result of a proper training of moral and intellectual character (See e.g., Nicomachean Ethics 1099a5). However, Aristotle does not think that virtuous activity is pursued for the sake of pleasure. Pleasure is a byproduct of virtuous action: it does not enter at all into the reasons why virtuous action is virtuous. Aristotle does not think that we literally aim for eudaimonia. Rather, eudaimonia is what we achieve (assuming that we are not particularly unfortunate in the possession of external goods) when we live according to the requirements of reason. Virtue is the largest constituent in a eudaimon life.
By contrast, Epicurus holds that virtue is the means to achieve happiness. His theory is eudaimonist in that he holds that virtue is indispensable to happiness; but virtue is not a constituent of a eudaimon life, and being virtuous is not (external goods aside) identical with being eudaimon. Rather, according to Epicurus, virtue is only instrumentally related to happiness. So whereas Aristotle would not say that one ought to aim for virtue in order to attain pleasure, Epicurus would endorse this claim.
The Stoics
Stoic philosophy begins with Zeno of Citium, and was developed by Cleanthes (331–232 BC) and Chrysippus into a formidable systematic unity. Zeno believed happiness was a "good flow of life"; Cleanthes suggested it was "living in agreement with nature", and Chrysippus believed it was "living in accordance with experience of what happens by nature." Stoic ethics is a particularly strong version of eudaimonism. According to the Stoics, virtue is necessary and sufficient for eudaimonia. (This thesis is generally regarded as stemming from the Socrates of Plato's earlier dialogues.)
We saw earlier that the conventional Greek concept of arete is not quite the same as that denoted by virtue, which has Christian connotations of charity, patience, and uprightness, since arete includes many non-moral virtues such as physical strength and beauty. The Stoic concept of arete, however, is much nearer to the Christian conception of virtue, which refers to the moral virtues. Unlike Christian understandings of virtue, righteousness or piety, though, the Stoic conception does not place as great an emphasis on mercy, forgiveness, self-abasement (i.e. the ritual process of declaring complete powerlessness and humility before God), charity and self-sacrificial love, though these behaviors/mentalities are not necessarily spurned by the Stoics (they are spurned by some other philosophers of Antiquity). Rather, Stoicism emphasizes states such as justice, honesty, moderation, simplicity, self-discipline, resolve, fortitude, and courage (states which Christianity also encourages).
The Stoics make a radical claim that the eudaimon life is the morally virtuous life. Moral virtue is good, and moral vice is bad, and everything else, such as health, honour and riches, are merely "neutral". The Stoics therefore are committed to saying that external goods such as wealth and physical beauty are not really good at all. Moral virtue is both necessary and sufficient for eudaimonia. In this, they are akin to Cynic philosophers such as Antisthenes and Diogenes in denying the importance to eudaimonia of external goods and circumstances, such as were recognized by Aristotle, who thought that severe misfortune (such as the death of one's family and friends) could rob even the most virtuous person of eudaimonia. This Stoic doctrine re-emerges later in the history of ethical philosophy in the writings of Immanuel Kant, who argues that the possession of a "good will" is the only unconditional good. One difference is that whereas the Stoics regard external goods as neutral, as neither good nor bad, Kant's position seems to be that external goods are good, but only so far as they are a condition to achieving happiness.
Modern conceptions
"Modern Moral Philosophy"
Interest in the concept of eudaimonia and ancient ethical theory more generally had a revival in the 20th century. G. E. M. Anscombe in her article "Modern Moral Philosophy" (1958) argued that duty-based conceptions of morality are conceptually incoherent for they are based on the idea of a "law without a lawgiver". She claims a system of morality conceived along the lines of the Ten Commandments depends on someone having made these rules. Anscombe recommends a return to the eudaimonistic ethical theories of the ancients, particularly Aristotle, which ground morality in the interests and well-being of human moral agents, and can do so without appealing to any such lawgiver.
Julia Driver in the Stanford Encyclopedia of Philosophy explains:
Anscombe's article Modern Moral Philosophy stimulated the development of virtue ethics as an alternative to Utilitarianism, Kantian Ethics, and Social Contract theories. Her primary charge in the article is that, as secular approaches to moral theory, they are without foundation. They use concepts such as "morally ought", "morally obligated", "morally right", and so forth that are legalistic and require a legislator as the source of moral authority. In the past God occupied that role, but systems that dispense with God as part of the theory are lacking the proper foundation for meaningful employment of those concepts.
Modern psychology
Models of eudaimonia in psychology and positive psychology emerged from early work on self-actualization and the means of its accomplishment by researchers such as Erik Erikson, Gordon Allport, and Abraham Maslow (hierarchy of needs).
Theories include Diener's tripartite model of subjective well-being, Ryff's Six-factor Model of Psychological Well-being, Keyes's work on flourishing, and Seligman's contributions to positive psychology and his theories on authentic happiness and P.E.R.M.A. Related concepts are happiness, flourishing, quality of life, contentment, and meaningful life.
The Japanese concept of Ikigai has been described as eudaimonic well-being, as it "entails actions of devoting oneself to pursuits one enjoys and is associated with feelings of accomplishment and fulfillment."
Positive psychology on eudaimonia
The "Questionnaire for Eudaimonic Well-Being" developed in Positive Psychology lists six dimensions of eudaimonia:
self-discovery;
perceived development of one's best potentials;
a sense of purpose and meaning in life;
investment of significant effort in pursuit of excellence;
intense involvement in activities; and
enjoyment of activities as personally expressive.
See also
Ataraxia
Eudaemon (mythology)
Eudaemons
Eupraxsophy
Humanism
Social quality
Summum bonum
References
Further reading
Primary sources
Aristotle. The Nicomachean Ethics, translated by Martin Ostwald. New York: The Bobbs-Merrill Company. 1962
—— The Complete Works of Aristotle, vol. 1 and 2 (rev. ed.), edited by Jonathan Barnes (1984). Bollingen Foundation, 1995.
Cicero. "On Ends" in De Finibus Bonorum et Malorum, translated by H. Rackham, Loeb Classical Library. Cambridge: Harvard University Press. 1914. Latin text with old-fashioned and not always philosophically precise English translation.
Epicurus. "Letter to Menoeceus, Principal Doctrines, and Vatican Sayings." pp. 28–40 in Hellenistic Philosophy: Introductory Readings (2nd ed.), edited by B. Inwood and L. Gerson. Indianapolis: Hackett Publishing Co. 1998. .
Plato. Plato's Complete Works, edited by John M. Cooper, translated by D. S. Hutchinson. Indianapolis: Hackett Publishing Co. 1997. .
Secondary sources
Ackrill, J. L. (1981) Aristotle the Philosopher. Oxford: Oxford University Press.
Anscombe, G. E. M. (1958) "Modern Moral Philosophy". Philosophy 33; repr. in G.E.M. Anscombe (1981), vol. 3, 26–42.
Broadie, Sarah W. (1991) Ethics with Aristotle. Oxford: Oxford University Press.
Irwin, T. H. (1995) Plato's Ethics, Oxford: Oxford University Press.
Long, A. A., and D. N. Sedley (1987) The Hellenistic Philosophers, vol. 1 and 2. Cambridge: Cambridge University Press.
McMahon, Darrin M. (2005). Happiness: A History. Atlantic Monthly Press.
—— (2004) "The History of Happiness: 400 B.C. – A.D. 1780." Daedalus (Spring 2004).
Norton, David L. (1976) Personal Destinies, Princeton University Press.
Sellars, J. (2014). Stoicism. Routledge.
Urmson, J. O. (1988) Aristotle's Ethics. Oxford: Blackwell.
Vlastos, G. (1991) Socrates: Ironist and Moral Philosopher. Ithaca, NY: Cornell University Press.
External links
Ancient Ethical Theory, Stanford Encyclopedia of Philosophy
Aristotle's Ethics, Stanford Encyclopedia of Philosophy
Aristotle: Ethics, Internet Encyclopedia of Philosophy
Concepts in ancient Greek ethics
Concepts in ancient Greek philosophy of mind
Happiness
Theories in ancient Greek philosophy
Virtue
Virtue ethics
Well-being
French and Raven's bases of power
In a notable study of power conducted by social psychologists John R. P. French and Bertram Raven in 1959, power is divided into five separate and distinct forms. They identified those five bases of power as coercive, reward, legitimate, referent, and expert. This was followed by Raven's subsequent addition in 1965 of a sixth separate and distinct base of power: informational power.
French and Raven defined social influence as "a change in the belief, attitude, or behavior of a person (the target of influence) which results from the action of another person (an influencing agent)", and they defined social power as the potential for such influence, that is, the ability of the agent to bring about such a change using available resources.
Power in social influence settings has generated a large body of research in social communication studies pertaining to persuasion tactics and leadership practices. Through these studies, it has been theorized that leadership and power are closely linked, and it has been further presumed that different forms of power affect one's leadership and success. This idea is used often in organizational communication and throughout the workforce.
Though there have been many formal definitions of leadership that did not include social influence and power, any discussion of leadership must inevitably deal with the means by which a leader gets the members of a group or organization to act and move in a particular direction; this capacity is what is considered "power" in social influence situations.
Overview
The original French and Raven (1959) model included five bases of power – reward, coercion, legitimate, expert, and referent – however, informational power was added by Raven in 1965, bringing the total to six. Since then, the model has gone through very significant developments: coercion and reward can have personal as well as impersonal forms. Expert and referent power can be negative or positive. Legitimate power, in addition to position power, may be based on other normative obligations: reciprocity, equity, and responsibility. Information may be utilized in direct or indirect fashion.
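The elaborated typology can be summarized compactly. The sketch below is a minimal illustration in Python, not anything published by French or Raven: it simply encodes the six bases and the differentiated forms just described as a data structure, with identifier names invented for this example.

    # A minimal sketch encoding the elaborated typology described above.
    # Identifier names are invented for illustration.
    from enum import Enum

    class PowerBase(Enum):
        COERCIVE = "coercive"
        REWARD = "reward"
        LEGITIMATE = "legitimate"
        REFERENT = "referent"
        EXPERT = "expert"
        INFORMATIONAL = "informational"

    # Coercion and reward have impersonal and personal forms; expert and
    # referent power can be positive or negative; legitimate power rests
    # on position plus other normative obligations; information may be
    # used directly or indirectly.
    FORMS = {
        PowerBase.COERCIVE: ("impersonal", "personal"),
        PowerBase.REWARD: ("impersonal", "personal"),
        PowerBase.LEGITIMATE: ("position", "reciprocity", "equity", "responsibility"),
        PowerBase.REFERENT: ("positive", "negative"),
        PowerBase.EXPERT: ("positive", "negative"),
        PowerBase.INFORMATIONAL: ("direct", "indirect"),
    }

    for base, forms in FORMS.items():
        print(f"{base.value}: {', '.join(forms)}")

The sections below discuss each base and its forms in turn.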
French and Raven defined social power as the potential for influence (a change in the belief, attitude, or behavior of someone who is the target of influence).
Leadership and power are closely linked, and this model shows how the different forms of power affect one's leadership and success. "The French-Raven power forms are introduced with consideration of the level of observability and the extent to which power is dependent or independent of structural conditions. Dependency refers to the degree of internalization that occurs among persons subject to social control. Using these considerations it is possible to link personal processes to structural conditions".
Original typology
The bases of social power have evolved over the years with benefits coming from advanced research and theoretical developments in related fields. On the basis of research and evidence, there have been many other developments and elaborations on the original theory. French and Raven developed an original model outlining the change dependencies and also further delineating each power basis.
It is a common understanding that most social influence can still be understood through the original six bases of power, but the foundational bases have been elaborated and further differentiated, as described in the sections below.
Bases of power
As mentioned above, there are now six main concepts of power strategies consistently studied in social communication research. They are described as Coercive, Reward, Legitimate, Referent, Expert, and Informational. Additionally, research has shown that source credibility has an explicit effect on the bases of power used in persuasion.
Source credibility, the bases of power, and objective power, which is established based on variables such as position or title, are interrelated. The levels of each have a direct relationship in the manipulation and levels of one another.
The bases of power differ according to the manner in which social changes are implemented, the permanence of such changes, and the ways in which each basis of power is established and maintained.
The effectiveness of power is situational. Given there are six bases of power studied in the communication field, it is very important to know the situational uses of each power, focusing on when each is most effective. According to French and Raven, "it is of particular practical interest to know what bases of power or which power strategies are most likely to be effective, but it is clear that there is no simple answer.
For example, a power strategy that works immediately but relies on surveillance (for example, reward power or coercive power) may not last once surveillance ends. One organizational study found that reward power tended to lead to greater satisfaction on the part of employees, which means that it might increase influence in a broad range of situations. Coercive power was more effective in influencing a subordinate who jeopardized the success of the overall organization or threatened the leader's authority, even though in the short term it also led to resentment on the part of the target. A power strategy that ultimately leads to private acceptance and long-lasting change (for example, information power) may be difficult to implement, and consume considerable time and energy. In the short term, complete reliance on information power might even be dangerous (for example, telling a small child not to run into the street unattended). A military officer leading his troops into combat might be severely handicapped if he had to give complete explanations for each move. Instead, he would want to rely on unquestioned legitimate position power, backed up by coercive power. Power resources, which may be effective for one leader, dealing with one target or follower, may not work for a different leader and follower. The manner in which the power strategy is utilized will also affect its success or failure. Where coercion is deemed necessary, a leader might soften its negative effects with a touch of humor. There have been studies indicating that cultural factors may determine the effectiveness of power strategies."
Coercive power
Coercive power uses the threat of force to gain compliance from another. Force may include physical, social, emotional, political, or economic means. Coercion is not always recognized by the target of influence. This type of power is based upon the idea of coercion. The main idea behind this concept is that someone is forced to do something that he/she does not desire to do. The main goal of coercion is compliance. Coercive power's influence is socially dependent on how the target relates to the change being desired by the influence agent. Furthermore, a person would have to be consistently watched by the influencing agent in order for the change to remain in effect.
Impersonal
An example of impersonal coercion relates to a person's belief that the influencing agent has the real power to physically threaten them, impose a monetary fine, or dismiss an employee.
Personal
An example of personal coercion is a threat of rejection or the possibility of disapproval from a person who is highly valued.
According to Changingminds.org, "demonstrations of the harm are often used to illustrate what will happen if compliance is not gained". The use of coercive power has been found to be related to punitive behavior that may fall outside normal role expectations; coercion has been positively associated with generally punitive behavior and negatively associated with contingent reward behavior. This source of power can often lead to problems, and in many circumstances it involves abuse. Leaders who rely on coercion use threats in their leadership style, often threats that someone will be fired or demoted.
Reward power
Reward power is based on the right of some to offer or deny tangible, social, emotional, or spiritual rewards to others for doing what is wanted or expected of them. Some examples of reward power (positive reward) are: (a) a child is given a dollar for earning better grades; (b) a student is admitted into an honor society for excellent effort; (c) a retiree is praised and feted for lengthy service at a retirement party; and (d) New York firefighters were heralded as heroes for their acts on September 11, 2001. Some examples of reward power (negative reward) are: (a) a driver is fined for illegal parking; (b) a teenager is grounded for a week for misbehaving; (c) a rookie player is ridiculed for not following tradition; and (d) President Warren G. Harding's name is commonly invoked whenever political scandal is mentioned. Some pitfalls can emerge when too heavy a reliance is placed on reward power; these include: (a) some people become fixated and too dependent on rewards to do even mundane activities; (b) too severe fears of punishment can immobilize some people; (c) as time passes, past rewards become insufficient to motivate or activate desired outcomes; and (d) negative rewards may be perverted into positive attention.
Impersonal
An example of impersonal reward relates to promises of promotions, money and rewards from various social areas.
Personal
An example of personal reward relates to the reward of receiving approval from a desired person and building relationships with romantic partners.
Legitimate power
Legitimate power comes from an elected, selected, or appointed position of authority and may be underpinned by social norms. It is the ability to administer to another certain feelings of obligation or the notion of responsibility. "Rewarding and punishing subordinates is generally seen as a legitimate part of the formal or appointed leadership role and most managerial positions in work organizations carry with them, some degree of expected reward and punishment." This type of formal power relies on position in an authority hierarchy. Occasionally, those possessing legitimate power fail to recognize they have it, and may begin to notice others going around them to accomplish their goals. Three bases of legitimate power are cultural values, acceptance of social structure, and designation. Cultural values comprise a general basis for the legitimate power of one entity over another. Such legitimacy is conferred by others, and it can be revoked by the original granters, their designees, or their inheritors.
Legitimate power originates from the target of influence accepting the power of the influencing agent, with behavioral change or compliance occurring based on the target's sense of obligation. One who uses legitimate power may have a high need for power, which motivates them to use this base to change behavior and exert influence. Legitimate power may take a range of forms, described below.
Position
The legitimate position power is based on the social norm which requires people to be obedient to those who hold superior positions in a formal or informal social structure. Examples may include: a police officer's legitimacy to make arrests; a parent's legitimacy to restrict a child's activities; the President's legitimacy to live in the White House; and the Congress' legitimacy to declare war. Some pitfalls can arise when too heavy a reliance is placed on legitimate power; these include: (a) unexpected exigencies that call for non-legitimized individuals to act in the absence of a legitimate authority – such as a citizen's arrest in the absence of a police official; and (b) military legitimacy.
Reciprocity
The legitimate power of reciprocity is based on the social norm of reciprocity, which states that we feel obligated to do something in return for someone who does something beneficial for us.
Equity
The legitimate power of equity is based on the social norm of equity (or compensation). This norm makes people feel compelled to compensate someone who has suffered, worked hard, or been harmed by us in some way; it rests on the premise that there is a wrong that can be made right, compensation being one form of righting the wrong.
Dependence
The legitimate power of dependence is based on the social norm of social responsibility, which states that people feel obligated to help someone who is in need of assistance.
People traditionally obey the person with this power solely based on their role, position, or title rather than on the person specifically as a leader. Therefore, this type of power can easily be lost if the leader no longer holds the position or title. This power is therefore not strong enough to be one's only form of influence or persuasion.
Referent power
Referent power is rooted in the affiliations we make and the groups and organizations we belong to; our affiliation with a group and the beliefs of the group are shared to some degree. Because referent power emphasizes similarity, a target of influence may come to discount the superiority of the agent of influence. Use of this power base and its outcomes may be negative or positive. An agent of change motivated by a strong need for affiliation and a concern for likeability will prefer this power base, and this preference will shape their leadership style. An agent of influence may use ingratiation (flattery) and appeals to a sense of community to enhance their influence.
Positive
Referent power in a positive form utilizes the shared personal connection or shared belief between the influencing agent and the target, with the intention that the target's actions align with those intended by the agent.
Negative
Referent power in a negative form produces actions in opposition to the intent of the influencing agent; this results from cognitive dissonance between the referent influencing agent and the target's perception of that influence.
Examples of referent power include: (a) each of the last seven White House press secretaries has been paid handsomely for memoirs relating to their presence at the seat of government; (b) Mrs. Hillary Clinton gained political capital by her marriage to the President; (c) Reverend Pat Robertson lost a bid for the Republican Party's nomination for President due, in significant part, to his religious affiliation; and (d) national firefighters have received vocational acclaim due to the association with the heroic NYC firefighters. Some pitfalls can occur related to referent assumptions; these include: (a) guilt or glory by association where little or no true tie is established; (b) associative traits tend to linger long after real association ends; (c) some individuals tend to pay dearly for associates' misdeeds or terrible reputations. It is important to distinguish between referent power and other bases of social power involving control or conformity. According to Fuqua, Payne, and Cangemi, referent power acts a little like role model power. It depends on respecting, liking, and holding another individual in high esteem. It usually develops over a long period of time.
Referent power is the ability to administer to another a sense of personal acceptance or personal approval. This type of power is strong enough that the power-holder is often looked up to as a role model, and it is often experienced as admiration or charm. The responsibility involved is heavy and the power easily lost, but when combined with other forms of power it can be very useful. Referent power is commonly seen in political and military figures, although celebrities often have it as well.
Expert power
Expert power is based on what one knows: experience and special skills or talents. Expertise can be demonstrated by reputation, by credentials certifying expertise, and by actions. The effectiveness and impact of the expert power base may be negative or positive. According to Raven, there will be more use of expert power if the motive is a need for achievement. Expert power is the ability to administer to another information, knowledge, or expertise (for example, doctors and lawyers). As a consequence of expert power or knowledge, a leader is able to convince subordinates to trust them. The expertise does not have to be genuine – it is the perception of expertise that provides the power base. When individuals perceive or assume that a person possesses superior skills or abilities, they award power to that person.
Positive
Expert power in a positive form influences the target to act accordingly as instructed by the expert, based on the assumption of the expert's correct knowledge.
Negative
Expert power in a negative form can result from a person acting in opposition to the expert's instructions if the target feels that the expert has personal gain motives.
Some examples include: (a) a violinist demonstrating through audition skill with music; (b) a professor submits school transcripts to demonstrate discipline expertise; (c) a bricklayer relies on 20+ years of experience to prove expertise. Some pitfalls can emerge when too heavy a reliance is made on expertise; these include: (a) sometimes inferences are made suggesting expertise is wider in scope than it actually is; for example, an expert in antique vases may have little expertise in antique lamps; (b) one's expertise is not everlasting; for example, a physician who fails to keep up with medical technology and advances may lose expertise; and (c) expertise does not necessarily carry with it common sense or ethical judgement.
Informational power
Many years after proposing the original five bases, Raven added a sixth: informational power, the ability of an agent of influence to bring about change through the resource of information. Raven reasoned that if power is potential influence, then information is logically a form of influence, from which the social power base of informational power was derived. Informational influence results in cognition and acceptance by the target of influence. Altered behavior initiated through information rather than through a specific change agent is called socially independent change. In order to establish informational power, an agent of influence would likely provide a baseline of information to a target of influence, laying the groundwork for effective future persuasion. A link between informational power, control, cooperation, and satisfaction has been hypothesized and tested in a lab study; the findings indicate that a channel member's control over another's strategy increases with its informational power source. According to Raven, informational power will be used more when the motive is a need for achievement, and its use can also be affected by an agent's self-esteem. Feldman summarizes informational power as the most transitory type of power: if one gives information away, the power is given away. This differs from other forms of power because it is grounded in what one knows about the content of a specific situation, whereas other forms of power are independent of the content.
Information power comes as a result of possessing knowledge that others need or want. In the age of information technology, information power is increasingly relevant, as an abundance of information is readily available. An agent of influence may conduct a cost-benefit analysis to determine whether informational power is the best strategy: informational influence or persuasion is generally favorable, but it may not be best suited when time or effort is lacking. Information that no one needs or wants is powerless. Information power extends to the ability to obtain information not presently held, as in the case of a librarian or database manager. Not all information is readily available; some information is closely controlled by few people. Examples of information that is sensitive or of limited accessibility: (a) national security data; (b) personnel information for government or business; (c) corporate trade secrets; (d) juvenile court records; (e) many privately settled lawsuit documents; (f) Swiss bank account owners; and (g) private phone conversations. Of course, legally obtained phone-tap warrants, spying, eavesdropping, and leaks by groups or group members can allow others not intended to be privy to information to obtain it. Possessing information is not, typically, the vital act; what one can do, does do, or potentially can do with the information is typically of vital importance. Information can be, and often is, used as a weapon, as in a divorce, a child custody case, a business dissolution, or civil suit discovery. Information has been used by some to extort action, utterance, agreement, or settlement from others.
Information power is a form of personal or collective power that is based on controlling information needed by others in order to reach an important goal. Our society now relies on information power, as knowledge drives influence, decision making, credibility, and control. Timely and relevant information delivered on demand can be the most influential way to acquire power. Information may be readily available through public records and research; however, some information is held to be privileged or confidential. The target of influence accepts, comprehends, and internalizes the change independently, without having to go back to the influencing agent.
Informational power is based on the potential to utilize information. Providing rational arguments, using information to persuade others, using facts and manipulating information can create a power base. How information is used – sharing it with others, limiting it to key people, keeping it secret from key people, organizing it, increasing it, or even falsifying it – can create a shift in power within a group.
Direct
Information presented by the influencing agent directly to the target of change.
Indirect
Information presented by the influencing agent indirectly to the target of change, without an overt attempt at influence, such as through hints or suggestions.
Socially independent change
Socially independent change occurs when altered behavior is initiated through information rather than through a specific change agent. Such change may be reflected in a target who continues the changed behavior without referring to, or even remembering, the supervisor or authority figure as an agent of change, because the target understands and accepts the reasoning behind the information received.
Accessibility
Raven acknowledged that leaders can attempt to influence subordinates through access to and control of information. Information power may be used in both personal and positional classifications and is among the most preferable power bases.
Tools/mechanisms
Informational power includes not only possessing information but also the ability to obtain relevant information in a timely way to amass a power base. Tools and technological mechanisms such as the internet, smartphones, and social media broaden society's access to information, but informational power as a base derives from determining the usefulness and appropriateness of that information.
Power as a function of leadership and leadership styles
Traditional power is the force exerted upon us to conform to traditional ways. Traditions, for the most part, are social constructs; they invite, seduce, or compel us to conform and act in predictable, patterned ways. Breaking with traditions puts people at risk of social alienation. Traditions can blunt rationality; they can block innovation; and they can appear silly to outsiders once a tradition's original rationale becomes outdated or forgotten.
The power of traditions, rather than being vested in particular individuals, is ordinarily focused on group conformity.
Charismatic power is an aura possessed by only a few individuals in our midst; it is characterized by supreme confidence, physical attractiveness, social adroitness, amiability, sharpened leadership skills, and heightened charm. Some charisma has dark and sinister overtones, such as that shown by Adolf Hitler, Jim Jones, Idi Amin, Osama bin Laden, David Koresh, and many confidence tricksters. Others demonstrate more positive displays of charisma, such as Jacqueline Kennedy, Charles de Gaulle, Diana, Princess of Wales, Michael Jordan, and Bruce Springsteen. Charisma has in many cases short-circuited rationality: people have been fooled or lulled into going along with what a charismatic individual requests or demands, rather than rationally considering it, as a result of the charismatic attraction. It must be remembered that power is effective only when the target of powerful actions agrees, implicitly or explicitly, to the relevant power dynamic; we are all technically able to resist the power of others. At times, however, we may feel powerless to resist, the social, political, personal, or emotional price to be paid may be too high, or we may fear failure in resisting.
Power tactics
Regardless of the basis of power in use, power-holders often use power tactics to influence others. Power tactics are different strategies used to influence others, typically to gain a particular advantage or objective. Power-holders commonly use six different power tactics.
The first is soft tactics, which utilize the relationship between the target and the influencer to bring about compliance. Individuals sometimes apply this method of influence more indirectly and interpersonally, through friendships, socialization, collaboration, and personal rewards.
The second is hard tactics, which rely on economic, tangible outcomes. These tactics are harsh, forcing, or direct, especially in comparison to soft tactics. Though hard tactics may seem more significant, they are not necessarily more powerful than soft tactics.
The third is rational tactics, which use reasoning, logic, and sound judgement, bargaining with and persuading the target being influenced.
The fourth is nonrational tactics, which rely on emotionality and misinformation; examples include ingratiation and evasion.
The fifth power tactic is bilateral tactics; these are based on an interactive approach involving a give-and-take process for both the influencer and the target receiving the influence. For instance, someone using bilateral tactics would likely open discussions with the person they are trying to influence and be more prone to negotiating with the target.
The last power tactic is unilateral tactics; these are the opposite of an interactive approach and instead can be done without the cooperation of the target, including making demands, disengagement, and evasion.
People will vary in their use of power tactics and use a mixture of the six. For instance, when asked, “How would you get your way?” different power-holders will respond with a range of power tactics. An interpersonally oriented individual who wishes to be liked will use more soft, indirect, and rational power tactics in leader roles. In contrast, someone who holds dictatorial power will use hard, direct, and nonrational tactics.
Personal and biological characteristics also influence the use of power tactics. For instance, an extrovert – an outgoing and overtly expressive individual – will use a more extensive range of tactics than an introvert – a shy or reticent individual. A difference in tactics also exists between males and females. In leadership roles, females tend to intervene less than their male counterparts and to use far fewer tactics. A study conducted by Instone, Major, and Bunker (1983) found that women supervising an inadequate employee would promise pay raises less regularly and threaten pay deductions more often than men in the same position. In intimate relationships, women tend to lean toward unilateral and indirect methods with their partners, whereas men use bilateral and direct tactics.
Situational factors can also play a role in the use of power tactics. Depending on the nature of the group situation, certain people will react differently in their leadership role; high-status members tend to use more conflict-driven tactics than low-status members, who aim to minimize any conflict. Different situations call for different tactics: a teacher will lean toward using soft tactics on their students, whereas a CEO may switch back and forth between soft and hard tactics depending on the situation. People may often vary in their power tactics and can use a range of tactics depending on the situation; power tactics are case-specific.
References
Political philosophy
Persuasion
Social concepts
Sociological theories
Power (social and political) theories
Digital humanities
Digital humanities (DH) is an area of scholarly activity at the intersection of computing or digital technologies and the disciplines of the humanities. It includes the systematic use of digital resources in the humanities, as well as the analysis of their application. DH can be defined as new ways of doing scholarship that involve collaborative, transdisciplinary, and computationally engaged research, teaching, and publishing. It brings digital tools and methods to the study of the humanities with the recognition that the printed word is no longer the main medium for knowledge production and distribution.
By producing and using new applications and techniques, DH makes new kinds of teaching possible, while at the same time studying and critiquing how these impact cultural heritage and digital culture. DH is also applied in research. Thus, a distinctive feature of DH is its cultivation of a two-way relationship between the humanities and the digital: the field both employs technology in the pursuit of humanities research and subjects technology to humanistic questioning and interrogation, often simultaneously.
Definition
The definition of the digital humanities is being continually formulated by scholars and practitioners. Since the field is constantly growing and changing, specific definitions can quickly become outdated or unnecessarily limit future potential. The second volume of Debates in the Digital Humanities (2016) acknowledges the difficulty in defining the field: "Along with the digital archives, quantitative analyses, and tool-building projects that once characterized the field, DH now encompasses a wide range of methods and practices: visualizations of large image sets, 3D modeling of historical artifacts, 'born digital' dissertations, hashtag activism and the analysis thereof, alternate reality games, mobile makerspaces, and more. In what has been called 'big tent' DH, it can at times be difficult to determine with any specificity what, precisely, digital humanities work entails."
Historically, the digital humanities developed out of humanities computing and has become associated with other fields, such as humanistic computing, social computing, and media studies. In concrete terms, the digital humanities embraces a variety of topics, from curating online collections of primary sources (primarily textual) to the data mining of large cultural data sets to topic modeling. Digital humanities incorporates both digitized (remediated) and born-digital materials and combines the methodologies from traditional humanities disciplines (such as rhetoric, history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) and social sciences, with tools provided by computing (such as hypertext, hypermedia, data visualisation, information retrieval, data mining, statistics, text mining, digital mapping), and digital publishing. Related subfields of digital humanities have emerged like software studies, platform studies, and critical code studies. Fields that parallel the digital humanities include new media studies and information science as well as media theory of composition, game studies, particularly in areas related to digital humanities project design and production, and cultural analytics. Each disciplinary field and each country has its own unique history of digital humanities.
Berry and Fagerjord have suggested that a way to reconceptualise digital humanities could be through a "digital humanities stack". They argue that "this type of diagram is common in computation and computer science to show how technologies are 'stacked' on top of each other in increasing levels of abstraction. Here, [they] use the method in a more illustrative and creative sense of showing the range of activities, practices, skills, technologies and structures that could be said to make up the digital humanities, with the aim of providing a high-level map." Indeed, the "diagram can be read as the bottom levels indicating some of the fundamental elements of the digital humanities stack, such as computational thinking and knowledge representation, and then other elements that later build on these."
In practical terms, a major distinction within digital humanities is the focus on the data being processed. For processing textual data, digital humanities builds on a long and extensive history of digital edition, computational linguistics, and natural language processing, and has developed an independent and highly specialized technology stack (largely culminating in the specifications of the Text Encoding Initiative). This part of the field is thus sometimes set apart from digital humanities in general as 'digital philology' or 'computational philology'. For the creation and analysis of digital editions of objects or artifacts, digital philologists have access to digital practices, methods, and technologies, such as optical character recognition, that provide opportunities to adapt the field to the digital age.
History
Digital humanities descends from the field of humanities computing, whose origins reach back to the 1940s and 1950s, in the pioneering work of the Jesuit scholar Roberto Busa, which began in 1946, and of the English professor Josephine Miles, beginning in the early 1950s. In collaboration with IBM, Busa and his team created a computer-generated concordance to Thomas Aquinas' writings known as the Index Thomisticus. Busa's works have been collected and translated by Julianne Nyhan and Marco Passarotti. Other scholars began using mainframe computers to automate tasks like word-searching, sorting, and counting, which was much faster than processing information from texts with handwritten or typed index cards. Similar early advances were made by Gerhard Sperl in Austria, who used Zuse computers for digital Assyriology. In the decades that followed, archaeologists, classicists, historians, literary scholars, and a broad array of humanities researchers in other disciplines applied emerging computational methods to transform humanities scholarship.
As Tara McPherson has pointed out, the digital humanities also inherit practices and perspectives developed through many artistic and theoretical engagements with electronic screen culture beginning in the late 1960s and 1970s. These range from research developed by organizations such as SIGGRAPH to creations by artists such as Charles and Ray Eames and the members of E.A.T. (Experiments in Art and Technology). The Eameses and E.A.T. explored nascent computer culture and intermediality in creative works that dovetailed technological innovation with art.
The first specialized journal in the digital humanities was Computers and the Humanities, which debuted in 1966. The Computer Applications and Quantitative Methods in Archaeology (CAA) association was founded in 1973. The Association for Literary and Linguistic Computing (ALLC) and the Association for Computers and the Humanities (ACH) were then founded in 1977 and 1978, respectively.
Soon, there was a need for a standardized protocol for tagging digital texts, and the Text Encoding Initiative (TEI) was developed. The TEI project was launched in 1987 and published the first full version of the TEI Guidelines in May 1994. TEI helped shape the field of electronic textual scholarship and led to Extensible Markup Language (XML), which is a tag scheme for digital editing. Researchers also began experimenting with databases and hypertextual editing, which are structured around links and nodes, as opposed to the standard linear convention of print. In the nineties, major digital text and image archives emerged at centers of humanities computing in the U.S. (e.g. the Women Writers Project, the Rossetti Archive, and The William Blake Archive), which demonstrated the sophistication and robustness of text-encoding for literature. The advent of personal computing and the World Wide Web meant that Digital Humanities work could become less centered on text and more on design. The multimedia nature of the internet has allowed Digital Humanities work to incorporate audio, video, and other components in addition to text.
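To make the idea of text encoding concrete, the following minimal sketch builds a tiny TEI-flavored document using only Python's standard library. The element names (teiHeader, fileDesc, titleStmt, lg, l) follow common TEI conventions, but the fragment is illustrative rather than a validated TEI document, and the sample title and verse lines are placeholders.

import xml.etree.ElementTree as ET

# Root element in the TEI namespace.
tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")

# Metadata about the encoded text lives in the header.
header = ET.SubElement(tei, "teiHeader")
file_desc = ET.SubElement(header, "fileDesc")
title_stmt = ET.SubElement(file_desc, "titleStmt")
ET.SubElement(title_stmt, "title").text = "Song of Myself (excerpt)"

# The transcribed text itself: a line group (lg) of verse lines (l).
body = ET.SubElement(ET.SubElement(tei, "text"), "body")
stanza = ET.SubElement(body, "lg", type="stanza")
for verse in ["I celebrate myself, and sing myself,",
              "And what I assume you shall assume,"]:
    ET.SubElement(stanza, "l").text = verse

ET.indent(tei)  # pretty-print (Python 3.9+)
print(ET.tostring(tei, encoding="unicode"))

Encoding a text this way makes its structure machine-readable, which is what allows the archives described above to support search, analysis, and display beyond what a flat transcription permits.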
The terminological change from "humanities computing" to "digital humanities" has been attributed to John Unsworth, Susan Schreibman, and Ray Siemens who, as editors of the anthology A Companion to Digital Humanities (2004), tried to prevent the field from being viewed as "mere digitization". Consequently, the hybrid term has created an overlap between fields like rhetoric and composition, which use "the methods of contemporary humanities in studying digital objects", and digital humanities, which uses "digital technology in studying traditional humanities objects". The use of computational systems and the study of computational media within the humanities, arts and social sciences more generally has been termed the 'computational turn'.
In 2006 the National Endowment for the Humanities (NEH) launched the Digital Humanities Initiative (renamed the Office of Digital Humanities in 2008), which spurred widespread adoption of the term "digital humanities" in the United States.
Digital humanities emerged from its former niche status and became "big news" at the 2009 MLA convention in Philadelphia, where digital humanists made "some of the liveliest and most visible contributions" and had their field hailed as "the first 'next big thing' in a long time."
Values and methods
Although digital humanities projects and initiatives are diverse, they often reflect common values and methods. These can help in understanding this hard-to-define field.
Values
Critical and theoretical
Iterative and experimental
Collaborative and distributed
Multimodal and performative
Open and accessible
Methods
Enhanced critical curation
Augmented editions and fluid textuality
Scale: the law of large numbers
Distant/close, macro/micro, surface/depth
Cultural analytics, aggregation, and data-mining
Visualization and data design
Locative investigation and thick mapping
The animated archive
Distributed knowledge production and performative access
Humanities gaming
Code, software, and platform studies
Database documentaries
Repurposable content and remix culture
Pervasive infrastructure
Ubiquitous scholarship
In keeping with the value of being open and accessible, many digital humanities projects and journals are open access and/or under Creative Commons licensing, showing the field's "commitment to open standards and open source." Open access is designed to enable anyone with an internet-enabled device and internet connection to view a website or read an article without having to pay, as well as share content with the appropriate permissions.
Digital humanities scholars use computational methods either to answer existing research questions or to challenge existing theoretical paradigms, generating new questions and pioneering new approaches. One goal is to systematically integrate computer technology into the activities of humanities scholars, as is done in contemporary empirical social sciences. Yet despite the significant trend in digital humanities towards networked and multimodal forms of knowledge, a substantial amount of digital humanities focuses on documents and text in ways that differentiate the field's work from digital research in media studies, information studies, communication studies, and sociology. Another goal of digital humanities is to create scholarship that transcends textual sources. This includes the integration of multimedia, metadata, and dynamic environments (see The Valley of the Shadow project at the University of Virginia, the Vectors Journal of Culture and Technology in a Dynamic Vernacular at the University of Southern California, or Digital Pioneers projects at Harvard). A growing number of researchers in digital humanities are using computational methods for the analysis of large cultural data sets such as the Google Books corpus. Examples of such projects were highlighted by the Humanities High Performance Computing competition sponsored by the Office of Digital Humanities in 2008, and by the Digging Into Data challenge organized in 2009 and 2011 by the NEH in collaboration with the NSF, in partnership with JISC in the UK and SSHRC in Canada. In addition to books, historical newspapers can also be analyzed with big data methods. The analysis of vast quantities of historical newspaper content has shown how periodic structures can be automatically discovered, and a similar analysis was performed on social media. As part of the big data revolution, gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents and historical documents written in literary Chinese.
Digital humanities is also involved in the creation of software, providing "environments and tools for producing, curating, and interacting with knowledge that is 'born digital' and lives in various digital contexts." In this context, the field is sometimes known as computational humanities.
Tools
Digital humanities scholars use a variety of digital tools for their research, which may take place in an environment as small as a mobile device or as large as a virtual reality lab. Environments for "creating, publishing and working with digital scholarship include everything from personal equipment to institutes and software to cyberspace." Some scholars use advanced programming languages and databases, while others use less complex tools, depending on their needs. DiRT (Digital Research Tools Directory) offers a registry of digital research tools for scholars. TAPoR (Text Analysis Portal for Research) is a gateway to text analysis and retrieval tools. An accessible, free example of an online textual analysis program is Voyant Tools, which only requires the user to copy and paste either a body of text or a URL and then click the 'reveal' button to run the program. There is also an online list of online or downloadable Digital Humanities tools that are largely free, aimed toward helping students and others who lack access to funding or institutional servers. Free, open source web publishing platforms like WordPress and Omeka are also popular tools.
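To give a concrete sense of what such tools automate, the sketch below computes the word frequencies that a tool like Voyant displays, using only the Python standard library. The sample text and the small stopword list are placeholders for illustration, not part of any real tool's interface.

import re
from collections import Counter

text = """Digital humanities brings digital tools and methods to the study
of the humanities, and subjects those tools to humanistic questioning."""

STOPWORDS = {"the", "and", "of", "to", "a", "those"}

# Tokenize on runs of letters, lowercase everything, drop stopwords.
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(w for w in words if w not in STOPWORDS)

# Report the most frequent content words, as a frequency list or word cloud would.
for word, n in counts.most_common(5):
    print(f"{word}: {n}")

Real tools layer visualization, corpus comparison, and linguistic processing on top of this basic counting, but the underlying operation is the same.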
Projects
Digital humanities projects are more likely than traditional humanities work to involve a team or a lab, which may be composed of faculty, staff, graduate or undergraduate students, information technology specialists, and partners in galleries, libraries, archives, and museums. Credit and authorship are often given to multiple people to reflect this collaborative nature, which is different from the sole authorship model in the traditional humanities (and more like the natural sciences).
There are thousands of digital humanities projects, ranging from small-scale ones with limited or no funding to large-scale ones with multi-year financial support. Some are continually updated while others may not be due to loss of support or interest, though they may still remain online in either a beta version or a finished form. The following are a few examples of the variety of projects in the field:
Digital archives
The Women Writers Project (begun in 1988) is a long-term research project to make pre-Victorian women writers more accessible through an electronic collection of rare texts. The Walt Whitman Archive (begun in the 1990s) sought to create a hypertext and scholarly edition of Whitman's works and now includes photographs, sounds, and the only comprehensive current bibliography of Whitman criticism. The Emily Dickinson Archive (begun in 2013) is a collection of high-resolution images of Dickinson's poetry manuscripts as well as a searchable lexicon of over 9,000 words that appear in the poems. The Slave Societies Digital Archive (formerly Ecclesiastical and Secular Sources for Slave Societies), directed by Jane Landers and hosted at Vanderbilt University, preserves endangered ecclesiastical and secular documents related to Africans and African-descended peoples in slave societies. This digital archive currently holds 500,000 unique images, dating from the 16th to the 20th centuries, and documents the history of between 6 and 8 million individuals. They are the most extensive serial records for the history of Africans in the Atlantic World and also include valuable information on the indigenous, European, and Asian populations who lived alongside them. Another example of a digital humanities project focused on the Americas is at the National Autonomous University of Mexico, which has digitized 17th-century manuscripts, built an electronic corpus of Mexican history from the 16th to the 19th century, and produced 3-D visualizations of pre-Hispanic archaeological sites. A rare example of a digital humanities project focused on the cultural heritage of Africa is the Princeton Ethiopian, Eritrean, and Egyptian Miracles of Mary project, which documents African medieval stories, paintings, and manuscripts about the Virgin Mary from the 1300s into the 1900s.
The involvement of librarians and archivists plays an important part in digital humanities projects because of the recent expansion of their role to cover digital curation, which is critical to the preservation, promotion, and accessibility of digital collections, as well as to the application of a scholarly orientation to digital humanities projects. A specific example involves initiatives in which archivists help scholars and academics build their projects through their experience in evaluating, implementing, and customizing metadata schemas for library collections.
Cultural analytics
"Cultural analytics" refers to the use of computational method for exploration and analysis of large visual collections and also contemporary digital media. The concept was developed in 2005 by Lev Manovich who then established the Cultural Analytics Lab in 2007 at Qualcomm Institute at California Institute for Telecommunication and Information (Calit2). The lab has been using methods from the field of computer science called Computer Vision many types of both historical and contemporary visual media—for example, all covers of Time magazine published between 1923 and 2009, 20,000 historical art photographs from the collection in Museum of Modern Art (MoMA) in New York, one million pages from Manga books, and 16 million images shared on Instagram in 17 global cities. Cultural analytics also includes using methods from media design and data visualization to create interactive visual interfaces for exploration of large visual collections e.g., Selfiecity and On Broadway.
Cultural analytics research is also addressing a number of theoretical questions. How can we "observe" giant cultural universes of both user-generated and professional media content created today, without reducing them to averages, outliers, or pre-existing categories? How can work with large cultural data help us question our stereotypes and assumptions about cultures? What new theoretical cultural concepts and models are required for studying global digital culture with its new mega-scale, speed, and connectivity?
The term "cultural analytics" (or "culture analytics") is now used by many other researchers, as exemplified by two academic symposiums, a four-month long research program at UCLA that brought together 120 leading researchers from university and industry labs, an academic peer-review Journal of Cultural Analytics: CA established in 2016, and academic job listings.
Textual mining, analysis, and visualization
WordHoard (begun in 2004) is a free application that enables scholarly but non-technical users to read and analyze, in new ways, deeply-tagged texts, including the canon of Early Greek epic, Chaucer, Shakespeare, and Spenser. The Republic of Letters (begun in 2008) seeks to visualize the social network of Enlightenment writers through an interactive map and visualization tools. Network analysis and data visualization are also used for reflection on the field itself – researchers may produce network maps of social media interactions or infographics from data on digital humanities scholars and projects, as sketched below.
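As a rough illustration of this kind of network analysis, the sketch below builds a small correspondence network in the spirit of The Republic of Letters and ranks writers by degree centrality. It uses the third-party networkx library; the correspondents and letter counts are invented for illustration, not drawn from the project's data.

import networkx as nx

G = nx.Graph()
# Each edge is a correspondence; weight = number of letters (hypothetical).
G.add_edge("Voltaire", "Catherine the Great", weight=57)
G.add_edge("Voltaire", "Benjamin Franklin", weight=2)
G.add_edge("Benjamin Franklin", "David Hume", weight=9)
G.add_edge("David Hume", "Adam Smith", weight=31)

# Degree centrality suggests who sits at the hub of the network.
for writer, score in sorted(nx.degree_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{writer}: {score:.2f}")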
The Document in Context of its Time (DICT) analysis style and its online demo tool let users interactively check whether the vocabulary used by the author of an input text was frequent at the time the text was created, whether the author used anachronisms or neologisms, and which terms in the text underwent considerable semantic change.
Analysis of macroscopic trends in cultural change
Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts. Researchers data mine large digital archives to investigate cultural phenomena reflected in language and word usage. The term is an American neologism first described in a 2010 Science article called Quantitative Analysis of Culture Using Millions of Digitized Books, co-authored by Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden.
A 2017 study published in the Proceedings of the National Academy of Sciences of the United States of America compared the trajectories of n-grams over time in the digitised books of the 2010 Science article with those found in a large corpus of regional newspapers from the United Kingdom spanning 150 years. The study then used more advanced natural language processing techniques to discover macroscopic trends in history and culture, including gender bias, geographical focus, technology, and politics, along with accurate dates for specific events.
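A minimal sketch of the underlying idea, tracking one n-gram's relative frequency across time slices, is given below in Python. The three-document corpus is a toy placeholder; culturomics studies run the same computation over millions of digitized books or newspaper pages.

from collections import Counter

corpus = {
    1900: "the steam engine drove the mills and the steam engine drove trade",
    1950: "the computer age began as the engine of industry changed",
    2000: "the internet reshaped trade and the computer reshaped daily life",
}

def ngram_frequency(text, ngram):
    """Relative frequency of `ngram` among all n-grams of the same length."""
    tokens = text.lower().split()
    n = len(ngram)
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)[ngram] / len(grams) if grams else 0.0

# Trajectory of the bigram "steam engine" over time.
for year in sorted(corpus):
    print(f"{year}: {ngram_frequency(corpus[year], ('steam', 'engine')):.3f}")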
Digital humanities methods may also be applied alongside non-humanities subject areas such as the pure sciences, agriculture, and management to produce practical solutions to problems in industry and society.
Online publishing
The Stanford Encyclopedia of Philosophy (begun in 1995) is a dynamic reference work of terms, concepts, and people from philosophy maintained by scholars in the field. MLA Commons offers an open peer-review site (where anyone can comment) for their ongoing curated collection of teaching artifacts in Digital Pedagogy in the Humanities: Concepts, Models, and Experiments (2016). The Debates in the Digital Humanities platform contains volumes of the open-access book of the same title (2012 and 2016 editions) and allows readers to interact with material by marking sentences as interesting or adding terms to a crowdsourced index.
Wikimedia projects
Some research institutions work with the Wikimedia Foundation or volunteers of the community, for example, to make freely licensed media files available via Wikimedia Commons or to link or load data sets with Wikidata. Text analysis has been performed on the contribution history of articles on Wikipedia or its sister projects.
DH-OER
The South African Centre for Digital Language Resources (SADiLaR) was set up at a time when a global definition of Open Educational Resources (OER) was being drafted and accepted by UNESCO. SADiLaR saw this as an opportunity to stimulate activism and research around the use and creation of OERs for digital humanities. It initiated and launched the Digital Humanities OER (DH-OER) project to raise consciousness about the cost of materials, foster the adoption of open principles and practices, and support the growth of open educational resources and digital humanities in South African higher education institutions. DH-OER began with 26 projects and an introduction to openness in April 2022. It concluded in November 2023, when 16 projects showcased their efforts at a public event.
Criticism
In 2012, Matthew K. Gold identified a range of perceived criticisms of the field of digital humanities: "a lack of attention to issues of race, class, gender, and sexuality; a preference for research-driven projects over pedagogical ones; an absence of political commitment; an inadequate level of diversity among its practitioners; an inability to address texts under copyright; and an institutional concentration in well-funded research universities". Similarly, Berry and Fagerjord have argued that a digital humanities should "focus on the need to think critically about the implications of computational imaginaries, and raise some questions in this regard. This is also to foreground the importance of the politics and norms that are embedded in digital technology, algorithms and software. We need to explore how to negotiate between close and distant readings of texts and how micro-analysis and macro-analysis can be usefully reconciled in humanist work." Alan Liu has argued, "while digital humanists develop tools, data, and metadata critically, therefore (e.g., debating the 'ordered hierarchy of content objects' principle; disputing whether computation is best used for truth finding or, as Lisa Samuels and Jerome McGann put it, 'deformance'; and so on) rarely do they extend their critique to the full register of society, economics, politics, or culture." Some of these concerns have given rise to the emergent subfield of Critical Digital Humanities (CDH). Key questions include: how do we make the invisible become visible in the study of software? How is knowledge transformed when mediated through code and software? What are the critical approaches to Big Data, visualization, digital methods, etc.? How does computation create new disciplinary boundaries and gate-keeping functions? What are the new hegemonic representations of the digital – 'geons', 'pixels', 'waves', visualization, visual rhetorics, etc.? How do media changes create epistemic changes, and how can we look behind the 'screen essentialism' of computational interfaces? Here we might also reflect on the way in which the practice of making-visible also entails the making-invisible – computation involves making choices about what is to be captured.
Negative publicity
Lauren F. Klein and Gold note that many appearances of the digital humanities in public media are often critical in tone. Armand Leroi, writing in The New York Times, discusses the contrast between the algorithmic analysis of themes in literary texts and the work of Harold Bloom, who qualitatively and phenomenologically analyzes the themes of literature over time. Leroi questions whether the digital humanities can provide a truly robust analysis of literature and social phenomena or offer a novel alternative perspective on them. The literary theorist Stanley Fish claims that the digital humanities pursue a revolutionary agenda and thereby undermine the conventional standards of "pre-eminence, authority and disciplinary power". However, digital humanities scholars note that "Digital Humanities is an extension of traditional knowledge skills and methods, not a replacement for them. Its distinctive contributions do not obliterate the insights of the past, but add and supplement the humanities' long-standing commitment to scholarly interpretation, informed research, structured argument, and dialogue within communities of practice".
Some have hailed the digital humanities as a solution to the apparent problems within the humanities, namely a decline in funding, a repetition of debates, and a fading set of theoretical claims and methodological arguments. Adam Kirsch, writing in the New Republic, calls this the "False Promise" of the digital humanities. While the rest of the humanities and many social science departments have seen declining funding or prestige, the digital humanities has seen increasing funding and prestige. Burdened with the problems of novelty, the digital humanities is discussed either as a revolutionary alternative to the humanities as usually conceived or as simply new wine in old bottles. Kirsch believes that digital humanities practitioners suffer from being marketers rather than scholars, attesting to the grand capacity of their research more than actually performing new analysis and, when they do, performing only trivial parlor tricks of research. This form of criticism has been repeated by others, such as Carl Straumsheim, writing in Inside Higher Education, who calls it a "Digital Humanities Bubble". Later in the same publication, Straumsheim alleges that the digital humanities is a 'Corporatist Restructuring' of the Humanities. Some see the alliance of the digital humanities with business as a positive turn that causes the business world to pay more attention, thus bringing needed funding and attention to the humanities. If it were not burdened by the title of digital humanities, it could escape the allegations that it is elitist and unfairly funded.
Black box
There has also been critique of the use of digital humanities tools by scholars who do not fully understand what happens to the data they input and place too much trust in the "black box" of software that cannot be sufficiently examined for errors. Johanna Drucker, a professor at UCLA Department of Information Studies, has criticized the "epistemological fallacies" prevalent in popular visualization tools and technologies (such as Google's n-gram graph) used by digital humanities scholars and the general public, calling some network diagramming and topic modeling tools "just too crude for humanistic work." The lack of transparency in these programs obscures the subjective nature of the data and its processing, she argues, as these programs "generate standard diagrams based on conventional algorithms for screen display ... mak[ing] it very difficult for the semantics of the data processing to be made evident."
Diversity
There has also been some recent controversy among practitioners of digital humanities around the role that race and/or identity politics plays. Tara McPherson attributes some of the lack of racial diversity in digital humanities to the modality of UNIX and computers themselves. An open thread on DHpoco.org recently garnered well over 100 comments on the issue of race in digital humanities, with scholars arguing about the extent to which racial (and other) biases affect the tools and texts available for digital humanities research. McPherson posits that there needs to be an understanding and theorizing of the implications of digital technology and race, even when the subject for analysis appears not to be about race.
Amy E. Earhart criticizes what has become the new digital humanities "canon" in the shift from websites using simple HTML to the usage of the TEI and visuals in textual recovery projects. Works that have been previously lost or excluded were afforded a new home on the internet, but much of the same marginalizing practices found in traditional humanities also took place digitally. According to Earhart, there is a "need to examine the canon that we, as digital humanists, are constructing, a canon that skews toward traditional texts and excludes crucial work by women, people of color, and the LGBTQ community."
Issues of access
Practitioners in digital humanities are also failing to meet the needs of users with disabilities. George H. Williams argues that universal design is imperative for practitioners to increase usability because "many of the otherwise most valuable digital resources are useless for people who are—for example—deaf or hard of hearing, as well as for people who are blind, have low vision, or have difficulty distinguishing particular colors." In order to provide accessibility successfully, and productive universal design, it is important to understand why and how users with disabilities are using the digital resources while remembering that all users approach their informational needs differently.
Cultural criticism
Digital humanities have been criticized for not only ignoring traditional questions of lineage and history in the humanities, but lacking the fundamental cultural criticism that defines the humanities. However, it remains to be seen whether or not the humanities have to be tied to cultural criticism, per se, in order to be the humanities. The sciences might imagine the Digital Humanities as a welcome improvement over the non-quantitative methods of the humanities and social sciences.
Difficulty of evaluation
As the field matures, there has been a recognition that the standard model of academic peer review may not be adequate for digital humanities projects, which often involve website components, databases, and other non-print objects. Evaluation of quality and impact thus requires a combination of old and new methods of peer review. One response has been the creation of the DHCommons Journal, which accepts non-traditional submissions, especially mid-stage digital projects, and provides an innovative model of peer review more suited to the multimedia, transdisciplinary, and milestone-driven nature of digital humanities projects. Other professional humanities organizations, such as the American Historical Association and the Modern Language Association, have developed guidelines for evaluating academic digital scholarship.
Lack of focus on pedagogy
The 2012 edition of Debates in the Digital Humanities recognized the fact that pedagogy was the "neglected 'stepchild' of DH" and included an entire section on teaching the digital humanities. Part of the reason is that grants in the humanities are geared more toward research with quantifiable results rather than teaching innovations, which are harder to measure. In recognition of a need for more scholarship on the area of teaching, the edited volume Digital Humanities Pedagogy was published and offered case studies and strategies to address how to teach digital humanities methods in various disciplines.
See also
Cyborg anthropology
Digital anthropology
References
External links
Debates in the Digital Humanities book series
Digital Humanities Quarterly
Intro to Digital Humanities by UCLA Center for Digital Humanities
CUNY Digital Humanities Resource Guide by CUNY Digital Humanities Initiative
DH Toychest: Guides and Introductions curated by DH scholar Alan Liu
How did they make that? by DH scholar Miriam Posner
Liberal education
A liberal education is a system or course of education suitable for the cultivation of a free human being. It is based on the medieval concept of the liberal arts or, more commonly now, the liberalism of the Age of Enlightenment. It has been described as "a philosophy of education that empowers individuals with broad knowledge and transferable skills, and a stronger sense of values, ethics, and civic engagement ... characterized by challenging encounters with important issues, and more a way of studying than a specific course or field of study" by the Association of American Colleges and Universities. Usually global and pluralistic in scope, it can include a general education curriculum which provides broad exposure to multiple disciplines and learning strategies in addition to in-depth study in at least one academic area.
Liberal education was advocated in the 19th century by thinkers such as John Henry Newman, Thomas Huxley, and F. D. Maurice. The decline of liberal education is often attributed to mobilization during the Second World War. The premium and emphasis placed upon mathematics, science, and technical training caused a shift away from a liberal concept of higher education studies; however, it became central to much undergraduate education in the United States in the mid-20th century, being conspicuous in the movement for general education.
Definition
Wilfred Griffin Eady, the Principal of the Working Men's College from 1949 to 1955, defined the liberal education his institution sought to provide as "something you can enjoy for its own sake, something which is a personal possession and an inward enrichment, and something which teaches a sense of values".
The American Association for the Advancement of Science describes a liberal education in this way: "Ideally, a liberal education produces persons who are open-minded and free from provincialism, dogma, preconception, and ideology; conscious of their opinions and judgments; reflective of their actions; and aware of their place in the social and natural worlds." Liberally educated people are skeptical of their own traditions; they are trained to think for themselves rather than conform to higher authorities.
It also cultivates "active citizenship" through off-campus community service, internships, research, and study abroad. Some faculty see this movement towards "civic engagement" as more pedagogically powerful than traditional classroom teaching, but opponents argue that the education occurring within an academic institution must be purely intellectual and scholarly.
A liberal education combines an education in the classics, literature, the humanities, moral virtues, and others. The term liberal education in the modern sense should not be confused with liberal arts education; the latter deals with academic subjects, while the former deals with ideological subjects. Indeed, a liberal arts education does not necessarily include a liberal education, and a liberal arts program may even be as specialized as a vocational program. For practical purposes, liberal education is not actually differentiated from liberal arts education today, except by scholars.
Unlike a professional or vocational education that prepares students for their careers, a liberal education prepares students to make good use of their leisure time. Such an education helps the individual navigate internal and external conflicts in life. For example, a liberal education aims to help students be self-conscious and aware of their actions and motivations. Individuals also become more considerate of other beliefs and cultures. According to James Engel, the author of The Value of a Liberal Arts Education, a liberal education provides the framework for an educated and thoughtful citizen.
History
Definitions of a liberal education may be broad, generalized, and sometimes even contradictory. "It is at once the most enduring and changeable of academic traditions." Axelrod, Anisef, and Lin suggest that conceptions of liberal education are rooted in the teaching methods of Ancient Greece, a slave-owning community divided between slaves and freemen. The freemen, mostly concerned about their rights and obligations as citizens, received a non-specialized, non-vocational, liberal arts education that produced well-rounded citizens aware of their place in society. At the same time, Socrates emphasized the importance of individualism, impressing upon his students the duty of man to form his own opinions through reason rather than indoctrination. Athenian education also provided a balance between developing the mind and the body. Another possibility is that liberal education dates back to the Zhou dynasty, where the teachings of Confucianism focused on propriety, morality, and social order. Hoerner also suggests that Jesus was a liberal educator, as "he was talking of a free man capable of thinking for himself and of being a responsible citizen," but liberal education is still commonly traced back to the Greeks.
While liberal education was stifled during the barbarism of the Early Middle Ages, it rose to prominence once again in the eleventh and twelfth centuries, especially with the re-emergence of Aristotelian philosophy. The thirteenth and fourteenth centuries saw a revolt against narrow spirituality and educators started to focus on the human, rather than God. This humanist approach favored reason, nature and aesthetics.
Study of the classics and humanities slowly returned in the fourteenth century, which led to increased study of both Ancient Greek and Latin. In the fifteenth and sixteenth centuries, liberal education focused mostly on the classics. Commoners, however, were less keen on studying the classics and instead took up vernacular languages and literature, as well as the sciences. Until at least the twentieth century, both humanist and classicist influences remained in liberal education, and proponents of a progressive education also embraced the humanist philosophy. Study of the classics continued in the form of the Great Books program, which Robert Maynard Hutchins brought to the University of Chicago. Upon Hutchins' resignation, the university abandoned the program, but an adapted version still exists at Shimer College.
While liberal education is a Western movement, it has been influential in other regions as well. For example, in Japan during the general liberalism of the Taishō period, a liberal education movement saw the establishment of a number of schools based on liberal education in the 1920s (see the Taishō free education movement, 大正自由教育運動).
Relationship with professional education
Liberal education and professional education have often been seen as divergent. German universities moved towards more professional teaching in the nineteenth century, and unlike American students, who still pursued a liberal education, students elsewhere started to take professional courses in the first or second year of study. In the early twentieth century, American liberal arts colleges still required students to pursue a common curriculum, whereas public universities allowed a student to move on to more pragmatic courses after having taken general education courses for the first two years of study. As an emphasis on specialized knowledge grew in the middle of the century, colleges began to adjust the proportion of required general education courses to those required for a particular major.
As University of Chicago professor Martha Nussbaum points out, standardized testing has placed more emphasis on honing technical knowledge, and its quantitative, multiple-choice nature prompts rote learning in the classroom. At the same time, humanistic concepts such as imagination and critical thinking, which cannot be tested by such methods, are disappearing from college curricula.
Thirty percent of college graduates in the United States may eventually work in jobs that do not exist yet. Proponents of a liberal education therefore argue that a postsecondary education must prepare students for an increasingly complex labor market. Rather than provide narrowly designed technical courses, a liberal education would foster critical thinking and analytical skills that allow the student to adapt to a rapidly changing workforce. The movement towards career-oriented courses within a liberal education has begun at places like Dartmouth College, where a journalism course combines lessons on writing style with reading and analyzing historical journalism. An American survey of CEOs published in 1997 revealed that employers were more focused on the long-term outcomes of education, such as adaptability, than college students and their parents, who were more concerned with the short-term outcomes of getting a job.
Provision
As of 2009, it was said that only eight percent of colleges in the United States provide a liberal education, reaching four percent of students. Liberal education revived three times in the United States during periods of industrialization and shifting social preoccupations—before World War I, after World War II, and in the late 1970s—perhaps as a reaction against overspecialization in undergraduate curricula.
Currently, pressures from employers, parents, and governments have defined the type of education offered at educational institutions, and such trends have curtailed the role of liberal education in America. Universities increasingly provide education for the sole purpose of preparing students for the workforce. This has undermined the credibility of liberal education and changed how students view higher education: the focus on specific disciplinary practices separates it from the original ideology of liberal education as "...a philosophy of education that empowers individuals with broad knowledge and transferable skills, and a stronger sense of values, ethics, and civic engagement ..." Politicians have also influenced the type of education provided at universities, cutting funds and thereby applying immense pressure on higher educational institutions. Lack of funds has caused many to abandon liberal arts curricula and instead provide curricula geared toward vocational education. Without the funds to maintain a balanced system, American universities have been left providing an education with little emphasis on liberal values.
The decline of liberal education can also be traced to liberal arts colleges. Students increasingly view higher education as preparation for careers, which has led to a kind of natural selection among colleges. The idea of an education that enhances the individual for the purpose of improving society does not meet current demands. As a result, liberal arts colleges are diminishing, along with the emphasis on providing a liberal education.
Chinese universities began to implement liberal curricula between the 1920s and 1940s, but shifted to specialized education upon the establishment of the People's Republic of China in 1949. Higher education reform in the 1990s returned to liberal education. In 2000 Peking University started to offer a liberal education curriculum to its undergraduate students, followed by other institutions throughout the country. In Hong Kong, The Chinese University of Hong Kong has operated a collegiate system since its establishment in the 1960s and has since been known for its emphasis on general education in greater China.
Some universities in India have started offering a liberal arts education. Ahmedabad University is one such young university, offering students a liberal education focused on research and interdisciplinary learning.
See also
Liberal arts education
References
Works cited
Further reading
Hughes, Thomas. "What is a Liberal Education?". The American Catholic Quarterly Review, Vol. X, January/October 1885.
External links
Philosophy of education
Liberal arts education
Pedagogical movements and theories
Herbartianism
Herbartianism (Her-bart-ti-an-ism) is an educational philosophy, movement, and method loosely based on the educational and pedagogical thought of German educator Johann Friedrich Herbart, and influential on American school pedagogy of the late 19th century as the field worked towards a science of education. Herbart advocated for instruction that introduced new ideas in discrete steps. About a quarter-century after his death, Herbart's ideas were expanded in two German schools of thought that were later embodied in the method used at a practice school in Jena, which attracted educationists from the United States. Herbartianism was later replaced by new pedagogies, such as those of John Dewey.
Description
Herbartianism was used most often in adolescent instruction and was greatly influential on American school pedagogy in the 19th century. Herbart believed in maintaining the integrity of a student's individuality for as long as possible during the education process, as well as an emphasis on moral training. The goal of Herbartianism was to aid students in their learning process, guiding them from no knowledge to complete knowledge.
Herbart's pedagogical method was divided into discrete steps: preparation, presentation, association, generalization, and application. In preparation, teachers introduce new material in relation to the students' existing knowledge or interests, so as to instill an interest in the new material. In presentation, the new material is shown in a concrete or material fashion. In association, the new material is compared with the students' previous knowledge for similarities and differences, so as to note the new material's distinction. In generalization, the new material is extrapolated beyond concrete and material traits. In application, if the students have internalized the new material, they apply it towards every facet of their lives rather than in a utilitarian manner. Through this process, students would achieve complete knowledge of the curriculum being taught. Herbartianism provided terminology for didactic theory and helped improve teachers' professionalism.
Rise and influence
While the term Herbartianism derives transparently from Herbart's name, the movement was only loosely connected to his own ideas and was not an organized practice until 25 years after his death in 1841. Herbartianism was developed from Herbart's philosophy, and divided into two schools of thought. In the first, Tuiskon Ziller of Leipzig expanded on Herbart's philosophy of "unification of studies", especially around a single discipline (called "correlation" and "concentration", respectively). In the second, Karl Stoy of Jena opened a practice school in the style of Herbart's Königsberg school. A student of Ziller and Stoy, Wilhelm Rein, later led the Jena school and designed a German elementary school curriculum that the school used. This school became "the center of Herbartian theory and practice and attracted students of pedagogy from outside Germany, including the United States".
Between the 1890s and the early twentieth century, Herbartianism was influential in normal schools and universities as they worked towards a science of education. In particular, Illinois State University, then known as Illinois State Normal University, was one of the central hubs of the Herbartian movement in the United States. In 1893, prominent adherents at Illinois State Normal University founded the Normal Pedagogical Club, which consisted of students and faculty. In 1895, they helped found the National Herbart Society "to study and investigate and discuss important problems of education". Among those prominent in the society were Charles De Garmo (its first president), Charles Alexander McMurry, and Frank Morton McMurry, who all wrote on methods in education and had studied Herbartianism in Germany or were otherwise introduced to its varieties. Charles De Garmo had studied under and worked with Karl Stoy and Otto Frick in Jena, while Charles and Frank McMurry had worked with Ziller and Rein, whom many considered radical in their approach to Herbartianism. These three were considered the main transporters of Herbartian pedagogy from Germany to the United States, where they blended the views and teachings of Otto Frick, Karl Stoy, Wilhelm Rein and Tuiskon Ziller to create an American branch of the educational pedagogy. The society also acknowledged works influenced by Herbartianism, such as two works by John Dewey, within a yearbook. The society removed Herbart from its name in 1902 and later became the National Society for the Study of Education.
Decline
Newer pedagogical theories, such as those of John Dewey, eventually replaced Herbartianism. Though Herbartian work is unpopular in the 21st century, its greatest influence was in the 19th century's "development of the science of education". In the 1996 Philosophy of Education: An Encyclopedia, J. J. Chambliss wrote that Herbartianism's influence shows wherever "thinking, moral judgment, and conduct" are considered simultaneously.
Notes and references
References
Further reading
Philosophy of education
Pedagogical movements and theories
History of education in the United States
History of education in Germany
Johann Friedrich Herbart
Constructivism (philosophy of science)
Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to constructivists, natural science consists of mental constructs that aim to explain sensory experiences and measurements, and there is no single valid methodology in science but rather a diversity of useful methods. They also hold that the world is independent of human minds, but knowledge of the world is always a human and social construction. Constructivism opposes the philosophy of objectivism, rejecting the belief that human beings can come to know the truth about the natural world unmediated by scientific approximations with different degrees of validity and accuracy.
Constructivism and sciences
Social constructivism in sociology
One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized.
Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life.
Constructivism in philosophy of science
Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals.
The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models.
These models evolve over time as guided by scientific inspiration and experiments.
In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this that they become more open to considering other ways of interpreting any results of the research. Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof.
Constructivism and psychology
In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality."
In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding".
The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences.
Constructivism and education
Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change.
Kincheloe's characteristics of critical constructivism:
Knowledge is socially constructed: World and information co-construct one another
Consciousness is a social construction
Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness
The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability
The importance of uniting logic and emotion in the process of knowledge and producing knowledge
The inseparability of the knower and the known
The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements
The existence of multiple realities: Making sense of a world far more complex than we originally imagined
Becoming humble knowledge workers: Understanding our location in the tangled web of reality
Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge
Constructing practical knowledge for critical social action
Complexity: Overcoming reductionism
Knowledge is always entrenched in a larger process
The centrality of interpretation: Critical hermeneutics
The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information
Constructing new ways of being human: Critical ontology
Constructivist approaches
Critical constructivism
A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept.
Cultural constructivism
Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies.
Genetic epistemology
James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva.
Radical constructivism
Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but "circularly conjoined".
Radical constructivism is closely related to second-order cybernetics.
Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.
Relational constructivism
Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads. It maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world.
In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptional processes, a position Björn Kraus has summarized succinctly.
Social Constructivism
Criticisms
Numerous criticisms have been levelled at constructivism. The most common is that it either explicitly advocates or implicitly reduces to relativism.
Another criticism of constructivism is that it holds that the concepts of two different social formations are entirely different and incommensurable. This being the case, it is impossible to make comparative judgments about statements made according to each worldview, because the criteria of judgment will themselves have to be based on some worldview or other. If this is the case, it brings into question how communication between them about the truth or falsity of any given statement could be established.
The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them.
See also
Autopoiesis
Consensus reality
Constructivism in international relations
Cultural pluralism
Epistemological pluralism
Tinkerbell effect
Map–territory relation
Meaning making
Metacognition
Ontological pluralism
Personal construct psychology
Perspectivism
Pragmatism
References
Further reading
Devitt, M. 1997. Realism and Truth, Princeton University Press.
Gillett, E. 1998. "Relativism and the Social-constructivist Paradigm", Philosophy, Psychiatry, & Psychology, Vol.5, No.1, pp. 37–48
Ernst von Glasersfeld 1987. The construction of knowledge, Contributions to conceptual semantics.
Ernst von Glasersfeld 1995. Radical constructivism: A way of knowing and learning.
Joe L. Kincheloe 2001. Getting beyond the Facts: Teaching Social Studies/Social Science in the Twenty-First Century, NY: Peter Lang.
Joe L. Kincheloe 2005. Critical Constructivism Primer, NY: Peter Lang.
Joe L. Kincheloe 2008. Knowledge and Critical Pedagogy, Dordrecht, The Netherlands: Springer.
Kitching, G. 2008. The Trouble with Theory: The Educational Costs of Postmodernism, Penn State University Press.
Björn Kraus 2014: Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal. Retrieved 3 April 2019.(http://www.socwork.net/sws/article/view/393)
Björn Kraus 2015: The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018.(http://www.socwork.net/sws/article/view/438).
Björn Kraus 2019: Relational constructivism and relational social work. In: Webb, Stephen A. (ed.) The Routledge Handbook of Critical Social Work. Routledge International Handbooks. London and New York: Taylor & Francis Ltd.
Friedrich Kratochwil: Constructivism: what it is (not) and how it matters, in Donatella della Porta & Michael Keating (eds.) 2008, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 80–98.
Mariyani-Squire, E. 1999. "Social Constructivism: A flawed Debate over Conceptual Foundations", Capitalism, Nature, Socialism, vol.10, no.4, pp. 97–125
Matthews, M.R. (ed.) 1998. Constructivism in Science Education: A Philosophical Examination, Kluwer Academic Publishers.
Edgar Morin 1986, La Méthode, Tome 3, La Connaissance de la connaissance.
Nola, R. 1997. "Constructivism in Science and in Science Education: A Philosophical Critique", Science & Education, Vol.6, no.1-2, pp. 55–83.
Jean Piaget (ed.) 1967. Logique et connaissance scientifique, Encyclopédie de la Pléiade, vol. 22. Editions Gallimard.
Herbert A. Simon 1969. The Sciences of the Artificial (3rd Edition MIT Press 1996).
Slezak, P. 2000. "A Critique of Radical Social Constructivism", in D.C. Philips, (ed.) 2000, Constructivism in Education: Opinions and Second Opinions on Controversial Issues, The University of Chicago Press.
Suchting, W.A. 1992. "Constructivism Deconstructed", Science & Education, vol.1, no.3, pp. 223–254
Paul Watzlawick 1984. The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism), W W. Norton.
Tom Rockmore 2008. On Constructivist Epistemology.
Romm, N.R.A. 2001. Accountability in Social Research, Dordrecht, The Netherlands: Springer. https://www.springer.com/social+sciences/book/978-0-306-46564-2
External links
Journal of Constructivist Psychology
Radical Constructivism
Constructivist Foundations
Epistemological theories
Epistemology of science
Metatheory of science
Philosophical analogies
Social constructionism
Social epistemology
Systems theory
Theories of truth
Constructivism
Civic education in the United States
Rationale
The promotion of a republic and its values has been an important concern for policy-makers – to impact people's political perceptions, to encourage political participation, and to foster the principles enshrined in the Constitution (e.g. liberty, freedom of speech, civil rights). The subject of "Civics" has been integrated into the Curriculum and Content Standards, to enhance the comprehension of democratic values in the educational system. Civic literature has found that "engaging young children in civic activities from an early age is a positive predictor of their participation in later civic life".
As an academic subject, Civics has the instructional objective of promoting knowledge that is aligned with self-governance and participation in matters of public concern. These objectives advocate for instruction that encourages active student participation in democratic decision-making environments, such as voting to elect a course representative for a school government, or deciding on actions that will affect the school environment or community. The intersection of individual and collective decision-making activities is thus critical in shaping an "individual's moral development". To reach those goals, civic instructors must promote the adoption of skills and attitudes such as "respectful argumentation, debate, information literacy", to support "the development of morally responsible individuals who will shape a morally responsible and civically minded society". In the 21st century, young people are less interested in direct political participation (i.e. being in a political party or even voting), but are motivated to use digital media (e.g. Twitter, Facebook). Digital media enable young people to share and exchange ideas rapidly, enabling the coordination of local communities that promote volunteerism and political activism, principally on topics related to human rights and the environment.
Young people are constructing and supporting their political identities in the 21st century by using social media and digital tools (e.g. text messaging, hashtags, videos) to share, post, and reply to opinions and attitudes about political and social topics, and to promote social mobilization and support through online mechanisms to a wide and diverse audience. Therefore, the end-goal of civics in the 21st century must be oriented to "empower the learners to find issues in their immediate communities that seem important to the people with whom they live and associate", once "learners have identified with a personal issue and participated in constructing a collective framing for common issues".
Current state
According to the No Child Left Behind Act of 2001, one of the purposes of Civic Education is to "foster civic competence and responsibility" which is promoted through the Center for Civic Education’s We the People and Project Citizen initiatives. However, there is a lack of consensus for how this mission should be pursued. The Center for Information & Research on Civic Learning & Engagement (CIRCLE) reviewed state civic education requirements in the United States for 2012. The findings include:
All 50 states have social studies standards which include civics and government.
39 states require at least one course in government/civics.
21 states require a state-mandated social studies test, which is a decrease from 2001 (34 states).
8 states require students to take a state-mandated government/civics test.
9 states require a social studies test as a requirement for high school graduation.
The lack of state-mandated student accountability relating to civics may be a result of a shift in emphasis towards reading and mathematics in response to the 2001 No Child Left Behind Act. There is a movement to require that states utilize the citizenship test as a graduation requirement, but this is seen as a controversial solution by some educators.
Students are also demonstrating that their civic knowledge leaves much to be desired. A National Center for Education Statistics NAEP report card for civics (2010) stated that "levels of civic knowledge in U.S. have remained unchanged or even declined over the past century". Specifically, only 24 percent of 4th, 8th, and 12th graders were at or above the proficient level on the National Assessment of Educational Progress in civics. Traditionally, civic education has emphasized the facts of government processes detached from participatory experience. In an effort to combat the existing approach, the National Council for the Social Studies developed the College, Career, and Civic Life (C3) Framework for Social Studies State Standards. The C3 Framework emphasizes "new and active approaches" including the "discussion of controversial issues and current events, deliberation of public issues, service-learning, action civics, participation in simulation and role play, and the use of digital technologies".
In the 21st century
According to a 2007 study conducted by the Pew Research Center, among teens 12–17 years old, 95% have access to the Internet, 70% go online daily, 80% use social networking sites, and 77% have cell phones. As a result, participatory culture has become a staple for today’s youth, affecting their conceptualization of civic participation. They use Web 2.0 tools (i.e. blogs, podcasts, wikis, social media) to circulate information (blogs and podcasts), collaborate with peers (wikis), produce and exchange media, and connect with people around the world via social media and online communities. The pervasiveness of participatory digital tools has led to a shift in the way adolescents today perceive civic action and participation. Whereas 20th-century civic education embraced the belief of "dutiful citizenship" and civic engagement as a "matter of duty or obligation", 21st-century civic education has shifted to reflect youths' "personally expressive politics" and "peer-to-peer relationships" that promote civic engagement.
This shift in students' perceptions has led to classroom civic education experiences that reflect the digital world in which 21st century youth now live, in order to make the content both relevant and meaningful. Civics education classrooms in the 21st century now seek to provide genuine opportunities to actively engage in the consumption, circulation, discussion, and production of civic and political content via Web 2.0 technologies such as blogging, wikis, and social media. Although these tools offer new ways for engagement, interaction, and dialogue, educators have also recognized the need to teach youth how to interact both respectfully and productively with their peers and members of online communities. As a result, many school districts have also begun adopting Media Literacy Frameworks for Engaged Citizenship as a pedagogical approach to prepare students for active participatory citizenship in today’s digital age. This model includes critical analysis of digital media as well as a deep understanding of media literacy as a "collaborative and participatory movement that aims to empower individuals to have a voice and to use it."
See also
Citizenship education (subject)
Civics
Legal education in the United States
Notes
References
Further reading
Active citizenship
Education in the United States
Anachronism
An anachronism (from the Greek ana, 'against', and khronos, 'time') is a chronological inconsistency in some arrangement, especially a juxtaposition of people, events, objects, language terms and customs from different time periods. The most common type of anachronism is an object misplaced in time, but it may be a verbal expression, a technology, a philosophical idea, a musical style, a material, a plant or animal, a custom, or anything else associated with a particular period that is placed outside its proper temporal domain.
An anachronism may be either intentional or unintentional. Intentional anachronisms may be introduced into a literary or artistic work to help a contemporary audience engage more readily with a historical period. Anachronism can also be used intentionally for purposes of rhetoric, propaganda, comedy, or shock. Unintentional anachronisms may occur when a writer, artist, or performer is unaware of differences in technology, terminology and language, customs and attitudes, or even fashions between different historical periods and eras.
Types
The metachronism–prochronism contrast is nearly synonymous with the parachronism–anachronism contrast; the first term of each pair involves postdating and the second predating.
Parachronism
A parachronism (from the Greek para, "on the side", and khronos, "time") postdates. It is anything that appears in a time period in which it is not normally found (though not sufficiently out of place as to be impossible).
This may be an object, idiomatic expression, technology, philosophical idea, musical style, material, custom, or anything else so closely bound to a particular time period as to seem strange when encountered in a later era. They may be objects or ideas that were once common but are now considered rare or inappropriate. They can take the form of obsolete technology or outdated fashion or idioms.
Prochronism
A prochronism (from the Greek pro, "before", and khronos, "time") predates. It is an impossible anachronism which occurs when an object or idea has not yet been invented when the situation takes place, and therefore could not have possibly existed at the time. A prochronism may be an object not yet developed, a verbal expression that had not yet been coined, a philosophy not yet formulated, a breed of animal not yet evolved or bred, or use of a technology that had not yet been created.
Metachronism
A metachronism (from the Greek meta, "after", and khronos, "time") postdates. It is the use of older cultural artifacts in modern settings which may seem inappropriate. For example, it could be considered metachronistic for a modern-day person to be depicted wearing a top hat or writing with a quill.
Politically motivated anachronism
Works of art and literature promoting a political, nationalist or revolutionary cause may use anachronism to depict an institution or custom as being more ancient than it actually is, or otherwise intentionally blur the distinctions between past and present. For example, the 19th-century Romanian painter Constantin Lecca depicts the peace agreement between Ioan Bogdan Voievod and Radu Voievod—two leaders in Romania's 16th-century history—with the flags of Moldavia (blue-red) and of Wallachia (yellow-blue) seen in the background. These flags date only from the 1830s: anachronism promotes legitimacy for the unification of Moldavia and Wallachia into the Kingdom of Romania at the time the painting was made. The Russian artist Vasily Vereshchagin, in his painting Suppression of the Indian Revolt by the English, depicts the aftermath of the Indian Rebellion of 1857, when mutineers were executed by being blown from guns. In order to make the argument that the method of execution would again be utilized by the British if another rebellion broke out in India, Vereshchagin depicted the British soldiers conducting the executions in late 19th-century uniforms.
Art and literature
Anachronism is used especially in works of imagination that rest on a historical basis. Anachronisms may be introduced in many ways: for example, in the disregard of the different modes of life and thought that characterize different periods, or in ignorance of the progress of the arts and sciences and other facts of history. They vary from glaring inconsistencies to scarcely perceptible misrepresentation. Anachronisms may be the unintentional result of ignorance, or may be a deliberate aesthetic choice.
Sir Walter Scott justified the use of anachronism in historical literature: "It is necessary, for exciting interest of any kind, that the subject assumed should be, as it were, translated into the manners as well as the language of the age we live in." However, as fashions, conventions and technologies move on, such attempts to use anachronisms to engage an audience may have quite the reverse effect, as the details in question are increasingly recognized as belonging neither to the historical era being represented, nor to the present, but to the intervening period in which the artwork was created. "Nothing becomes obsolete like a period vision of an older period", writes Anthony Grafton; "Hearing a mother in a historical movie of the 1940s call out 'Ludwig! Ludwig van Beethoven! Come in and practice your piano now!' we are jerked from our suspension of disbelief by what was intended as a means of reinforcing it, and plunged directly into the American bourgeois world of the filmmaker."
It is only since the beginning of the 19th century that anachronistic deviations from historical reality have jarred on a general audience, as C. S. Lewis observed.
Anachronisms abound in the works of Raphael and Shakespeare, as well as in those of less celebrated painters and playwrights of earlier times. Carol Meyers says that anachronisms in ancient texts can be used to better understand the stories by asking what the anachronism represents. Repeated anachronisms and historical errors can become an accepted part of popular culture, such as the belief that Roman legionaries wore leather armor.
Comical anachronism
Comedy fiction set in the past may use anachronism for humorous effect. Comedic anachronism can be used to make serious points about both historical and modern society, such as drawing parallels to political or social conventions.
Future anachronism
Even with careful research, science fiction writers risk anachronism as their works age because they cannot predict all political, social, and technological change.
For example, many books, television shows, radio productions and films nominally set in the mid-21st century or later refer to the Soviet Union, to Saint Petersburg in Russia as Leningrad, to the continuing struggle between the Eastern and Western Blocs and to divided Germany and divided Berlin. Star Trek has suffered from future anachronisms; instead of "retconning" these errors, the 2009 film retained them for consistency with older franchises.
Buildings or natural features, such as the World Trade Center in New York City, can become out of place once they disappear, with some works having been edited to remove the World Trade Center to avoid this situation.
Futuristic technology may appear alongside technology which would be obsolete by the time in which the story is set. For example, in the stories of Robert A. Heinlein, interplanetary space travel coexists with calculation using slide rules.
Language anachronism
Language anachronisms in novels and films are quite common, both intentional and unintentional. Intentional anachronisms inform the audience more readily about a film set in the past. In this regard, language and pronunciation change so fast that most modern people (even many scholars) would find it difficult, or even impossible, to understand a film with dialogue in 15th-century English; thus, audiences willingly accept characters speaking an updated language, and modern slang and figures of speech are often used in these films.
Unconscious anachronism
Unintentional anachronisms may occur even in what are intended as wholly objective and accurate records or representations of historic artifacts and artworks, because the perspectives of historical recorders are conditioned by the assumptions and practices of their own times, in a form of cultural bias. One example is the attribution of historically inaccurate beards to various medieval tomb effigies and figures in stained glass in records made by English antiquaries of the late 16th and early 17th centuries. Working in an age in which beards were in fashion and widespread, the antiquaries seem to have unconsciously projected the fashion back into an era in which they were rare.
In academia
In historical writing, the most common type of anachronism is the adoption of the political, social or cultural concerns and assumptions of one era to interpret or evaluate the events and actions of another. The anachronistic application of present-day perspectives to comment on the historical past is sometimes described as presentism. Empiricist historians, working in the traditions established by Leopold von Ranke in the 19th century, regard this as a great error, and a trap to be avoided. Arthur Marwick has argued that "a grasp of the fact that past societies are very different from our own, and ... very difficult to get to know" is an essential and fundamental skill of the professional historian; and that "anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history".
Detection of forgery
The ability to identify anachronisms may be employed as a critical and forensic tool to demonstrate the fraudulence of a document or artifact purporting to be from an earlier time. Anthony Grafton discusses, for example, the work of the 3rd-century philosopher Porphyry, of Isaac Casaubon (1559–1614), and of Richard Reitzenstein (1861–1931), all of whom succeeded in exposing literary forgeries and plagiarisms, such as those included in the "Hermetic Corpus", through – among other techniques – the recognition of anachronisms. The detection of anachronisms is an important element within the scholarly discipline of diplomatics, the critical analysis of the forms and language of documents, developed by the Maurist scholar Jean Mabillon (1632–1707) and his successors René-Prosper Tassin (1697–1777) and Charles-François Toustain (1700–1754). The philosopher and reformer Jeremy Bentham made a similar argument at the beginning of the 19th century.
Examples include:
The exposure by Lorenzo Valla in 1440 of the so-called Donation of Constantine, a decree purportedly issued by the Emperor Constantine the Great in either 315 or 317 AD, as a later forgery, depended to a considerable degree on the identification of anachronisms, such as references to the city of Constantinople (a name not in fact bestowed until 330 AD).
A large number of apparent anachronisms in the Book of Mormon have served to convince critics that the book was written in the 19th century, and not, as its adherents claim, in pre-Columbian America.
The use of 19th- and 20th-century anti-semitic terminology demonstrates that the purported "Franklin Prophecy" (attributed to Benjamin Franklin, who died in 1790) is a forgery.
The "William Lynch speech", an address, supposedly delivered in 1712, on the control of slaves in Virginia, is now considered to be a 20th-century forgery, partly on account of its use of anachronistic terms such as "program" and "refueling".
See also
Anachronisms in the Book of Mormon
Anatopism
Evolutionary anachronism
Invented traditions
List of stories set in a future now past
Retrofuturism
Skeuomorph
Society for Creative Anachronism
Steampunk
Tiffany Problem
Whig history
References
Bibliography
External links
Calgary–Cambridge model
The Calgary–Cambridge model (Calgary–Cambridge guide) is a method for structuring medical interviews. It gives the interview a clear structure: initiating the session, gathering information, physical examination, explaining results and planning, and closing the session. It is popular in medical education in many countries.
Method
The Calgary–Cambridge model involves:
initiating a session: This involves preparation by the clinician, building rapport with the patient, and an understanding of why the interview is needed.
gathering information: This may be split into a focus on a biomedical perspective, the patient's experience, and contextual information about the patient. Contextual information may include personal history, social history, and other medical history.
a physical examination of a patient: This varies based on the purpose of the interview.
explaining results and planning: This aims to ensure a shared understanding, and allowing for shared decision-making.
closing a session: This may involve discussing further plans.
This is designed to give a clear structure to the interview, and to help to build the relationship between the clinician and the patient. The importance of nonverbal communication is noted.
The model is based on 71 skills and techniques that improve patient interviews. These include maintaining eye contact, active listening (not interrupting, giving verbal cues), summarizing information frequently, asking about patient ideas and beliefs, and showing empathy.
Advantages
The Calgary–Cambridge model was developed from evidence about patient interviews and what made them successful. It is generally focused on the patient and their experience. The guide of skills and techniques is generally seen as comprehensive.
Disadvantages
The Calgary–Cambridge model has been criticized for creating a separation between the process of interviewing a patient and the information gained. The 71 skills are very difficult to incorporate simultaneously, making it more difficult to learn for clinicians than other techniques.
History
The Calgary–Cambridge model is named after Calgary, Canada, and Cambridge, United Kingdom where the three authors worked. It is popular in medical education in many countries. It has also been adapted for veterinarians. Other models, such as the Global Consultation Rating Scale, have been based on the Calgary–Cambridge model.
References
External links
Book chapter summarising the model and the 71 skills
Practice of medicine
Theory of medicine
Interviews
Medical education
Heuristic
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or rationalized, but is nevertheless "good enough" as an approximation or attribute substitution. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
Context
Gigerenzer & Gaissmaier (2011) state that subsets of strategy include heuristics, regression analysis, and Bayesian inference.
Heuristics are strategies based on rules that generate decisions which are good enough rather than strictly optimal, as in the anchoring effect and the utility maximization problem. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. When an individual applies a heuristic in practice, it generally performs as expected. However, it can alternatively create systematic errors.
The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems. In mathematics, some common heuristics involve the use of visual representations, additional assumptions, forward/backward reasoning and simplification.
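To make this concrete, here is a minimal sketch in Python of trial and error applied to a small algebra problem; the equation, the candidate range, and the function names are invented for illustration rather than drawn from any source. The point is only that the search stops at the first answer that is good enough, without proving it optimal or unique.

```python
# Minimal sketch of trial and error as a heuristic (illustrative only).
def solve_by_trial(candidates, satisfies):
    """Try candidates in order and stop at the first one that works."""
    for x in candidates:
        if satisfies(x):
            return x   # "good enough": first workable answer, not necessarily unique
    return None        # no candidate satisfied the condition

# Example: find an integer x with x**2 + x == 30 by simply trying values.
print(solve_by_trial(range(0, 11), lambda x: x * x + x == 30))  # prints 5
```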
Dual process theory concerns embodied heuristics.
In psychology, heuristics are simple, efficient rules, either learned or inculcated by evolutionary processes. These psychological heuristics have been proposed to explain how people make decisions, come to judgements, and solve problems. These rules typically come into play when people face complex problems or incomplete information. Researchers employ various methods to test whether people use these rules. The rules have been shown to work well under most circumstances, but in certain cases can lead to systematic errors or cognitive biases.
Heuristic rigour models
Lakatosian heuristics are based on the key concept of justification (epistemology).
One-reason decisions
One-reason decisions are algorithms composed of three rules: search rules, stopping (confirmation) rules, and decision rules.
Hiatus heuristic: a "recency-of-last-purchase rule"
Take-the-first heuristic
Recognition-based decisions
A class whose function is to identify and filter out superfluous information.
Tracking heuristics
Tracking heuristics are a class of heuristics.
Trade-off
Tallying heuristic
Equality heuristic
Social heuristics
Epistemic heuristics
Behavioral economics
Others
Minimalist heuristic
Meta-heuristic
Optimality
History
George Polya studied and published on heuristics in 1945. Polya (1945) cites Pappus of Alexandria as having written a text that Polya dubs Heuristic. Pappus' heuristic problem-solving methods consist of analysis and synthesis.
Notable
Figures
George Polya
Herbert A. Simon
Daniel Kahneman
Amos Tversky
Gerd Gigerenzer
Judea Pearl
Robin Dunbar
David Perkins Page
Herbert Spencer
Charles Alexander McMurry
Frank Morton McMurry
Lawrence Zalcman
Imre Lakatos
William C. Wimsatt
Alan Hodgkin
Andrew Huxley
Works
Meno
How to Solve It
Mathematics and Plausible Reasoning
Contemporary
The study of heuristics in human decision-making was developed in the 1970s and the 1980s by the psychologists Amos Tversky and Daniel Kahneman, although the concept had been originally introduced by the Nobel laureate Herbert A. Simon. Simon's primary object of research was problem solving, and he showed that we operate within what he called bounded rationality. He coined the term satisficing, which denotes a situation in which people seek solutions, or accept choices or judgements, that are "good enough" for their purposes although they could be optimised.
Rudolf Groner analysed the history of heuristics from its roots in ancient Greece up to contemporary work in cognitive psychology and artificial intelligence, proposing a cognitive style "heuristic versus algorithmic thinking", which can be assessed by means of a validated questionnaire.
Adaptive toolbox
The adaptive toolbox contains strategies for fabricating heuristic devices. The core mental capacities are recall (memory), frequency, object permanence, and imitation. Gerd Gigerenzer and his research group argued that models of heuristics need to be formal to allow for predictions of behavior that can be tested. They study the fast and frugal heuristics in the "adaptive toolbox" of individuals or institutions, and the ecological rationality of these heuristics; that is, the conditions under which a given heuristic is likely to be successful. The descriptive study of the "adaptive toolbox" is done by observation and experiment, while the prescriptive study of ecological rationality requires mathematical analysis and computer simulation. Heuristics – such as the recognition heuristic, the take-the-best heuristic and fast-and-frugal trees – have been shown to be effective in predictions, particularly in situations of uncertainty. It is often said that heuristics trade accuracy for effort, but this is only the case in situations of risk. Risk refers to situations where all possible actions, their outcomes and probabilities are known. In the absence of this information, that is under uncertainty, heuristics can achieve higher accuracy with lower effort. This finding, known as a less-is-more effect, would not have been found without formal models. The valuable insight of this program is that heuristics are effective not despite their simplicity, but because of it. Furthermore, Gigerenzer and Wolfgang Gaissmaier found that both individuals and organisations rely on heuristics in an adaptive way.
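As an illustration of how such heuristics can be stated formally, the sketch below implements the take-the-best heuristic in Python under simplifying assumptions: cues are binary and already ordered by validity, and the cue names and city data are invented for the example rather than taken from the research described above.

```python
# Minimal sketch of the take-the-best heuristic. Cues are searched in order of
# validity; the first cue that discriminates between the options decides.
def take_the_best(obj_a, obj_b, cues_by_validity):
    for cue in cues_by_validity:           # search rule: try the most valid cue first
        a, b = obj_a[cue], obj_b[cue]
        if a != b:                         # stopping rule: stop at the first discriminating cue
            return 'A' if a > b else 'B'   # decision rule: choose the object the cue favours
    return 'guess'                         # no cue discriminates, so guess

# Which of two (invented) cities is larger? Binary cues, ordered by validity.
city_a = {'capital': 0, 'airport': 1, 'university': 1}
city_b = {'capital': 1, 'airport': 1, 'university': 0}
print(take_the_best(city_a, city_b, ['capital', 'airport', 'university']))  # -> 'B'
```

Note that only one cue is ever consulted here, which is what makes the heuristic "fast and frugal": it ignores the remaining information instead of weighing it.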
Cognitive-experiential self-theory
Heuristics, through greater refinement and research, have begun to be applied to other theories, or be explained by them. For example, the cognitive-experiential self-theory (CEST) is also an adaptive view of heuristic processing. CEST breaks down two systems that process information. At some times, roughly speaking, individuals consider issues rationally, systematically, logically, deliberately, effortfully, and verbally. On other occasions, individuals consider issues intuitively, effortlessly, globally, and emotionally. From this perspective, heuristics are part of a larger experiential processing system that is often adaptive, but vulnerable to error in situations that require logical analysis.
Attribute substitution
In 2002, Daniel Kahneman and Shane Frederick proposed that cognitive heuristics work by a process called attribute substitution, which happens without conscious awareness. According to this theory, when somebody makes a judgement (of a "target attribute") that is computationally complex, a more easily calculated "heuristic attribute" is substituted. In effect, a cognitively difficult problem is dealt with by answering a rather simpler problem, without being aware of this happening. This theory explains cases where judgements fail to show regression toward the mean. Heuristics can be considered to reduce the complexity of clinical judgments in health care.
Academic disciplines
Psychology
A heuristic is stored in memory. Heuristics are inherently phenomenological, e.g., I and Thou.
Philosophy
A heuristic device is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y.
A good example is a model that, as it is never identical with what it models, is a heuristic device to enable understanding of what it models. Stories, metaphors, etc., can also be termed heuristic in this sense. A classic example is the notion of utopia as described in Plato's best-known work, The Republic. This means that the "ideal city" as depicted in The Republic is not given as something to be pursued, or to present an orientation-point for development. Rather, it shows how things would have to be connected, and how one thing would lead to another (often with highly problematic results), if one opted for certain principles and carried them through rigorously.
Heuristic is also often used as a noun to describe a rule of thumb, procedure, or method. Philosophers of science have emphasised the importance of heuristics in creative thought and the construction of scientific theories. Seminal works include Karl Popper's The Logic of Scientific Discovery and others by Imre Lakatos, Lindley Darden, and William C. Wimsatt.
Law
In legal theory, especially in the theory of law and economics, heuristics are used in the law when case-by-case analysis would be impractical, insofar as "practicality" is defined by the interests of a governing body.
The present securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects. For instance, in all states in the United States the legal drinking age for unsupervised persons is 21 years, because it is argued that people need to be mature enough to make decisions involving the risks of alcohol consumption. However, assuming people mature at different rates, the specific age of 21 would be too late for some and too early for others. In this case, the somewhat arbitrary delineation is used because it is impossible or impractical to tell whether an individual is sufficiently mature for society to trust them with that kind of responsibility. Some proposed changes, however, have included the completion of an alcohol education course rather than the attainment of 21 years of age as the criterion for legal alcohol possession. This would put youth alcohol policy more on a case-by-case basis and less on a heuristic one, since the completion of such a course would presumably be voluntary and not uniform across the population.
The same reasoning applies to patent law. Patents are justified on the grounds that inventors must be protected so they have incentive to invent. It is therefore argued that it is in society's best interest that inventors receive a temporary government-granted monopoly on their idea, so that they can recoup investment costs and make economic profit for a limited period. In the United States, the length of this temporary monopoly is 20 years from the date the patent application was filed, though the monopoly does not actually begin until the application has matured into a patent. However, like the drinking age problem above, the specific length of time would need to be different for every product to be efficient. A 20-year term is used because it is difficult to tell what the number should be for any individual patent. More recently, some, including University of North Dakota law professor Eric E. Johnson, have argued that patents in different kinds of industries – such as software patents – should be protected for different lengths of time.
Artificial intelligence
The bias–variance tradeoff gives insight into the less-is-more strategy. A heuristic can be used in artificial intelligence systems while searching a solution space. The heuristic is derived by using some function that is put into the system by the designer, or by adjusting the weight of branches based on how likely each branch is to lead to a goal node.
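A minimal sketch of this idea in Python is given below, using A* search on a small grid with the Manhattan distance as the designer-supplied heuristic function; the grid, walls, start, and goal are invented for illustration.

```python
# Minimal sketch of heuristic search: A* on a 4-connected grid, guided by the
# Manhattan-distance heuristic (an admissible estimate of remaining cost here).
import heapq

def manhattan(a, b):
    # Heuristic function supplied by the designer: estimated cost from a to b.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal, walls, width, height):
    # Frontier entries: (cost so far + heuristic estimate, cost so far, node, path).
    frontier = [(manhattan(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)   # expand the most promising node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                heapq.heappush(frontier, (cost + 1 + manhattan(nxt, goal),
                                          cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable

# Route around a wall from the top-left to the bottom-right of a 4x4 grid.
print(a_star((0, 0), (3, 3), walls={(1, 0), (1, 1), (1, 2)}, width=4, height=4))
```

The heuristic steers the search toward branches that look likely to reach the goal, trading exhaustive exploration for speed, which is exactly the weighting of branches described above.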
Behavioural economics
Heuristics refers to the cognitive shortcuts that individuals use to simplify decision-making processes in economic situations. Behavioral economics is a field that integrates insights from psychology and economics to better understand how people make decisions.
Anchoring and adjustment is one of the most extensively researched heuristics in behavioural economics. Anchoring is the tendency of people to make future judgements or conclusions based too heavily on the original information supplied to them. This initial knowledge functions as an anchor, and it can influence future judgements even if the anchor is entirely unrelated to the decisions at hand. Adjustment, on the other hand, is the process through which individuals make gradual changes to their initial judgements or conclusions.
Anchoring and adjustment has been observed in a wide range of decision-making contexts, including financial decision-making, consumer behavior, and negotiation. Researchers have identified a number of strategies that can be used to mitigate the effects of anchoring and adjustment, including providing multiple anchors, encouraging individuals to generate alternative anchors, and providing cognitive prompts to encourage more deliberative decision-making.
Other heuristics studied in behavioral economics include the representativeness heuristic, which refers to the tendency of individuals to categorize objects or events based on how similar they are to typical examples, and the availability heuristic, which refers to the tendency of individuals to judge the likelihood of an event based on how easily it comes to mind.
Stereotyping
Stereotyping is a type of heuristic that people use to form opinions or make judgements about things they have never seen or experienced. Stereotypes work as a mental shortcut to assess everything from the social status of a person (based on their actions) to the classification of a plant as a tree because it is tall, has a trunk, and has leaves (even though the person making the evaluation might never have seen that particular type of tree before).
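The plant example can be written out as a tiny rule; the Python sketch below is illustrative only, with feature names and the example plant invented for the purpose. The point is that a few salient features substitute for a full botanical analysis.

```python
# Minimal sketch of stereotype-style classification: a quick rule built from a
# few salient features stands in for a full analysis of the unfamiliar object.
def looks_like_a_tree(plant):
    return plant['tall'] and plant['has_trunk'] and plant['has_leaves']

unknown_plant = {'tall': True, 'has_trunk': True, 'has_leaves': True}
print(looks_like_a_tree(unknown_plant))  # True: classified as a tree at a glance
```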
Stereotypes, as first described by journalist Walter Lippmann in his book Public Opinion (1922), are the pictures we have in our heads that are built around experiences as well as what we are told about the world.
See also
References
Further reading
How To Solve It: Modern Heuristics, Zbigniew Michalewicz and David B. Fogel, Springer Verlag, 2000.
The Problem of Thinking Too Much, 11 December 2002, Persi Diaconis
Adages
Biological rules
Ecogeographic rules
Heuristics
Problem solving methods
Rules of thumb
Postcolonialism
Postcolonialism (also post-colonial theory) is the critical academic study of the cultural, political and economic consequences of colonialism and imperialism, focusing on the impact of human control and exploitation of colonized people and their lands. The field started to emerge in the 1960s, as scholars from previously colonized countries began publishing on the lingering effects of colonialism, developing a critical theory analysis of the history, culture, literature, and discourse of (usually European) imperial power.
Postcolonial, as in the postcolonial condition, is to be understood, as Mahmood Mamdani puts it, as a reversal of colonialism but not as superseding it.
Purpose and basic concepts
As an epistemology (i.e., a study of knowledge, its nature, and verifiability), ethics (moral philosophy), and as a political science (i.e., in its concern with affairs of the citizenry), the field of postcolonialism addresses the matters that constitute the postcolonial identity of a decolonized people, which derives from:
the colonizer's generation of cultural knowledge about the colonized people; and
how that cultural knowledge was applied to subjugate a geographically or culturally distinct people into a colony of the colonizing empire, which, after initial invasion, was effected by means of the cultural identities of 'colonizer' and 'colonized'.
Postcolonialism is aimed at disempowering such theories (intellectual and linguistic, social and economic) by means of which colonialists "perceive," "understand," and "know" the world. Postcolonial theory thus establishes intellectual spaces for subaltern peoples to speak for themselves, in their own voices, and thus produce cultural discourses of philosophy, language, society, and economy, balancing the imbalanced us-and-them binary power-relationship between the colonist and the colonial subjects.
Approaches
Understanding the complex chain of political, social, economic, and cultural impacts left in the aftermath of colonial control is essential to understanding post-colonialism. A wide range of experiences is included in post-colonial discourse, from ongoing battles against colonialism and globalization to struggles for independence. Formerly colonized peoples continue to face the long-lasting effects of colonialism, such as identity issues, structural injustices, and the elimination of indigenous knowledge and customs.
Postcolonialism encompasses a wide variety of approaches, and theoreticians may not always agree on a common set of definitions. On a simple level, through anthropological study, it may seek to build a better understanding of colonial life—based on the assumption that the colonial rulers are unreliable narrators—from the point of view of the colonized people. On a deeper level, postcolonialism examines the social and political power relationships that sustain colonialism and neocolonialism, including the social, political and cultural narratives surrounding the colonizer and the colonized. This approach may overlap with studies of contemporary history, and may also draw examples from anthropology, historiography, political science, philosophy, sociology, and human geography. Sub-disciplines of postcolonial studies examine the effects of colonial rule on the practice of feminism, anarchism, literature, and Christian thought.
At times, the term postcolonial studies may be preferred to postcolonialism, as the ambiguous term colonialism could refer either to a system of government, or to an ideology or world view underlying that system. However, postcolonialism (i.e., postcolonial studies) generally represents an ideological response to colonialist thought, rather than simply describing a system that comes after colonialism, as the prefix post- may suggest. As such, postcolonialism may be thought of as a reaction to or departure from colonialism in the same way postmodernism is a reaction to modernism; the term postcolonialism itself is modeled on postmodernism, with which it shares certain concepts and methods.
The ongoing struggles against colonialism and globalization clearly reflect the continuing fights for independence around the world. The harsh effects of colonial rule and the homogenizing effects of globalization have given rise to movements in recent years. The opposition to colonialism and globalization represents a complex battle for liberty and independence, ranging from community organizations calling for economic sovereignty and self-determination to indigenous people defending their land and culture against corporate exploitation. These initiatives, which cross continents rather than stay within a specific area, demonstrate the interdependence of movements and the shared pursuit of justice and emancipation.
Colonialist discourse
Colonialism was presented as "the extension of civilization," which ideologically justified the self-ascribed racial and cultural superiority of the Western world over the non-Western world. This concept was espoused by Ernest Renan in La Réforme intellectuelle et morale (1871), whereby imperial stewardship was thought to effect the intellectual and moral reformation of the coloured peoples of the lesser cultures of the world. Such a divinely established, natural harmony among the human races of the world was thought possible because everyone had an assigned cultural identity, a social place, and an economic role within an imperial colony. Thus:
From the mid- to the late-nineteenth century, such racialist group-identity language was the cultural common-currency justifying geopolitical competition amongst the European and American empires and meant to protect their over-extended economies. Especially in the colonization of the Far East and in the late-nineteenth century Scramble for Africa, the representation of a homogeneous European identity justified colonization. Hence, Belgium and Britain, and France and Germany proffered theories of national superiority that justified colonialism as delivering the light of civilization to unenlightened peoples. Notably, la mission civilisatrice, the self-ascribed 'civilizing mission' of the French Empire, proposed that some races and cultures have a higher purpose in life, whereby the more powerful, more developed, and more civilized races have the right to colonize other peoples, in service to the noble idea of "civilization" and its economic benefits.
Postcolonial identity
Postcolonial theory holds that decolonized people develop a postcolonial identity that is based on cultural interactions between different identities (cultural, national, and ethnic as well as gender and class based) which are assigned varying degrees of social power by the colonial society. In postcolonial literature, the anti-conquest narrative analyzes the identity politics that are the social and cultural perspectives of the subaltern colonial subjects—their creative resistance to the culture of the colonizer; how such cultural resistance complicated the establishment of a colonial society; how the colonizers developed their postcolonial identity; and how neocolonialism actively employs the 'us-and-them' binary social relation to view the non-Western world as inhabited by 'the other'.
As an example, consider how neocolonial discourse of geopolitical homogeneity often includes the relegating of decolonized peoples, their cultures, and their countries, to an imaginary place, such as "the Third World." Oftentimes the term "the Third World" is over-inclusive: it refers vaguely to large geographic areas comprising several continents and seas, i.e. Africa, Asia, Latin America, and Oceania. Rather than providing a clear or complete description of the area it supposedly refers to, it instead erases distinctions and identities of the groups it claims to represent. A postcolonial critique of this term would analyze the self-justifying usage of such a term, the discourse it occurs within, as well as the philosophical and political functions the language may have. Postcolonial critiques of homogeneous concepts such as the "Arabs," the "First World," "Christendom," and the "Ummah" often aim to show how such language actually does not represent the groups supposedly identified. Such terminology often fails to adequately describe the heterogeneous peoples, cultures, and geography that make them up. Accurate descriptions of the world's peoples, places, and things require nuanced and accurate terms. By grouping everyone under the Third World concept, such language also obscures why those regions or countries are considered Third World and who is responsible for that status.
One of the ongoing struggles is balancing the cultural heritage of the indigenous people with the norms and values imposed by colonizers. This can cause identity fracture and a sense of displacement in people as well as communities. In addition, the hierarchical social structures that were created during colonial control have continued to support inequalities in power and injustice, which contributed to identity conflicts based on gender, class, and ethnicity. These problems are not just historical artifacts; rather, they are fundamental components of society and are expressed in current discussions about government, language, education, and cultural representation. In order to address these persistent identity problems, it is necessary to thoroughly reconsider historical narratives, acknowledge a variety of viewpoints, and work to create inclusive and equitable societies that enable people to affirm and reclaim their distinct cultural identities in the post-colonial era.
Difficulty of definition
As a term in contemporary history, postcolonialism occasionally is applied, temporally, to denote the immediate time after the period during which imperial powers retreated from their colonial territories. Such is believed to be a problematic application of the term, as the immediate, historical, political time is not included in the categories of critical identity-discourse, which deals with over-inclusive terms of cultural representation, which are abrogated and replaced by postcolonial criticism. As such, the terms postcolonial and postcolonialism denote aspects of the subject matter that indicate that the decolonized world is an intellectual space "of contradictions, of half-finished processes, of confusions, of hybridity, and of liminalities." As in most critical theory-based research, the lack of clarity in the definition of the subject matter coupled with an open claim to normativity makes criticism of postcolonial discourse problematic, reasserting its dogmatic or ideological status.
In Post-Colonial Drama: Theory, Practice, Politics (1996), Helen Gilbert and Joanne Tompkins clarify the denotational functions, among which:
The term post-colonialism is also applied to denote the Mother Country's neocolonial control of the decolonized country, effected by the legalistic continuation of the economic, cultural, and linguistic power relationships that controlled the colonial politics of knowledge (i.e., the generation, production, and distribution of knowledge) about the colonized peoples of the non-Western world. The cultural and religious assumptions of colonialist logic remain active practices in contemporary society and are the basis of the Mother Country's neocolonial attitude towards her former colonial subjects—an economical source of labour and raw materials. It acts as a non-interchangeable term that links the independent country to its colonizer, depriving countries of their independence decades after building their own identities.
Notable theoreticians and theories
Frantz Fanon and subjugation
In The Wretched of the Earth (1961), psychiatrist and philosopher Frantz Fanon analyzes and medically describes the nature of colonialism as essentially destructive. Its societal effect—the imposition of a subjugating colonial identity—is harmful to the mental health of the native peoples who were subjugated into colonies. Fanon writes that the ideological essence of colonialism is the systematic denial of "all attributes of humanity" of the colonized people. Such dehumanization is achieved with physical and mental violence, by which the colonist means to inculcate a servile mentality upon the natives.
For Fanon, the natives must violently resist colonial subjugation. Hence, Fanon describes violent resistance to colonialism as a mentally cathartic practice, which purges colonial servility from the native psyche, and restores self-respect to the subjugated. Thus, Fanon actively supported and participated in the Algerian Revolution (1954–62) for independence from France as a member and representative of the Front de Libération Nationale.
As postcolonial praxis, Fanon's mental health analyses of colonialism and imperialism, and the supporting economic theories, were partly derived from the essay "Imperialism, the Highest Stage of Capitalism" (1916), wherein Vladimir Lenin described colonial imperialism as an advanced form of capitalism, desperate for growth at all costs and thus requiring ever more human exploitation to ensure continually consistent profit-for-investment.
Another key book that predates postcolonial theories is Fanon's Black Skin, White Masks. In this book, Fanon discusses the logic of colonial rule from the perspective of the existential experience of racialized subjectivity. Fanon treats colonialism as a total project which rules every aspect of colonized peoples and their reality. Fanon reflects on colonialism, language, and racism, and asserts that to speak a language is to adopt a civilization and to participate in the world of that language. His ideas show the influence of French and German philosophy, since existentialism, phenomenology, and hermeneutics claim that language, subjectivity, and reality are interrelated. However, the colonial situation presents a paradox: when colonial beings are forced to adopt and speak an imposed language which is not their own, they adopt and participate in the world and civilization of the colonizer. This language results from centuries of colonial domination which is aimed at eliminating other expressive forms in order to reflect the world of the colonizer. As a consequence, when colonial beings speak as the colonized, they participate in their own oppression, and the very structures of alienation are reflected in all aspects of their adopted language.
Edward Said and orientalism
Cultural critic Edward Said is considered by E. San Juan, Jr. as "the originator and inspiring patron-saint of postcolonial theory and discourse" due to his interpretation of the theory of orientalism explained in his 1978 book, Orientalism. To describe the us-and-them "binary social relation" with which Western Europe intellectually divided the world—into the "Occident" and the "Orient"—Said developed the denotations and connotations of the term orientalism (an art-history term for Western depictions and the study of the Orient). Said's concept (which he also termed "orientalism") is that the cultural representations generated with the us-and-them binary relation are social constructs, which are mutually constitutive and cannot exist independent of each other, because each exists on account of and for the other.
Notably, "the West" created the cultural concept of "the East," which according to Said allowed the Europeans to suppress the peoples of the Middle East, the Indian Subcontinent, and of Asia in general, from expressing and representing themselves as discrete peoples and cultures. Orientalism thus conflated and reduced the non-Western world into the homogeneous cultural entity known as "the East." Therefore, in service to the colonial type of imperialism, the us-and-them orientalist paradigm allowed European scholars to represent the Oriental World as inferior and backward, irrational and wild, as opposed to a Western Europe that was superior and progressive, rational and civil—the opposite of the Oriental Other.
Reviewing Said's Orientalism (1978), A. Madhavan (1993) says that "Said's passionate thesis in that book, now an 'almost canonical study', represented Orientalism as a 'style of thought' based on the antinomy of East and West in their world-views, and also as a 'corporate institution' for dealing with the Orient."
In concordance with philosopher Michel Foucault, Said established that power and knowledge are inseparable components of the intellectual binary relationship with which Occidentals claim "knowledge of the Orient." The applied power of such cultural knowledge allowed Europeans to rename, re-define, and thereby control Oriental peoples, places, and things as imperial colonies. The power-knowledge binary relation is conceptually essential to identify and understand colonialism in general, and European colonialism in particular. Hence,
Nonetheless, critics of the homogeneous "Occident–Orient" binary social relation, say that Orientalism is of limited descriptive capability and practical application, and propose instead that there are variants of Orientalism that apply to Africa and to Latin America. Said responds that the European West applied Orientalism as a homogeneous form of The Other, in order to facilitate the formation of the cohesive, collective European cultural identity denoted by the term "The West."
With this described binary logic, the West generally constructs the Orient subconsciously as its alter ego. Therefore, descriptions of the Orient by the Occident lack material attributes grounded within the land. This imaginative interpretation ascribes female characteristics to the Orient and plays into fantasies that are inherent within the West's alter ego. It should be understood that this process draws on creativity, amounting to an entire domain and discourse.
In Orientalism (p. 6), Said mentions the production of "philology [the study of the history of languages], lexicography [dictionary making], history, biology, political and economic theory, novel-writing and lyric poetry." There is an entire industry that exploits the Orient for its own subjective purposes, one that lacks a native and intimate understanding. Such industries become institutionalized and eventually become a resource for manifest Orientalism, or for compiling misinformation about the Orient.
These subjective fields of academia now synthesize the political resources and think-tanks that are so common in the West today. Orientalism is self-perpetuating to the extent that it becomes normalized within common discourse, leading people to repeat its assumptions in ways that are latent, impulsive, or not fully conscious.
Gayatri Spivak and the subaltern
In establishing the Postcolonial definition of the term subaltern, the philosopher and theoretician Gayatri Chakravorty Spivak cautioned against assigning an over-broad connotation. She argues:
Spivak also introduced the terms essentialism and strategic essentialism to describe the social functions of postcolonialism.
Essentialism denotes the perceptual dangers inherent to reviving subaltern voices in ways that might (over) simplify the cultural identity of heterogeneous social groups and, thereby, create stereotyped representations of the different identities of the people who compose a given social group. Strategic essentialism, on the other hand, denotes a temporary, essential group-identity used in the praxis of discourse among peoples. Furthermore, essentialism can occasionally be applied—by the so-described people—to facilitate the subaltern's communication in being heeded, heard, and understood, because strategic essentialism (a fixed and established subaltern identity) is more readily grasped, and accepted, by the popular majority, in the course of inter-group discourse. The important distinction, between the terms, is that strategic essentialism does not ignore the diversity of identities (cultural and ethnic) in a social group, but that, in its practical function, strategic essentialism temporarily minimizes inter-group diversity to pragmatically support the essential group-identity.
Spivak developed and applied Foucault's term epistemic violence to describe the destruction of non-Western ways of perceiving the world and the resultant dominance of the Western ways of perceiving the world. Conceptually, epistemic violence specifically relates to women, whereby the "Subaltern [woman] must always be caught in translation, never [allowed to be] truly expressing herself," because the colonial power's destruction of her culture pushed to the social margins her non–Western ways of perceiving, understanding, and knowing the world.
In June 1600, the Afro–Iberian woman Francisca de Figueroa petitioned the King of Spain for permission to emigrate from Europe to New Granada and reunite with her daughter, Juana de Figueroa. As a subaltern woman, Francisca repressed her native African language and spoke her request in Peninsular Spanish, the official language of Colonial Latin America. As a subaltern woman, she applied to her voice the Spanish cultural filters of sexism, Christian monotheism, and servile language, in addressing her colonial master:
Moreover, Spivak further cautioned against ignoring subaltern peoples as "cultural Others", and said that the West could progress—beyond the colonial perspective—by means of introspective self-criticism of the basic ideas and investigative methods that establish a culturally superior West studying the culturally inferior non–Western peoples. Hence, the integration of the subaltern voice to the intellectual spaces of social studies is problematic, because of the unrealistic opposition to the idea of studying "Others"; Spivak rejected such an anti-intellectual stance by social scientists, and about them said that "to refuse to represent a cultural Other is salving your conscience…allowing you not to do any homework." Moreover, postcolonial studies also reject the colonial cultural depiction of subaltern peoples as hollow mimics of the European colonists and their Western ways; and rejects the depiction of subaltern peoples as the passive recipient-vessels of the imperial and colonial power of the Mother Country. Consequent to Foucault's philosophic model of the binary relationship of power and knowledge, scholars from the Subaltern Studies Collective, proposed that anti-colonial resistance always counters every exercise of colonial power.
Homi K. Bhabha and hybridity
In The Location of Culture (1994), theoretician Homi K. Bhabha argues that viewing the human world as composed of separate and unequal cultures, rather than as an integral human world, perpetuates the belief in the existence of imaginary peoples and places—"Christendom" and the "Islamic World", "First World," "Second World," and the "Third World." To counter such linguistic and sociological reductionism, postcolonial praxis establishes the philosophic value of hybrid intellectual spaces, wherein ambiguity abrogates truth and authenticity; thereby, hybridity is the philosophic condition that most substantively challenges the ideological validity of colonialism.
R. Siva Kumar and alternative modernity
In 1997, on the occasion of the 50th anniversary of India's Independence, "Santiniketan: The Making of a Contextual Modernism" was an important exhibition curated by R. Siva Kumar at the National Gallery of Modern Art. In his catalogue essay, Kumar introduced the term Contextual Modernism, which later emerged as a postcolonial critical tool in the understanding of Indian art, specifically the works of Nandalal Bose, Rabindranath Tagore, Ramkinkar Baij, and Benode Behari Mukherjee.
In the post-colonial history of art, this marked the departure from the Eurocentric, unilateral idea of modernism to alternative, context-sensitive modernisms.
Several terms including Paul Gilroy's counterculture of modernity and Tani E. Barlow's Colonial modernity have been used to describe the kind of alternative modernity that emerged in non-European contexts. Professor Gall argues that 'Contextual Modernism' is a more suited term because "the colonial in colonial modernity does not accommodate the refusal of many in colonized situations to internalize inferiority. Santiniketan's artist teachers' refusal of subordination incorporated a counter vision of modernity, which sought to correct the racial and cultural essentialism that drove and characterized imperial Western modernity and modernism. Those European modernities, projected through a triumphant British colonial power, provoked nationalist responses, equally problematic when they incorporated similar essentialisms."
Dipesh Chakrabarty
In Provincializing Europe (2000), Dipesh Chakrabarty charts the subaltern history of the Indian struggle for independence, and counters Eurocentric, Western scholarship about non-Western peoples and cultures, by proposing that Western Europe simply be considered as culturally equal to the other cultures of the world; that is, as "one region among many" in human geography.
Derek Gregory and the colonial present
Derek Gregory argues that the long historical trajectory of British and American colonization is an ongoing process still happening today. In The Colonial Present, Gregory traces connections between the geopolitics of events happening in modern-day Afghanistan, Palestine, and Iraq, and links them back to the us-and-them binary relation between the Western and Eastern world. Building upon the ideas of the other and Said's work on orientalism, Gregory critiques the economic policy, military apparatus, and transnational corporations as vehicles driving present-day colonialism. Emphasizing discussion of ideas around colonialism in the present tense, Gregory uses modern events such as the September 11 attacks to tell spatial stories around the colonial behavior driven by the War on Terror.
Amar Acheraiou and Classical influences
Acheraiou argues that colonialism was a capitalist venture moved by appropriation and plundering of foreign lands and was supported by military force and a discourse that legitimized violence in the name of progress and a universal civilizing mission. This discourse is complex and multi-faceted. It was elaborated in the 19th century by colonial ideologues such as Ernest Renan and Arthur de Gobineau, but its roots reach far back in history.
In Rethinking Postcolonialism: Colonialist Discourse in Modern Literature and the Legacy of Classical Writers, Acheraiou discusses the history of colonialist discourse and traces its spirit to ancient Greece, including Europe's claim to racial supremacy and right to rule over non-Europeans harboured by Renan and other 19th-century colonial ideologues. He argues that modern colonial representations of the colonized as "inferior," "stagnant," and "degenerate" were borrowed from Greek and Latin authors like Lysias (440–380 BC), Isocrates (436–338 BC), Plato (427–347 BC), Aristotle (384–322 BC), Cicero (106–43 BC), and Sallust (86–34 BC), who all considered their racial others—the Persians, Scythians, and Egyptians—as "backward," "inferior," and "effeminate."
Among these ancient writers, Aristotle is the one who most thoroughly articulated these ancient racial assumptions, which served as a source of inspiration for modern colonists. In The Politics, he established a racial classification and ranked the Greeks superior to the rest. He considered them an ideal race to rule over Asian and other 'barbarian' peoples, for they knew how to blend the spirit of the European "war-like races" with Asiatic "intelligence" and "competence."
Ancient Rome had been a source of admiration in Europe since the Enlightenment. In France, Voltaire (1694–1778) was one of the most fervent admirers of Rome. He regarded highly the Roman republican values of rationality, democracy, order, and justice. In early-18th-century Britain, poets and politicians like Joseph Addison (1672–1719) and Richard Glover (1712–1785) were vocal advocates of these ancient republican values.
It was in the mid-18th century that ancient Greece became a source of admiration among the French and British. This enthusiasm gained prominence in the late-eighteenth century. It was spurred by German Hellenist scholars and English romantic poets, who regarded ancient Greece as the matrix of Western civilization and a model of beauty and democracy. These included: Johann Joachim Winckelmann (1717–1768), Wilhelm von Humboldt (1767–1835), and Goethe (1749–1832), Lord Byron (1788–1824), Samuel Taylor Coleridge (1772–1834), Percy Bysshe Shelley (1792–1822), and John Keats (1795–1821).
In the 19th century, when Europe began to expand across the globe and establish colonies, ancient Greece and Rome were used as a source of empowerment and justification for the Western civilizing mission. During this period, many French and British imperial ideologues identified strongly with the ancient empires and invoked ancient Greece and Rome to justify the colonial civilizing project. They urged European colonizers to emulate these "ideal" classical conquerors, whom they regarded as "universal instructors."
For Alexis de Tocqueville (1805–1859), an ardent and influential advocate of la "Grande France," the classical empires were model conquerors to imitate. He advised the French colonists in Algeria to follow the ancient imperial example. In 1841, he stated: "[W]hat matters most when we want to set up and develop a colony is to make sure that those who arrive in it are as less estranged as possible, that these newcomers meet a perfect image of their homeland....the thousand colonies that the Greeks founded on the Mediterranean coasts were all exact copies of the Greek cities on which they had been modelled. The Romans established in almost all parts of the globe known to them municipalities which were no more than miniature Romes. Among modern colonizers, the English did the same. Who can prevent us from emulating these European peoples?" The Greeks and Romans were deemed exemplary conquerors and "heuristic teachers," whose lessons were invaluable for modern colonial ideologues. John Robert Seeley (1834–1895), a history professor at Cambridge and proponent of imperialism, stated in a rhetoric which echoed that of Renan that the role of the British Empire was "similar to that of Rome, in which we hold the position not merely of a ruling but of an educating and civilizing race."
The incorporation of ancient concepts and racial and cultural assumptions into modern imperial ideology bolstered colonial claims to supremacy and the right to colonize non-Europeans. Because of these numerous connections between ancient representations and modern colonial rhetoric, 19th-century colonialist discourse acquires a "multi-layered" or "palimpsestic" structure. It forms a "historical, ideological and narcissistic continuum," in which modern theories of domination feed upon and blend with "ancient myths of supremacy and grandeur."
Postcolonial literary study
As a literary theory, postcolonialism deals with the literatures produced by the peoples who once were colonized by the European imperial powers (e.g. Britain, France, and Spain) and the literatures of the decolonized countries engaged in contemporary, postcolonial arrangements (e.g. Organisation internationale de la Francophonie and the Commonwealth of Nations) with their former mother countries.
Postcolonial literary criticism comprehends the literatures written by the colonizer and the colonized, wherein the subject matter includes portraits of the colonized peoples and their lives as imperial subjects. In Dutch literature, Indies Literature includes the colonial and postcolonial genres, which examine and analyze the formation of a postcolonial identity and the postcolonial culture produced by the diaspora of the Indo-European peoples, the Eurasian folk who originated from Indonesia, the peoples of the colony of the Dutch East Indies; the notable author in this literature is Tjalie Robinson.
Waiting for the Barbarians (1980) by J. M. Coetzee depicts the unfair and inhuman situation of people dominated by settlers.
To perpetuate and facilitate control of the colonial enterprise, some colonized people, especially from among the subaltern peoples of the British Empire, were sent to attend university in the Imperial Motherland; they were to become the native-born, but Europeanised, ruling class of colonial satraps. Yet, after decolonization, their bicultural educations gave rise to postcolonial criticism of empire and colonialism, and of the representations of the colonist and the colonized. In the late 20th century, after the dissolution of the USSR in 1991, the constituent Soviet Socialist Republics became the literary subjects of postcolonial criticism, wherein the writers dealt with the legacies (cultural, social, economic) of the Russification of their peoples, countries, and cultures in service to Greater Russia.
Postcolonial literary study is in two categories:
the study of postcolonial nations; and
the study of the nations who continue forging a postcolonial national identity.
The first category of literature presents and analyzes the internal challenges inherent to determining an ethnic identity in a decolonized nation.
The second category of literature presents and analyzes the degeneration of civic and nationalist unities consequent to ethnic parochialism, usually manifested as the demagoguery of "protecting the nation," a variant of the us-and-them binary social relation. Civic and national unity degenerate when a patriarchal régime unilaterally defines what is and what is not "the national culture" of the decolonized country: the nation-state collapses, either into communal movements, espousing grand political goals for the postcolonial nation; or into ethnically mixed communal movements, espousing political separatism, as occurred in decolonized Rwanda, the Sudan, and the Democratic Republic of the Congo; thus the postcolonial extremes against which Frantz Fanon warned in 1961.
Application
Middle East
In the essay "Overstating the Arab State" (2001) by Nazih Ayubi, the author deals with the psychologically-fragmented postcolonial identity, as determined by the effects (political and social, cultural and economic) of Western colonialism in the Middle East. As such, the fragmented national identity remains a characteristic of such societies, consequence of the imperially convenient, but arbitrary, colonial boundaries (geographic and cultural) demarcated by the Europeans, with which they ignored the tribal and clan relations that determined the geographic borders of the Middle East countries, before the arrival of European imperialists. Hence, the postcolonial literature about the Middle East examines and analyzes the Western discourses about identity formation, the existence and inconsistent nature of a postcolonial national-identity among the peoples of the contemporary Middle East.
In his essay "Who Am I?: The Identity Crisis in the Middle East" (2006), P.R. Kumaraswamy says:
Independence and the end of colonialism did not end social fragmentation and war (civil and international) in the Middle East. In The Search for Arab Democracy: Discourses and Counter-Discourses (2004), Larbi Sadiki says that the problems of national identity in the Middle East are a consequence of the orientalist indifference of the European empires when they demarcated the political borders of their colonies, which ignored the local history and the geographic and tribal boundaries observed by the natives, in the course of establishing the Western version of the Middle East. In the event: "[I]n places like Iraq and Jordan, leaders of the new sovereign states were brought in from the outside, [and] tailored to suit colonial interests and commitments. Likewise, most states in the Persian Gulf were handed over to those [Europeanised colonial subjects] who could protect and safeguard imperial interests in the post-withdrawal phase." Moreover, "with notable exceptions like Egypt, Iran, Iraq, and Syria, most [countries]...[have] had to [re]invent their historical roots" after decolonization, and, "like its colonial predecessor, postcolonial identity owes its existence to force."
Africa
In the late 19th century, the Scramble for Africa (1874–1914) proved to be the tail end of mercantilist colonialism of the European imperial powers, yet, for the Africans, the consequences were greater than elsewhere in the colonized non–Western world. To facilitate the colonization, the European empires laid railroads where the rivers and the land proved impassable. The imperial British railroad effort to traverse continental Africa proved overambitious and never completed the planned connection between colonial North Africa (Cairo) and the colonial south of Africa (Cape Town).
Upon arriving in Africa, Europeans encountered various African civilizations, namely the Ashanti Empire, the Benin Empire, the Kingdom of Dahomey, the Buganda Kingdom (Uganda), and the Kingdom of Kongo, all of which were annexed by imperial powers under the belief that they required European stewardship.
About East Africa, Kenyan writer Ngũgĩ wa Thiong'o wrote Weep Not, Child (1964), the first postcolonial novel about the East African experience of colonial imperialism; as well as Decolonizing the Mind: The Politics of Language in African Literature (1986). In The River Between (1965), with the Mau Mau Uprising (1952–60) as political background, he addresses the postcolonial matters of African religious cultures, and the consequences of the imposition of Christianity, a religion culturally foreign to Kenya and to most of Africa.
In postcolonial countries of Africa, Africans and non–Africans live in a world of genders, ethnicities, classes and languages, of ages, families, professions, religions and nations. There is a suggestion that individualism and postcolonialism are essentially discontinuous and divergent cultural phenomena.
Asia
French Indochina was divided into five subdivisions: Tonkin, Annam, Cochinchina, Cambodia, and Laos. Cochinchina (southern Vietnam) was the first territory under French control; Saigon was conquered in 1859; and in 1887, the Indochinese Union (Union indochinoise) was established.
In 1924, Nguyen Ai Quoc (aka Ho Chi Minh) wrote the first critical text against French colonization: Le Procès de la Colonisation française ('French Colonization on Trial').
Trinh T. Minh-ha has been developing her innovative theories about postcolonialism in various means of expression: literature, film, and teaching. She is best known for her documentary film Reassemblage (1982), in which she attempts to deconstruct anthropology as a "western male hegemonic ideology." In 1989, she wrote Woman, Native, Other: Writing Postcoloniality and Feminism, in which she focuses on the acknowledgement of oral tradition.
Eastern Europe
The partitions of Poland (1772–1918) and the occupation of Eastern European countries by the Soviet Union after the Second World War were forms of "white" colonialism, long overlooked by postcolonial theorists. The domination of European empires (Prussian, Austrian, Russian, and later Soviet) over neighboring territories (Belarus, Bulgaria, Czechoslovakia, Hungary, Lithuania, Moldova, Poland, Romania, and Ukraine), consisting of military invasion, exploitation of human and natural resources, devastation of culture, and efforts to re-educate local people in the empires' languages, in many ways resembled the violent conquest of overseas territories by Western European powers, despite such factors as geographical proximity and the absence of racial difference.
Postcolonial studies in East-Central and Eastern Europe were inaugurated by Ewa M. Thompson's seminal book Imperial Knowledge: Russian Literature and Colonialism (2000), followed by works of Aleksander Fiut, Hanna Gosk, Violeta Kelertas, Dorota Kołodziejczyk, Janusz Korek, Dariusz Skórczewski, Bogdan Ştefănescu, and Tomasz Zarycki.
Ireland
Ireland experienced centuries of English/British colonialism between the 12th and 18th centuries - notably the Statute of Drogheda, 1494, which subordinated the Irish Parliament to the English (later, British) government - before the Kingdom of Ireland merged with the Kingdom of Great Britain on 1 January 1801 as the United Kingdom. Most of Ireland became independent of the U.K. in 1922 as the Irish Free State, a self-governing dominion of the British Empire. Pursuant to the Statute of Westminster, 1931 and enactment of a new Irish Constitution, Éire became fully independent of the United Kingdom in 1937; and then became a republic in 1949. Northern Ireland, in northeastern Ireland (northwestern Ireland is part of the Republic of Ireland), remains a province of the United Kingdom. Many scholars have drawn parallels between:
the economic, cultural and social subjugation of Ireland, and the experiences of the colonized regions of the world
the depiction of the native Gaelic Irish as wild, tribal savages and the depiction of other indigenous peoples as primitive and violent
the partition of Ireland by the U.K. government, analogous to the partitioning and boundary-drawing of the other future nation states by colonial powers
the post-independence struggle of the Irish Free State (which became the Republic of Ireland in 1949) to establish economic independence and its own identity in the world, and the similar struggles of other post-colonial nations; though, uniquely, Ireland had been independent, then become part of the U.K., then mostly independent again. Ireland's membership of and support for the European Union has often been framed as an attempt to break away from the United Kingdom's economic orbit.
In 2003, Clare Carroll wrote in Ireland and Postcolonial Theory that the "colonizing activities" of Raleigh, Gilbert, and Drake in Ireland can be read as a "rehearsal" for their later exploits in the Americas, and argued that the English Elizabethans represented the Irish as being more alien than the contemporary European representations of Native Americans.
Rachel Seoighe wrote in 2017, "Ashis Nandy describes how colonisation impacts on the native’s interior life: the meaning of the Irish language was bound up with loss of self in socio-cultural and political life. The purportedly wild and uncivilised Irish language itself was held responsible for the ‘backwardness’ of the people. Holding tight to your own language was thought to bring death, exile and poverty. These ideas and sentiments are recognised by Seamus Deane in his analysis of recorded memories and testimony of the Great Famine in the 1840s. The recorded narratives of people who starved, emigrated and died during this period reflect an understanding of the Irish language as complicit in the devastation of the economy and society. It was perceived as a weakness of a people expelled from modernity: their native language prevented them from casting off ‘tradition’ and ‘backwardness’ and entering the ‘civilised’ world, where English was the language of modernity, progress and survival."
The Troubles (1969–1998), a period of conflict in Northern Ireland between mostly Catholic and Gaelic Irish nationalists (who wish to join the Irish Republic) and mostly Protestant Scots-Irish and Anglo-Irish unionists (who are a majority of the population and wish to remain part of the United Kingdom), has been described as a post-colonial conflict. In Jacobin, Daniel Finn criticised journalism which portrayed the conflict as one of "ancient hatred", ignoring the imperial context.
Structural adjustment programmes (SAPs)
Structural adjustment programmes (SAPs) implemented by the World Bank and IMF are viewed by some postcolonialists as the modern procedure of colonization. SAPs call for trade liberalization and the privatization of banks, health care, and educational institutions. These implementations minimized the government's role and paved pathways for companies to enter Africa and extract its resources. Limited to the production and exportation of cash crops, many African nations acquired more debt and were left stranded in a position where acquiring more loans and continuing to pay high interest became an endless cycle.
The Dictionary of Human Geography uses the definition of colonialism as "enduring relationship of domination and mode of dispossession, usually (or at least initially) between an indigenous (or enslaved) majority and a minority of interlopers (colonizers), who are convinced of their own superiority, pursue their own interests, and exercise power through a mixture of coercion, persuasion, conflict and collaboration." This definition suggests that the SAPs implemented under the Washington Consensus are indeed acts of colonization.
Criticism
Undermining of universal values
Indian-American Marxist scholar Vivek Chibber has critiqued some foundational logics of postcolonial theory in his book Postcolonial Theory and the Specter of Capital. Drawing on Aijaz Ahmad's earlier critique of Said's Orientalism and Sumit Sarkar's critique of the Subaltern Studies scholars, Chibber focuses on and refutes the principal historical claims made by the Subaltern Studies scholars; claims that are representative of the whole of postcolonial theory. Postcolonial theory, he argues, essentializes cultures, painting them as fixed and static categories. Moreover, it presents the difference between East and West as unbridgeable, hence denying people's "universal aspirations" and "universal interests." He also criticized the postcolonial tendency to characterize all of Enlightenment values as Eurocentric. According to him, the theory will be remembered "for its revival of cultural essentialism and its acting as an endorsement of orientalism, rather than being an antidote to it."
Fixation on national identity
Postcolonial studies' concentration upon the subject of national identity has determined that such identity is essential to the creation and establishment of a stable nation and country in the aftermath of decolonization; yet it indicates that either an indeterminate or an ambiguous national identity has tended to limit the social, cultural, and economic progress of a decolonized people. In Overstating the Arab State (2001) by Nazih Ayubi, Moroccan scholar Bin 'Abd al-'Ali proposed that the existence of "a pathological obsession with...identity" is a cultural theme common to the contemporary academic field of Middle Eastern Studies.
Nevertheless, Kumaraswamy and Sadiki say that such a common sociological problem—that of an indeterminate national identity—among the countries of the Middle East is an important aspect that must be accounted for in order to understand the politics of the contemporary Middle East. In the event, Ayubi asks if what Bin 'Abd al-'Ali sociologically described as an obsession with national identity might be explained by "the absence of a championing social class?"
In his essay The Death of Postcolonialism: The Founder's Foreword, Mohamed Salah Eddine Madiou argues that postcolonialism as an academic study and critique of colonialism is a "dismal failure." While explaining that Edward Said never affiliated himself with the postcolonial discipline and is, therefore, not "the father" of it as most would have us believe, Madiou, borrowing from Barthes' and Spivak's death-titles (The Death of the Author and Death of a Discipline, respectively), argues that postcolonialism is today not fit to study colonialism and is, therefore, dead "but continue[s] to be used which is the problem." Madiou gives one clear reason for considering postcolonialism a dead discipline: the avoidance of serious colonial cases, such as Palestine.
Postcolonial literature
Foundational works
Some works written prior to the formal establishment of postcolonial studies as a discipline have been considered retroactively as works of postcolonialist theory.
1924. Le Procès de la Colonisation française ('French Colonization on Trial'), by Nguyen Ai Quoc (aka Ho Chi Minh)
1950. Discourse on Colonialism, by Aimé Césaire
1952. Black Skin, White Masks, by Frantz Fanon
1961. The Wretched of the Earth, by Frantz Fanon
1965. The Colonizer and the Colonized, by Albert Memmi
1970. Consciencism, by Kwame Nkrumah
1978. Orientalism, by Edward Said
1988. Can the Subaltern Speak?, by Gayatri Chakravorty Spivak
Contemporary authors of postcolonial fiction
John Nkemngong Nkengasong (1959–)
Chinua Achebe (1930–2013)
Chimamanda Ngozi Adichie (1977–)
Ama Ata Aidoo (1940–2023)
Mariama Ba (1929–1981)
Giannina Braschi (1953–)
Edwidge Danticat (1969–)
Buchi Emecheta (1944–2018)
Amitav Ghosh (1956–)
Abdulrazak Gurnah (1948–)
Mohsin Hamid (1971–)
Jamaica Kincaid (1949–)
Jhumpa Lahiri (1967–)
Ben Okri (1959–)
Michael Ondaatje (1943–)
Arundhati Roy (1961–)
Jean Rhys (1890–1979)
Salman Rushdie (1947–)
Sam Selvon (1923–1994)
Ousmane Sembene (1923–2007)
Bapsi Sidhwa (1938–)
Zadie Smith (1975–)
Wole Soyinka (1934–)
Nadine Gordimer (1923–2014)
Ngugi wa Thiong'o (1938–)
Cadwell Turnbull (1987–)
Derek Walcott (1930–2017)
Postcolonial non-fiction
Pre-2000
Alatas, Syed Hussein. 1977. The Myth of the Lazy Native.
Anderson, Benedict. [1983] 1991. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso.
Ashcroft, B., G. Griffiths, and H. Tiffin. 1990. The Empire Writes Back: Theory and Practice in Post-Colonial Literature.
——, eds. 1995. The Post-Colonial Studies Reader. London: Routledge.
——, eds. 1998. Key Concepts in Post-Colonial Studies. London: Routledge.
Amin, Samir. 1988. L'eurocentrisme ('Eurocentrism').
Balagangadhara, S. N. [1994] 2005. "The Heathen in his Blindness..." Asia, the West, and the Dynamic of Religion. Manohar Books.
Bhabha, Homi K. 1994. The Location of Culture.
Chambers, I., and L. Curti, eds. 1996. The Post-Colonial Question. Routledge.
Chatterjee, P. Nation and Its Fragments: Colonial and Postcolonial Histories. Princeton University Press.
Gandhi, Leela. 1998. Postcolonial Theory: A Critical Introduction. Columbia University Press.
Guevara, Che. 11 December 1964. "Colonialism is Doomed" (speech). 19th General Assembly of the United Nations. Havana.
Minh-ha, Trinh T. 1989. Woman, Native, Other: Writing Postcoloniality and Feminism. Indiana University Press.
German edition: trans. Kathrina Menke. Vienna & Berlin: Verlag Turia & Kant. 2010.
Japanese edition: trans. Kazuko Takemura. Tokyo: Iwanami Shoten. 1995.
—— 1989. Infinite Layers/Third World?
Hashmi, Alamgir. 1998. The Commonwealth, Comparative Literature and the World: Two Lectures. Islamabad: Gulmohar.
Hountondji, Paulin J. 1983. African Philosophy: Myth & Reality.
Jayawardena, Kumari. 1986. Feminism and Nationalism in the Third World.
JanMohamed, A. 1988. Manichean Aesthetics: The Politics of Literature in Colonial Africa.
Kiberd, Declan. 1995. Inventing Ireland.
Lenin, Vladimir. 1916. Imperialism, the Highest Stage of Capitalism.
Mannoni, Octave, and P. Powesland. Prospero and Caliban, the Psychology of Colonization.
Nandy, Ashis. 1983. The Intimate Enemy: Loss and Recovery of Self Under Colonialism.
—— 1987. Traditions, Tyranny, and Utopias: Essays in the Politics of Awareness.
McClintock, Anne. 1994. "The Angel of Progress: Pitfalls of the Term 'Postcolonialism'." In Colonial Discourse/Postcolonial Theory, edited by M. Baker, P. Hulme, and M. Iverson.
Mignolo, Walter. 1999. Local Histories/Global designs: Coloniality.
Mohanty, Chandra Talpade. 1986. Under Western Eyes.
Mudimbe, V. Y. 1988. The Invention of Africa.
Narayan, Uma. 1997. Dislocating Cultures.
—— 1997. Contesting Cultures.
Parry, B. 1983. Delusions and Discoveries.
Raja, Masood Ashraf. "Postcolonial Student: Learning the Ethics of Global Solidarity in an English Classroom."
Quijano, Aníbal. [1991] 1999. "Coloniality and Modernity/Rationality." In Globalizations and Modernities.
Retamar, Roberto Fernández. [1971] 1989. "Calibán: Apuntes sobre la cultura de Nuestra América" ['Caliban: Notes About the Culture of Our America']. In Calibán and Other Essays.
Said, Edward. 1993. Culture and Imperialism.
Spivak, Gayatri Chakravorty. 1988. Can the Subaltern Speak?
—— 1988. Selected Subaltern Studies.
—— 1990. The Postcolonial Critic.
—— 1999. A Critique of Postcolonial Reason: Towards a History of the Vanishing Present.
wa Thiong'o, Ngũgĩ. 1986. Decolonizing the Mind: The Politics of Language in African Literature.
Young, Robert J. C. 1990. White Mythologies: Writing History and the West.
—— 1995. Colonial Desire: Hybridity in Theory, Culture and Race.
After 2000
Ankerl, G. 2000. Coexisting Contemporary Civilizations. Geneva: Indiana University Press.
Bachetta, Paola. 2012. Cahiers du CEDREF on Decolonial Feminist and Queer Theories.
Dabashi, Hamid. 2007. Iran: A People Interrupted.
Dean, B., and J. Levi, eds. 2003. At the Risk of Being Heard: Indigenous Rights, Identity, and Postcolonial States. University of Michigan Press.
Dhawan, N. 2005. "Postkolonial Theorie. Eine kritische Einführung" ['Postcolonial Theory: A Critical Enquiry'].
El-Enany, Nadine. 2020. Bordering Britain.
Gopal, Priyamvada. 2019. Insurgent Empire.
Mbembe, Achille. 2000. On the Postcolony. Regents of the University of California.
McLeod, John. 2000. Beginning Postcolonialism.
—— 2010. Beginning Postcolonialism (2nd ed.). Manchester University Press.
Mignolo, Walter. 2005. The Idea of Latin America.
Paperson, L. 2005. "The Postcolonial Ghetto."
Poddar, Prem, and David Johnson, eds. 2008. A Historical Companion to Postcolonial Literatures in English. Edinburgh: Edinburgh University Press.
Pine, Richard. 2014. The Disappointed Bridge: Ireland and the Post-Colonial World.
Risam, Roopika. 2018. New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy.
Salzman, Philip C., and D. Robinson Divine, eds. 2008. Postcolonial Theory and the Arab–Israeli Conflict. Routledge.
Young, Robert J. C. 2001. Postcolonialism: An Historical Introduction.
Scholarly projects
In an effort to understand postcolonialism through scholarship and technology, in addition to important literature, many stakeholders have published projects about the subject. Here is an incomplete list of projects.
The Institute of Postcolonial Studies, based in Naarm/Melbourne, is an independent public education project dedicated to researching and addressing contemporary matters informed by postcolonial and critical inquiry. IPCS edits the well-known journal Postcolonial Studies (published with Taylor and Francis).
Bodies and Structure (2019), on the spatial history of Japan and its empire
Chicana Diasporic (2018), a research hub that highlights the Chicana Caucus of the National Women's Caucus from 1973 to 1979
Harlem Shadows (2018), an open source collection of Claude McKay's 1922 collection of poems
Passamaquoddy People: At Home on the Oceans and Lakes (2014), a digital archive of photos and recordings of the Passamaquoddy people
Postcolonial Writers Make Worlds (2017), critical reading of Black and Asian British literature
Torn Apart/Separados (2018), visualizations and scholarly journal tracking global crisis situations
W.E.B. Du Bois's Data Portraits: Visualizing Black America (2019), charts from W.E.B. Du Bois in color about the lives of Black Americans
See also
Ali Shariati
Amina Wadud
Anticolonialism
Audre Lorde
Burn! (1969), directed by Gillo Pontecorvo
Cultural cringe
Cross-culturalism
Decolonization
The Dogs of War (1980), directed by John Irvin
Ethnology
Fatima Mernissi
An Image of Africa: Racism in Conrad's "Heart of Darkness" (1975), by Chinua Achebe
Inversion in postcolonial theory
Leila Ahmed
Linguistic imperialism
Lila Abu-Lughod
Kimberlé Crenshaw
Kecia Ali
Nation-building
Paulo Freire
Postcolonial anarchism
Postcolonial feminism
Postcolonial theology
Post-communism
Ranajit Guha
Ranjit Hoskote
Robert J.C. Young
Saba Mahmood
Street name controversy
Talal Asad
Teju Cole, "The White-Savior Industrial Complex", The Atlantic
Décolonisation de l'espace public (fr) ('Decolonization of public space')
References
Further reading
External links
The Institute of Postcolonial Studies - Melbourne, Australia
Postcolonial Studies - academic journal
Contemporary Postcolonial and Postimperial Literature in English
Postcolonial Space
Postcolonial Interventions - academic journal
Critical theory
Neocolonialism
Africana philosophy
Postmodern theory
Post-structuralism
Social phenomenon
Social phenomena (singular: social phenomenon) are any behaviours, actions, or events that take place because of social influence, including contemporary as well as historical societal influences. They are often the result of multifaceted processes that add ever-increasing dimensions as they operate through individual nodes of people. Because of this, social phenomena are inherently dynamic and operate within a specific time and historical context.
Social phenomena are observable, measurable data. Psychological notions may drive them, but those notions are not directly observable; only the phenomena that express them are.
See also
Phenomenological sociology
Sociological imagination
Further reading
References
Sociological terminology
Social philosophy
Phenomena
Alternative school
An alternative school is an educational establishment with a curriculum and methods that are nontraditional. Such schools offer a wide range of philosophies and teaching methods; some have strong political, scholarly, or philosophical orientations, while others are more ad hoc assemblies of teachers and students dissatisfied with some aspect of mainstream or traditional education.
Some schools are based on pedagogical approaches differing from the mainstream pedagogy employed in a culture, while other schools are for gifted students, children with special needs, children who have fallen off track educationally or been expelled from their base school, children who wish to explore an unstructured or less rigid system of learning, etc.
Features
There are many models of alternative schools but the features of promising alternative programs seem to converge more or less on the following characteristics:
the approach is more individualized;
integration of children of different socio-economic status and mixed abilities;
experiential learning which is applicable to life outside school;
integrated approach to various disciplines;
instructional staff is certified in their academic field and are creative;
low student-teacher ratios;
collective ownership of the institute as teachers, students, support staff, administrators, parents all are involved in decision making;
an array of non-traditional evaluation methods.
United Kingdom
In the United Kingdom, 'alternative school' refers to a school that provides learner-centered informal education as an alternative to the regimen of traditional education. There is a long tradition of such schools in the United Kingdom, going back to Summerhill, whose founder, A. S. Neill, greatly influenced the spread of similar democratic-type schools such as the famous Dartington Hall School and Kilquhanity School, both now closed. Currently there is one democratic primary school, Small Acres, and two democratic secondary schools, Summerhill and Sands School. There is also a range of schools based on the ideas of Maria Montessori and Rudolf Steiner.
United States
In the United States, there has been tremendous growth in the number of alternative schools in operation since the 1970s, when relatively few existed. Some alternative schools are for students of all academic levels and abilities who are better served by a non-traditional program. Others are specifically intended for students with special educational needs, address social problems that affect students, such as teenage parenthood or homelessness, or accommodate students who are considered at risk of failing academically.
Another common element of alternative schools in the United States has been the use of community resource professionals in various disciplines who serve as instructors on a part-time, volunteer basis. Depending upon the type of student going into an alternative school, this has sometimes caused friction with the teachers in conventional schools. The Leonia Alternative High School of the 1970s in New Jersey, which placed a heavy emphasis on the use of community resource instructors, ended up in a protracted battle with the local teachers union, resulting in the school eventually closing. While alternative schools became more commonplace by the 1990s, there were still tensions between them and teachers unions regarding the teachers losing central control over such matters.
Canada
In Canada, local school boards choose whether or not they wish to have alternative schools and how they are operated. The alternative schools may include multi-age groupings, integrated curriculum or holistic learning, parental involvement, and descriptive reports rather than grades. Some school systems provide alternative education streams within the state schools.
In Canada, schools for children who are having difficulty in a traditional secondary school setting are known as alternate schools.
Germany
Germany has over 200 Waldorf schools, including the first such school in the world (founded 1919), and a large number of Montessori schools. Each of these has its own national association, whereas most other alternative schools are organized in the National Association of Independent Alternative Schools. Funding for private schools in Germany differs from Bundesland to Bundesland.
Full public funding is given to laboratory schools researching school concepts for public education. The Laborschule Bielefeld had a great influence on many alternative schools, including the renewal of the democratic school concept.
South Korea
In South Korea, alternative schools serve three large groups of youth. The first group is students who could not succeed in formal Korean education. Many of these schools serve students who dropped out during their earlier school years, whether voluntarily or through disciplinary action. The second group is young immigrants. As the population of immigrants from Southeast Asia and North Korea increases, several educators have begun to see the need for adaptive education specially designed for these young immigrants. Because South Korea has been a monoethnic society throughout its history, there are not enough systems or awareness to protect these students from bullying, social isolation, or academic failure. For instance, the drop-out rate for North Korean immigrant students is ten times higher than that of South Korean students, because their primary challenge is initially to adapt to South Korean society, not to earn higher test scores.
The third group is students who choose an alternative education because of its philosophy. Korean education, as in many other Asian countries, is based on testing and memorization. Some students and parents believe this kind of education cannot nurture a student fully and choose an alternative school that offers a different way of learning. These schools usually stress the importance of interaction with other people and nature over written test results.
The major struggles of alternative schools in South Korea are recognition, lack of financial support, and the quality gap among schools. Although the South Korean public's attitude toward alternative education has gradually changed, progressive education is still not widely accepted. For entering college, regular education is often preferred because of the nation's rigid emphasis on test results and academic records. For the same reason, the South Korean government does not actively support alternative schools financially.
Hence, many alternative schools are at risk of bankruptcy, especially those that do not or cannot collect tuition from their students. Most Southeast Asian and North Korean immigrant families are financially in need and depend on assistance from the government's welfare system for everyday life; affording private education is out of reach for these families. This, in turn, creates a gap among alternative schools themselves: some are richly supported by upper-class parents and provide a variety of in-school and after-school programs, while others barely have the resources to offer even a few academic and extracurricular programs.
India
India has a long history of alternative schools. The Vedic and Gurukul systems of education, from roughly 1500 BC to 500 BC, emphasized the acquisition of occupational skills and cultural and spiritual enlightenment in an atmosphere that encouraged rational thinking and reasoning among students. The aim of education was thus to develop the pupil in various aspects of life as well as to ensure social service. However, with the decline of local economies and the advent of colonial rule, this system went into decline. Some notable reforms, such as English as the medium of instruction, were introduced as recommended in Macaulay's Minute of 1835. The mainstream schools of today still follow the system developed in the colonial era. In the years since independence, the government has focused on expanding the school network, designing curricula according to educational needs, making the local language the medium of instruction, and similar measures. By the end of the nineteenth century, many social reformers had begun to explore alternatives to the contemporary education system. Vivekananda, Dayanand Saraswati, Jyotiba Phule, Savitribai Phule, and Syed Ahmed Khan were pioneers who took up the causes of social regeneration, removal of social inequalities, and promotion of girls' education through alternative schools. In the early twentieth century, educationists created models of alternative schools, in response to the drawbacks of mainstream schools, that are still viable. Rabindranath Tagore's Shanti Niketan, Jiddu Krishnamurti's Rishi Valley School, Sri Aurobindo and the Mother's Sri Aurobindo International Centre of Education, and Walden's Path Magnet School are some examples. An upsurge in alternative schools was seen from the 1970s onward, but most alternative schools are the result of individual efforts rather than government initiative.
Alternative Education Programs
Alternative education programs appeal to people who do not consider a college education a requirement for becoming a successful entrepreneur. These programs educate both new and experienced entrepreneurs while providing them with the necessary resources. An article published on Forbes.com on February 11, 2018, noted that many educational institutions contribute to their respective accelerator courses. The University of Missouri System initiated the Ameren Accelerator, which concentrates on energy startups and helps entrepreneurs obtain essential industry knowledge from educator-partners at the university level. International programs also offer related resources, such as the Meltwater Entrepreneurial School of Technology in Ghana, whose incubator program provides seed capital, training, and learning opportunities in a rigorous one-year program for outstanding students from the African region.
The Huffington Post cited options in alternative learning such as home schooling, micro-schooling, and unschooling. In unschooling, students learn in the way they choose, for their own reasons; the individual gets help from teachers, parents, books, or formal classes but makes the final decision on how to proceed and follows his or her preferred schedule. Micro-schools, or independent free schools, differ in approach, size, and authority; they are contemporary one-room schools, full-time or part-time facilities, or learning centers owned and managed by teachers or parents. Some parents choose this non-traditional system over formal education because it teaches youngsters to look for practical solutions. The United States is also attempting to serve an increasing number of at-risk students outside conventional high schools through Alternative Education Campuses (AECs), which cater to dropouts and students who have been expelled from their schools. There are reportedly more than 4,000 AECs across the country.
See also
List of democratic schools
Anarchistic free school
Continuation high school
Democratic school
Gifted education
Montessori school
Public alternative school
Reform school
Special education
Sudbury school
Unschooling
Virtual school
Waldorf school (or Steiner school)
Jiddu Krishnamurti Schools
References
Further reading
Claire V. Korn, Alternative American Schools: Ideals in Action (Ithaca: SUNY Press, 1991).
Alternative education
School terminology
School types
Applied behavior analysis | Applied behavior analysis (ABA), also called behavioral engineering, is a scientific discipline that applies the principles of learning based upon respondent and operant conditioning to change behavior of social significance. ABA is the applied form of behavior analysis; the other two are radical behaviorism (or the philosophy of the science) and the experimental analysis of behavior (or basic experimental research).
The term applied behavior analysis has replaced behavior modification because the latter approach suggested changing behavior without clarifying the relevant behavior-environment interactions. In contrast, ABA changes behavior by first assessing the functional relationship between a targeted behavior and the environment, a process known as a functional behavior assessment. Further, the approach seeks to develop socially acceptable alternatives for maladaptive behaviors, often through administering differential reinforcement contingencies.
Although service delivery providers commonly implement empirically validated interventions for individuals with autism, ABA has been utilized in a range of other areas, including applied animal behavior, organizational behavior management, substance abuse, behavior management in classrooms, acceptance and commitment therapy, and athletic exercise, among others.
ABA has been rejected or strongly criticized by many members of the autism rights movement because of the perception that it trains autistic people to behave like non-autistic people and suppresses autistic traits, such as hand flapping and other visible forms of stimming, rather than accepting them. In addition, some forms of ABA and its predecessors used aversives, such as electric shocks.
Definition
ABA is an applied science devoted to developing procedures that will produce observable changes in behavior. It is to be distinguished from the experimental analysis of behavior, which focuses on basic experimental research, but it uses principles developed by such research, in particular operant conditioning and classical conditioning. Behavior analysis adopts the viewpoint of radical behaviorism, treating thoughts, emotions, and other covert activity as behavior subject to the same rules as overt behavior. This represents a shift away from methodological behaviorism, which restricts behavior-change procedures to overt behaviors and was the conceptual underpinning of behavior modification.
Behavior analysts also emphasize that the science of behavior must be a natural science as opposed to a social science. As such, behavior analysts focus on the observable relationship of behavior with the environment, including antecedents and consequences, without resort to "hypothetical constructs".
History
The beginnings of ABA can be traced back to Teodoro Ayllon and Jack Michael's study "The psychiatric nurse as a behavioral engineer" (1959), published in the Journal of the Experimental Analysis of Behavior (JEAB). Ayllon and Michael were training the staff at a psychiatric hospital how to use a token economy based on the principles of operant conditioning for patients with schizophrenia and intellectual disability, which led researchers at the University of Kansas to start the Journal of Applied Behavior Analysis (JABA) in 1968.
A group of researchers at the University of Washington, including Donald Baer, Sidney W. Bijou, Bill Hopkins, Jay Birnbrauer, Todd Risley, and Montrose Wolf, applied the principles of behavior analysis to treat autism, manage the behavior of children and adolescents in juvenile detention centers, and organize employees who required proper structure and management in businesses. In 1968, Baer, Bijou, Risley, Birnbrauer, Wolf, and James Sherman joined the Department of Human Development and Family Life at the University of Kansas, where they founded the Journal of Applied Behavior Analysis.
Notable graduate students from the University of Washington include Robert Wahler, James Sherman, and Ivar Lovaas. Lovaas established the UCLA Young Autism Project while teaching at the University of California, Los Angeles. In 1965, Lovaas published a series of articles that described a pioneering investigation of the antecedents and consequences that maintained problem behavior, including the use of electric shock on autistic children to suppress stimming and meltdowns (described as "self-stimulatory behavior" and "tantrum behaviors", respectively) and to coerce "affectionate" behavior; the work relied on the methods of errorless learning initially used by Charles Ferster to teach nonverbal children to speak. Lovaas also described how to use social (secondary) reinforcers, how to teach children to imitate, and what interventions (including electric shocks) may be used to reduce aggression and life-threatening self-injury.
In 1987, Lovaas published the study "Behavioral treatment and normal educational and intellectual functioning in young autistic children". The experimental group in this study received an average of 40 hours per week of one-to-one teaching at a table, using errorless discrete trial training (DTT). The treatment was done at home with parents involved, and the curriculum was highly individualized, with a heavy emphasis on teaching eye contact, fine and gross motor imitation, academics, and language. Aversives and reinforcement were used to motivate learning and reduce undesired behaviors. Early development of the therapy in the 1960s involved the use of electric shocks, scolding, and the withholding of food. By the time the children were enrolled in this study, such aversives had largely been abandoned, and a loud "no", electric shock, or slap to the thigh was used only as a last resort to reduce aggressive and self-stimulatory behaviors. The outcome of this study indicated that 47% of the experimental group (9/19) went on to lose their autism diagnosis and were described as indistinguishable from their typically developing adolescent peers, including passing general education without assistance and forming and maintaining friendships. These gains were maintained, as reported in the 1993 study "Long-term outcome for children with autism who received early intensive behavioral treatment". Lovaas' work was recognized by the US Surgeon General in 1999, and his research was replicated in university and private settings. The "Lovaas Method" went on to become known as early intensive behavioral intervention (EIBI).
Over the years, "behavior analysis" gradually superseded "behavior modification"; that is, from simply trying to alter problematic behavior, behavior analysts sought to understand the function of that behavior, what reinforcement histories (i.e., attention seeking, escape, sensory stimulation, etc.) promote and maintain it, and how it can be replaced by successful behavior. ABA's priority on compliance and behavioral modification over that of an individual's needs can lead to harmful consequences, including prompt dependency, loss of intrinsic motivation, and even psychological trauma. Curtailing of self-soothing behaviors is potentially classifiable as a form of abuse.
While ABA seems to be intrinsically linked to autism intervention, it is also used in a broad range of other areas. Recent notable areas of research in the Journal of Applied Behavior Analysis include autism, classroom instruction with typically developing students, pediatric feeding therapy, and substance use disorders. Other applications of ABA include applied animal behavior, consumer behavior analysis, forensic behavior analysis, behavioral medicine, behavioral neuroscience, clinical behavior analysis, organizational behavior management, schoolwide positive behavior interventions and support, and contact desensitization for phobias.
Characteristics
Baer, Wolf, and Risley's 1968 article is still used as the standard description of ABA. It lists the following seven characteristics of ABA. Another resource for the characteristics of applied behavior analysis is the textbook Behavior Modification: Principles and Procedures.
Applied: ABA focuses on the social significance of the behavior studied. For example, a non-applied researcher may study eating behavior because this research helps to clarify metabolic processes, whereas the applied researcher may study eating behavior in individuals who eat too little or too much, trying to change such behavior so that it is more acceptable to the persons involved. It is also based on trying to improve the everyday life of clients that are receiving it.
Behavioral: ABA is pragmatic; it asks how it is possible to get an individual to do something effectively. To answer this question, the behavior itself must be objectively measurable and observable. This is designed so that when someone is trying to determine a target behavior, it is able to be observed and understood by anyone. Verbal descriptions are treated as behavior in themselves, and not as substitutes for the behavior described.
Analytic: Behavior analysis is successful when the analyst understands and can manipulate the events that control a target behavior. This may be relatively easy to do in the lab, where a researcher can arrange the relevant events, but it is not always easy, or ethical, in an applied situation. For a demonstration to count as analytic, it must show a functional relationship that can be reliably reproduced. Baer et al. outline two methods that may be used in applied settings to demonstrate control while maintaining ethical standards: the reversal design and the multiple baseline design (a minimal data sketch of a reversal design follows this list). In the reversal design, the experimenter first measures the behavior of interest, introduces an intervention, and then measures the behavior again. Then the intervention is removed, or reduced, and the behavior is measured yet again. The intervention is effective to the extent that the behavior changes and then changes back in response to these manipulations. The multiple baseline method may be used for behaviors that seem irreversible: several behaviors are measured and the intervention is applied to each in turn, with effectiveness revealed by changes in just the behavior to which the intervention is currently being applied.
Technological: The description of analytic research must be clear and detailed, so that any competent researcher can repeat it accurately. The goal is to make sure that anyone can implement and understand what is being explained. Cooper et al. describe a good way to check this: Have a person trained in applied behavior analysis read the description and then act out the procedure in detail. If the person makes any mistakes or has to ask any questions then the description needs improvement.
Conceptually Systematic: Behavior analysis should not simply produce a list of effective interventions. Rather, to the extent possible, these methods should be grounded in the principles of applied behavioral analysis. This is aided by the use of theoretically meaningful terms, such as "secondary reinforcement" or "errorless discrimination" where appropriate.
Effective: Though analytic methods should be theoretically grounded, they must also be effective. Interventions must be relevant to the client and/or culture, and the analyst must ask whether the intervention is working and producing a positive change. If an intervention does not produce a large enough effect for practical use, then the analysis has failed.
Generality: Behavior analysts should aim for interventions that are generally applicable; the methods should work in different environments, apply to more than one specific behavior, and have long-lasting effects. This generalizability should be implemented from the very beginning of the intervention. When first starting a new intervention, it is a good idea for that to take place in a natural environment for the client.
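To illustrate how data from a reversal design might be organized and summarized, the following sketch uses hypothetical session counts; the phase labels and values are invented for illustration, not drawn from any study.

```python
# Minimal sketch of summarizing data from an ABAB (reversal) design.
# Phase labels and per-session response counts are hypothetical.
sessions = [
    ("baseline",       [9, 8, 10, 9]),  # A: behavior measured before intervention
    ("intervention",   [5, 4, 3, 3]),   # B: intervention introduced
    ("baseline 2",     [8, 9, 8, 9]),   # A: intervention withdrawn (reversal)
    ("intervention 2", [4, 3, 3, 2]),   # B: intervention reinstated
]

for phase, counts in sessions:
    mean = sum(counts) / len(counts)
    print(f"{phase:15s} mean responses per session: {mean:.1f}")

# Control is suggested if the behavior changes when the intervention is
# introduced and reverts when it is withdrawn; in practice this is judged
# by ongoing visual analysis rather than by phase means alone.
```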
Other proposed characteristics
In 2005, Heward et al. suggested the addition of the following five characteristics:
Accountable: To be accountable means that ABA must be able to demonstrate that its methods are effective. This requires repeatedly measuring the effect of interventions (success, failure or no effect at all), and, if necessary, making changes that improve their effectiveness.
Public: The methods, results, and theoretical analyses of ABA must be published and open to scrutiny. There are no hidden treatments or mystical, metaphysical explanations.
Doable: To be generally useful, interventions should be available to a variety of individuals, who might be teachers, parents, therapists, or even those who wish to modify their own behavior. With proper planning and training, many interventions can be applied by almost anyone willing to invest the effort.
Empowering: ABA provides tools that give the practitioner feedback on the results of interventions. These allow clinicians to assess their skill level and build confidence in their effectiveness.
Optimistic: According to several leading authors, behavior analysts have cause to be optimistic that their efforts are socially worthwhile, for the following reasons:
The behaviors impacted by behavior analysis are largely determined by learning and controlled by manipulable aspects of the environment.
Practitioners can improve performance by direct and continuous measurements.
As a practitioner uses behavioral techniques with positive outcomes, they become more confident of future success.
The literature provides many examples of success in teaching individuals considered previously unteachable.
Use as therapy for autism
Although BCBA certification does not require any autism training, a large majority of ABA practitioners specialize in autism, and ABA itself is often mistakenly considered synonymous with therapy for autism. Practitioners often use ABA-based techniques to teach adaptive behaviors to, or diminish challenging behaviors presented by, individuals with autism.
Despite many years of research indicating that early intensive behavioral intervention—the traditional form of ABA that relies on discrete trial training—improves the intellectual performance of those with ASD, most of these studies lack random assignment, and there is a need for larger sample sizes. A 2018 Cochrane review of five controlled trials found weak evidence indicating that ABA may be effective for some autistic children, noting a high risk of bias in the studies included in the review. The effectiveness of ABA therapies for autism may be limited overall by diagnostic severity, age of intervention, and IQ. Nevertheless, ABA has been recommended for people with intellectual disabilities.
In 2018, a Cochrane review concluded that some recent research suggests that, because of the heterogeneity of ASD, there are two different ABA teaching approaches to acquiring spoken language: children with higher receptive language skills respond to 2.5–20 hours per week of the naturalistic approach, whereas children with lower receptive language skills need 25 hours per week of discrete trial training—the structured and intensive form of ABA. A 2023 multi-site randomized controlled trial of 164 participants reported similar findings.
Quality of evidence
Conflicts of interest, methodological concerns, and a high risk of bias pervade most ABA studies. A 2019 meta-analysis noted that "methodological rigor remains a pressing concern" in research into ABA's use as therapy for autism; while the authors found some evidence in favour of behavioral interventions, the effects disappeared when they limited the scope of their review to randomized controlled trial designs and outcomes for which there was no risk of detection bias.
One study revealed extensive undisclosed conflicts of interest (COI) in published ABA studies. 84% of studies published in top behavioral journals over a period of one year had at least one author with a COI involving their employment, either as an ABA clinical provider or a training consultant to ABA clinical providers. However, only 2% of these studies disclosed the COI.
Low-quality evidence is likewise a concern in some research reporting on the potential harms of ABA on autistic children.
Another concern is that ABA research measures only behavior as its index of success, which has led to a lack of qualitative research on autistic experiences of ABA, a lack of research examining the internal effects of ABA, and a lack of research on autistic children who are non-speaking or have co-occurring intellectual disabilities, which is concerning since these are among the major populations on which intensive ABA focuses. Research is also lacking on whether ABA is effective in the long term, and very few longitudinal outcomes have been studied.
Ethical concerns
Researchers and advocates have denounced the ABA ethical code as too lenient, citing its failure to restrict or clarify the use of aversives, the absence of an autism or child development education requirement for ABA therapists, and its emphasis on parental consent rather than the consent of the person receiving services. This emphasis on parental consent stems from ABA viewing the parent as the client, a stance which has been criticized for centering benefits to the parent, not the child, in behavioral interventions. Numerous researchers have argued that ABA is abusive and can increase symptoms of post-traumatic stress disorder (PTSD) in people undergoing the intervention. Some bioethicists argue that employing ABA violates the principles of justice and nonmaleficence and infringes on the autonomy of both autistic children and their parents.
Two 2020 reviews found that very few studies directly reported on or investigated possible harms; although a significant number of studies mentioned adverse events in their analysis of why people withdrew from them, there was no effort to monitor or collect data on adverse outcomes.
Justin B. Leaf and others examined and responded to several of these criticisms of ABA in three papers published in 2018, 2019, and 2022, respectively, in which they questioned the evidence for such criticisms, concluding that the claim that all ABA is abusive has no basis in the published literature. Others have published similar responses.
Use of aversives
Lovaas incorporated aversives into some of the ABA practices he developed, including employing electric shocks, slapping, and shouting to modify undesirable behavior. Although the use of aversives in ABA became less common over time, and in 2012 their use was described as inconsistent with contemporary practice, aversives persisted in some ABA programs. In comments made in 2014 to the US Food and Drug Administration (FDA), a clinician previously employed by the Judge Rotenberg Educational Center claimed that "all textbooks used for thorough training of applied behavior analysts include an overview of the principles of punishment, including the use of electrical brain stimulation."
Views of the autistic community
Proponents of neurodiversity dispute the value of eliminating autistic behaviors, maintaining that it forces autistic people to mask their true personalities and conform to a narrow conception of normality. Masking is associated with suicidality and poor long-term mental health. Some autistic advocates contend that it is cruel to try to make autistic people behave as if they were non-autistic without consideration for their well-being, criticizing ABA's framing of autism as a tragedy in need of treatment. Instead, these critics advocate for increased social acceptance of harmless autistic traits and therapies focused on improving quality of life. The Autistic Self Advocacy Network, for example, campaigns against the use of ABA in autism. The European Council of Autistic People (EUCAP) published a 2024 position statement expressing deep concern about the harm caused by ABA being overlooked. They emphasize that most surveyed autistic individuals view ABA as harmful, abusive, and counterproductive to their well-being. EUCAP advocates for a variety of support methods and the inclusion of autistic individuals in decision-making processes regarding their care.
A 2020 study examined perspectives of autistic adults that received ABA as children and found that the overwhelming majority reported that "behaviorist methods create painful lived experiences", that ABA led to the "erosion of the true actualizing self", and that they felt they had a "lack of self-agency within interpersonal experiences".
Concepts
Behavior
Behavior refers to the movement of some part of an organism that changes some aspect of the environment. Often, the term behavior refers to a class of responses that share physical dimensions or functions, and in that case a response is a single instance of that behavior. If a group of responses have the same function, this group may be called a response class. Repertoire refers to the various responses available to an individual; the term may refer to responses that are relevant to a particular situation, or it may refer to everything a person can do.
Operant conditioning
Operant behavior is the so-called "voluntary" behavior that is sensitive to, or controlled by, its consequences. Specifically, operant conditioning refers to the three-term contingency that uses stimulus control, in particular an antecedent contingency called the discriminative stimulus (SD) that influences the strengthening or weakening of behavior through such consequences as reinforcement or punishment. The term is used quite generally, from reaching for a candy bar, to turning up the heat to escape an aversive chill, to studying for an exam to get good grades.
Respondent (classical) conditioning
Respondent (classical) conditioning is based on innate stimulus–response relationships called reflexes. In his experiments with dogs, Pavlov usually used the salivary reflex: salivation (unconditioned response) following the taste of food (unconditioned stimulus). Pairing a neutral stimulus, for example a bell (conditioned stimulus), with food caused the bell to come to elicit salivation (conditioned response). Thus, in classical conditioning, the conditioned stimulus becomes a signal for a biologically significant consequence. Note that in respondent conditioning, unlike operant conditioning, the response does not produce a reinforcer or punisher (e.g., the dog does not get food because it salivates).
Reinforcement
Reinforcement is the key element in operant conditioning and in most behavior change programs. It is the process by which behavior is strengthened. If a behavior is followed closely in time by a stimulus and this results in an increase in the future frequency of that behavior, then the stimulus is a positive reinforcer. If the removal of an event serves as a reinforcer, this is termed negative reinforcement. There are multiple schedules of reinforcement that affect the future probability of behavior. For example, praise or preferred food delivered immediately after a child complies with a request functions as a reinforcer if compliance becomes more frequent.
Punishment
Punishment is a process by which a consequence that immediately follows a behavior decreases the future frequency of that behavior. As with reinforcement, a stimulus can be added (positive punishment) or removed (negative punishment). Broadly, there are three types of punishment: presentation of aversive stimuli (e.g., pain), response cost (removal of desirable stimuli, as in monetary fines), and restriction of freedom (as in a 'time out'). Punishment in practice can often result in unwanted side effects, including resentment over being punished, attempts to escape the punishment, expression of pain and negative emotions associated with it, and the punished individual's association of the punishment with the person delivering it. ABA therapists state that punishment is used infrequently, as a last resort, or when the behavior poses a direct threat.
Extinction
Extinction is the technical term for the procedure of withholding or discontinuing reinforcement of a previously reinforced behavior, resulting in a decrease of that behavior; the behavior is then said to be extinguished (Cooper et al.). Extinction procedures are often preferred over punishment procedures, as many punishment procedures are deemed unethical and are prohibited in many states. Nonetheless, extinction procedures must be implemented with the utmost care by professionals, as they are generally associated with extinction bursts. An extinction burst is a temporary increase in the frequency, intensity, and/or duration of the behavior targeted for extinction. Other characteristics of an extinction burst include (a) extinction-produced aggression—the occurrence of an emotional response to an extinction procedure, often manifested as aggression—and (b) extinction-induced response variability—the occurrence of novel behaviors that did not typically occur before the extinction procedure. These novel behaviors are a core component of shaping procedures.
Discriminated operant and three-term contingency
In addition to a relation being made between behavior and its consequences, operant conditioning also establishes relations between antecedent conditions and behaviors. This differs from S–R (if A, then B) formulations, replacing them with an "AB because of C" formulation: the relation between a behavior (B) and its context (A) exists because of consequences (C) that have followed that behavior in similar contexts in the past. This antecedent–behavior–consequence contingency is termed the three-term contingency. A behavior which occurs more frequently in the presence of an antecedent condition than in its absence is called a discriminated operant, and the antecedent stimulus is called a discriminative stimulus (SD). The fact that the discriminated operant occurs only in the presence of the discriminative stimulus is an illustration of stimulus control. More recently, behavior analysts have focused on conditions occurring prior to the immediate circumstances of the behavior of concern that increase or decrease the likelihood of the behavior occurring; these conditions have been referred to variously as "setting events", "establishing operations", and "motivating operations" by various researchers.
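The following minimal simulation sketches the logic of a discriminated operant: responding is reinforced only in the presence of the discriminative stimulus, so responding strengthens under SD and extinguishes in its absence. The learning rule and all parameter values are illustrative assumptions, not a model specified by behavior-analytic sources.

```python
import random

# Toy simulation of a discriminated operant: responding is reinforced only
# when the discriminative stimulus (SD) is present. The learning rule and
# all parameter values are illustrative assumptions.
random.seed(0)
p_respond = {"SD": 0.5, "S-delta": 0.5}  # initial probability of responding
LEARN = 0.1                              # assumed learning-rate parameter

for _ in range(200):
    stimulus = random.choice(["SD", "S-delta"])
    if random.random() < p_respond[stimulus]:  # a response occurs
        reinforced = stimulus == "SD"          # consequence depends on antecedent
        target = 1.0 if reinforced else 0.0
        p_respond[stimulus] += LEARN * (target - p_respond[stimulus])

print(p_respond)  # responding strengthens under SD and weakens under S-delta
```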
Verbal behavior
B. F. Skinner's classification system of behavior analysis has been applied to the treatment of a host of communication disorders. Skinner's system includes the following operants (a simplified classification sketch appears after this list):
Tact – a verbal response evoked by a non-verbal antecedent and maintained by generalized conditioned reinforcement.
Mand – behavior under control of motivating operations maintained by a characteristic reinforcer.
Intraverbals – verbal behavior for which the relevant antecedent stimulus was other verbal behavior, but which does not share the response topography of that prior verbal stimulus (e.g., responding to another speaker's question).
Autoclitic – secondary verbal behavior which alters the effect of primary verbal behavior on the listener. Examples involve quantification, grammar, and qualifying statements (e.g., the differential effects of "I think..." vs. "I know...")
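As a rough illustration of how controlling variables distinguish these operants, the toy function below classifies a verbal response by its antecedent and consequence. The category labels and rules are simplified assumptions for illustration and gloss over many distinctions in Skinner's analysis.

```python
# Toy classifier for Skinner's verbal operants based on controlling
# variables; the labels and rules are simplified illustrative assumptions.
def classify_verbal_operant(antecedent, consequence):
    if antecedent == "motivating operation" and consequence == "specific reinforcer":
        return "mand"         # e.g., "water!" when thirsty, reinforced by water
    if antecedent == "nonverbal stimulus" and consequence == "generalized reinforcer":
        return "tact"         # e.g., saying "dog" upon seeing a dog, reinforced by praise
    if antecedent == "verbal stimulus (no point-to-point match)":
        return "intraverbal"  # e.g., answering another speaker's question
    return "unclassified"

print(classify_verbal_operant("motivating operation", "specific reinforcer"))   # mand
print(classify_verbal_operant("nonverbal stimulus", "generalized reinforcer"))  # tact
```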
Skinner's account of language was famously critiqued by the linguist Noam Chomsky, who argued at length that a view of language as behavior cannot explain the complexity of human language. On this view, while behaviorist techniques can be used to teach language, they provide a poor explanation of language fundamentals, and some argue that, in light of Chomsky's critiques, language may be more appropriately taught by a speech-language pathologist than by a behaviorist.
For an assessment of verbal behavior from Skinner's system, see Assessment of Basic Language and Learning Skills.
Measuring behavior
When measuring behavior, there are both dimensions of behavior and quantifiable measures of behavior. In applied behavior analysis, the quantifiable measures are derivatives of the dimensions. These dimensions are repeatability, temporal extent, and temporal locus.
Repeatability
Response classes occur repeatedly throughout time—i.e., how many times the behavior occurs.
Count is the number of occurrences in behavior.
Rate/frequency is the number of instances of behavior per unit of time.
Celeration is the measure of how the rate changes over time.
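A small sketch with hypothetical data shows how these repeatability measures relate; all values are invented for illustration.

```python
# Sketch of the repeatability measures with hypothetical data.
event_times_min = [1.5, 3.0, 4.2, 7.8, 9.1]  # minutes at which the behavior occurred
observation_min = 10.0

count = len(event_times_min)                 # count: number of occurrences
rate = count / observation_min               # rate: responses per unit time
print(f"count = {count}, rate = {rate:.2f} per minute")

# Celeration: change in rate over time, e.g., across successive weeks.
weekly_rates = [0.5, 0.7, 1.0, 1.4]          # responses per minute, weeks 1-4
print(f"rate multiplied by {weekly_rates[-1] / weekly_rates[0]:.1f}x over 3 weeks")
```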
Temporal extent
Temporal extent concerns how long a behavior lasts; its basic measure is duration. Schirmer, Meck, and Penney explore the timing of temporal information, examining the rhythm and duration of behavior. In expressions of behavior, emotional meaning is conveyed by duration in correspondence with body and vocal expressions. Using the striatal beat frequency (SBF) model, they highlight the essential role of the striatum in timing by synchronizing cortical oscillations. At the onset of an event, ventral tegmental inputs reset the cortical phase, initiating the timing; during the event, striatal neurons monitor the oscillations, identifying unique phase patterns for different durations of behavior; and when the event finishes, the striatum decodes the patterns to aid memory storage and the comparison of event durations. The researchers also describe socio-temporal processes that attach social meaning to time, allowing social significance to influence the perception and timing of acts.
Temporal locus
Latency measures the time that elapses between the onset of a stimulus and the behavior that follows. This is important in behavioral research because it quantifies how quickly an individual responds to external stimuli, providing insight into perceptual and cognitive processing rates. Two measurements define temporal locus: response latency and interresponse time.
Response latency: for example, children treated with morphine exhibit longer response latencies in delayed matching tasks and appear to have greater difficulty with social functioning, meaning they require more time to respond to a stimulus from memory.
Interresponse time refers to the elapsed time between two instances of a behavior; it helps in understanding the patterns and frequency of a behavior over a period of time. Psychiatric medications may reduce the rate of responding and lengthen interresponse times; as responding declines, apparent engagement with the task declines as well.
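Both temporal-locus measures can be computed directly from timestamps, as in this sketch; the times are hypothetical.

```python
# Sketch computing response latency and interresponse times (IRTs)
# from hypothetical timestamps, in seconds.
stimulus_onset = 12.0
response_times = [14.5, 20.0, 24.2, 31.0]     # times of successive responses

latency = response_times[0] - stimulus_onset  # stimulus onset to first response
irts = [round(b - a, 1) for a, b in zip(response_times, response_times[1:])]

print(f"latency = {latency:.1f} s")           # 2.5 s
print(f"IRTs = {irts}")                       # [5.5, 4.2, 6.8]
print(f"mean IRT = {sum(irts) / len(irts):.1f} s")
```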
Derivative measures
Derivative measures are additional metrics derived from primary data, often by combining or transforming dimensional quantities to offer deeper insights into a phenomenon. Despite not being directly tied to specific dimensions, these measures provide valuable supplemental information. In applied behavior analysis (ABA), for example, percentage is a derivative measure that quantifies the ratio of specific responses to total responses, offering a nuanced understanding of behavior and assisting in evaluating progress and intervention effectiveness.
Trials-to-criterion, another ABA derivative measure, tracks the number of response opportunities needed to achieve a set level of performance. This metric aids behavior analysts in assessing skill acquisition and mastery, influencing decisions on program adjustments and teaching methods.
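A brief sketch with invented trial outcomes shows how both derivative measures are computed; the mastery criterion of three consecutive correct responses is an illustrative assumption.

```python
# Sketch of two derivative measures; trial outcomes are hypothetical.
trials = [0, 1, 1, 0, 1, 1, 1, 1, 1]  # 1 = correct response, 0 = incorrect

percent_correct = 100 * sum(trials) / len(trials)
print(f"percentage correct = {percent_correct:.0f}%")

# Trials-to-criterion: trials needed until, e.g., 3 consecutive correct responses.
CRITERION = 3
streak = 0
for i, outcome in enumerate(trials, start=1):
    streak = streak + 1 if outcome else 0
    if streak >= CRITERION:
        print(f"criterion met after {i} trials")
        break
```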
Applied behavior analysis relies on meticulous measurement and impartial evaluation of observable behavior as a foundational principle. Without accurate data collection and analysis, behavior analysts lack the essential information to assess intervention effectiveness and make informed decisions about program modifications. Therefore, precise measurement and assessment play a pivotal role in ABA practice, guiding practitioners to enhance behavioral outcomes and drive significant change.
Behavior analysts use several distinct techniques to gather data. Some of the ways of collecting data include:
Frequency
This technique refers to how many times a target behavior is observed and counted. In the article "On Terms: Frequency and Rate in Applied Behavior Analysis", the authors note that two major texts (one of them published by the Behavior Analyst Certification Board) pair the word "frequency" with two different terms, one pairing it with "count" and the other with "rate". Although one major text uses "count" interchangeably with "frequency", both texts advise readers not to report counts of behavior without referencing the time base of the observation; given that context, count and time information together yield a rate. The authors suggest that in applied behavior analysis and behavior measurement, the term "rate" rather than "count" should be used to reference frequency, and that references to counts without information about observation time should be avoided.
In the Annals of Clinical Psychiatry article "Applied Behavioral Analytic Interventions for Children with Autism: A Description and Review of Treatment Research", the authors point out how frequency is used to track adaptive and maladaptive behaviors, enabling ABA therapists and clinicians to create a customized program for the patient. The authors note that tracked frequency, in particular the frequency of requesting behaviors during play, language, imitation, and socialization, can also serve as a variable predicting treatment outcome.
Rate
The same as frequency, but within a specified period of time.
Duration
This measurement refers to the amount of time that someone engaged in a behavior.
Fluency
Fluency is a gauge of how smoothly a behavior is performed. It is associated with behaviors that are used over a long duration and can be performed with confidence. Three outcomes are associated with fluency:
The ability to retain the behavior or action
The ability to maintain the behavior despite disruptions
The ability to transfer the behavior to other applications
Fluency increases the response speed and accuracy of a behavior. However, when an individual is presented with a new stimulus that differs from the practiced one, response time increases and false alarms become more frequent. Fluency relies on repeated practice, which lessens the effort the behavior requires to the point where the individual can attend more fully to other aspects of the behavior.
There are two types of approaches to fluency:
Unassisted approach - individual practice of a behavior: a target for response speed and accuracy is set within a timeframe and readjusted according to difficulty.
Assisted approach - behavior practiced with the assistance of a teacher or another individual.
In the unassisted approach, the learner must eventually demonstrate the achieved target behavior to someone else. The assisted approach has the limitation of requiring another individual to provide help, which can be time-consuming for both parties.
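As a sketch of how a fluency check might be scored, the following uses a hypothetical aim of 40 correct responses per minute with no more than 2 errors; aims in practice are set per skill and learner.

```python
# Sketch of scoring a one-minute fluency timing. The aim of 40 correct
# responses per minute with at most 2 errors is a hypothetical criterion.
correct, errors = 42, 3
timing_minutes = 1.0
AIM_CORRECT, MAX_ERRORS = 40, 2

rate_correct = correct / timing_minutes
rate_errors = errors / timing_minutes
fluent = rate_correct >= AIM_CORRECT and rate_errors <= MAX_ERRORS

print(f"{rate_correct:.0f} correct/min, {rate_errors:.0f} errors/min -> "
      f"{'aim met: advance' if fluent else 'continue practicing'}")
```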
Response latency
Latency refers to how much time elapses after a particular stimulus is presented before the target behavior occurs.
Analyzing behavior change
Experimental control
In applied behavior analysis, all experiments should include the following:
At least one participant
At least one behavior (dependent variable)
At least one setting
A system for measuring the behavior and ongoing visual analysis of data
At least one treatment or intervention condition
Manipulations of the independent variable so that its effects on the dependent variable may be quantitatively or qualitatively analyzed
An intervention that will benefit the participant in some way (behavioral cusp)
Methodologies developed through ABA research
Task analysis
Task analysis is a process in which a task is analyzed into its component parts so that those parts can be taught through the use of chaining: forward chaining, backward chaining, and total task presentation. Task analysis has been used in organizational behavior management, a behavior analytic approach to changing the behaviors of members of an organization (e.g., factories, offices, or hospitals). Behavioral scripts often emerge from a task analysis. Bergan conducted a task analysis of the behavioral consultation relationship, and Thomas Kratochwill developed a training program based on teaching Bergan's skills. A similar approach was used for the development of microskills training for counselors. Ivey would later call this "behaviorist" phase a very productive one, and the skills-based approach came to dominate counselor training during 1970–90. Task analysis was also used in determining the skills needed to access a career. In education, Engelmann (1968) used task analysis as part of the methods to design the direct instruction curriculum.
Chaining
The skill to be learned is broken down into small units for easy learning. For example, a person learning to brush teeth independently may start with learning to unscrew the toothpaste cap. Once they have learned this, the next step may be squeezing the tube, etc.
For problem behavior, chains can also be analyzed and then disrupted to prevent the problem behavior. Some behavior therapies, such as dialectical behavior therapy, make extensive use of behavior chain analysis, but dialectical behavior therapy is not philosophically behavior analytic.
There are two types of chaining in ABA: forward chaining and backward chaining. Forward chaining starts with the first step and continues until the final step, while backward chaining begins with the last step and moves backward to the first.
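A minimal sketch of a task analysis and the two teaching orders follows; it extends the toothbrushing example above, with the step wording invented for illustration.

```python
# Sketch of a task analysis with forward and backward chaining orders.
# The toothbrushing steps are illustrative.
task_analysis = [
    "unscrew toothpaste cap",
    "squeeze paste onto brush",
    "brush teeth",
    "rinse mouth",
    "rinse brush",
]

def teaching_order(steps, method="forward"):
    """Return the order in which steps are targeted for independent performance."""
    return steps if method == "forward" else list(reversed(steps))

print(teaching_order(task_analysis, "forward"))   # teach the first step first
print(teaching_order(task_analysis, "backward"))  # teach the last step first
# In backward chaining the instructor completes all earlier steps, so every
# teaching trial still ends with the natural reinforcer (a finished task).
```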
Prompting
A prompt is a cue used to encourage a desired response from an individual. Prompts are often categorized into a prompt hierarchy from most intrusive to least intrusive, although there is some controversy about what counts as most intrusive: prompts that are physically intrusive, or prompts that are hardest to fade (e.g., verbal prompts). To minimize errors and ensure a high level of success during learning, prompts are given in a most-to-least sequence and faded systematically. During this process, prompts are faded as quickly as possible so that the learner does not come to depend on them and eventually behaves appropriately without prompting.
Types of prompts
Prompters might use any or all of the following to suggest the desired response:
Vocal prompts: Words or other vocalizations
Visual prompts: A visual cue or picture
Gestural prompts: A physical gesture
Positional prompt: e.g., the target item is placed close to the individual.
Modeling: Modeling the desired response. This type of prompt is best suited for individuals who learn through imitation and can attend to a model.
Physical prompts: Physically manipulating the individual to produce the desired response. There are many degrees of physical prompts, from quite intrusive (e.g., the teacher places a hand on the learner's hand) to minimally intrusive (e.g., a slight tap).
This is not an exhaustive list of prompts; the nature, number, and order of prompts are chosen to be the most effective for a particular individual.
Fading
The overall goal is for an individual to eventually not need prompts. As an individual gains mastery of a skill at a particular prompt level, the prompt is faded to a less intrusive prompt. This ensures that the individual does not become overly dependent on a particular prompt when learning a new behavior or skill.
One of the primary decisions made when teaching a new behavior is how to fade the prompts. A plan should be in place to fade them in an organized fashion. For instance, fading the physical prompt of guiding a child's hands might follow this sequence: (a) supporting the wrists, (b) touching the hands lightly, (c) touching the forearm or elbow, and (d) withdrawing physical contact altogether. Fading ensures that the child does not become overly dependent on a particular prompt while mastering a new skill.
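The following sketch shows most-to-least fading as a simple rule: after a set number of consecutive successes, move one level down the hierarchy toward independence. The hierarchy ordering and the three-success mastery rule are illustrative assumptions.

```python
# Sketch of most-to-least prompt fading; the hierarchy ordering and the
# mastery rule (3 consecutive successes) are illustrative assumptions.
hierarchy = ["full physical", "partial physical", "model",
             "gestural", "vocal", "independent"]
MASTERY = 3

level, streak = 0, 0                       # start at the most intrusive prompt
for success in [True, True, True, False, True, True, True]:
    streak = streak + 1 if success else 0
    if streak >= MASTERY and level < len(hierarchy) - 1:
        level, streak = level + 1, 0       # fade to a less intrusive prompt
    print(f"next trial uses prompt: {hierarchy[level]}")
```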
Thinning a reinforcement schedule
Thinning is often confused with fading. Fading refers to a prompt being removed, whereas thinning refers to an increase in the time or number of responses required between reinforcements. Periodic thinning that produces a 30% decrease in reinforcement has been suggested as an efficient way to thin. Schedule thinning is often an important and neglected issue in contingency management and token economy systems, especially when these are developed by unqualified practitioners (see professional practice of behavior analysis).
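One way to schedule the thinning steps is sketched below: starting from reinforcing every response and repeatedly cutting reinforcement density by roughly 30%, interpreted here as dividing the reinforcement rate by 0.7 (equivalently, raising the ratio requirement by about 1.43x). The target ratio and this interpretation of the 30% figure are assumptions for illustration.

```python
import math

# Sketch of thinning a fixed-ratio reinforcement schedule. The 30% step
# follows the efficiency suggestion above; other values are assumptions.
ratio = 1                       # start: reinforce every response (FR1)
steps = []
while ratio < 20:               # thin until roughly FR20
    steps.append(ratio)
    # a ~30% decrease in reinforcement density ~ dividing its rate by 0.7
    ratio = math.ceil(ratio / 0.7)

print(steps)                    # [1, 2, 3, 5, 8, 12, 18]
```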
Generalization
Generalization is the expansion of a student's performance ability beyond the initial conditions set for acquisition of a skill. Generalization can occur across people, places, and materials used for teaching. For example, once a skill is learned in one setting, with a particular instructor, and with specific materials, the skill is taught in more general settings with more variation from the initial acquisition phase. If a student has successfully mastered learning colors at the table, the teacher may take the student around the house or school and generalize the skill in these more natural environments with other materials. Behavior analysts have spent a considerable amount of time studying the factors that lead to generalization.
Shaping
Shaping involves gradually modifying the existing behavior into the desired behavior. If the student engages with a dog by hitting it, then they could have their behavior shaped by reinforcing interactions in which they touch the dog more gently. Over many interactions, successful shaping would replace the hitting behavior with patting or other gentler behavior. Shaping is based on a behavior analyst's thorough knowledge of operant conditioning principles and extinction. Recent efforts to teach shaping have used simulated computer tasks.
One teaching technique found to be effective with some students, particularly children, is the use of video modeling (the use of taped sequences as exemplars of behavior). It can be used by therapists to assist in the acquisition of both verbal and motor responses, in some cases for long chains of behavior.
Another example of shaping is a toddler learning to walk: the child is reinforced for crawling, then standing, then taking a few steps, and eventually walking, with praise, clapping, and excitement accompanying each successive approximation.
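A sketch of shaping as a changing criterion, using the walking example: only responses at or beyond the current approximation are reinforced, and the criterion advances after each success. The performance levels and criterion steps are illustrative assumptions.

```python
# Sketch of shaping with a changing criterion: only responses at or beyond
# the current approximation are reinforced. Criterion steps are illustrative.
criteria = ["crawls", "stands", "takes a few steps", "walks"]
current = 0

for level in [0, 0, 1, 1, 2, 3]:  # observed performance levels over time
    if level >= current:
        print(f"reinforce (criterion: {criteria[current]})")
        current = min(current + 1, len(criteria) - 1)  # raise the criterion
    else:
        print("withhold reinforcement")  # earlier forms placed on extinction
```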
Interventions based on an FBA
Functional behavioral assessment (FBA) is an individualized problem-solving process that may be used to address problem behavior. An evaluation is initiated to identify the causes of a problem behavior. This interactive evaluation includes gathering data about the environmental circumstances that precede the identified behavior and the subsequent rewards that reinforce it. The collected data are then used to design and implement individualized interventions aimed at lessening problem behaviors and expanding positive behavioral outcomes.
Critical to behavior analytic interventions is the concept of a systematic behavioral case formulation with a functional behavioral assessment or analysis at the core. This approach should apply a behavior analytic theory of change (see Behavioral change theories). This formulation should include a thorough functional assessment, a skills assessment, a sequential analysis (behavior chain analysis), an ecological assessment, a look at existing evidence-based behavioral models for the problem behavior (such as Fordyce's model of chronic pain) and then a treatment plan based on how environmental factors influence behavior. Some argue that behavior analytic case formulation can be improved with an assessment of rules and rule-governed behavior. Some of the interventions that result from this type of conceptualization involve training specific communication skills to replace the problem behaviors as well as specific setting, antecedent, behavior, and consequence strategies.
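A sketch of the data-summarizing step of an FBA follows: tallying which consequences reliably follow the behavior across ABC records to suggest a candidate function. The record contents are invented for illustration.

```python
from collections import Counter

# Sketch of summarizing ABC (antecedent-behavior-consequence) records to
# hypothesize a behavioral function; the records are hypothetical.
records = [
    ("demand presented",         "task removed"),
    ("demand presented",         "task removed"),
    ("adult attention diverted", "attention delivered"),
    ("demand presented",         "task removed"),
]

consequences = Counter(consequence for _, consequence in records)
hypothesis, n = consequences.most_common(1)[0]
print(f"most frequent consequence: '{hypothesis}' ({n}/{len(records)} records)")
# A consequence that reliably follows the behavior (here, escape from demands)
# suggests a function to target with replacement skills, such as asking for a break.
```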
Other species
ABA has been successfully used in other species. Morris uses ABA to reduce feather-plucking in the black vulture (Coragyps atratus).
Major journals
Applied behavior analysts publish in many journals. Some examples of "core" behavior analytic journals are:
Applied Animal Behaviour Science
Behavioral Health and Medicine
Behavior Analysis: Research and Practice
Behavior and Philosophy
Behavior and Social Issues
Behavior Modification
Behavior Therapy
Journal of Applied Behavior Analysis
Journal of Behavior Analysis of Offender and Victim: Treatment and Prevention
Journal of Behavior Analysis of Sports, Health, Fitness, and Behavioral Medicine
Journal of Contextual Behavioral Science
Journal of Early and Intensive Behavior Interventions
Journal of Organizational Behavior Management
Journal of Positive Behavior Interventions
Journal of the Experimental Analysis of Behavior
Perspectives on Behavior Science (formerly The Behavior Analyst until 2018)
The Behavioral Development Bulletin
The Behavior Analyst Today
The International Journal of Behavioral Consultation and Therapy
The Journal of Behavioral Assessment and Intervention in Children
The Journal of Speech-Language Pathology and Applied Behavior Analysis
The Psychological Record
See also
Association for Behavior Analysis International
Behavior analysis of child development
Behavior therapy
Behavioral activation
Educational psychology
Parent management training
Professional practice of behavior analysis
References
Sources
Further reading
External links
Applied Behavior Analysis: Overview and Summary of Scientific Support
Functional Behavioral Assessment, The IRIS Center – U.S. Department of Education, Office of Special Education Programs Grant and Vanderbilt University
Behavior analysis
Behavior
Behavior modification
Behavioral concepts
Behaviorism
Life coaching
Mind control
Industrial and organizational psychology
Personal development
Autism pseudoscience | 0.76433 | 0.997976 | 0.762783 |
Psychodynamics | Psychodynamics, also known as psychodynamic psychology, in its broadest sense, is an approach to psychology that emphasizes systematic study of the psychological forces underlying human behavior, feelings, and emotions and how they might relate to early experience. It is especially interested in the dynamic relations between conscious motivation and unconscious motivation.
The term psychodynamics is also used to refer specifically to the psychoanalytical approach developed by Sigmund Freud (1856–1939) and his followers. Freud was inspired by the theory of thermodynamics and used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido or psi) in an organically complex brain.
There are four major schools of thought regarding psychological treatment: psychodynamic, cognitive-behavioral, biological, and humanistic treatment. In the treatment of psychological distress, psychodynamic psychotherapy tends to be a less intensive (once- or twice-weekly) modality than the classical Freudian psychoanalysis treatment (of 3–5 sessions per week). Psychodynamic therapies depend upon a theory of inner conflict, wherein repressed behaviours and emotions surface into the patient's consciousness; generally, one's conflict is unconscious.
Since the 1970s, psychodynamics has largely been abandoned as not fact-based; Freudian psychoanalysis has been criticized as pseudoscience.
Overview
In general, psychodynamics is the study of the interrelationship of various parts of the mind, personality, or psyche as they relate to mental, emotional, or motivational forces especially at the unconscious level. The mental forces involved in psychodynamics are often divided into two parts: (a) the interaction of the emotional and motivational forces that affect behavior and mental states, especially on a subconscious level; (b) inner forces affecting behavior: the study of the emotional and motivational forces that affect behavior and states of mind.
Freud proposed that psychological energy was constant (hence, emotional changes consisted only in displacements) and that it tended to rest (point attractor) through discharge (catharsis).
In mate selection psychology, psychodynamics is defined as the study of the forces, motives, and energy generated by the deepest of human needs.
In general, psychodynamics studies the transformations and exchanges of "psychic energy" within the personality. A focus in psychodynamics is the connection between the energetics of emotional states in the id, ego, and superego as they relate to early childhood developments and processes. At the heart of psychological processes, according to Freud, is the ego, which he envisions as battling with three forces: the id, the superego, and the outside world. The id is the unconscious reservoir of libido, the psychic energy that fuels instincts and psychic processes. The ego serves as the general manager of personality, making decisions regarding the pleasures that will be pursued at the id's demand, the person's safety requirements, and the moral dictates of the superego that will be followed. The superego refers to the repository of an individual's moral values, divided into the conscience – the internalization of a society's rules and regulations – and the ego-ideal – the internalization of one's goals. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behaviour or mental states in terms of innate emotional forces or processes.
History
Freud used the term psychodynamics to describe the processes of the mind as flows of psychological energy (libido) in an organically complex brain. The idea for this came from his first year adviser, Ernst von Brücke at the University of Vienna, who held the view that all living organisms, including humans, are basically energy-systems to which the principle of the conservation of energy applies. This principle states that "the total amount of energy in any given physical system is always constant, that energy quanta can be changed but not annihilated, and that consequently when energy is moved from one part of the system, it must reappear in another part." This principle is at the very root of Freud's ideas, whereby libido, which is primarily seen as sexual energy, is transformed into other behaviours. However, it is now clear that the term energy in physics means something quite different from the term energy in relation to mental functioning.
Psychodynamics was initially further developed by Carl Jung, Alfred Adler and Melanie Klein. By the mid-1940s and into the 1950s, the general application of the "psychodynamic theory" had been well established.
In his 1988 book Introduction to Psychodynamics – a New Synthesis, psychiatrist Mardi J. Horowitz states that his own interest and fascination with psychodynamics began during the 1950s, when he heard Ralph Greenson, a popular local psychoanalyst who spoke to the public on topics such as "People who Hate", speak on the radio at UCLA. In his radio discussion, according to Horowitz, he "vividly described neurotic behavior and unconscious mental processes and linked psychodynamics theory directly to everyday life."
In the 1950s, American psychiatrist Eric Berne built on Freud's psychodynamic model, particularly that of the "ego states", to develop a psychology of human interactions called transactional analysis, which, according to physician James R. Allen, is a "cognitive-behavioral approach to treatment" and "a very effective way of dealing with internal models of self and others as well as other psychodynamic issues".
Around the 1970s, a growing number of researchers began departing from the psychodynamics model and the Freudian subconscious. Many felt that the evidence was over-reliant on imaginative discourse in therapy, and on patient reports of their state of mind. These subjective experiences are inaccessible to others. Philosopher of science Karl Popper argued that much of Freudianism was untestable and therefore not scientific. In 1975 literary critic Frederick Crews began a decades-long campaign against the scientific credibility of Freudianism. This culminated in Freud: The Making of an Illusion, which aggregated years of criticism from many quarters. Medical schools and psychology departments no longer offer much training in psychodynamics, according to a 2007 survey. An Emory University psychology professor explained, "I don't think psychoanalysis is going to survive unless there is more of an appreciation for empirical rigor and testing."
Freudian analysis
According to American psychologist Calvin S. Hall, from his 1954 book A Primer of Freudian Psychology:
At the heart of psychological processes, according to Freud, is the ego, which he sees battling with three forces: the id, the super-ego, and the outside world. Hence, the basic psychodynamic model focuses on the dynamic interactions between the id, ego, and superego. Psychodynamics, subsequently, attempts to explain or interpret behavior or mental states in terms of innate emotional forces or processes. In his writings about the "engines of human behavior", Freud used the German word Trieb, a word that can be translated into English as either instinct or drive.
In the 1930s, Freud's daughter Anna Freud began to apply Freud's psychodynamic theories of the "ego" to the study of parent-child attachment and especially deprivation and in doing so developed ego psychology.
Jungian analysis
At the turn of the 20th century, during these decisive years, a young Swiss psychiatrist named Carl Jung had been following Freud's writings and had sent him copies of his articles and his first book, the 1907 Psychology of Dementia Praecox, in which he upheld the Freudian psychodynamic viewpoint, although with some reservations. That year, Freud invited Jung to visit him in Vienna. The two men, it is said, were greatly attracted to each other, and they talked continuously for thirteen hours. This led to a professional relationship in which they corresponded on a weekly basis, for a period of six years.
Carl Jung's contributions in psychodynamic psychology include:
The psyche tends toward wholeness.
The self is composed of the ego, the personal unconscious, and the collective unconscious. The collective unconscious contains the archetypes, which manifest in ways particular to each individual.
Archetypes are composed of dynamic tensions and arise spontaneously in the individual and collective psyche. Archetypes are autonomous energies common to the human species. They give the psyche its dynamic properties and help organize it. Their effects can be seen in many forms and across cultures.
The Transcendent Function: The emergence of the third resolves the split between dynamic polar tensions within the archetypal structure.
The recognition of the spiritual dimension of the human psyche.
The role of images that spontaneously arise in the human psyche (images that interconnect affect, image, and instinct) in communicating the dynamic processes taking place in the personal and collective unconscious; such images can be used to help the ego move in the direction of psychic wholeness.
Recognition of the multiplicity of psyche and psychic life, that there are several organizing principles within the psyche, and that they are at times in conflict.
See also
Ernst Wilhelm von Brücke
Yisrael Salanter
Cathexis
Object relations theory
Reaction formation
Robert Langs
References
Freudian psychology
Psychoanalysis
Social behavior

Social behavior is behavior among two or more organisms within the same species; it encompasses any behavior in which one member affects another through their interaction. Social behavior can be seen as similar to an exchange of goods, with the expectation that when one gives, one will receive something in return. This behavior is affected both by the qualities of the individual and by environmental (situational) factors, and therefore arises from an interaction between the two: the organism and its environment. In regard to humans, this means that social behavior can be determined both by the individual characteristics of the person and by the situation they are in.
A major aspect of social behavior is communication, which is the basis for survival and reproduction. Social behavior is said to be determined by two different processes that can either work together or oppose one another. The dual-systems model of reflective and impulsive determinants of social behavior came out of the realization that behavior cannot be determined by one single factor. Instead, behavior can arise deliberately (with awareness and intent) or by pure impulse. These determining factors can operate in different situations and moments, and can even oppose one another. While at times one can behave with a specific goal in mind, at other times one can behave without rational control, driven by impulse instead.
There are also distinctions between different types of social behavior, such as mundane versus defensive social behavior. Mundane social behavior results from day-to-day interactions and consists of behaviors learned as one is exposed to those different situations. Defensive behavior, on the other hand, arises out of impulse, when one is faced with conflicting desires.
Development
Social behavior constantly changes as one continues to grow and develop, reaching different stages of life. The development of behavior is deeply tied with the biological and cognitive changes one is experiencing at any given time. This creates general patterns of social behavior development in humans. Just as social behavior is influenced by both the situation and an individual's characteristics, the development of behavior is due to the combination of the two as well—the temperament of the child along with the settings they are exposed to.
Culture (the parents and individuals that influence socialization in children) plays a large role in the development of a child's social behavior, as the parents or caregivers are typically those who decide the settings and situations that the child is exposed to. The various settings the child is placed in (for example, the playground and classroom) form habits of interaction and behavior, insofar as the child is exposed to certain settings more frequently than others. What takes particular precedence in the influence of the setting are the people the child must interact with: their age, sex, and at times culture.
Emotions also play a large role in the development of social behavior, as they are intertwined with the way an individual behaves. Through social interactions, emotion is understood through various verbal and nonverbal displays, and thus plays a large role in communication. Many of the processes that occur in the brain and underlie emotion often greatly correlate with the processes needed for social behavior as well. A major aspect of interaction is understanding how the other person thinks and feels; being able to detect emotional states becomes necessary for individuals to effectively interact with one another and behave socially.
As the child continues to gain social information, their behavior develops accordingly. Children must learn how to behave according to the interactions and people relevant to a certain setting, and therefore begin to intuitively know the appropriate form of social interaction depending on the situation. Behavior thus changes continually as required, an adjustment that comes with maturity. A child must learn to balance their own desires with those of the people they interact with, and this ability to correctly respond to contextual cues and understand the intentions and desires of another person improves with age. That said, the individual characteristics of the child (their temperament) are important to understanding how the individual learns social behaviors and cues, and this learning is not consistent across all children.
Patterns of development across the lifespan
When studying patterns of biological development across the human lifespan, there are certain patterns that are well-maintained across humans. These patterns can often correspond with social development, and biological changes lead to respective changes in interactions.
In pre- and post-natal infancy, the behavior of the infant is correlated with that of the caregiver, and the development of social behavior is influenced by the mother's reactions to the child's emotional displays. Already in infancy, an awareness of strangers develops, whereby the individual is able to identify and distinguish between people.
Come childhood, the individual begins to attend more to their peers, and communication begins to take a verbal form. Children also begin to classify themselves on the basis of their gender and other salient qualities, such as race and age.
When the child reaches school age, they typically become more aware of the structure of society in regards to gender, and of how their own gender plays a role in it. They become more and more reliant on verbal forms of communication, and more likely to form groups and become aware of their own role within the group.
By puberty, general relations among same and opposite sex individuals are much more salient, and individuals begin to behave according to the norms of these situations. With increasing awareness of their sex and stereotypes that go along with it, the individual begins to choose how much they align with these stereotypes, and behaves either according to those stereotypes or not. This is also the time that individuals more often form sexual pairs.
Once the individual reaches child-rearing age, they must begin to change their own behavior in accordance with the major life changes of a developing family. A potential new child requires the parent to modify their behavior to accommodate a new member of the family.
Come senescence and retirement, behavior is more stable as the individual has often established their social circle (whatever it may be) and is more committed to their social structure.
Neural and biological correlates
Neural correlates
With the advent of the field of social cognitive neuroscience came interest in studying the correlates of social behavior within the brain, to see what is happening beneath the surface as organisms act in a social manner. Although there is debate on which particular regions of the brain are responsible for social behavior, some have claimed that the paracingulate cortex is activated when one person is thinking about the motives or aims of another, a means of understanding the social world and behaving accordingly. The medial prefrontal lobe has also been seen to show activation during social cognition. Research on rhesus monkeys has discovered that the amygdala, a region known for expressing fear, was activated specifically when the monkeys were faced with a social situation they had never encountered before. This region of the brain was shown to be sensitive to the fear that comes with a novel social situation, inhibiting social interaction.
Another way of studying the brain regions that may be responsible for social behavior has been to look at patients with brain injuries who have an impairment in social behavior. Lesions in the prefrontal cortex that occur in adulthood can affect the functioning of social behavior. When such lesions or a dysfunction in the prefrontal cortex occur in infancy or early in life, the development of proper moral and social behavior is affected and thus atypical.
Biological correlates
Along with neural correlates, research has investigated what happens within the body that may accompany (and potentially modulate) social behavior. Vasopressin is a posterior pituitary hormone that may play a role in affiliation in young rats; it has also been associated with paternal behavior in prairie voles. Efforts have been made to connect this animal research to humans, and such work suggests that vasopressin may play a role in the social responses of human males.
Oxytocin has also been seen to be correlated with positive social behavior, and elevated levels have been shown to potentially help improve social behavior that may have been suppressed due to stress. Thus, targeting levels of oxytocin may play a role in interventions of disorders that deal with atypical social behavior.
Along with vasopressin, serotonin has also been examined in relation to social behavior in humans. It has been found to be associated with human feelings of social connection, and serotonin drops when one is socially isolated or has feelings of social isolation. Serotonin has also been associated with social confidence.
Affect
Positive affect (emotion) has been seen to have a large impact on social behavior, particularly by inducing more helping behavior, cooperation, and sociability. Studies have shown that even subtly inducing positive affect within individuals caused greater social behavior and helping. This phenomenon, however, is not one-directional. Just as positive affect can influence social behavior, social behavior can have an influence on positive affect.
Electronic media
Social behavior has typically been seen as a changing of behaviors relevant to the situation at hand, acting appropriately with the setting one is in. However, with the advent of electronic media, people began to find themselves in situations they may have not been exposed to in everyday life. Novel situations and information presented through electronic media has formed interactions that are completely new to people. While people typically behaved in line with their setting in face-to-face interaction, the lines have become blurred when it comes to electronic media. This has led to a cascade of results, as gender norms started to merge, and people were coming in contact with information they had never been exposed to through face-to-face interaction. A political leader could no longer tailor a speech to just one audience, for their speech would be translated and heard by anyone through the media. People can no longer play drastically different roles when put in different situations, because the situations overlap more as information is more readily available. Communication flows more quickly and fluidly through media, causing behavior to merge accordingly.
Media has also been shown to have an impact on promoting different types of social behavior, such as prosocial and aggressive behavior. For example, violence shown through the media has been seen to lead to more aggressive behavior in its viewers. Research has also been done investigating how media portraying positive social acts, prosocial behavior, could lead to more helping behavior in its viewers. The general learning model was established to study how this process of translating media into behavior works, and why. This model suggests a link between positive media with prosocial behavior and violent media with aggressive behavior, and posits that this is mediated by the characteristics of the individual watching along with the situation they are in. This model also presents the notion that when one is exposed to the same type of media for long periods of time, this could even lead to changes within their personality traits, as they are forming different sets of knowledge and may be behaving accordingly.
In various studies looking specifically at how video games with prosocial content affect behavior, it was shown that exposure influenced subsequent helping behavior in the video-game player. The processes underlying this effect point to prosocial thoughts being more readily available after playing such a game, making the player more likely to behave accordingly. These effects were not only found with video games, but also with music: people listening to songs with aggressive and violent lyrics were more likely to act in an aggressive manner. Likewise, people listening to songs about prosocial acts (relative to a song with neutral lyrics) were shown to express greater helping behaviors and more empathy afterwards. When these songs were played in restaurants, they even led to an increase in tips (relative to neutral lyrics).
Individual and group behavior
Conformity refers to behavior in which an individual, under unconscious pressure from the group, aligns their behavior with that of the majority of the group. Generally speaking, the larger the group, the easier it is for individuals to display conformity. Individuals may submit to the group for two reasons: first, to gain acceptance from the group (normative social influence); second, to obtain important information from the group (informational social influence).
Aggressive and violent behavior
Aggression is an important social behavior that can have both negative consequences (in a social interaction) and adaptive consequences (adaptive in humans and other primates for survival). There are many differences in aggressive behavior, and many of these differences are based on sex.
Verbal, coverbal, and nonverbal social behavior
Verbal and coverbal behaviors
Although most animals can communicate nonverbally, humans have the ability to communicate with both verbal and nonverbal behavior. Verbal behavior is the content of one's spoken word. Verbal and nonverbal behavior intersect in what is known as coverbal behavior, which is nonverbal behavior that contributes to the meaning of verbal speech (e.g., hand gestures used to emphasize the importance of what someone is saying). Although the spoken words convey meaning in and of themselves, one cannot dismiss the coverbal behaviors that accompany them, as they place great emphasis on the thought and importance behind the verbal speech. Therefore, the verbal behaviors and the gestures that accompany them work together to make up a conversation. Although many have posited the idea that nonverbal behavior accompanying speech serves an important role in communication, it is important to note that not all researchers agree. However, in most of the literature on gestures, gestures (unlike body language) can accompany speech in ways that bring inner thoughts to life (often thoughts unable to be expressed verbally). Gestures (coverbal behaviors) and speech occur simultaneously, and develop along the same trajectory within children as well.
Nonverbal behaviors
Nonverbal behavior comprises any change in facial expression or body movement. Communicative nonverbal behavior includes facial and body expressions that are intentionally meant to convey a message to those who are meant to receive it. Nonverbal behavior can serve a specific purpose (i.e. to convey a message), or can be more of an impulse or reflex. Paul Ekman, an influential psychologist, investigated both verbal and nonverbal behavior (and their role in communication) a great deal, emphasizing how difficult it is to empirically test such behaviors. Nonverbal cues can serve the function of conveying a message, thought, or emotion both to the person viewing the behavior and to the person sending these cues.
Disorders involving impairments in social behavior
A number of mental disorders affect social behavior. Social anxiety disorder is a phobic disorder characterized by a fear of being judged by others, which manifests itself as a fear of people in general. This pervasive fear of embarrassing oneself in front of others causes those affected to avoid interactions with other people. Attention deficit hyperactivity disorder is a neurodevelopmental disorder mainly identified by its symptoms of inattention, hyperactivity, and impulsivity. Hyperactivity and impulsivity may lead to hampered social interactions, as one who displays these symptoms may be socially intrusive, unable to maintain personal space, and prone to talking over others. The majority of children who display symptoms of ADHD also have problems with their social behavior. Autism spectrum disorder is a neurodevelopmental disorder that affects the functioning of social interaction and communication. Autistic people may have difficulties in understanding social cues and the emotional states of others.
Learning disabilities are often defined as a specific deficit in academic achievement; however, research has shown that with a learning disability can come social skill deficits as well.
See also
Aggression
Health behavior
Collective animal behavior
Expectancy challenge sociological method
Herd behavior
Social behavior in education
Social learning theory
Social science
Sociality
Socialization
Violent Behavior
References
Sociological terminology
Behavior
Social psychology concepts
Sociology
Role theory
Genetic epistemology

Genetic epistemology or 'developmental theory of knowledge' is a study of the origins (genesis) of knowledge (epistemology) established by Swiss psychologist Jean Piaget. This theory opposes traditional epistemology and unites constructivism and structuralism. Piaget took epistemology as the starting point and adopted the method of genetics, arguing that all knowledge of the child is generated through interaction with the environment.
Aims
The goal of genetic epistemology is to link knowledge to the model of its construction – i.e., the context in which knowledge is gained affects its perception, quality, and degree of retention. Further, genetic epistemology seeks to explain the process of cognitive development (from birth) in four primary stages: sensorimotor (birth to age 2), pre-operational (2–7), concrete operational (7–11), and formal operational (11 years onward).
As an example, consider that for children in the sensorimotor stage, teachers should try to provide a rich and stimulating environment with ample objects to play with. For children in the concrete operational stage, learning activities should involve problems of classification, ordering, location, and conservation using concrete objects. The main focus is on the younger years of development. Assimilation occurs when a learner fits the perception of a new event or object into an existing schema; it is usually used in the context of self-motivation. In accommodation, the learner adjusts existing schemas to fit new experiences according to the outcome of the tasks. The highest form of development is equilibration, which encompasses both assimilation and accommodation as the learner changes how they think to arrive at a better answer.
Piaget believed that knowledge is a biological function that results from the actions of an individual through change. He also stated that knowledge consists of structures, and comes about by the adaptation of these structures with the environment.
Types of knowledge
Piaget proposes three types of knowledge: physical, logical-mathematical, and social knowledge.
Physical knowledge: It refers to knowledge related to objects in the world, which can be acquired through perceptual properties. The acquisition of physical knowledge has been equated with learning in Piaget's theory (Gruber and Voneche, 1995). In other words, thought is fit directly to experience.
Piaget also called his view constructivism, because he firmly believed that knowledge acquisition is a process of continuous self-construction. That is, knowledge is not out there, external to the child and waiting to be discovered. But neither is it wholly preformed within the child, ready to emerge as the child develops with the world surrounding her ... Piaget believed that children actively approach their environments and acquire knowledge through their actions.
See also
Constructivist epistemology
Cognitive psychology
Educational psychology
Evolutionary epistemology
General semantics
Genetic structuralism
Learning styles
Learning theory
Ontogeny recapitulates phylogeny
Theory of cognitive development
Notes
References
Developmental psychology
Educational psychology
History of psychology
Social epistemology
Chaos theory

Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future."
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation $\delta\mathbf{Z}_0$, the two trajectories end up diverging at a rate given by

$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$

where $t$ is the time and $\lambda$ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
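For a one-dimensional map, the maximal Lyapunov exponent can be estimated numerically by averaging the logarithm of the local stretching factor along an orbit. The following is a minimal sketch (the choice of the logistic map, starting point, and iteration counts are illustrative, not drawn from the text); for the logistic map at r = 4 it should approach the known value ln 2 ≈ 0.693.

```python
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def logistic_deriv(x, r=4.0):
    return r * (1.0 - 2.0 * x)

def lyapunov_estimate(x0=0.2, r=4.0, transient=1_000, samples=100_000):
    """Average log|f'(x)| along one long orbit of the logistic map."""
    x = x0
    for _ in range(transient):        # discard the initial transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(samples):
        # |f'(x)| is the local stretching factor between nearby orbits;
        # the (measure-zero) case x = 0.5, where f'(x) = 0, is ignored here
        total += math.log(abs(logistic_deriv(x, r)))
        x = logistic(x, r)
    return total / samples

print(lyapunov_estimate(), math.log(2))   # both values should be near 0.6931
```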
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
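The doubling example can be checked in a few lines. In this minimal sketch (initial values and perturbation size are illustrative choices), the separation between two nearby points doubles at every step, demonstrating sensitive dependence, yet the dynamics never fold trajectories back together, so there is no mixing and no chaos.

```python
# Repeated doubling: sensitive dependence without topological mixing.
x, y = 1.0, 1.0 + 1e-9               # two initial conditions 1e-9 apart
for step in range(1, 31):
    x, y = 2 * x, 2 * y              # the map x -> 2x
    if step % 10 == 0:
        print(f"step {step}: separation = {y - x:.3e}")
# The separation doubles every step (about 1.07 after 30 steps), yet the
# behavior is simple: every nonzero point just runs off toward infinity,
# and intervals never fold back over one another, so there is no mixing.
```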
Topological transitivity
A map $f$ is said to be topologically transitive if for any pair of non-empty open sets $U, V \subset X$, there exists $k > 0$ such that $f^{k}(U) \cap V \neq \emptyset$. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by $x \to 4x(1-x)$ is one of the simplest systems with density of periodic orbits. For example, $\tfrac{5-\sqrt{5}}{8} \to \tfrac{5+\sqrt{5}}{8} \to \tfrac{5-\sqrt{5}}{8}$ (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
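The period-2 orbit quoted above can be verified numerically; this short sketch simply iterates the map and confirms that the two values swap under the logistic map at r = 4.

```python
from math import sqrt

f = lambda x: 4 * x * (1 - x)        # logistic map at r = 4
p = (5 - sqrt(5)) / 8                # approximately 0.3454915
q = f(p)                             # approximately 0.9045085
assert abs(f(q) - p) < 1e-12         # f maps q back to p: a period-2 orbit
print(p, q, f(q))
```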
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
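As an illustration of a strange attractor in a discrete system, the following sketch iterates the Hénon map with its classic parameter values a = 1.4 and b = 0.3 (the initial point, transient length, and sample count are arbitrary demonstration choices); after a short transient, the iterates trace out the map's folded, fractal attractor.

```python
# Iterate the Henon map x' = 1 - a*x^2 + y, y' = b*x and collect points
# that land on its strange attractor.
def henon_orbit(n, a=1.4, b=0.3, x0=0.0, y0=0.0, transient=100):
    x, y = x0, y0
    pts = []
    for i in range(n + transient):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= transient:            # keep only points after the transient
            pts.append((x, y))
    return pts

points = henon_orbit(10_000)
print(points[:3])   # the points scatter over the folded, fractal attractor
```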
Coexisting attractors
In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:

$$\begin{aligned} \frac{dx}{dt} &= \sigma(y - x), \\ \frac{dy}{dt} &= x(\rho - z) - y, \\ \frac{dz}{dt} &= xy - \beta z, \end{aligned}$$

where $x$, $y$, and $z$ make up the system state, $t$ is time, and $\sigma$, $\rho$, $\beta$ are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
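The Lorenz system above is straightforward to integrate numerically. The following sketch uses a classical fourth-order Runge–Kutta step with the canonical chaotic parameters σ = 10, ρ = 28, β = 8/3; the step size, duration, and initial state are illustrative choices, not values from the text.

```python
# Integrate the Lorenz system with classical fourth-order Runge-Kutta.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(10_000):               # 100 time units at dt = 0.01
    state = rk4_step(lorenz, state, 0.01)
print(state)   # a point on (or very near) the butterfly-shaped attractor
```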
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps:

$$\psi_{n+1}(\vec r, t) = \int K(\vec r - \vec r^{\,\prime}, t)\, f\big[\psi_{n}(\vec r^{\,\prime}, t)\big]\, d\vec r^{\,\prime},$$

where the kernel $K(\vec r - \vec r^{\,\prime}, t)$ is a propagator derived as the Green's function of a relevant physical system, and $f[\psi_n]$ might be a logistic-map-like or a complex map. For examples of complex maps, the Julia set $f[\psi] = \psi^{2}$ or the Ikeda map $\psi_{n+1} = A + B\psi_{n} e^{i(|\psi_{n}|^{2} + C)}$ may serve. When wave propagation problems at distance $L = ct$ with wavelength $\lambda = 2\pi/k$ are considered, the kernel $K$ may take the form of the Green's function for the Schrödinger equation:

$$K(\vec r - \vec r^{\,\prime}, L) = \frac{ik\,\exp[ikL]}{2\pi L} \exp\!\left[\frac{ik|\vec r - \vec r^{\,\prime}|^{2}}{2L}\right].$$
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

$$J\!\left(\dddot{x}, \ddot{x}, \dot{x}, x\right) = 0$$

are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of $x$ is:

$$\dddot{x} + A\ddot{x} + \dot{x} - |x| + 1 = 0.$$
Here, A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with a jerk circuit in which the required nonlinearity is brought about by two diodes.
In such a circuit, all resistors are of equal value, except $R_A = R/A = 5R/3$, and all capacitors are of equal size. The dominant frequency is $1/(2\pi RC)$. The output of op amp 0 corresponds to the x variable, the output of 1 to its first derivative, and the output of 2 to its second derivative.
Similar circuits only require one diode or no diodes at all.
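The absolute-value jerk equation above can also be simulated directly by rewriting it as three coupled first-order equations. The sketch below uses simple Euler stepping; the step size and initial state are illustrative guesses and may need tuning to land cleanly in the chaotic regime.

```python
# Jerk equation x''' = -A x'' - x' + |x| - 1, rewritten as a first-order
# system in (x, v, a) = (position, velocity, acceleration).
A = 0.6                               # = 3/5, the chaotic parameter value
x, v, a = 0.1, 0.0, 0.0               # illustrative initial state
dt = 0.001                            # illustrative Euler step size
for _ in range(500_000):              # 500 time units
    jerk = -A * a - v + abs(x) - 1.0
    x, v, a = x + dt * v, v + dt * a, a + dt * jerk
print(x, v, a)                        # the state wanders over an attractor
```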
See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
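A minimal numerical sketch of Kuramoto-style synchronization follows (all parameter values are illustrative, not taken from the text): each oscillator has a random natural frequency and is pulled toward the population's mean phase, and for coupling K above a critical value the order parameter r rises well above the incoherent level of roughly 1/√N.

```python
import cmath, math, random

random.seed(1)
N, K, dt = 100, 2.0, 0.01                  # illustrative parameter choices
omega = [random.gauss(0.0, 1.0) for _ in range(N)]            # natural frequencies
theta = [random.uniform(0.0, 2 * math.pi) for _ in range(N)]  # initial phases

for _ in range(5000):
    # complex mean field r * exp(i * psi) summarizes the whole population
    mean_field = sum(cmath.exp(1j * t) for t in theta) / N
    r, psi = abs(mean_field), cmath.phase(mean_field)
    # mean-field Kuramoto update: dtheta_i/dt = omega_i + K r sin(psi - theta_i)
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]

r_final = abs(sum(cmath.exp(1j * t) for t in theta) / N)
print(f"order parameter r = {r_final:.3f}")  # ~0 if incoherent, -> 1 if synchronized
```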
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
History
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measure imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift set out in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
Lorenz's pioneering contributions to chaotic modeling
Throughout his career, Professor Edward Lorenz authored a total of 61 research papers, of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on developing diverse models aimed at uncovering sensitive dependence on initial conditions (SDIC) and other chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed quasi-geostrophic systems, the conservative vorticity equation, the Rayleigh–Bénard convection equations, and the shallow water equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964).
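Chaotic solutions of the logistic map are easy to reproduce numerically. The following minimal Python sketch iterates the map x_{n+1} = r·x_n(1 − x_n) in a chaotic regime; the parameter and initial condition are illustrative choices, not values taken from Lorenz 1964:

```python
# Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
# r = 3.9 and x0 = 0.2 are illustrative choices, not Lorenz's values.
r = 3.9
x = 0.2
trajectory = []
for _ in range(20):
    x = r * x * (1 - x)
    trajectory.append(round(x, 4))
print(trajectory)  # irregular, non-repeating values, all bounded in (0, 1)
```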
In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. A reprint book containing invited papers that deepen our understanding of both butterfly effects was published to mark the 50th anniversary of the metaphorical butterfly effect.
A popular but inaccurate analogy for chaos
The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:
For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation grows monotonically with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos, but that it better illustrated the simpler phenomenon of instability, since the verse implicitly suggests that subsequent small events will not reverse the outcome. On this analysis, the verse only indicates divergence, not boundedness, and boundedness is important for the finite size of a butterfly pattern. In a later study, this characteristic of the verse was denoted "finite-time sensitive dependence".
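This combination of divergence and boundedness is easy to check numerically. The sketch below reuses the illustrative logistic map from above (not any model of Lorenz's) and runs two trajectories whose initial conditions differ by 10⁻¹⁰: their separation grows at first, but then saturates, because both trajectories remain confined to the bounded interval (0, 1).

```python
# Two logistic-map trajectories with a tiny initial separation.
# The gap grows roughly exponentially at first, then saturates:
# divergence plus boundedness, i.e. finite-time sensitive dependence.
r = 3.9
x, y = 0.2, 0.2 + 1e-10
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))  # grows at first, then stays below 1
```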
Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives, including image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking schemes, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. The similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many of the DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.
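The following Python sketch illustrates the general idea; it is a toy construction for exposition, not any specific published scheme, and it is not secure for real use. The key is the pair (r, x0), the logistic map's control parameter and initial condition, from which a keystream is derived and XORed with the data in the manner of a symmetric stream cipher.

```python
def chaotic_keystream(r, x0, length):
    """Derive bytes from logistic-map iterates. Toy example; not secure."""
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)  # crude quantization of the state
    return bytes(out)

def xor_cipher(data, r, x0):
    keystream = chaotic_keystream(r, x0, len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

ciphertext = xor_cipher(b"attack at dawn", 3.99, 0.123456789)
plaintext = xor_cipher(ciphertext, 3.99, 0.123456789)  # XOR is self-inverse
print(ciphertext.hex(), plaintext)
```

Decryption succeeds only when both parties supply exactly the same parameter and initial condition, which is the sense in which the map's sensitivity plays the role of a key.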
Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of refining their interactions with the environment through trial and error, robots can rely on predictive models built with the help of chaos theory.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in environmental systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence both constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to reality (for example, Perry & Wall 1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables that reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, which in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population.
Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos can be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates, as well as in embedding shocks due to external events such as COVID-19.
Finite predictability in weather and climate
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
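The arithmetic behind pairing a five-day doubling time with a two-week limit can be sketched as follows; this is an illustrative back-of-envelope reading assuming simple exponential error growth, not the committee's actual derivation.

```latex
% Illustrative error-growth arithmetic (assumed exponential growth):
\varepsilon(t) = \varepsilon_0 \, 2^{t/\tau}, \qquad \tau = 5 \text{ days}.
% Over a two-week window the amplification factor is
2^{14/5} \approx 7,
% so an initial error of roughly one seventh of the saturation level
% would reach saturation, beyond which forecasts lose skill, in about
% two weeks.
```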
AI-extended modeling framework
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions has benefited greatly from chaos theory. Coal mines have always been dangerous places where frequent natural gas leaks cause many deaths, and until recently there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from a lack of reproducibility, poor external validity, and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass, as well as Mandell and Selz, found that no EEG study had as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first-delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute to chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, since the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it forms. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (illustrated by the Biham–Middleton–Levine (BML) traffic model).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimensional chaotic dynamics.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt-A-Whirl
Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos as topological supersymmetry breaking
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Shadowing lemma
Synchronization of chaos
Unintended consequence
People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
Further reading
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation; New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis : Chaos and Neurodynamics Approach, Lambert, 2012.
External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in Newscientist featuring similarities of evolution and non-linear systems including fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller
Complex systems theory
Computational fields of study
Cybernetics | Cybernetics is the transdisciplinary study of circular processes such as feedback systems where outputs are also inputs. It is concerned with general principles that are relevant across multiple contexts, including in ecological, technological, biological, cognitive and social systems and also in practical activities such as designing, learning, and managing.
The field is named after an example of circular causal feedback—that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs) means "helmsperson"). In steering a ship, the helmsperson adjusts their steering in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from cross winds and tide.
Cybernetics' transdisciplinary character has meant that it intersects with a number of other fields, leading to it having both wide influence and diverse interpretations.
Definitions
Cybernetics has been defined in a variety of ways, reflecting "the richness of its conceptual base." One of the best known definitions is that of the American scientist Norbert Wiener, who characterised cybernetics as concerned with "control and communication in the animal and the machine." Another early definition is that of the Macy cybernetics conferences, where cybernetics was understood as the study of "circular causal and feedback mechanisms in biological and social systems." Margaret Mead emphasised the role of cybernetics as "a form of cross-disciplinary thought which made it possible for members of many disciplines to communicate with each other easily in a language which all could understand."
Other definitions include: "the art of governing or the science of government" (André-Marie Ampère); "the art of steersmanship" (Ross Ashby); "the study of systems of any nature which are capable of receiving, storing, and processing information so as to use it for control" (Andrey Kolmogorov); and "a branch of mathematics dealing with problems of control, recursiveness, and information, focuses on forms and the patterns that connect" (Gregory Bateson).
Etymology
The Ancient Greek term κυβερνητικός (kubernētikos, '(good at) steering') appears in Plato's Republic and Alcibiades, where the metaphor of a steersman is used to signify the governance of people. The French word cybernétique was also used in 1834 by the physicist André-Marie Ampère to denote the sciences of government in his classification system of human knowledge.
According to Norbert Wiener, the word cybernetics was coined by a research group involving himself and Arturo Rosenblueth in the summer of 1947. It has been attested in print since at least 1948 through Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine.
Wiener explains that the term was chosen to recognize James Clerk Maxwell's 1868 publication on feedback mechanisms involving governors, noting that the term governor is also derived from κυβερνήτης (kubernḗtēs) via a Latin corruption, gubernator. Finally, Wiener motivates the choice by noting that the steering engines of a ship are "one of the earliest and best-developed forms of feedback mechanisms".
History
First wave
The initial focus of cybernetics was on parallels between regulatory feedback processes in biological and technological systems. Two foundational articles were published in 1943: "Behavior, Purpose and Teleology" by Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, based on the research on living organisms that Rosenblueth did in Mexico, and the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts. The foundations of cybernetics were then developed through a series of transdisciplinary conferences funded by the Josiah Macy, Jr. Foundation between 1946 and 1953. The conferences were chaired by McCulloch, and participants included Ross Ashby, Gregory Bateson, Heinz von Foerster, Margaret Mead, John von Neumann, and Norbert Wiener. In the UK, similar focuses were explored by the Ratio Club, an informal dining club of young psychiatrists, psychologists, physiologists, mathematicians, and engineers that met between 1949 and 1958. Wiener introduced the neologism cybernetics to denote the study of "teleological mechanisms" and popularized it through the book Cybernetics: Or Control and Communication in the Animal and the Machine.
During the 1950s, cybernetics was developed as a primarily technical discipline, as in Qian Xuesen's 1954 Engineering Cybernetics. In the Soviet Union, cybernetics was initially regarded with suspicion but became accepted from the mid to late 1950s.
By the 1960s and 1970s, however, cybernetics' transdisciplinarity fragmented, with its technical focuses splitting off into distinct fields. Artificial intelligence (AI) was founded as a distinct discipline at the Dartmouth workshop in 1956, differentiating itself from the broader cybernetics field. After some uneasy coexistence, AI gained funding and prominence. Consequently, cybernetic sciences such as the study of artificial neural networks were downplayed. Similarly, computer science became defined as a distinct academic discipline in the 1950s and early 1960s.
Second wave
The second wave of cybernetics came to prominence from the 1960s onwards, with its focus inflecting away from technology toward social, ecological, and philosophical concerns. It was still grounded in biology, notably Maturana and Varela's autopoiesis, and built on earlier work on self-organising systems and the presence of anthropologists Mead and Bateson in the Macy meetings. The Biological Computer Laboratory, founded in 1958 and active until the mid-1970s under the direction of Heinz von Foerster at the University of Illinois at Urbana–Champaign, was a major incubator of this trend in cybernetics research.
Focuses of the second wave of cybernetics included management cybernetics, such as Stafford Beer's biologically inspired viable system model; work in family therapy, drawing on Bateson; social systems, such as in the work of Niklas Luhmann; epistemology and pedagogy, such as in the development of radical constructivism. Cybernetics' core theme of circular causality was developed beyond goal-oriented processes to concerns with reflexivity and recursion. This was especially so in the development of second-order cybernetics (or the cybernetics of cybernetics), developed and promoted by Heinz von Foerster, which focused on questions of observation, cognition, epistemology, and ethics.
The 1960s onwards also saw cybernetics begin to develop exchanges with the creative arts, design, and architecture, notably with the Cybernetic Serendipity exhibition (ICA, London, 1968), curated by Jasia Reichardt, and the unrealised Fun Palace project (London, 1964 onwards), where Gordon Pask was consultant to architect Cedric Price and theatre director Joan Littlewood.
Third wave
From the 1990s onwards, there has been a renewed interest in cybernetics from a number of directions. Early cybernetic work on artificial neural networks has been returned to as a paradigm in machine learning and artificial intelligence. The entanglements of society with emerging technologies have led to exchanges with feminist technoscience and posthumanism. Re-examinations of cybernetics' history have seen science studies scholars emphasising cybernetics' unusual qualities as a science, such as its "performative ontology". Practical design disciplines have drawn on cybernetics for theoretical underpinning and transdisciplinary connections. Emerging topics include how cybernetics' engagements with social, human, and ecological contexts might come together with its earlier technological focus, whether as a critical discourse or a "new branch of engineering".
Key concepts and theories
The central theme in cybernetics is feedback. Feedback is a process where the observed outcomes of actions are taken as inputs for further action in ways that support the pursuit, maintenance, or disruption of particular conditions, forming a circular causal relationship. In steering a ship, the helmsperson maintains a steady course in a changing environment by adjusting their steering in continual response to the effect it is observed as having.
Other examples of circular causal feedback include: technological devices such as the thermostat, where the action of a heater responds to measured changes in temperature regulating the temperature of the room within a set range, and the centrifugal governor of a steam engine, which regulates the engine speed; biological examples such as the coordination of volitional movement through the nervous system and the homeostatic processes that regulate variables such as blood sugar; and processes of social interaction such as conversation.
Negative feedback processes are those that maintain particular conditions by reducing (hence 'negative') the difference from a desired state, such as where a thermostat turns on a heater when it is too cold and turns a heater off when it is too hot. Positive feedback processes increase (hence 'positive') the difference from a desired state. An example of positive feedback is when a microphone picks up the sound that it is producing through a speaker, which is then played through the speaker, and so on.
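As a concrete illustration, the following Python sketch simulates the thermostat example with made-up constants: the heater's action depends on the measured temperature, and the temperature depends on the heater's action, closing the negative feedback loop and holding the room within a set range.

```python
# Minimal negative-feedback loop: a bang-bang thermostat.
# All numbers are illustrative.
temperature = 15.0   # degrees Celsius
set_point = 20.0
heater_on = False

for step in range(30):
    # Controller: the observed outcome (temperature) is the input to action.
    if temperature < set_point - 0.5:
        heater_on = True
    elif temperature > set_point + 0.5:
        heater_on = False
    # Environment: the room warms when heated, otherwise cools toward 10 C.
    if heater_on:
        temperature += 1.0
    else:
        temperature -= 0.2 * (temperature - 10.0)
    print(step, round(temperature, 2), heater_on)
```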
In addition to feedback, cybernetics is concerned with other forms of circular processes including: feedforward, recursion, and reflexivity.
Other key concepts and theories in cybernetics include:
Autopoiesis
Black box
Conversation theory
Double bind theory: Double binds are patterns created in interaction between two or more parties in ongoing relationships where there is a contradiction between messages at different logical levels that creates a situation with emotional threat but no possibility of withdrawal from the situation and no way to articulate the problem. The theory was first described by Gregory Bateson and colleagues in the 1950s with regard to the origins of schizophrenia, but it is also characteristic of many other social contexts.
Experimental epistemology
Good regulator theorem
Perceptual control theory: A model of behavior based on the properties of negative feedback (cybernetic) control loops. A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions) but its input, "perception". The theory came to be known as "perceptual control theory" to distinguish it from the work of control theorists who assert or assume that it is the system's output that is controlled; a toy simulation of this distinction is sketched after this list. Method of levels is an approach to psychotherapy based on perceptual control theory in which the therapist aims to help the patient shift their awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.
Radical constructivism
Second-order cybernetics: Also known as the cybernetics of cybernetics, second-order cybernetics is the recursive application of cybernetics to itself and the practice of cybernetics according to such a critique.
Self-organisation
Social systems theory
Variety and Requisite Variety
Viable system model
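The PCT point above, that a system's output varies in whatever way is needed to keep its perceived input near a reference, can be illustrated with a toy proportional control loop in Python. The gains and disturbance values are made up for illustration; this is not a full PCT model.

```python
# Toy illustration of perceptual control: output varies so that
# perception (the input) stays near the reference despite disturbances.
reference = 10.0
output = 0.0
gain = 0.5

for t in range(40):
    disturbance = 5.0 if t < 20 else -5.0      # the environment changes
    perception = output + disturbance          # input = own action + world
    output += gain * (reference - perception)  # act to reduce the error
    if t % 10 == 9:
        print(t, round(perception, 2), round(output, 2))
# Perception settles near 10 in both halves; output shifts from ~5 to ~15.
```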
Related fields and applications
Cybernetics' central concept of circular causality is of wide applicability, leading to diverse applications and relations with other fields. Many of the initial applications of cybernetics focused on engineering, biology, and exchanges between the two, such as medical cybernetics and robotics and topics such as neural networks, heterarchy. In the social and behavioral sciences, cybernetics has included and influenced work in anthropology, sociology, economics, family therapy, cognitive science, and psychology.
As cybernetics has developed, it broadened in scope to include work in management, design, pedagogy, and the creative arts, while also developing exchanges with constructivist philosophies, counter-cultural movements, and media studies. The development of management cybernetics has led to a variety of applications, notably to the national economy of Chile under the Allende government in Project Cybersyn. In design, cybernetics has been influential on interactive architecture, human-computer interaction, design research, and the development of systemic design and metadesign practices.
Cybernetics is often understood within the context of systems science, systems theory, and systems thinking. Systems approaches influenced by cybernetics include critical systems thinking, which incorporates the viable system model; systemic design; and system dynamics, which is based on the concept of causal feedback loops.
Many fields trace their origins in whole or part to work carried out in cybernetics, or were partially absorbed into cybernetics when it was developed. These include artificial intelligence, bionics, cognitive science, control theory, complexity science, computer science, information theory and robotics. Some aspects of modern artificial intelligence, particularly the social machine, are often described in cybernetic terms.
Journals and societies
Academic journals with focuses in cybernetics include:
Constructivist Foundations
Cybernetics and Human Knowing
Cybernetics and Systems
Enacting Cybernetics. An open access journal published by the Cybernetics Society and hosted by Ubiquity Press.
Biological Cybernetics
IEEE Transactions on Systems, Man, and Cybernetics: Systems
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Cybernetics
IEEE Transactions on Computational Social Systems
Kybernetes
Academic societies primarily concerned with cybernetics or aspects of it include:
American Society for Cybernetics (ASC), founded in 1964
British Cybernetics Society (CybSoc)
Metaphorum: The Metaphorum group was set up in 2003 to develop Stafford Beer's legacy in organizational cybernetics. The group was born in a syntegration in 2003 and has since held an annual conference on issues related to organizational cybernetics theory and practice.
IEEE Systems, Man, and Cybernetics Society
RC51 Sociocybernetics: RC51 is a research committee of the International Sociological Association promoting the development of (socio)cybernetic theory and research within the social sciences.
SCiO (Systems and Complexity in Organisation) is a community of systems practitioners who believe that traditional approaches to running organisations are no longer capable of dealing with the complexity and turbulence faced by organisations today, and are responsible for many of the problems we see. SCiO delivers a masters-level apprenticeship and a certification in systems practice.
See also
Further reading
Ascott, Roy (1967). Behaviourist Art and the Cybernetic Vision. Cybernetica, Journal of the International Association for Cybernetics (Namur), 10, pp. 25–56
François, Charles (1999). "Systemics and cybernetics in a historical perspective". In: Systems Research and Behavioral Science. Vol 16, pp. 203–219 (1999)
Hayles, N. Katherine (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, Chicago: The University of Chicago Press. ISBN 9780226321462
Heylighen, Francis, and Cliff Joslyn (2002). "Cybernetics and Second Order Cybernetics", in: R.A. Meyers (ed.), Encyclopedia of Physical Science & Technology (3rd ed.), Vol. 4, (Academic Press, San Diego), pp. 155–169.
Ilgauds, Hans Joachim (1980), Norbert Wiener, Leipzig.
Mariátegui, José-Carlos / Maulen, D. (eds.) Special issue on "Cybernetics in Latin America: Contexts, Developments, Perceptions and Impacts", AI & Society, 37, 2022.
von Foerster, Heinz (1995), Ethics and Second-Order Cybernetics.
External links
General
Norbert Wiener and Stefan Odobleja - A Comparative Analysis
Reading List for Cybernetics
Principia Cybernetica Web
Web Dictionary of Cybernetics and Systems
Glossary Slideshow (136 slides)
Societies and Journals
American Society for Cybernetics
IEEE Systems, Man, & Cybernetics Society
International Society for Cybernetics and Systems Research
The Cybernetics Society
Transhumanism
Science and technology studies
Automation
Contextualism | Contextualism, also known as epistemic contextualism, is a family of views in philosophy which emphasize the context in which an action, utterance, or expression occurs. Proponents of contextualism argue that, in some important respect, the action, utterance, or expression can only be understood relative to that context. Contextualist views hold that philosophically controversial concepts, such as "meaning P", "knowing that P", "having a reason to A", and possibly even "being true" or "being right" only have meaning relative to a specified context. Other philosophers contend that context-dependence leads to complete relativism.
In ethics, "contextualist" views are often closely associated with situational ethics, or with moral relativism.
Contextualism in architecture is a theory of design where modern building types are harmonized with urban forms usual to a traditional city.
In epistemology, contextualism is the treatment of the word 'knows' as context-sensitive. Context-sensitive expressions are ones that "express different propositions relative to different contexts of use". For example, some terms generally considered context-sensitive are indexicals, such as 'I', 'here', and 'now'; while 'I' has a constant linguistic meaning in all contexts of use, whom it refers to varies with context. Similarly, epistemic contextualists argue that the word 'knows' is context sensitive, expressing different relations in some different contexts.
Overview
Contextualism was introduced, in part, to undermine skeptical arguments that have this basic structure:
I don't know that I am not in a skeptical scenario H (e.g., I'm not a brain in a vat)
If I don't know that H is not the case, then I don't know an ordinary proposition O (e.g., I have hands)
Conclusion: Therefore, I don't know O
The contextualist solution is not to deny any premise, nor to say that the argument does not follow, but to link the truth value of (3) to the context, and to say that we can reject (3) in contexts—like everyday conversational contexts—where we have different requirements to say we know.
The main tenet of contextualist epistemology is that knowledge attributions are context-sensitive, and the truth values of sentences using "know" depend on the context in which they are uttered. In a context governed by skeptical standards, a statement like 'I know that I have hands' would be false. The same proposition uttered in an ordinary context—e.g., in a cafe with friends—would be true, and its negation would be false. When we participate in philosophical discourses of the skeptical sort, we seem to lose our knowledge; once we leave the skeptical context, we can truthfully say we have knowledge.
That is, when we attribute knowledge to someone, the context in which we use the term 'knowledge' determines the standards relative to which "knowledge" is being attributed (or denied). If we use it in everyday conversational contexts, the contextualist maintains, most of our claims to "know" things are true, despite skeptical attempts to show we know little or nothing. But if the term 'knowledge' is used when skeptical hypotheses are being discussed, we count as "knowing" very little, if anything. Contextualists use this to explain why skeptical arguments can be persuasive, while at the same time protecting the correctness of our ordinary claims to "know" things. The theory does not hold that someone has knowledge at one moment and lacks it the next, which would not be a satisfying epistemological answer. What contextualism entails is that in one context an utterance of a knowledge attribution can be true, while in a context with higher standards for knowledge the same statement can be false. This happens in the same way that 'I' can correctly be used (by different people) to refer to different people at the same time.
What varies with context is how well-positioned a subject must be with respect to a proposition to count as "knowing" it. Contextualism in epistemology then is a semantic thesis about how 'knows' works in English, not a theory of what knowledge, justification, or strength of epistemic position consists in. However, epistemologists combine contextualism with views about what knowledge is to address epistemological puzzles and issues, such as skepticism, the Gettier problem, and the Lottery paradox.
Contextualist accounts of knowledge became increasingly popular toward the end of the 20th century, particularly as responses to the problem of skepticism. Contemporary contextualists include Michael Blome-Tillmann, Michael Williams, Stewart Cohen, Keith DeRose, David Lewis, Gail Stine, and George Mattey.
The standards for attributing knowledge to someone, the contextualist claims, vary from one user's context to the next. Thus, if I say "John knows that his car is in front of him", the utterance is true if and only if (1) John believes that his car is in front of him, (2) the car is in fact in front of him, and (3) John meets the epistemic standards that my (the speaker's) context selects. This is a loose contextualist account of knowledge, and there are many significantly different theories of knowledge that can fit this contextualist template and thereby come in a contextualist form.
For instance, an evidentialist account of knowledge can be an instance of contextualism if it is held that strength of justification is a contextually varying matter. And one who accepts a relevant-alternatives account of knowledge can be a contextualist by holding that the range of relevant alternatives is sensitive to conversational context. DeRose adopts a type of modal or "safety" (as it has since come to be known) account on which knowledge is a matter of one's belief as to whether or not p is the case matching the fact of the matter, not only in the actual world, but also in sufficiently close possible worlds: knowledge amounts to there being no "nearby" worlds in which one goes wrong with respect to p. But how close is sufficiently close? It is here that DeRose takes the modal account of knowledge in a contextualist direction, for the range of "epistemically relevant worlds" is what varies with context: in high-standards contexts one's belief must match the fact of the matter through a much wider range of worlds than is relevant to low-standards contexts.
It is claimed that neurophilosophy has the goal of contextualizing.
Contextualist epistemology has been criticized by several philosophers. Contextualism is opposed to any general form of Invariantism, which claims that knowledge is not context-sensitive (i.e. it is invariant). More recent criticism has been in the form of rival theories, including Subject-Sensitive Invariantism (SSI), mainly due to the work of John Hawthorne (2004), and Interest-Relative Invariantism (IRI), due to Jason Stanley (2005). SSI claims that it is the context of the subject of the knowledge attribution that determines the epistemic standards, whereas Contextualism maintains it is the attributor. IRI, on the other hand, argues that it is the context of the practical interests of the subject of the knowledge attribution that determines the epistemic standards. Stanley writes that bare IRI is "simply the claim that whether or not someone knows that p may be determined in part by practical facts about the subject's environment." ("Contextualism" is a misnomer for either form of Invariantism, since "Contextualism" among epistemologists is considered to be restricted to a claim about the context-sensitivity of knowledge attributions (or the word "knows"). Thus, any view which maintains that something other than knowledge attributions are context-sensitive is not, strictly speaking, a form of Contextualism.)
An alternative to contextualism called contrastivism has been proposed by Jonathan Schaffer. Contrastivism, like contextualism, uses semantic approaches to tackle the problem of skepticism.
Recent work in experimental philosophy has taken an empirical approach to testing the claims of contextualism and related views. This research has proceeded by conducting experiments in which ordinary non-philosophers are presented with vignettes, then asked to report on the status of the knowledge ascription. The studies address contextualism by varying the context of the knowledge ascription, e.g. how important it is that the agent in the vignette has accurate knowledge.
In the studies completed up to 2010, no support for contextualism was found: stakes had no impact on evidence. More specifically, non-philosophical intuitions about knowledge attributions were not affected by the importance, to the potential knower, of the accuracy of that knowledge.
See also
Anekantavada
Degrees of truth
Exclusive disjunction
False dilemma
Fuzzy logic
Logical disjunction
Logical value
Multi-valued logic
Perspectivism
Principle of bivalence
Propositional attitude
Propositional logic
Relativism
Rhizome (philosophy)
Semiotic anthropology
Truth
References
Annis, David. 1978. "A Contextualist Theory of Epistemic Justification", in American Philosophical Quarterly, 15: 213–219.
Cappelen, H. & Lepore, E. 2005. Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism, Blackwell Publishing.
Cohen, Stuart. 1998. "Contextualist Solutions to Epistemological Problems: Scepticism, Gettier, and the Lottery." Australasian Journal of Philosophy, 76: 289–306.
Cohen, Stuart. 1999. "Contextualism, Skepticism, and Reasons", in Tomberlin 1999.
DeRose, Keith. 1992. "Contextualism and Knowledge Attributions", Philosophy and Phenomenological Research 52: 913–929.
DeRose, Keith. 1995. "Solving the Skeptical Problem," Philosophical Review 104: 1-52.
DeRose, Keith. 1999. "Contextualism: An Explanation and Defense", in Greco and Sosa 1999.
DeRose, Keith. 2002. "Assertion, Knowledge, and Context," Philosophical Review 111: 167–203.
DeRose, Keith. 2009. The Case for Contextualism: Knowledge, Skepticism and Context, Vol. 1, Oxford: Oxford University Press.
Feldman, Richard. 1999. "Contextualism and Skepticism", in Tomberlin 1999.
Greco, J. & Sosa, E. 1999. Blackwell Guide to Epistemology, Blackwell Publishing.
Hawthorne, John. 2004. Knowledge and Lotteries, Oxford: Oxford University Press.
Mackie, J.L. 1977. Ethics: Inventing Right and Wrong, Viking Press.
May, Joshua, Sinnott-Armstrong, Walter, Hull, Jay G. & Zimmerman, Aaron. 2010. "Practical Interests, Relevant Alternatives, and Knowledge Attributions: An Empirical Study", Review of Philosophy and Psychology (formerly European Review of Philosophy), special issue on Psychology and Experimental Philosophy ed. by Edouard Machery, Tania Lombrozo, & Joshua Knobe, Vol. 1, No. 2, pp. 265–273.
Price, A. W. 2008. Contextuality in Practical Reason, Oxford University Press.
Schaffer, Jonathan. 2004. "From Contextualism to Contrastivism," Philosophical Studies 119: 73–103.
Schiffer, Stephen. 1996. "Contextualist Solutions to Scepticism", Proceedings of the Aristotelian Society, 96:317-33.
Stanley, Jason. 2005. Knowledge and Practical Interests. New York: Oxford University Press.
Timmons, Mark. 1998. Morality Without Foundations: A Defense of Ethical Contextualism, Oxford University Press US.
Tomberlin, James (ed.). 1999. Philosophical Perspectives 13, Epistemology, Blackwell Publishing.
External links
A Brief History of Contextualism - DeRose on the history of contextualism in epistemology.
Contextualism in Epistemology - an article by Tim Black on the Internet Encyclopedia of Philosophy.
Consensus reality
Metaethics
Metatheory
Relativism
Skepticism
Systemic functional linguistics
Ethical theories
Theories of justification
Pedagogical pattern | A pedagogical pattern is the re-usable form of a solution to a problem or task in pedagogy, analogous to how a design pattern is the re-usable form of a solution to a design problem. Pedagogical patterns are used to document and share best practices of teaching. A network of interrelated pedagogical patterns is an example of a pattern language.
Overview
Pedagogical patterns were discussed by Joseph Bergin in a 2001 paper for SIGCSE.
Example structure of a pattern
Mitchell Weisburgh proposed nine aspects to documenting a pedagogical pattern for a certain skill; not every pattern needs to include all nine. His listing is reproduced below, followed by a minimal sketch of how such a record might be represented in code:
Name – single word or short phrase that refers to the pattern. This allows for rapid association and retrieval.
Problem – definition of a problem, including its intent or a desired outcome, and symptoms that would indicate that this problem exists.
Context – preconditions which must exist in order for that problem to occur; this is often a situation. When forces conflict, the resolutions of those conflicts is often implied by the context.
Forces – description of forces or constraints and how they interact. Some of the forces may be contradictory. For example: being thorough often conflicts with time or money constraints.
Solution – instructions, possibly including variants. The solution may include pictures, diagrams, prose, or other media.
Examples – sample applications and solutions, analogies, visual examples, and known uses, which can be especially helpful in enabling the user to understand the context.
Resulting Context – result after the pattern has been applied, including postconditions and side effects. It might also include new problems that might result from solving the original problem.
Rationale – the thought processes that would go into selecting this pattern. The rationale includes an explanation of why this pattern works and how forces and constraints are resolved to construct a desired outcome.
Related Patterns – differences and relationships with other patterns, possibly predecessor, antecedents, or alternatives that solve similar problems.
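To make the structure concrete, here is a minimal Python sketch of how such a pattern record might be represented; the field names simply mirror the nine aspects above, and the example instance is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PedagogicalPattern:
    """One record per pattern; not every aspect must be filled in."""
    name: str
    problem: str = ""
    context: str = ""
    forces: str = ""
    solution: str = ""
    examples: list = field(default_factory=list)
    resulting_context: str = ""
    rationale: str = ""
    related_patterns: list = field(default_factory=list)

# A hypothetical instance:
pattern = PedagogicalPattern(
    name="Think-Pair-Share",
    problem="Students hesitate to answer questions in front of the class.",
    solution="Pose a question, let students think alone, discuss in pairs, "
             "then share answers with the whole class.",
)
print(pattern.name)
```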
See also
Modeling (psychology)
Teacher education
Teaching method
External links
Pedagogical Patterns site
E-LEN, tutorial on making e-learning design pattern
Fourteen Pedagogical Patterns by Joseph Bergin
Pedagogy
Educational psychology
Design patterns
Erikson's stages of psychosocial development | Erikson's stages of psychosocial development, as articulated in the second half of the 20th century by Erik Erikson in collaboration with Joan Erikson, is a comprehensive psychoanalytic theory that identifies a series of eight stages that a healthy developing individual should pass through from infancy to late adulthood.
According to Erikson's theory, the results from each stage, whether positive or negative, influence the results of succeeding stages. Erikson published a book called Childhood and Society in 1950 that highlighted his research on the eight stages of psychosocial development. Erikson was originally influenced by Sigmund Freud's psychosexual stages of development. He began by working with Freud's theories specifically, but as he began to dive deeper into biopsychosocial development and how other environmental factors affect human development, he soon progressed past Freud's theories and developed his own ideas. Erikson developed a substantial theory spanning the whole lifespan, theorizing about the nature of personality development as it unfolds from birth through old age and death. He argued that social experience is valuable throughout life, with each stage recognizable by the specific conflict we encounter between our psychological needs and the surrounding social environment.
Erikson's stage theory characterizes an individual advancing through the eight life stages as a function of negotiating their biological and sociocultural forces. The two conflicting forces each have a psychosocial crisis which characterizes the eight stages. If an individual does indeed successfully reconcile these forces (favoring the first mentioned attribute in the crisis), they emerge from the stage with the corresponding virtue. For example, if an infant enters into the toddler stage (autonomy vs. shame and doubt) with more trust than mistrust, they carry the virtue of hope into the remaining life stages. The stage challenges that are not successfully overcome may be expected to return as problems in the future. However, mastery of a stage is not required to advance to the next stage. In one study, subjects showed significant development as a result of organized activities.
Stages
Hope: trust vs. mistrust (oral-sensory, infancy, under 1 year)
Existential Question: Can I Trust the World?
The first stage of Erik Erikson's theory centers around the infant's basic needs being met by the parents or caregiver and how this interaction leads to trust or mistrust. Trust, as defined by Erikson, is "an essential trustfulness of others as well as a fundamental sense of one's own trustworthiness." The infant depends on the parents, especially the mother, for sustenance and comfort. Infants will often use methods such as pointing to indicate their interests or desires to their parents or caregivers. The child's relative understanding of the world and society comes from the parents and their interaction with the child. Children first learn to trust their parents or a caregiver. If the parents expose their child to warmth, security, and dependable affection, the infant's view of the world will be one of trust. As the child learns to trust the world around them, they also acquire the virtue of hope. Should parents fail to provide a secure environment and to meet the child's basic needs, a sense of mistrust will result. Development of mistrust can later lead to feelings of frustration, suspicion, withdrawal, and a lack of confidence.
According to Erik Erikson, the major developmental task in infancy is to learn whether or not other people, especially primary caregivers, regularly satisfy basic needs. If caregivers are consistent sources of food, comfort, and affection, an infant learns trust — that others are dependable and reliable. If they are neglectful, or perhaps even abusive, the infant instead learns mistrust — that the world is an undependable, unpredictable, and possibly dangerous place. Having some experience with mistrust allows the infant to gain an understanding of what constitutes dangerous situations later in life. However, infants and toddlers should not be subjected to prolonged situations of mistrust, which can cause children to become ill-adjusted and to view life with an overly cautious outlook, to their detriment later in life. In this stage, the child's most important needs are to feel safe, comforted, and well cared for.
This stage is where a child develops an attachment style toward their caregiver, and the attachment style the child develops can affect their relationships for the rest of their life. For example, if the infant is hungry, will it be fed? If their diaper is soiled, will anybody change it? If they are sad, will they be comforted? From such experiences the infant comes to judge whether the world is a trustworthy place with trustworthy people. Infants need protection and support from a familiar adult; otherwise, they will most likely not survive. This concept was studied further by Bowlby and Ainsworth in their attachment theory, which is consistent with Erikson's research.
Will: autonomy vs. shame/doubt (muscular-anal, toddlerhood, 1–2 years)
Existential Question: Is It Okay to Be Me?
As the child gains control over eliminative functions and motor abilities, they begin to explore their surroundings. Parents still provide a strong base of security from which the child can venture out to assert their will. The parents' patience and encouragement help to foster autonomy in the child. During early childhood, the child will start to have learning tasks and skills that instill personal responsibility, which allows them to make choices that could help them develop a sense of autonomy and confidence. Children at this age like to explore the world around them, and they are constantly learning about their environment. Caution must be taken at this age, as children may explore things that are dangerous to their health and safety.
At this age, children develop their first interests. For example, a child who enjoys music may like to play with the radio. Children who enjoy the outdoors may be interested in animals and plants. Highly restrictive parents are more likely to instill in the child a sense of doubt, and reluctance to try new and challenging opportunities. As the child gains increased muscular coordination and mobility, toddlers become capable of satisfying some of their own needs. They begin to feed themselves, wash and dress themselves, and use the bathroom.
If caregivers encourage self-sufficient behavior, toddlers will develop a sense of autonomy—a sense of being able to handle many problems on their own. By contrast, a caregiver may demand too much too soon, which will likely lead the child to develop shame and doubt in their ability to handle problems. This shame and doubt can also result from a caregiver ridiculing a child's early performance attempts. There is a delicate balance to be struck with autonomy: a child who receives too much autonomy may grow up with little concern for rules or regulations, which can also increase the likelihood of injury, while a child whose parents exert too much control can grow up to be more rebellious and impulsive. The child's abilities, after all, are still limited.
Purpose: initiative vs. guilt (locomotor-genital, early childhood, 3–6 years)
Existential Question: Is it Okay for Me to Do, Move, and Act?
Initiative adds to autonomy the quality of planning, undertaking, and attacking a task for the sake of being active and on the move. The child is learning to master the world around them, learning basic skills and principles of physics: things fall down, not up; round things roll. They learn how to zip and tie, count, and speak with ease. At this stage, the child wants to begin and complete their own actions for a purpose. Guilt is a confusing new emotion. They may feel guilty over things that logically should not cause guilt, and they may feel guilt when their initiative does not produce the desired results.
The development of courage and independence is what sets preschoolers, ages three to six, apart from other age groups. Young children in this category face the psychological crisis of initiative versus guilt, which includes learning how to face the complexities of planning and developing a sense of judgment. During this stage, the child learns to take initiative, prepares for leadership roles, and works to achieve goals. Activities sought out by a child in this stage may include risk-taking behaviors, such as crossing a street alone or riding a bike without a helmet, both of which require the child to learn self-limits. The child may also develop negative behaviors as they learn to take initiative. These negative behaviors, such as throwing objects, hitting, or yelling, can be a result of the child feeling frustrated after not being able to achieve a goal as planned.
Preschoolers are increasingly able to accomplish tasks on their own and can explore new areas. With this growing independence comes many choices about activities to be pursued. Sometimes children take on projects they can readily accomplish, but at other times they undertake projects that are beyond their capabilities or that interfere with other people's plans and activities. If parents and preschool teachers encourage and support children's efforts, while also helping them make realistic and appropriate choices, children develop initiative—independence in planning and undertaking activities. But if instead, adults discourage the pursuit of independent activities or dismiss them as silly and bothersome, children develop guilt about their needs and desires.
Competence: industry vs. inferiority (latency, late childhood, 7–10 years)
Existential Question: Can I Make it in the World of People and Things?
The aim of this stage is to bring a productive situation to completion which gradually supersedes the whims and wishes of play. The fundamentals of technology are developed. The failure to master trust, autonomy, and industrious skills may cause the child to doubt their future, leading to shame, guilt, and the experience of defeat and inferiority.
The child must deal with demands to learn new skills or risk a sense of inferiority, failure, and incompetence. In doing so, children are able to start contributing to society and making a difference in the world. They become more aware of themselves and how competent, or not, they are.
"Children at this age are becoming more aware of themselves as individuals." They work hard at "being responsible, being good and doing it right." They are now more reasonable to share and cooperate. Allen and Marotz (2003) also list some perceptual cognitive developmental traits specific for this age group. Children grasp the concepts of space and time in more logical, practical ways. They gain a better understanding of cause and effect, and of calendar time. At this stage, children are eager to learn and accomplish more complex skills: reading, writing, telling time. They also get to form moral values, recognize cultural and individual differences and are able to manage most of their personal needs and grooming with minimal assistance. At this stage, children might express their independence by talking back and being disobedient and rebellious.
Erikson viewed the elementary school years as critical for the development of self-confidence. Ideally, elementary school provides many opportunities to achieve the recognition of teachers, parents and peers by producing things—drawing pictures, solving addition problems, writing sentences, and so on. If children are encouraged to make and do things and are then praised for their accomplishments, they begin to demonstrate industry by being diligent, persevering at tasks until completed, and putting work before pleasure. If children are instead ridiculed or punished for their efforts or if they find they are incapable of meeting their teachers' and parents' expectations, they develop feelings of inferiority about their capabilities.
Children also begin to form relationships with others around them. Being social is especially important for this stage; it helps school-aged children become either more or less confident about themselves and their abilities. During this age, children also begin to migrate into their own social groups. Depending on the child's "group", the child will have more or less self-confidence.
At this age, children start recognizing their special talents and continue to discover interests as their education improves. They may begin to choose more activities to pursue those interests, such as joining a sport if they know they have athletic ability, or joining the band if they are good at music. If not allowed to discover their own talents in their own time, they may develop a lack of motivation, low self-esteem, and lethargy. They may become "couch potatoes" if they are not allowed to develop interests.
Fidelity: identity vs. role confusion (adolescence, 11–19 years)
Existential Question: Who Am I and What Can I Be?
The adolescent is newly concerned with how they appear to others. Superego identity is the accrued confidence that the outer sameness and continuity prepared in the future are matched by the sameness and continuity of one's meaning for oneself, as evidenced in the promise of a career. The ability to settle on a school or occupational identity is pleasant. In later stages of adolescence, the child develops a sense of sexual identity. Adolescents become curious about the roles they will play in the adult world as they transition from childhood to adulthood. Initially, they are apt to experience some role confusion—mixed ideas and feelings about the specific ways in which they will fit into society—and may experiment with a variety of behaviors and activities (e.g. tinkering with cars, baby-sitting for neighbors, affiliating with certain political or religious groups). Eventually, Erikson proposed, most adolescents achieve a sense of identity regarding who they are and where their lives are headed.
The teenager must achieve identity in occupation, gender roles, politics, and, in some cultures, religion. This is not always easy, however. The teenager must seek to find their place in this world and to find out how they can contribute to the world.
Erikson is credited with coining the term "identity crisis". He describes identity crisis as a critical part of development in which an adolescent or youth develops a sense of self. Identity crisis involves the integration of the physical self, personality, potential roles and occupations. It is influenced by culture and historical trends. This stage is necessary for the successful development of future stages. Each stage that came before and that follows has its own 'crisis', but even more so now, for this marks the transition from childhood to adulthood. This passage is necessary because "Throughout infancy and childhood, a person forms many identifications. But the need for identity in youth is not met by these." This turning point in human development seems to be the reconciliation between 'the person one has come to be' and 'the person society expects one to become'. This emerging sense of self will be established by 'forging' past experiences with anticipations of the future. In relation to the eight life stages as a whole, the fifth stage corresponds to the crossroads:
What is unique about the stage of Identity, is that it is a special sort of synthesis of earlier stages and a special sort of anticipation of later ones. Youth has a certain unique quality in a person's life; it is a bridge between childhood and adulthood. Youth is a time of radical change—the great body changes accompanying puberty, the ability of the mind to search one's own intentions and the intentions of others, the suddenly sharpened awareness of the roles society has offered for later life.
Adolescents "are confronted by the need to re-establish boundaries for themselves and to do this in the face of an often potentially hostile world". This is often challenging since commitments are being asked for before particular identity roles have formed. At this point, one is in a state of 'identity confusion', but society normally makes allowances for youth to "find themselves", and this state is called 'the moratorium':
The problem of adolescence is one of role confusion—a reluctance to commit which may haunt a person into his mature years. Given the right conditions—and Erikson believes these are essentially having enough space and time, a psychosocial moratorium, when a person can freely experiment and explore—what may emerge is a firm sense of identity, an emotional and deep awareness of who they are.
As in other stages, bio-psycho-social forces are at work. No matter how one has been raised, one's personal ideologies are now chosen for oneself. Often, this leads to conflict with adults over religious and political orientations. Another area where teenagers are deciding for themselves is their career choice, and often parents want to have a decisive say in that role. If society is too insistent, the teenager will acquiesce to external wishes, effectively forcing him or her to 'foreclose' on experimentation and, therefore, true self-discovery. Once someone settles on a worldview and vocation, will they be able to integrate this aspect of self-definition into a diverse society? According to Erikson, when an adolescent has balanced both perspectives of "What have I got?" and "What am I going to do with it?" they have established their identity:
Dependent on this stage is the ego quality of fidelity—the ability to sustain loyalties freely pledged in spite of the inevitable contradictions and confusions of value systems. (Italics in original)
Leaving childhood behind and facing the unknown of adulthood is a component of adolescence. Another characteristic of this stage is moratorium, which tends to end as adulthood begins. Given that the next stage (Intimacy) is often characterized by marriage, many are tempted to cap off the fifth stage at 20 years of age. However, these age ranges are actually quite fluid, especially for the achievement of identity, since it may take many years to become grounded, to identify the object of one's fidelity, to feel that one has "come of age". In the biographies Young Man Luther and Gandhi's Truth, Erikson determined that their crises ended at ages 25 and 30, respectively:
Erikson does note that the time of Identity crisis for persons of genius is frequently prolonged. He further notes that in our industrial society, identity formation tends to be long, because it takes us so long to gain the skills needed for adulthood's tasks in our technological world. So… there is no exact time span in which to find oneself. It does not happen automatically at eighteen or at twenty-one. A very approximate rule of thumb for our society would put the end somewhere in one's twenties.
Love: intimacy vs. isolation (early adulthood, 20–45 years)
Existential Question: Can I Love?
The Intimacy versus Isolation conflict occurs following adolescence. At the start of this stage, identity versus role confusion is coming to an end, though it still lingers at the foundation of the stage. The stage does not always involve a romantic relationship; it also encompasses the strong bonds formed with others. Young adults are still eager to blend their identities with those of their friends because they want to fit in. Erikson believed that people can become isolated because of intimacy: they are afraid of rejections such as being turned down or their partners breaking up with them. Human beings are familiar with pain, and to some people, rejection is so painful that their egos cannot bear it. Erikson also argued that distantiation occurs with intimacy. Distantiation is the desire to isolate or destroy things that may be dangerous to one's own ideals or life. This can occur if a person has their intimate relationship invaded by outsiders.
Once people have established their identities, they are ready to make long-term commitments to others. They become capable of forming intimate, reciprocal relationships (e.g. through close friendships or marriage) and willingly make the sacrifices and compromises that such relationships require. Those in more advanced stages of identity development are often associated with greater success pertaining to intimacy formation. If people cannot form these intimate relationships—perhaps because of their own needs—then a sense of isolation may result, thereby arousing feelings of darkness and angst.
Erikson's documentation of his theory spends time considering intimacy between two people. The main conflict is whether an individual is willing to give themselves up to someone else. As suggested in the previous paragraphs, it can be very valuable for someone at this stage to let go of some of their fears in order to gain a solid relationship with another person. Erikson also discusses how his theory differs from Freud's theory of psychosexual development: Freud tended to focus on sexual gratification without deep personal relationships being involved, whereas Erikson proposed that there is more to intimacy than sexual gratification. There is value in the deep bonds that can be shared between two people socially. Erikson, in his writing, does still discuss and see the value of sexual relations within a socially intimate relationship.
Care: generativity vs. stagnation (middle adulthood, 45–64 years)
Existential Question: Can I Make My Life Count?
Generativity is the concern of guiding the next generation. Socially-valued work and disciplines are expressions of generativity.
The adult stage of generativity has broad application to family, relationships, work, and society. "Generativity, then is primarily the concern in establishing and guiding the next generation... the concept is meant to include... productivity and creativity."
During middle age, the primary developmental task is one of contributing to society and helping to guide future generations. When a person makes a contribution during this period, perhaps by raising a family or working toward the betterment of society, a sense of generativity results: a sense of productivity and accomplishment. In contrast, a person who is self-centered and unable or unwilling to help society move forward develops a feeling of stagnation, a dissatisfaction with the relative lack of productivity. People in this stage consider what they are leaving behind for their posterity and community as they come closer to the end of their lives. The virtue associated with this stage is care; by contrast, the maladaptive outcome is rejectivity.
As the quote above indicates, productivity and creativity are presented as related to generativity. Despite this relation, Erikson hoped that those two words would not overshadow the main message: generativity is focused on helping other people. Society sometimes fixates on the idea that children need parents, but Erikson reinforced the complementary view that adults need children. The effort given to children can help the adult become more mature, and an adult who is generative toward youth may influence the children to return the favor when they grow up.
Central tasks of middle adulthood
Express love through more than sexual contacts.
Maintain healthy life patterns.
Develop a sense of unity with mate.
Help growing and grown children to be responsible adults.
Relinquish central role in lives of grown children.
Accept children's mates and friends.
Create a comfortable home.
Be proud of accomplishments of self and mate/spouse.
Reverse roles with aging parents.
Achieve mature, civic and social responsibility.
Adjust to physical changes of middle age.
Use leisure time creatively.
Wisdom: ego integrity vs. despair (late adulthood, 65 years and above)
Existential Question: Is it Okay to Have Been Me?
As people grow older and become senior citizens, they tend to slow down their productivity and explore life as retired people. Factors such as leisure activities and family involvement play a significant role in the life of retirees and their adjustment to living without having to perform specific duties each day pertaining to their careers. Even during this stage of adulthood, however, they are still developing. The association between aging and retirement can bring about a reappearance of the bipolar tensions of earlier stages in Erikson's model; aspects of previous life stages may reactivate with the onset of aging and retirement. Development at this stage also includes periods of reevaluation regarding life satisfaction, sustainment of active involvement, and developing a sense of health maintenance. Developmental conflicts may arise in this stage, but psychological growth in earlier stages can help significantly in resolving them.
It is during this time that they contemplate their accomplishments and evaluate the person that they have become. They are able to develop integrity if they see themselves as leading a successful life. Those that have developed integrity perceive that their lives have meaning. They tend to feel generally satisfied and accept themselves and others. As they near the end of their lives, they are more likely to be at peace about death. If they see their life as unproductive or feel that they did not accomplish their life goals, they become dissatisfied with life and develop despair. This can often lead to feelings of depression and hopelessness. They may also feel that life is unfair and be fearful of dying.
During this time there may be a renewed interest in many things. This is believed to occur because individuals at this time of life strive to be autonomous. As their bodies and minds start to deteriorate, they want to find a sense of balance, and they will cling to their autonomy so that they need not rely on others for everything. Erikson explains that it is also important for adults in this stage to maintain relationships with others of different ages in order to develop integrity.
The final developmental task is retrospection: people look back on their lives and accomplishments. Practices such as narrative therapy can help individuals reinterpret their past and focus on its brighter aspects. They develop feelings of contentment and integrity if they believe that they have led a happy and productive life. If they look back on a life of disappointments and unachieved goals, they may instead develop a sense of despair.
This stage can occur out of the sequence when an individual feels they are near the end of their life (such as when receiving a terminal disease diagnosis).
When looking back on life, a person should hope to find both meaning and order. There are ways to alter or buoy one's perspective during this stage, and doing so can bring a person closer to ego integrity. That said, it is better for a person to have already lived a life of meaning and order before this stage begins.
Erikson ties this stage of development back to the first stage, trust vs. mistrust. As Erikson noted, Webster's dictionary once defined trust as "the assured reliance on another's integrity". One's integrity can thus influence another's trust. If a person at the end of their life fears death, it could lead children to fear life; if an adult is able to overcome any fear of death, it can reinforce in children that they need not be afraid of the life ahead of them.
Ninth stage
Psychosocial Crises: All first eight stages in reverse quotient order
Joan Erikson, who married and collaborated with Erik Erikson, added a ninth stage in The Life Cycle Completed: Extended Version. Living in the ninth stage, she wrote, "old age in one's eighties and nineties brings with it new demands, reevaluations, and daily difficulties". Addressing these new challenges requires "designating a new ninth stage". Joan Erikson was ninety-three years old when she wrote about the ninth stage.
Joan Erikson showed that all the eight stages "are relevant and recurring in the ninth stage". In the ninth stage, the psychosocial crises of the eight stages are faced again, but with the quotient order reversed. For example, in the first stage (infancy), the psychosocial crisis was "Trust vs. Mistrust" with Trust being the "syntonic quotient" and Mistrust being the "dystonic". Joan Erikson applies the earlier psychosocial crises to the ninth stage as follows:
"Basic Mistrust vs. Trust: Hope"
In the ninth stage, "elders are forced to mistrust their own capabilities" because one's "body inevitably weakens". Yet, Joan Erikson asserts that "while there is light, there is hope" for a "bright light and revelation".
"Shame and Doubt vs. Autonomy: Will"
Ninth stage elders face the "shame of lost control" and doubt "their autonomy over their own bodies". So it is that "shame and doubt challenge cherished autonomy".
"Inferiority vs. Industry: Competence"
Industry as a "driving force" that elders once had is gone in the ninth stage. Being incompetent "because of aging is belittling" and makes elders "like unhappy small children of great age".
"Identity confusion vs. Identity: Fidelity"
Elders experience confusion about their "existential identity" in the ninth stage and "a real uncertainty about status and role".
"Isolation vs. Intimacy: Love"
In the ninth stage, the "years of intimacy and love" are often replaced by "isolation and deprivation". Relationships become "overshadowed by new incapacities and dependencies".
"Stagnation vs. Generativity: Care"
The generativity in the seventh stage of "work and family relationships", if it goes satisfactorily, is "a wonderful time to be alive". In one's eighties and nineties, there is less energy for generativity or caretaking. Thus, "a sense of stagnation may well take over".
"Despair and Disgust vs. Integrity: Wisdom"
Integrity imposes "a serious demand on the senses of elders". Wisdom requires capacities that ninth stage elders "do not usually have". The eighth stage includes retrospection that can evoke a "degree of disgust and despair". In the ninth stage, introspection is replaced by the attention demanded by one's "loss of capacities and disintegration".
Living in the ninth stage, Joan Erikson expressed confidence that the psychosocial crisis of the ninth stage can be met as in the first stage with the "basic trust" with which "we are blessed".
Development of post-Freudian theory
Erikson was a student of Anna Freud, the daughter of Sigmund Freud, whose psychoanalytic theory and psychosexual stages contributed to the basic outline of the eight stages, at least those concerned with childhood. Namely, the first four of Erikson's life stages correspond to Freud's oral, anal, phallic, and latency phases, respectively. Also, the fifth stage of adolescence is said to parallel the genital stage in psychosexual development:
Although the first three phases are linked to those of the Freudian theory, it can be seen that they are conceived along very different lines. Emphasis is not so much on sexual modes and their consequences, but on the ego qualities which emerge from each of the stages. There is an attempt also to link the sequence of individual development to the broader context of society.
Erikson saw a dynamic at work throughout life, one that did not stop at adolescence. He also viewed the life stages as a cycle: the end of one generation was the beginning of the next. Seen in its social context, the life stages were linear for an individual but circular for societal development:
In Freud's view, development is largely complete by adolescence. In contrast, one of Freud's students, Erik Erikson (1902–1994) believed that development continues throughout life. Erikson took the foundation laid by Freud and extended it through adulthood and into late life.
Criticism
One major criticism of Erikson's theory of psychosocial development is that it primarily describes the development of European or American males. Erikson's theory may also be questioned as to whether his stages must be regarded as sequential and as occurring only within the age ranges he suggests. There is debate as to whether people only search for identity during the adolescent years or whether one stage needs to happen before other stages can be completed. However, Erikson states that each of these processes occurs throughout the lifetime in one form or another, and he emphasizes these "phases" only because it is at these times that the conflicts become most prominent.
Most empirical research into Erikson has related to his views on adolescence and attempts to establish identity. His theoretical approach was studied and supported, particularly regarding adolescence, by James E. Marcia. Marcia's work has distinguished different forms of identity, and there is some empirical evidence that those people who form the most coherent self-concept in adolescence are those who are most able to make intimate attachments in early adulthood. This supports the part of Eriksonian theory that suggests that those best equipped to resolve the crisis of early adulthood are those who have most successfully resolved the crisis of adolescence.
Erikson attributed the development of the stages to the presence of specific tensions which may be present at any moment of a person's life. This points to another criticism of Erikson's theory of psychosocial development: that Erikson does not go into detail about what causes these stages of development or how they are resolved. There is little information about the experiences that shape how a person develops at each stage, and the theory does not outline the steps necessary to resolve a conflict in order to move on to the next stage.
See also
Child development
Developmental psychology
Ethnic identity development
Kohlberg's stages of moral development
Neo-Freudianism
Positive disintegration
References
Works cited
Further reading
Erikson, E. (1950). Childhood and Society (1st ed.). New York: Norton.
Erikson, Erik H. (1959). Identity and the Life Cycle. New York: International Universities Press.
Erikson, Erik H. (1968). Identity, Youth and Crisis. New York: Norton.
Sheehy, Gail (1976). Passages: Predictable Crises of Adult Life. New York: E. P. Dutton.
Stevens, Richard (1983). Erik Erikson: An Introduction. New York: St. Martin's.
Developmental stage theories
Psychoanalysis
Doughnut (economic model)
The Doughnut, or Doughnut economics, is a visual framework for sustainable development – shaped like a doughnut or lifebelt – combining the concept of planetary boundaries with the complementary concept of social boundaries. The name derives from the shape of the diagram, i.e. a disc with a hole in the middle. The centre hole of the model depicts the proportion of people that lack access to life's essentials (healthcare, education, equity and so on) while the crust represents the ecological ceilings (planetary boundaries) that life depends on and which must not be overshot. The diagram was developed by University of Oxford economist Kate Raworth in her 2012 Oxfam paper A Safe and Just Space for Humanity and elaborated upon in her 2017 book Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist and in a subsequent paper.
The framework was proposed to gauge the performance of an economy by the extent to which the needs of people are met without overshooting Earth's ecological ceiling. The main aim of the model is to re-frame economic problems and set new goals. In this context, the model is also referred to as a "wake-up call to transform our capitalist worldview". In this model, an economy is considered prosperous when all twelve social foundations are met without overshooting any of the nine ecological ceilings. This situation is represented by the area between the two rings, considered by its creator as a safe and just space for humanity.
Kate Raworth noted that the planetary boundaries concept does not take human wellbeing into account (although, if Earth's ecosystem dies, all wellbeing is moot). She suggested that social boundaries should be combined with the planetary boundaries structure. Adding measures such as jobs, education, food, access to water, health services and energy helps to accommodate an environmentally safe space compatible with poverty eradication and "rights for all". Between the planetary limits and an equitable social foundation lies a doughnut-shaped area: the area where there is a "safe and just space for humanity to thrive in".
Indicators
Social foundations
The social foundations are inspired by the social aims of the Sustainable Development Goals of the United Nations. These are:
Food security
Health
Education
Income and work (the latter is not limited to compensated employment but also includes things such as housekeeping)
Peace and justice
Political voice
Social equity
Gender equality
Housing
Networks (including both networks of communities and networks of information such as the internet)
Energy
Water
Ecological ceilings
The nine ecological ceilings are from the planetary boundaries put forward by a group of Earth-system scientists led by Johan Rockström and Will Steffen. These are:
Climate change — the human-caused emissions of greenhouse gases such as carbon dioxide and methane trap heat in the atmosphere, changing the Earth's climate.
Ocean acidification — when human-emitted carbon dioxide is absorbed into the oceans, it makes the water more acidic. For example, this lowers the ability of marine life to grow skeletons and shells.
Chemical pollution — releasing toxic materials into nature decreases biodiversity and lowers the fertility of animals (including humans).
Nitrogen and phosphorus loading — inefficient or excessive use of fertiliser leads to fertiliser running off into water bodies, where it causes algal blooms that kill underwater life.
Freshwater withdrawals — using too much freshwater dries up the source, which may damage the ecosystem and leave it unusable afterwards.
Land conversion — converting land for economic activity (such as creating roads and farmland) damages or removes the habitat for wildlife, removes carbon sinks and disrupts natural cycles.
Biodiversity loss — economic activity may cause a reduction in the number and variety of species. This makes ecosystems more vulnerable and may lower their capacity of sustaining life and providing ecosystem services.
Air pollution — the emission of aerosols (small particles) has a negative impact on the health of species. It can also affect precipitation and cloud formation.
Ozone layer depletion — some economic activity emits gases that damage the Earth's ozone layer. Because the ozone layer shields Earth from harmful radiation, its depletion results for example in skin cancer in animals.
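The framework's basic test can be stated computationally. The following sketch in Python is illustrative only: the indicator names are drawn from the lists above, but the measured values and thresholds are hypothetical placeholders, not Raworth's published figures. It encodes the rule that an economy sits in the safe and just space only when every social foundation meets its minimum and no ecological ceiling is overshot.

def in_safe_and_just_space(social, ecological):
    """social maps indicator -> (measured, minimum); ecological maps indicator -> (measured, ceiling)."""
    # A social indicator falls into the doughnut's hole when it is below its floor.
    shortfalls = [name for name, (value, minimum) in social.items() if value < minimum]
    # An ecological indicator breaks through the crust when it exceeds its ceiling.
    overshoots = [name for name, (value, ceiling) in ecological.items() if value > ceiling]
    return (not shortfalls and not overshoots), shortfalls, overshoots

# Hypothetical, normalised values in which 1.0 marks the boundary itself.
social = {"food security": (1.2, 1.0), "education": (0.8, 1.0)}
ecological = {"climate change": (1.5, 1.0), "ozone layer depletion": (0.6, 1.0)}

inside, shortfalls, overshoots = in_safe_and_just_space(social, ecological)
print(inside)      # False: this economy falls short on one floor and overshoots one ceiling
print(shortfalls)  # ['education']
print(overshoots)  # ['climate change']

On this reading, the hole in the middle of the doughnut corresponds to the shortfalls and the area beyond the crust to the overshoots; the 2018 empirical study discussed below applies essentially this test, with measured data, to 150 countries.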
Critique to mainstream economic theory
The doughnut model is still a collection of goals that may be pursued through different actions by different actors and does not include specific models related to markets or human behavior. The book Doughnut Economics consists of critiques and perspectives of what should be sought after by society as a whole. The critiques found in the book are targeted at certain economic models and their common base.
The mainstream economic models of the 20th century, defined here as those taught the most in Economics introductory courses around the world, are neoclassical. The Circular Flow published by Paul Samuelson in 1944 and the supply and demand curves published by William S. Jevons in 1862 are canonical examples of neoclassical economic models. Focused on the observable money flows in a given administrative unit and describing preferences mathematically, these models ignore the environments in which these objects are embedded: human minds, society, culture, and the natural environment. This omission was viable while the human population did not collectively overwhelm the Earth's systems, which is no longer the case. Furthermore, these models were created before statistical testing and research were possible. They were based, then, on assumptions about human behavior converted into "stylized facts". The origins of these assumptions are philosophical and pragmatic, simplifying and distorting the reflections of thinkers such as Adam Smith into Newtonian-resembling curves on a graph so that they could be of presumed practical use in predicting, for example, consumer choice.
The body of neoclassical economic theory grew and became more sophisticated over time, and competed with other theories for the position of mainstream economic paradigm in the North Atlantic. In the 1930s, Keynesian theory held that position, and after the 1960s, monetarism gained prominence. One element remained as the policy prescriptions shifted: the "rational economic man" persona on which theories were based. Raworth, the creator of Doughnut Economics, denounces this literary invention as a perverse one, for its effects on its learners' assumptions about human behavior and, consequently, their own real behavior. Examples of this phenomenon in action have been documented, as have the effects of the erosion of trust and community on human well-being.
Real-world economies in the Doughnut perspective
Kate Raworth explains the doughnut economy is based on the premise that "Humanity's 21st century challenge is to meet the needs of all within the means of the planet. In other words, to ensure that no one falls short on life's essentials (from food and housing to healthcare and political voice), while ensuring that collectively we do not overshoot our pressure on Earth's life-supporting systems, on which we fundamentally depend – such as a stable climate, fertile soils, and a protective ozone layer. The Doughnut of social and planetary boundaries is a new framing of that challenge, and it acts as a compass for human progress this century."
Raworth states that "significant GDP growth is very much needed" for low- and middle-income countries to be able to meet the goals of the social foundation for their citizens.
Leaning on Earth studies and economics, Raworth maps out the current shortfalls and overshoots.
The Doughnut framework has been used to map localized socio-environmental performance in the Erhai lake-catchment (China), Scotland, Wales, the UK, South Africa, the Netherlands, and India, as well as globally, among many other applications.
In April 2020, Kate Raworth was invited to join the City of Amsterdam's post-pandemic economic planning efforts.
An empirical application of the doughnut model in 2018 showed that, across 150 countries, not a single country so far satisfies its citizens' basic needs while maintaining a globally sustainable level of resource use.
Criticism
Branko Milanovic, at CUNY's Stone Center on Socio-Economic Inequality, said that for the doughnut theory to become popular, people would have to "magically" become "indifferent to how well we do compared to others, and not really care about wealth and income."
See also
Ecological economics
Critique of political economy
Prosperity Without Growth
The Closing Circle
References
Economics models
Ecological economics | 0.765883 | 0.995711 | 0.762597 |
Progressive education
Progressive education, or educational progressivism, is a pedagogical movement that began in the late 19th century and has persisted in various forms to the present. In Europe, progressive education took the form of the New Education Movement. The term progressive was used to distinguish this education from the traditional curricula of the 19th century, which were rooted in classical preparation for the early-industrial university and strongly differentiated by social class. By contrast, progressive education finds its roots in modern, post-industrial experience. Most progressive education programs have these qualities in common:
Emphasis on learning by doing – hands-on projects, expeditionary learning, experiential learning
Integrated curriculum focused on thematic units
Strong emphasis on problem solving and critical thinking
Group work and development of social skills
Understanding and action as the goals of learning as opposed to rote knowledge
Collaborative and cooperative learning projects
Education for social responsibility and democracy
Integration of community service and service learning projects into the daily curriculum
Selection of subject content by looking forward to ask what skills will be needed in future society
De-emphasis on textbooks in favor of varied learning resources
Emphasis on lifelong learning and social skills
Assessment by evaluation of child's projects and productions
History
Progressive education can be traced back to the works of John Locke and Jean-Jacques Rousseau, both of whom are known as forerunners of ideas that would be developed by theorists such as John Dewey. Considered one of the first of the British empiricists, Locke believed that "truth and knowledge… arise out of observation and experience rather than manipulation of accepted or given ideas". He further discussed the need for children to have concrete experiences in order to learn. Rousseau deepened this line of thinking in Emile, or On Education, where he argued that subordination of students to teachers and memorization of facts would not lead to an education.
Johann Bernhard Basedow
In Germany, Johann Bernhard Basedow (1724–1790) established the Philanthropinum at Dessau in 1774. He developed new teaching methods based on conversation and play with the child, and a program of physical development. Such was his success that he wrote a treatise on his methods, "On the best and hitherto unknown method of teaching children of noblemen".
Christian Gotthilf Salzmann
Christian Gotthilf Salzmann (1744–1811) was the founder of the Schnepfenthal institution, a school dedicated to new modes of education (derived heavily from the ideas of Jean-Jacques Rousseau). He wrote Elements of Morality, for the Use of Children, one of the first books translated into English by Mary Wollstonecraft.
Johann Heinrich Pestalozzi
Johann Heinrich Pestalozzi (1746–1827) was a Swiss pedagogue and educational reformer who exemplified Romanticism in his approach. He founded several educational institutions both in German- and French-speaking regions of Switzerland and wrote many works explaining his revolutionary modern principles of education. His motto was "Learning by head, hand and heart". His research and theories closely resemble those outlined by Rousseau in Emile. He is further considered by many to be the "father of modern educational science". His psychological theories pertain to education as they focus on the development of object teaching; that is, he felt that individuals learned best through experiences and through direct manipulation and experience of objects. He further speculated that children learn through their own internal motivation rather than through compulsion (see intrinsic vs. extrinsic motivation). A teacher's task is to help guide students as individuals through their learning and allow it to unfold naturally.
Friedrich Fröbel
Friedrich Wilhelm August Fröbel (1782–1852) was a student of Pestalozzi who laid the foundation for modern education based on the recognition that children have unique needs and capabilities. He believed in "self-activity" and play as essential factors in child education. The teacher's role was not to indoctrinate but to encourage self-expression through play, both individually and in group activities. He created the concept of kindergarten.
Johann Friedrich Herbart
Johann Friedrich Herbart (1776–1841) emphasized the connection between individual development and the resulting societal contribution. The five key ideas which composed his concept of individual maturation were Inner Freedom, Perfection, Benevolence, Justice, and Equity or Recompense. According to Herbart, abilities were not innate but could be instilled, so a thorough education could provide the framework for moral and intellectual development. In order to develop a child to lead to a consciousness of social responsibility, Herbart advocated that teachers utilize a methodology with five formal steps: "Using this structure a teacher prepared a topic of interest to the children, presented that topic, and questioned them inductively, so that they reached new knowledge based on what they had already known, looked back, and deductively summed up the lesson's achievements, then related them to moral precepts for daily living".
John Melchior Bosco
John Melchior Bosco (1815–1888) was concerned about the education of street children who had left their villages to find work in the rapidly industrialized city of Turin, Italy. Exploited as cheap labor or imprisoned for unruly behavior, Bosco saw the need for creating a space where they would feel at home. He called it an 'Oratory' where they could play, learn, share friendships, express themselves, develop their creative talents and pick up skills for gainful self-employment. With those who had found work, he set up a mutual-fund society (an early version of the Grameen Bank) to teach them the benefits of saving and self-reliance. The principles underlying his educational method that won over the hearts and minds of thousands of youth who flocked to his oratory were: 'be reasonable', 'be kind', 'believe' and 'be generous in service'. Today his method of education is practiced in nearly 3000 institutions set up around the world by the members of the Salesian Society he founded in 1873.
Cecil Reddie
While studying for his doctorate in Göttingen in 1882–1883, Cecil Reddie was greatly impressed by the progressive educational theories being applied there. Reddie founded Abbotsholme School in Derbyshire, England, in 1889. Its curriculum enacted the ideas of progressive education. Reddie rejected rote learning, classical languages and corporal punishment. He combined studies in modern languages and the sciences and arts with a program of physical exercise, manual labour, recreation, crafts and arts. Schools modeling themselves after Abbotsholme were established throughout Europe, and the model was particularly influential in Germany. Reddie often engaged foreign teachers, who learned its practices before returning home to start their own schools. Hermann Lietz, an Abbotsholme teacher, founded five schools (Landerziehungsheime für Jungen) on Abbotsholme's principles. Other people he influenced included Kurt Hahn, Adolphe Ferrière and Edmond Demolins. His ideas also reached Japan, where they fed into the Taisho-era Free Education Movement (Taisho Jiyu Kyoiku Undo).
John Dewey
Education according to John Dewey is the "participation of the individual in the social consciousness of the race" (Dewey, 1897, para. 1). As such, education should take into account that the student is a social being. The process begins at birth with the child unconsciously gaining knowledge and gradually developing their knowledge to share and partake in society.
For Dewey, education, which regulates "the process of coming to share in the social consciousness," is the "only sure" method of ensuring social progress and reform (Dewey, 1897, para. 60). In this respect, Dewey foreshadows Social Reconstructionism, whereby schools are a means to reconstruct society. As schools become a means for social reconstruction, they must be given the proper equipment to perform this task and guide their students.
Helen Parkhurst
The American teacher Helen Parkhurst (1886–1973) developed the Dalton Plan at the beginning of the twentieth century with the goal of reforming the then-current pedagogy and classroom management. She wanted to break with teacher-centered lockstep teaching. During her first experiment, which she implemented in a small elementary school as a young teacher in 1904, she noticed that when students are given freedom for self-direction and self-pacing and to help one another, their motivation increases considerably and they learn more. In a later experiment in 1911 and 1912, Parkhurst re-organized the education in a large school for nine- to fourteen-year-olds. Instead of organizing teaching by grade, each subject was appointed its own teacher and its own classroom. The subject teachers made assignments: they converted the subject matter for each grade into learning assignments. In this way, learning became the students' own work; they could carry out their work independently, work at their own pace and plan their work themselves. The classroom turned into a laboratory, a place where students are working, furnished and equipped as work spaces, tailored to meet the requirements of specific subjects. Useful and attractive learning materials, instruments and reference books were put within the students' reach. The benches were replaced by large tables to facilitate co-operation and group instruction. This second experiment formed the basis for the next experiments, those in Dalton and New York, from 1919 onwards. The only addition was the use of graphs: charts enabling students to keep track of their own progress in each subject.
In the 1920s and 1930s, Dalton education spread throughout the world. There is no certainty regarding the exact number of Dalton schools, but there was Dalton education in America, Australia, England, Germany, the Netherlands, the Soviet Union, India, China and Japan.
Rudolf Steiner
Rudolf Steiner (1861–1925) first described the principles of what was to become Waldorf education in 1907. He established a series of schools based on these principles beginning in 1919. The focus of the education is on creating a developmentally appropriate curriculum that holistically integrates practical, artistic, social, and academic experiences. There are more than a thousand schools and many more early childhood centers worldwide; it has also become a popular form of homeschooling.
Maria Montessori
Maria Montessori (1870–1952) began to develop her philosophy and methods in 1897. She based her work on her observations of children and experimentation with the environment, materials, and lessons available to them. She frequently referred to her work as "scientific pedagogy", arguing for the need to go beyond observation and measurement of students to developing new methods to transform them. Although Montessori education spread to the United States in 1911, it came into conflict with the American educational establishment and was opposed by William Heard Kilpatrick. However, Montessori education returned to the United States in 1960 and has since spread to thousands of schools there.
In 1914 the Montessori Society in England organised its first conference. It was hosted by Rev. Bertram Hawker, who, in partnership with his local elementary school in the Norfolk coastal village of East Runton, had set up the first Montessori school in England. Pictures of this school and its children illustrated Montessori's Own Handbook (1914). Hawker had been impressed by his visit to Montessori's Casa dei Bambini in Rome, and he gave numerous talks on Montessori's work after 1912, helping to generate national interest in her work. He organised the Montessori Conference 1914 in partnership with Edmond Holmes, ex-Chief Inspector of Schools, who had written a government report on Montessori. The conference decided that its remit was to promote the 'liberation of the child in the school' and, though inspired by Montessori, that it would encourage, support and network teachers and educationalists who sought that aim through their schools and methods. They changed their name the following year to New Ideals in Education. Each subsequent conference was opened with reference to its history and origin as a Montessori conference recognising her inspiration; reports italicised the members of the Montessori Society in the delegate lists, and numerous further events included Montessori methods and case studies. Montessori, through New Ideals in Education, its committee and members, events and publications, greatly influenced progressive state education in England.
Robert Baden-Powell
In July 1906, Ernest Thompson Seton sent Robert Baden-Powell a copy of his book The Birchbark Roll of the Woodcraft Indians. Seton was a British-born Canadian-American living in the United States. They shared ideas about youth training programs. In 1907 Baden-Powell wrote a draft called Boy Patrols. In the same year, to test his ideas, he gathered 21 boys of mixed social backgrounds and held a week-long camp in August on Brownsea Island in England. His organizational method, now known as the Patrol System and a key part of Scouting training, allowed the boys to organize themselves into small groups with an elected patrol leader. Baden-Powell then wrote Scouting for Boys (London, 1908). The Brownsea camp and the publication of Scouting for Boys are generally regarded as the start of the Scout movement, which spread throughout the world. Baden-Powell and his sister Agnes Baden-Powell introduced the Girl Guides in 1910.
Comparison with traditional education
Traditional education uses extrinsic motivation, such as grades and prizes. Progressive education is more likely to use intrinsic motivation, basing activities on the interests of the child; praise may be discouraged as a motivator. Progressive education is a response to traditional methods of teaching, defined as an educational movement which gives more value to experience than to formal learning. It is based on experiential learning that concentrates on the development of a child's talents.
21st century skills
21st century skills are a series of higher-order skills, abilities, and learning dispositions that have been identified as being required for success in the rapidly changing, digital society and workplaces. Many of these skills are also defining qualities of progressive education as well as being associated with deeper learning, which is based on mastering skills such as analytic reasoning, complex problem solving, and teamwork. These skills differ from traditional academic skills in that they are not primarily content knowledge-based.
In the West
Germany
Hermann Lietz founded three Landerziehungsheime (country boarding schools) in 1904 based on Reddie's model for boys of different ages. Lietz eventually succeeded in establishing five more Landerziehungsheime. Edith and Paul Geheeb founded Odenwaldschule in Heppenheim in the Odenwald in 1910 using their concept of progressive education, which integrated the work of the head and hand.
Poland
Janusz Korczak was one notable follower and developer of Pestalozzi's ideas. He wrote
The names of Pestalozzi, Froebel and Spencer shine with no less brilliance than the names of the greatest inventors of the twentieth century. For they discovered more than the unknown forces of nature; they discovered the unknown half of humanity: children.
His Orphan's Home in Warsaw became a model institution and exerted influence on the educational process in other orphanages of the same type.
Ireland
The Quaker school run at Ballitore, Co. Kildare in the 18th century had students from as far away as Bordeaux (where there was a substantial Irish émigré population), the Caribbean and Norway. Notable pupils included Edmund Burke and Napper Tandy.
Sgoil Éanna, or in English St Enda's, was founded in 1908 by Pádraig Pearse on Montessori principles. Its former assistant headmaster Thomas MacDonagh and other teachers, including Pearse himself, games master Con Colbert, Pearse's brother Willie (the art teacher), and Joseph Plunkett (an occasional lecturer in English), were executed by the British after the 1916 Rising. Pearse and MacDonagh were two of the seven leaders who signed the Irish Declaration of Independence. Pearse's book The Murder Machine was a denunciation of the English school system of the time and a declaration of his own educational principles.
Sweden
In Sweden, an early proponent of progressive education was Alva Myrdal, who with her husband Gunnar co-wrote Kris i befolkningsfrågan (1934), a most influential program for the social-democratic hegemony (1932–1976) popularly known as "Folkhemmet". School reforms went through government reports in the 1940s and trials in the 1950s, resulting in the introduction in 1962 of public comprehensive schools ("grundskola") instead of the previously separated parallel schools for theoretical and non-theoretical education.
United Kingdom
The ideas from Reddie's Abbotsholme spread to schools such as Bedales School (1893), King Alfred School, London (1898) and St Christopher School, Letchworth (1915), as well as all the Friends' schools, Steiner Waldorf schools and those belonging to the Round Square Conference. The King Alfred School was radical for its time in that it provided a secular education and educated boys and girls together. Alexander Sutherland Neill believed children should achieve self-determination and should be encouraged to think critically rather than blindly obey. He implemented his ideas with the founding of Summerhill School in 1921. Neill believed that children learn better when they are not compelled to attend lessons. The school was also managed democratically, with regular meetings to determine school rules. Pupils had equal voting rights with school staff.
United States
Early practitioners
Fröbel's student Margarethe Schurz founded the first kindergarten in the United States at Watertown, Wisconsin, in 1856, and she also inspired Elizabeth Peabody, who went on to found the first English-speaking kindergarten in the United States – the language at Schurz's kindergarten had been German, to serve an immigrant community – in Boston in 1860. This paved the way for the concept's spread in the USA. The German émigré Adolph Douai had also founded a kindergarten in Boston in 1859, but was obliged to close it after only a year. By 1866, however, he was founding others in New York City.
William Heard Kilpatrick (1871–1965) was a pupil of Dewey and one of the most effective practitioners of the concept, as well as one of the most adept at spreading the progressive education movement and word of Dewey's works. He is especially well known for his "project method of teaching". This developed the progressive education notion that students were to be engaged and taught so that their knowledge may be directed to society for a socially useful need. Like Dewey, he also felt that students should be actively engaged in their learning rather than passively disengaged through the simple reading and regurgitation of material.
The most famous early practitioner of progressive education was Francis Parker; its best-known spokesperson was the philosopher John Dewey. In 1875 Francis Parker became superintendent of schools in Quincy, Massachusetts, after spending two years in Germany studying emerging educational trends on the continent. Parker was opposed to rote learning, believing that there was no value in knowledge without understanding. He argued instead that schools should encourage and respect the child's creativity. Parker's Quincy System called for child-centered and experience-based learning. He replaced the traditional curriculum with integrated learning units based on core themes related to the knowledge of different disciplines. He replaced traditional readers, spellers and grammar books with children's own writing, literature, and teacher-prepared materials. In 1883 Parker left Massachusetts to become Principal of the Cook County Normal School in Chicago, a school that also served to train teachers in Parker's methods. In 1894 Parker's Talks on Pedagogics, which drew heavily on the thinking of Fröbel, Pestalozzi and Herbart, became one of the first American writings on education to gain international fame.
That same year, philosopher John Dewey moved from the University of Michigan to the newly established University of Chicago where he became chair of the department of philosophy, psychology and education. He and his wife enrolled their children in Parker's school before founding their own school two years later.
Whereas Parker started with practice and then moved to theory, Dewey began with hypotheses and then devised methods and curricula to test them. By the time Dewey moved to Chicago at the age of thirty-five, he had already published two books on psychology and applied psychology. He had become dissatisfied with philosophy as pure speculation and was seeking ways to make philosophy directly relevant to practical issues. Moving away from an early interest in Hegel, Dewey proceeded to reject all forms of dualism and dichotomy in favor of a philosophy of experience as a series of unified wholes in which everything can be ultimately related.
In 1896, John Dewey opened what he called the laboratory school to test his theories and their sociological implications. With Dewey as the director and his wife as principal, the University of Chicago Laboratory School was dedicated "to discover in administration, selection of subject-matter, methods of learning, teaching, and discipline, how a school could become a cooperative community while developing in individuals their own capacities and satisfy their own needs." (Cremin, 136) For Dewey the two key goals of developing a cooperative community and developing individuals' own capacities were not at odds; they were necessary to each other. This unity of purpose lies at the heart of the progressive education philosophy. In 1912, Dewey sent out students of his philosophy to found The Park School of Buffalo and The Park School of Baltimore to put it into practice. These schools operate to this day within a similar progressive approach.
At Columbia, Dewey worked with other educators such as Charles Eliot and Abraham Flexner to help bring progressivism into the mainstream of American education. In 1917 Columbia established the Lincoln School of Teachers College "as a laboratory for the working out of an elementary and secondary curriculum which shall eliminate obsolete material and endeavor to work up in usable form material adapted to the needs of modern living." (Cremin, 282) Based on Flexner's demand that the modern curriculum "include nothing for which an affirmative case can not be made out" (Cremin, 281) the new school organized its activities around four fundamental fields: science, industry, aesthetics and civics. The Lincoln School built its curriculum around "units of work" that reorganized traditional subject matter into forms embracing the development of children and the changing needs of adult life. The first and second grades carried on a study of community life in which they actually built a city. A third grade project growing out of the day-to-day life of the nearby Hudson River became one of the most celebrated units of the school: a unit on boats, which, under the guidance of its legendary teacher Miss Curtis, became an entrée into history, geography, reading, writing, arithmetic, science, art and literature. Each of the units was broadly enough conceived so that different children could concentrate on different aspects depending on their own interests and needs. Each of the units called for widely diverse student activities, and each sought to deal in depth with some critical aspect of contemporary civilization. Finally, each unit engaged children working together cooperatively and also provided opportunities for individual research and exploration.
In 1924, Agnes de Lima, the lead writer on education for The New Republic and The Nation, published a collection of her articles on progressive education as a book, titled Our Enemy the Child.
In 1918, the National Education Association, representing superintendents and administrators in smaller districts across the country, issued its report "Cardinal Principles of Secondary Education." It emphasized the education of students in terms of health, a command of fundamental processes, worthy home membership, vocation, citizenship, worthy use of leisure, and ethical character. The report stressed life adjustment and reflected the social efficiency model of progressive education.
From 1919 to 1955, the Progressive Education Association founded by Stanwood Cobb and others worked to promote a more student-centered approach to education. During the Great Depression the organization conducted the Eight-Year Study, evaluating the effects of progressive programs. More than 1500 students over four years were compared to an equal number of carefully matched students at conventional schools. When they reached college, the experimental students were found to equal or surpass traditionally educated students on all outcomes: grades, extracurricular participation, dropout rates, intellectual curiosity, and resourcefulness. Moreover, the study found that the more the school departed from the traditional college preparatory program, the better was the record of the graduates. (Kohn, Schools, 232)
By mid-century, many public school programs had also adopted elements of progressive curriculum. At mid-century, Dewey believed that progressive education had "not really penetrated and permeated the foundations of the educational institution." (Kohn, Schools, 6–7) As the influence of progressive pedagogy grew broader and more diffuse, practitioners began to vary their application of progressive principles. As varying interpretations and practices made progressive reforms more difficult to evaluate, critics began to propose alternative approaches.
The seeds of the debate over progressive education can be seen in the differences of Parker and Dewey. These have to do with how much and by whom curriculum should be worked out from grade to grade, how much the child's emerging interests should determine classroom activities, the importance of child-centered vs. societal–centered learning, the relationship of community building to individual growth, and especially the relationship between emotion, thought and experience.
In 1955, the publication of Rudolf Flesch's Why Johnny Can't Read leveled criticism of reading programs, targeting the progressive emphasis on reading in context. The conservative McCarthy era raised questions about the liberal ideas at the roots of the progressive reforms. The launching of Sputnik in 1957 at the height of the Cold War gave rise to a number of intellectually competitive approaches to disciplinary knowledge, such as BSCS biology and PSSC physics, led by university professors such as Jerome Bruner and Jerrold Zacharias.
Some Cold War reforms incorporated elements of progressivism. For example, the work of Zacharias and Bruner was based in the developmental psychology of Jean Piaget and incorporated many of Dewey's ideas of experiential education. Bruner's analysis of developmental psychology became the core of a pedagogical movement known as constructivism, which argues that the child is an active participant in making meaning and must be engaged in the progress of education for learning to be effective. This psychological approach has deep connections to the work of both Parker and Dewey and led to a resurgence of their ideas in the second half of the century.
In 1965, President Johnson inaugurated the Great Society and the Elementary and Secondary Education Act suffused public school programs with funds for sweeping education reforms. At the same time the influx of federal funding also gave rise to demands for accountability and the behavioral objectives approach of Robert F. Mager and others foreshadowed the No Child Left Behind Act passed in 2002. Against these critics eloquent spokespersons stepped forward in defense of the progressive tradition. The Open Classroom movement, led by Herb Kohl and George Dennison, recalled many of Parker's child centered reforms.
The late 1960s and early 1970s saw a rise and decline in the number of progressive schools. There were several reasons for the decline:
Demographics: As the baby boom passed, traditional classrooms were no longer as over-enrolled, reducing demand for alternatives.
The economy: The oil crisis and recession made shoestring schools less viable.
Times changed: With the ending of the Vietnam War, social activism waned.
Co-optation: Many schools were co-opted by people who didn't believe in the original mission.
Centralization: The ongoing centralization of school districts.
Non-implementation: Schools failed to implement a model of shared governance.
Interpersonal dynamics: Disagreement over school goals, poor group process skills, lack of critical dialogue, and fear of assertive leadership.
Progressive education has been viewed as an alternative to the test-oriented instruction legislated by the No Child Left Behind educational funding act. Alfie Kohn has been an outspoken critic of the No Child Left Behind Act and a passionate defender of the progressive tradition.
In the East
India
Rabindranath Tagore (1861–1941) was one of the most effective practitioners of the concept of progressive education. He expanded his school at Santiniketan, a small town near Bolpur in the Birbhum district of West Bengal, India, approximately 160 km north of Kolkata. He de-emphasized textbook learning in favor of varied learning resources from nature. The emphasis here was on self-motivation rather than on discipline, and on fostering intellectual curiosity rather than competitive excellence. There were courses on a great variety of cultures, and study programs devoted to China, Japan, and the Middle East. He was of the view that education should be a "joyous exercise of our inventive and constructive energies that help us to build up character."
Japan
Seikatsu Tsuzurikata is a grassroots movement in Japan that has many parallels to the progressive education movement, but it developed completely independently, beginning in the late 1920s. The Japanese progressive educational movement was one of the stepping stones to the modernization of Japan, and it has resonated down to the present.
See also
References
Further reading
Bernstein, Richard J. "John Dewey", Encyclopedia of Philosophy, New York: Macmillan, 1967, 380–385
Bruner, Jerome. The Process of Education (New York: Random House, 1960)
Bruner, Jerome. The Relevance of Education (New York: Norton, 1971)
Cappel, Constance, Utopian Colleges, New York: Peter Lang, 1999.
Cohen, Ronald D., and Raymond A. Mohl. The paradox of progressive education: The Gary Plan and urban schooling (1979)
Cremin, Lawrence. The Transformation of the School: Progressivism in American Education, 1876–1957 (New York: Knopf, 1962). The standard scholarly history.
Dewey, John. Dewey on Education, edited by Martin Dworkin. New York: Teachers College Press, 1959
Dewey, John. Democracy and Education. (New York: Free Press, 1944.)
Dewey, John. Experience and Nature. (New York: Dover, 1958.)
Fallace, Thomas. Race and the Origins of Progressive Education, 1880–1929 (2015)
Knoester, Matthew. Democratic Education in Practice: Inside the Mission Hill School. New York: Teachers College Press, 2012.
Kohn, Alfie. The Case Against Standardized Testing (Portsmouth, New Hampshire: Heinemann, 2000)
Kohn, Alfie. The Schools Our Children Deserve (New York: Houghton Mifflin, 1999)
Mager, Robert F. Preparing Behavioral Objectives (Atlanta: Center for Effective Instruction, 1969)
Pratt, Caroline. I Learn from Children (New York: HarperPerennial/HarperCollins, 1948; republished by Grove Atlantic in 2014)
Ravitch, Diane. Left Back: A Century of Battles over School Reform (New York: Simon and Schuster, 2000)
Snyder, Jeffrey Aaron. "Progressive Education in Black and White: Rereading Carter G. Woodson's Miseducation of the Negro." History of Education Quarterly 55#3 (2015): 273–293.
Sutinen, Ari. "Social Reconstructionist Philosophy of Education and George S. Counts-observations on the ideology of indoctrination in socio-critical educational thinking." International Journal of Progressive Education 10#1 (2014).
International
Blossing, Ulf, Gunn Imsen, and Lejf Moos. "Progressive Education and New Governance in Denmark, Norway, and Sweden." The Nordic Education Model (Springer Netherlands, 2014) pp. 133–154.
Christou, Theodore Michael. Progressive Education: Revisioning and Reframing Ontario's Public Schools, 1919–1942 (2013)
Hughes, John Patrick. "Theory into practice in Australian progressive education: the Enmore Activity School." History of Education Review 44#1 (2015).
Keskin, Yusuf. "Progressive Education in Turkey: Reports of John Dewey and his Successors." International Journal of Progressive Education 10#3 (2014).
Knoll, Michael. "The Project Method – Its Origin and International Influence". In: Progressive Education Across the Continents. A Handbook, ed. Hermann Röhrs and Volker Lenhart. New York: Lang 1995. pp. 307–318.
Historiography
Graham, Patricia Albjerg. Progressive Education from Arcady to Academe: A History of the Progressive Education Association, 1919–1955 (1967)
Reese, William J. "The origins of progressive education." History of Education Quarterly 41#1 (2001): 1-24.
Wraga, William G. "Condescension and critical sympathy: Historians of education on progressive education in the United States and England." Paedagogica Historica 50#1-2 (2014): 59–75.
Education theory
Alternative education
Pedagogical movements and theories
Philosophy of education
Progressivism
19th-century reform movements
Neuro-linguistic programming
Neuro-linguistic programming (NLP) is a pseudoscientific approach to communication, personal development and psychotherapy that first appeared in Richard Bandler and John Grinder's 1975 book The Structure of Magic I. NLP asserts that there is a connection between neurological processes, language and acquired behavioral patterns, and that these can be changed to achieve specific goals in life. According to Bandler and Grinder, NLP can treat problems such as phobias, depression, tic disorders, psychosomatic illnesses, near-sightedness, allergy, the common cold, and learning disorders, often in a single session. They also say that NLP can model the skills of exceptional people, allowing anyone to acquire them.
NLP has been adopted by some hypnotherapists as well as by companies that run seminars marketed as leadership training to businesses and government agencies.
There is no scientific evidence supporting the claims made by NLP advocates, and it has been called a pseudoscience. Scientific reviews have shown that NLP is based on outdated metaphors of the brain's inner workings that are inconsistent with current neurological theory, and that NLP contains numerous factual errors. Reviews also found that research that favored NLP contained significant methodological flaws, and that there were three times as many studies of a much higher quality that failed to reproduce the claims made by Bandler, Grinder, and other NLP practitioners.
Early development
According to Bandler and Grinder, NLP consists of a methodology termed modeling, plus a set of techniques that they derived from its initial applications. They derived many of the fundamental techniques from the work of Virginia Satir, Milton Erickson and Fritz Perls. Bandler and Grinder also drew upon the theories of Gregory Bateson, Alfred Korzybski and Noam Chomsky (particularly transformational grammar).
Bandler and Grinder say that their methodology can codify the structure inherent to the therapeutic "magic" as performed in therapy by Perls, Satir and Erickson, and indeed inherent to any complex human activity. From that codification, they say, the structure and its activity can be learned by others. Their 1975 book, The Structure of Magic I: A Book about Language and Therapy, is intended to be a codification of the therapeutic techniques of Perls and Satir.
Bandler and Grinder say that they used their own process of modeling to model Virginia Satir so they could produce what they termed the Meta-Model, a model for gathering information and challenging a client's language and underlying thinking. They say that by challenging linguistic distortions, specifying generalizations, and recovering deleted information in the client's statements, the transformational grammar concept of surface structure yields a more complete representation of the underlying deep structure and therefore has therapeutic benefit. Also derived from Satir were anchoring, future pacing and representational systems.
In contrast, the Milton-Model—a model of the purportedly hypnotic language of Milton Erickson—was described by Bandler and Grinder as "artfully vague" and metaphoric. The Milton-Model is used in combination with the Meta-Model as a softener, to induce "trance" and to deliver indirect therapeutic suggestion.
Psychologist Jean Mercer writes that Chomsky's theories "appear to be irrelevant" to NLP. Linguist Karen Stollznow describes Bandler's and Grinder's reference to such experts as namedropping. Other than Satir, the people they cite as influences did not collaborate with Bandler or Grinder. Chomsky himself has no association with NLP, with his work being theoretical in nature and having no therapeutic element. Stollznow writes, "[o]ther than borrowing terminology, NLP does not bear authentic resemblance to any of Chomsky's theories or philosophies—linguistic, cognitive or political."
According to André Muller Weitzenhoffer, a researcher in the field of hypnosis, "the major weakness of Bandler and Grinder's linguistic analysis is that so much of it is built upon untested hypotheses and is supported by totally inadequate data." Weitzenhoffer adds that Bandler and Grinder misuse formal logic and mathematics, redefine or misunderstand terms from the linguistics lexicon (e.g., nominalization), create a scientific façade by needlessly complicating Ericksonian concepts with unfounded claims, make factual errors, and disregard or confuse concepts central to the Ericksonian approach.
More recently, Bandler has stated, "NLP is based on finding out what works and formalizing it. In order to formalize patterns I utilized everything from linguistics to holography ... The models that constitute NLP are all formal models based on mathematical, logical principles such as predicate calculus and the mathematical equations underlying holography." There is no mention of the mathematics of holography, nor of holography in general, in Spitzer's or Grinder's accounts of the development of NLP.
On the matter of the development of NLP, Grinder recollects:
The philosopher Robert Todd Carroll responded that Grinder had not understood Kuhn's text on the history and philosophy of science, The Structure of Scientific Revolutions. Carroll replied that: (a) individual scientists never have been, nor are they ever, able to create paradigm shifts volitionally, and Kuhn does not suggest otherwise; (b) Kuhn's text does not contain the idea that being unqualified in a field of science is a prerequisite to producing a result that necessitates a paradigm shift in that field; and (c) The Structure of Scientific Revolutions is foremost a work of history and not an instructive text on creating paradigm shifts, nor is such a text possible—extraordinary discovery is not a formulaic procedure. Carroll explained that a paradigm shift is not a planned activity; rather, it is an outcome of scientific effort within the dominant paradigm that produces data that cannot be adequately accounted for within the current paradigm—hence a paradigm shift, i.e. the adoption of a new paradigm. In developing NLP, Bandler and Grinder were not responding to a paradigmatic crisis in psychology, nor did they produce any data that caused a paradigmatic crisis in psychology. There is no sense in which Bandler and Grinder caused or participated in a paradigm shift. "What did Grinder and Bandler do that makes it impossible to continue doing psychology ... without accepting their ideas? Nothing," argued Carroll.
Commercialization and evaluation
By the late 1970s, the human potential movement had developed into an industry and provided a market for some NLP ideas. At the center of this growth was the Esalen Institute at Big Sur, California. Perls had led numerous Gestalt therapy seminars at Esalen. Satir was an early leader and Bateson was a guest teacher. Bandler and Grinder said that, in addition to being a therapeutic method, NLP was also a study of communication, and they began marketing it as a business tool, writing that "if any human being can do anything, so can you." After 150 students paid $1,000 each for a ten-day workshop in Santa Cruz, California, Bandler and Grinder gave up academic writing and started producing popular books from seminar transcripts, such as Frogs into Princes, which sold more than 270,000 copies. According to court documents relating to an intellectual property dispute between Bandler and Grinder, Bandler made more than $800,000 in 1980 from workshop and book sales.
A community of psychotherapists and students began to form around Bandler and Grinder's initial works, leading to the growth and spread of NLP as a theory and practice. For example, Tony Robbins trained with Grinder and utilized a few ideas from NLP as part of his own self-help and motivational speaking programmes. Bandler led several unsuccessful efforts to exclude other parties from using NLP. Meanwhile, the rising number of practitioners and theorists led NLP to become even less uniform than it was at its foundation. Prior to the decline of NLP, scientific researchers began testing its theoretical underpinnings empirically, with research indicating a lack of empirical support for NLP's essential theories. The 1990s were characterized by fewer scientific studies evaluating the methods of NLP than the previous decade. Tomasz Witkowski attributes this to a declining interest in the debate as the result of a lack of empirical support for NLP from its proponents.
Main components and core concepts
NLP can be understood in terms of three broad components: subjectivity, consciousness, and learning.
According to Bandler and Grinder, people experience the world subjectively, creating internal representations of their experiences. These representations involve the five senses and language. In other words, our conscious experiences take the form of sights, sounds, feelings, smells, and tastes. When we imagine something, recall an event, or think about the future, we utilize these same sensory systems within our minds. Furthermore, it is stated that these subjective representations of experience have a discernible structure, a pattern.
Bandler and Grinder assert that behavior (both our own and others') can be understood through these sensory-based internal representations. Behavior here includes verbal and non-verbal communication, as well as effective or adaptive behaviors and less helpful or "pathological" ones. They also assert that behavior in both the self and other people can be modified by manipulating these sense-based subjective representations.
NLP posits that consciousness can be divided into conscious and unconscious components. The part of our internal representations operating outside our direct awareness is referred to as the "unconscious mind".
Finally, NLP uses a method of learning called "modeling", designed to replicate expertise in any field. According to Bandler and Grinder, by analyzing the sequence of sensory and linguistic representations used by an expert while performing a skill, it is possible to create a mental model that can be learned by others.
Techniques or set of practices
According to one study by Steinbach, a classic interaction in NLP can be understood in terms of several major stages, including establishing rapport, gleaning information about a problem mental state and desired goals, using specific tools and techniques to make interventions, and integrating proposed changes into the client's life. The entire process is guided by the non-verbal responses of the client. The first stage is the act of establishing and maintaining rapport between the practitioner and the client, which is achieved through pacing and leading the verbal (e.g., sensory predicates and keywords) and non-verbal behavior (e.g., matching and mirroring non-verbal behavior, or responding to eye movements) of the client.
Once rapport is established, the practitioner may gather information about the client's present state as well as help the client define a desired state or goal for the interaction. The practitioner pays attention to the verbal and non-verbal responses as the client defines the present state and desired state and any resources that may be required to bridge the gap. The client is typically encouraged to consider the consequences of the desired outcome, and how they may affect his or her personal or professional life and relationships, taking into account any positive intentions of any problems that may arise. The practitioner thereafter assists the client in achieving the desired outcomes by using certain tools and techniques to change internal representations and responses to stimuli in the world. Finally, the practitioner helps the client to mentally rehearse and integrate the changes into his or her life. For example, the client may be asked to envision what it is like having already achieved the outcome.
According to Stollznow, "NLP also involves fringe discourse analysis and 'practical' guidelines for 'improved' communication. For example, one text asserts 'when you adopt the "but" word, people will remember what you said afterwards. With the "and" word, people remember what you said before and after.'"
Applications
Alternative medicine
NLP has been promoted as being able to treat a variety of diseases including Parkinson's disease, HIV/AIDS and cancer. Such claims have no supporting medical evidence. People who use NLP as a form of treatment risk serious adverse health consequences as it can delay the provision of effective medical care.
Psychotherapeutic
Early books about NLP had a psychotherapeutic focus, given that the early models were psychotherapists. As an approach to psychotherapy, NLP shares core assumptions and foundations with some contemporary brief and systemic practices, such as solution focused brief therapy. NLP has also been acknowledged as having influenced these practices with its reframing techniques, which seek to achieve behavior change by shifting its context or meaning, for example, by finding the positive connotation of a thought or behavior.
The two main therapeutic uses of NLP are, firstly, as an adjunct by therapists practicing in other therapeutic disciplines and, secondly, as a specific therapy called Neurolinguistic Psychotherapy.
According to Stollznow, "Bandler and Grinder's infamous Frogs into Princes and their other books boast that NLP is a cure-all that treats a broad range of physical and mental conditions and learning difficulties, including epilepsy, myopia and dyslexia. With its promises to cure schizophrenia, depression and Post Traumatic Stress Disorder, and its dismissal of psychiatric illnesses as psychosomatic, NLP shares similarities with Scientology and the Citizens Commission on Human Rights (CCHR)." A systematic review of experimental studies by Sturt et al. (2012) concluded that "there is little evidence that NLP interventions improve health-related outcomes." In his review of NLP, Stephen Briers writes, "NLP is not really a cohesive therapy but a ragbag of different techniques without a particularly clear theoretical basis ... [and its] evidence base is virtually non-existent." Eisner writes, "NLP appears to be a superficial and gimmicky approach to dealing with mental health problems. Unfortunately, NLP appears to be the first in a long line of mass marketing seminars that purport to virtually cure any mental disorder ... it appears that NLP has no empirical or scientific support as to the underlying tenets of its theory or clinical effectiveness. What remains is a mass-marketed serving of psychopablum."
André Muller Weitzenhoffer—a friend and peer of Milton Erickson—wrote, "Has NLP really abstracted and explicated the essence of successful therapy and provided everyone with the means to be another Whittaker, Virginia Satir, or Erickson? ... [NLP's] failure to do this is evident because today there is no multitude of their equals, not even another Whittaker, Virginia Satir, or Erickson. Ten years should have been sufficient time for this to happen. In this light, I cannot take NLP seriously ... [NLP's] contributions to our understanding and use of Ericksonian techniques are equally dubious. Patterns I and II are poorly written works that were an overambitious, pretentious effort to reduce hypnotism to a magic of words."
Clinical psychologist Stephen Briers questions the value of the NLP maxim—a presupposition in NLP jargon—"there is no failure, only feedback". Briers argues that the denial of the existence of failure diminishes its instructive value. He offers Walt Disney, Isaac Newton and J.K. Rowling as three examples of unambiguous acknowledged personal failure that served as an impetus to great success. According to Briers, it was "the crash-and-burn type of failure, not the sanitised NLP Failure Lite, i.e. the failure-that-isn't really-failure sort of failure" that propelled these individuals to success. Briers contends that adherence to the maxim leads to self-deprecation. According to Briers, personal endeavour is a product of invested values and aspirations and the dismissal of personally significant failure as mere feedback effectively denigrates what one values. Briers writes, "Sometimes we need to accept and mourn the death of our dreams, not just casually dismiss them as inconsequential." Briers also contends that the NLP maxim is narcissistic, self-centered and divorced from notions of moral responsibility.
Other uses
Although the original core techniques of NLP were therapeutic in orientation, their generic nature enabled them to be applied to other fields. These applications include persuasion, sales, negotiation, management training, sports, teaching, coaching, team building, public speaking, and the process of hiring employees.
Scientific criticism
In the early 1980s, NLP was advertised as an important advance in psychotherapy and counseling, and attracted some interest in counseling research and clinical psychology. However, as controlled trials failed to show any benefit from NLP and its advocates made increasingly dubious claims, scientific interest in NLP faded.
Numerous literature reviews and meta-analyses have failed to show evidence for NLP's assumptions or effectiveness as a therapeutic method. While some NLP practitioners have argued that the lack of empirical support is due to insufficient research which tests NLP, the consensus scientific opinion is that NLP is pseudoscience and that attempts to dismiss the research findings based on these arguments "[constitutes] an admission that NLP does not have an evidence base and that NLP practitioners are seeking a post-hoc credibility."
Surveys in the academic community have shown NLP to be widely discredited among scientists. Among the reasons for considering NLP a pseudoscience are that evidence in favor of it is limited to anecdotes and personal testimony, that it is not informed by scientific understanding of neuroscience and linguistics, and that the name "neuro-linguistic programming" uses jargon words to impress readers and obfuscate ideas, whereas NLP itself does not relate any phenomena to neural structures and has nothing in common with linguistics or programming. In education, NLP has been used as a key example of pseudoscience.
As a quasi-religion
Sociologists and anthropologists have categorized NLP as a quasi-religion belonging to the New Age and/or Human Potential Movements.
Medical anthropologist Jean M. Langford categorizes NLP as a form of folk magic; that is to say, a practice with symbolic efficacy—as opposed to physical efficacy—that is able to effect change through nonspecific effects (e.g., placebo). To Langford, NLP is akin to a syncretic folk religion "that attempts to wed the magic of folk practice to the science of professional medicine".
Bandler and Grinder were influenced by the shamanism described in the books of Carlos Castaneda. Concepts like "double induction" and "stopping the world", central to NLP modeling, were incorporated from these influences.
Some theorists characterize NLP as a type of "psycho-shamanism", and its focus on modeling has been compared to ritual practices in certain syncretic religions. The emphasis on lineage from an NLP guru has also been likened to similar concepts in some Eastern religions. Aupers, Houtman, and Bovbjerg identify NLP as a New Age "psycho-religion". Bovbjerg argues that New Age movements center on a transcendent "other". While monotheistic religions seek communion with a divine being, this focus shifts inward in these movements, with the "other" becoming the unconscious self. Bovbjerg posits that this emphasis on the unconscious and its hidden potential underlies NLP techniques promoting self-perfection through ongoing transformation.
Bovbjerg's secular critique is echoed in the conservative Christian perspective, as exemplified by David Jeremiah, who argues that NLP's emphasis on self-transformation and internal power conflicts with the Christian belief in salvation through divine grace.
Legal disputes
Founding, initial disputes, and settlement (1979–1981)
In 1979, Richard Bandler and John Grinder established the Society of Neuro-Linguistic Programming (NLP) to manage commercial applications of NLP, including training, materials, and certification. The founding agreement conferred exclusive rights to profit from NLP training and certification upon Bandler's corporate entity, Not Ltd. Around November 1980, Bandler and Grinder ceased collaborating for undisclosed reasons.
On September 25, 1981, Bandler filed suit against Grinder's corporate entity, Unlimited Ltd., in the Superior Court of California, County of Santa Cruz seeking injunctive relief and damages arising from Grinder's NLP-related commercial activities; the Court issued a judgment in Bandler's favor on October 29, 1981. The subsequent settlement agreement granted Grinder a 10-year license to conduct NLP seminars, offer NLP certification, and utilize the NLP name, subject to royalty payments to Bandler.
Further litigation and consequences (1996–2000)
Bandler commenced further civil actions against Unlimited Ltd., various figures within the NLP community, and 200 initially unnamed defendants in July 1996 and January 1997. Bandler alleged violations of the initial settlement terms by Grinder and sought damages of no less than US$10,000,000.00 from each defendant.
In February 2000, the Court ruled against Bandler. The judgment asserted that Bandler had misrepresented his exclusive ownership of NLP intellectual property and sole authority over Society of NLP membership and certification.
Trademark revocation (1997)
In December 1997, a separate civil proceeding initiated by Tony Clarkson resulted in the revocation of Bandler's UK trademark of NLP. The Court ruled in Clarkson's favor.
Resolution and legacy (2000)
Bandler and Grinder reached a settlement in late 2000, acknowledging their status as co-creators and co-founders of NLP and committing to refrain from disparaging one another's NLP-related endeavors.
Due to these disputes and settlements, the terms 'NLP' and 'Neuro-Linguistic Programming' remain in the public domain. The designations are not owned, trademarked, or subject to centralized regulation: no single party holds exclusive rights, there are no restrictions on offering NLP certifications, and nothing prevents individuals from self-identifying as "NLP Master Practitioners" or "NLP Master Trainers." This decentralization has led to numerous certifying associations.
Decentralization and criticism
This lack of centralized control means there is no single standard for NLP practice or training. Practitioners can market their own methodologies, leading to inconsistencies within the field. This has been a source of criticism, highlighted by an incident in 2009 in which a British television presenter registered his cat with the British Board of Neuro Linguistic Programming (BBNLP), demonstrating the organization's lax credentialing. Critics like Karen Stollznow find irony in the initial legal battles between Bandler and Grinder, considering their failure to apply their own NLP principles to resolve their conflict. Others, such as Grant Devilly, characterize NLP associations as "granfalloons"—a term implying a lack of unifying principles or a shared sense of purpose.
See also
Avatar Course
Family systems therapy
Frank Farrelly
List of New Age topics
List of unproven and disproven cancer treatments
Solution-focused brief therapy
Notable practitioners
Steve Andreas
Paul McKenna
Notes
References
Citations
Works cited
Primary sources
Secondary sources
Further reading
External links
Hypnotherapy
Pseudoscience
Mill's Methods
Mill's Methods are five methods of induction described by philosopher John Stuart Mill in his 1843 book A System of Logic. They are intended to establish a causal relationship between two or more groups of data, analyzing their respective differences and similarities.
The methods
Direct method of agreement
For a property to be a necessary condition, it must always be present when the effect is present. We are therefore interested in cases where the effect is present, taking note of which properties, among those considered 'possible necessary conditions', are present and which are absent. Any property absent when the effect is present cannot be a necessary condition for the effect. This method is also referred to more generally within comparative politics as the most different systems design.
Symbolically, the method of agreement can be represented as:
A B C D occur together with w x y z
A E F G occur together with w t u v
——————————————————
Therefore A is the cause, or the effect, of w.
To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, the dependent variable in this case, is that they have a system of universal health care. Comparing the factors known about the countries above, a comparative political scientist would conclude that the government sitting on the centre-left of the spectrum would be the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound; social democratic (centre-left) policies often include universal health care.
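Read mechanically, the schema above amounts to intersecting the circumstance sets of the cases in which the effect is present. The following minimal Python sketch illustrates that reading; the function name and the encoded country data are invented for illustration and mirror the example just given:

    def method_of_agreement(cases):
        # cases: a list of sets, each holding the circumstances observed
        # in one instance where the effect occurred. The candidate causes
        # are the circumstances common to every such instance.
        return set.intersection(*cases)

    country_a = {"former colony", "centre-left government", "federal system"}
    country_b = {"never a colony", "centre-left government", "unitary state"}
    print(method_of_agreement([country_a, country_b]))
    # -> {'centre-left government'}

As in the prose example, the intersection only nominates a candidate cause; the causal reading still depends on sound theoretical backing.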
Method of difference
For a property to be a sufficient condition, the effect must be present whenever the property is present. The method therefore compares a case in which the effect occurs with an otherwise similar case in which it does not: the circumstance in which alone the two cases differ is taken to be the cause, or part of the cause, of the effect. This method is also known more generally as the most similar systems design within comparative politics.
A B C D occur together with w x y z
B C D occur together with x y z
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of w.
As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable to be the status of each country as a former colony or not, with the dependent variable being support for anti-colonial initiatives. This is because, out of the two similar countries compared, the difference between the two is whether or not they were formerly a colony. This then explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony.
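Under the same mechanical reading, the method of difference reduces to a set difference between an otherwise matched pair of cases. A brief Python sketch, again with invented case data mirroring the example above:

    def method_of_difference(case_with_effect, case_without_effect):
        # Both arguments are sets of circumstances. The cases are assumed
        # to be alike in every respect except the circumstances returned,
        # which are nominated as the cause, or part of the cause, of the effect.
        return case_with_effect - case_without_effect

    country_a = {"centre-right government", "unitary system", "former colony"}
    country_b = {"centre-right government", "unitary system"}
    print(method_of_difference(country_a, country_b))
    # -> {'former colony'}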
Indirect method of difference
Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement. Despite the name, it is weaker than the direct method of difference and does not include it.
Symbolically, the Joint method of agreement and difference can be represented as:
A B C occur together with x y z
A D E occur together with x v w
F G occur with y w
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of x.
Method of residue
If a range of factors are believed to cause a range of phenomena, and we have matched all the factors, except one, with all the phenomena, except one, then the remaining phenomenon can be attributed to the remaining factor.
Symbolically, the Method of Residue can be represented as:
A B C occur together with x y z
B is known to be the cause of y
C is known to be the cause of z
——————————————————
Therefore A is the cause or effect of x.
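The method of residue can likewise be read as simple subtraction: remove the factor-phenomenon pairings already accounted for and attribute whatever remains. A minimal sketch under that reading, in which the factors, phenomena, and known pairings are invented placeholders matching the schema above:

    def method_of_residue(factors, phenomena, known_pairs):
        # known_pairs maps each already-explained factor to its phenomenon.
        # The unmatched factor and phenomenon that remain are paired up.
        residual_factors = set(factors) - set(known_pairs.keys())
        residual_phenomena = set(phenomena) - set(known_pairs.values())
        return residual_factors, residual_phenomena

    factors, phenomena = ["A", "B", "C"], ["x", "y", "z"]
    known = {"B": "y", "C": "z"}
    print(method_of_residue(factors, phenomena, known))
    # -> ({'A'}, {'x'})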
Method of concomitant variations
If across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing both salt and lead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead.
Symbolically, the method of concomitant variation can be represented as (with ± representing a shift):
A B C occur together with x y z
A± B C results in x± y z.
—————————————————————
Therefore A and x are causally connected.
Unlike the preceding four inductive methods, the method of concomitant variation does not involve the elimination of any circumstance. Changing the magnitude of one factor results in a change in the magnitude of another factor.
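Because concomitant variation is a claim about covariation rather than elimination, modern practice would quantify it with a correlation measure. A sketch of that reading using Python's standard library (statistics.correlation requires Python 3.10 or later; the lead and toxicity figures are made up for illustration):

    from statistics import correlation

    lead_level = [1.0, 2.0, 3.0, 4.0, 5.0]  # lead concentration per sample
    toxicity = [0.9, 2.1, 2.9, 4.2, 5.1]    # measured toxicity per sample
    print(correlation(lead_level, toxicity))
    # close to 1.0, so the two quantities vary in tandem

A high correlation establishes association rather than causation; as with the other methods, the inference to a causal connection remains defeasible.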
See also
Causal inference
Controlled scientific experiments
Baconian method
Bayesian network
Koch's postulates
References
Further reading
External links
Causal Reasoning—Provides some examples
Mill's methods for identifying causes—Provides some examples
Causality
Inductive reasoning
John Stuart Mill
Concepts in metaphysics
Life course approach
The life course approach, also known as the life course perspective or life course theory, refers to an approach developed in the 1960s for analyzing people's lives within structural, social, and cultural contexts. It views one's life as a socially sequenced timeline and recognizes the importance of factors such as generational succession and age in shaping behavior and career. Development does not end at childhood, but instead extends through multiple life stages to influence life trajectory.
The origins of this approach can be traced back to pioneering studies of the 1920s such as William I. Thomas and Florian Znaniecki's The Polish Peasant in Europe and America and Karl Mannheim's essay on the "Problem of Generations".
Overview
The life course approach examines an individual's life history and investigates, for example, how early events influenced future decisions and events such as marriage and divorce, engagement in crime, or disease incidence. The primary factor promoting standardization of the life course was improvement in mortality rates brought about by the management of contagious and infectious diseases such as smallpox. A life course is defined as "a sequence of socially defined events and roles that the individual enacts over time". In particular, the approach focuses on the connection between individuals and the historical and socioeconomic context in which these individuals lived.
The approach draws on observations from disciplines including history, sociology, demography, developmental psychology, biology, public health, and economics. So far, empirical research from a life course perspective has not resulted in the development of a formal theory.
Glen Elder theorized the life course as based on five key principles: life-span development, human agency, historical time and geographic place, timing of decisions, and linked lives. The socially defined events and roles that make up a life course (Giele and Elder 1998, p. 22) do not necessarily proceed in a given sequence, but rather constitute the sum total of the person's actual experience. Thus the concept of life course implies age-differentiated social phenomena distinct from uniform life-cycle stages and the life span. Life span refers to duration of life and characteristics that are closely related to age but that vary little across time and place.
In contrast, the life course perspective elaborates the importance of time, context, process, and meaning on human development and family life (Bengtson and Allen 1993). The family is perceived as a micro social group within a macro social context—a "collection of individuals with shared history who interact within ever-changing social contexts across ever increasing time and space" (Bengtson and Allen 1993, p. 470). Aging and developmental change, therefore, are continuous processes that are experienced throughout life. As such, the life course reflects the intersection of social and historical factors with personal biography and development within which the study of family life and social change can ensue (Elder 1985; Hareven 1996).
Life course theory also has moved in a constructionist direction. Rather than taking time, sequence, and linearity for granted, in their book Constructing the Life Course, Jaber F. Gubrium and James A. Holstein (2000) take their point of departure from accounts of experience through time. This shifts the figure and ground of experience and its stories, foregrounding how time, sequence, linearity, and related concepts are used in everyday life. It presents a radical turn in understanding experience through time, moving well beyond the notion of a multidisciplinary paradigm, providing an altogether different paradigm from traditional time-centered approaches. Rather than concepts of time being the principal building blocks of propositions, concepts of time are analytically bracketed and become focal topics of research and constructive understanding.
The life course approach has been applied to topics such as the occupational health of immigrants, and retirement age. It has also become increasingly important in other areas such as in the role of childhood experiences affecting the behaviour of students later in life or physical activity in old age.
References
Further reading
Elder, G. H. Jr & Giele, J. Z. (2009). Life Course Studies: An Evolving Field. In Elder, G. H. Jr & Giele, J. Z. (Eds.), The Craft of Life Course Research (pp. 1–28). New York, London: The Guilford Press.
Levy, R., Ghisletta, P., Le Goff, J. M., Spini, D., & Widmer, E. (2005). Towards an Interdisciplinary Perspective on the Life Course. pp. 3–32. Elsevier.
Developmental psychology
Methods in sociology
Epidemiology
Understanding by Design
Understanding by Design, or UbD, is an educational theory for curriculum design of a school subject, where planners look at the desired outcomes at the end of the study in order to design curriculum units, performance assessments, and classroom instruction. UbD is an example of backward design, the practice of looking at the outcomes first, and focuses on teaching to achieve understanding. It is advocated by Jay McTighe and Grant Wiggins (1950–2015) in their Understanding by Design (1998), published by the Association for Supervision and Curriculum Development. Understanding by Design and UbD are registered trademarks of the Association for Supervision and Curriculum Development (ASCD).
Backward design
Understanding by Design relies on what Wiggins and McTighe call "backward design" (also known as "backwards planning"). Teachers, according to UbD proponents, traditionally start curriculum planning with activities and textbooks instead of identifying classroom learning goals and planning towards that goal. In backward design, the teacher starts with classroom outcomes and then plans the curriculum, choosing activities and materials that help determine student ability and foster student learning.
The backward design approach has three stages. Stage 1 is identification of desired results for students. This may use content standards, common core or state standards. Stage 1 defines "Students will understand that..." and lists essential questions that will guide the learner to understanding. Stage 2 is determining what evidence will show that students have achieved those desired results, that is, planning assessments of learning. Stage 3 is planning the learning activities that will lead students to the desired results.
Teaching for understanding
In their article on science education, Smith and Siegel argue "that education aims at the imparting of knowledge: students are educated in part so that they may come to know things". While a student can know a lot about a particular subject, teachers globally are beginning to push their students to go beyond simple recall. This is where understanding plays an important role. The goal of Teaching for Understanding is to give students the tools to take what they know, and what they will eventually know, and make a mindful connection between the ideas. In a world that is filled with data, teachers are only able to help students learn a small number of ideas and facts. As such, it is important that we give students the tools needed to decipher and understand the ideas. This transferability of skills is at the heart of McTighe and Wiggins' technique. If a student is able to transfer the skills they learn in the classroom to unfamiliar situations, whether academic or non-academic, they are said to truly understand.
Teaching for Understanding has also been used as a framework for developing literacy education for TESOL students; see Pearson and Pellerine (2010).
See also
Educational theory
Instructional design
References
External links
Grant Wiggins (1950–2015), Authentic Education
Bowen, Ryan S., (2017). Understanding by Design. Vanderbilt University Center for Teaching. Retrieved 4/2/2020
Philosophy of education
Pedagogical movements and theories
1998 non-fiction books
Thaumaturgy
Thaumaturgy, derived from the Greek words thauma (wonder) and ergon (work), refers to the practical application of magic to effect change in the physical world. Historically, thaumaturgy has been associated with the manipulation of natural forces, the creation of wonders, and the performance of magical feats through esoteric knowledge and ritual practice. Unlike theurgy, which focuses on invoking divine powers, thaumaturgy is more concerned with utilizing occult principles to achieve specific outcomes, often in a tangible and observable manner. It is sometimes translated into English as wonderworking.
This concept has evolved from its ancient roots in magical traditions to its incorporation into modern Western esotericism. Thaumaturgy has been practiced by individuals seeking to exert influence over the material world through both subtle and overt magical means. It has played a significant role in the development of magical systems, particularly those that emphasize the practical aspects of esoteric work.
In modern times, thaumaturgy continues to be a subject of interest within the broader field of occultism, where it is studied and practiced as part of a larger system of magical knowledge. Its principles are often applied in conjunction with other forms of esoteric practice, such as alchemy and Hermeticism, to achieve a deeper understanding and mastery of the forces that govern the natural and supernatural worlds.
A practitioner of thaumaturgy is a "thaumaturge", "thaumaturgist", "thaumaturgus", "miracle worker", or "wonderworker".
Etymology
The word thaumaturgy derives from Greek thaûma, meaning "miracle" or "marvel" (final t from genitive thaûmatos) and érgon, meaning "work". In the 16th century, the word thaumaturgy entered the English language meaning miraculous or magical powers. The word was first anglicized and used in the magical sense in John Dee's book The Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570). He mentions an "art mathematical" called "thaumaturgy... which giveth certain order to make strange works, of the sense to be perceived and of men greatly to be wondered at".
Historical development
Ancient roots
The origins of thaumaturgy can be traced back to ancient civilizations where magical practices were integral to both religious rituals and daily life. In ancient Egypt, priests were often regarded as thaumaturges, wielding their knowledge of rituals and incantations to influence natural and supernatural forces. These practices were aimed at protecting the Pharaoh, ensuring a successful harvest, or even controlling the weather. Similarly, in ancient Greece, certain figures were believed to possess the ability to perform miraculous feats, often attributed to their deep understanding of the mysteries of the gods and nature. This blending of religious and magical practices laid the groundwork for what would later be recognized as thaumaturgy in Western esotericism.
In Greek writings, the term thaumaturge also referred to several Christian saints. In this context, the word is usually translated into English as 'wonderworker'. Notable early Christian thaumaturges include Gregory Thaumaturgus (c. 213–270), Saint Menas of Egypt (285–c. 309), Saint Nicholas (270–343), and Philomena (c. 300 (?)).
Medieval and Renaissance Europe
During the medieval period, thaumaturgy evolved within the context of Christian mysticism and early scientific thought. The medieval understanding of thaumaturgy was closely linked to the idea of miracles, with saints and holy men often credited with thaumaturgic powers. The seventeenth-century Irish Franciscan editor John Colgan called the three early Irish saints, Patrick, Brigid, and Columba, thaumaturges in his Acta Triadis Thaumaturgae (Louvain, 1647). Later notable medieval Christian thaumaturges include Anthony of Padua (1195–1231) and the bishop of Fiesole, Andrew Corsini of the Carmelites (1302–1373), who was called a thaumaturge during his lifetime. This period also saw the development of grimoires—manuals for magical practices—where rituals and spells were documented, often blending Christian and pagan traditions.
In the Renaissance, the concept of thaumaturgy expanded as scholars like John Dee explored the intersections between magic, science, and religion. Dee's Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570) is one of the earliest English texts to discuss thaumaturgy, describing it as the art of creating "strange works" through a combination of natural and mathematical principles. Dee's work reflects the Renaissance pursuit of knowledge that blurred the lines between the magical and the mechanical, as thaumaturges were often seen as early scientists who harnessed the hidden powers of nature.
In Dee's time, "the Mathematicks" referred not merely to the abstract computations associated with the term today, but to physical mechanical devices which employed mathematical principles in their design. These devices, operated by means of compressed air, springs, strings, pulleys or levers, were seen by unsophisticated people (who did not understand their working principles) as magical devices which could only have been made with the aid of demons and devils.
By building such mechanical devices, Dee earned a reputation as a conjurer "dreaded" by neighborhood children. He complained of this assessment in his Mathematicall Praeface:
Notable Renaissance and Age of Enlightenment Christian thaumaturges of the period include Gerard Majella (1726–1755), Ambrose of Optina (1812–1891), and John of Kronstadt (1829–1908).
Incorporation into modern esotericism
The transition into modern esotericism saw thaumaturgy taking on a more structured role within various magical systems, particularly those developed in the 18th and 19th centuries. In Hermeticism and the Western occult tradition, thaumaturgy was often practiced alongside alchemy and theurgy, with a focus on manipulating the material world through ritual and symbolic action. The Hermetic Order of the Golden Dawn, a prominent magical order founded in the late 19th century, incorporated thaumaturgy into its curriculum, emphasizing the importance of both theory and practice in the mastery of magical arts.
Thaumaturgy's role in modern esotericism also intersects with the rise of ceremonial magic, where it is often employed to achieve specific, practical outcomes—ranging from healing to the invocation of spirits. Contemporary magicians continue to explore and adapt thaumaturgic practices, often drawing from a wide range of historical and cultural sources to create eclectic and personalized systems of magic.
Core principles and practices
Principles of sympathy and contagion
Thaumaturgy is often governed by two key magical principles: the Principle of Sympathy and the Principle of Contagion. These principles are foundational in understanding how thaumaturges influence the physical world through magical means. The Principle of Sympathy operates on the idea that "like affects like", meaning that objects or symbols that resemble each other can influence each other. For example, a miniature representation of a desired outcome, such as a model of a bridge, could be used in a ritual to ensure the successful construction of an actual bridge. The Principle of Contagion, on the other hand, is based on the belief that objects that were once in contact continue to influence each other even after they are separated. This principle is often employed in the use of personal items, such as hair or clothing, in rituals to affect the person to whom those items belong.
These principles are not unique to thaumaturgy but are integral to many forms of magic across cultures. However, in the context of thaumaturgy, they are particularly important because they provide a theoretical framework for understanding how magical actions can produce tangible results in the material world. This focus on practical outcomes distinguishes thaumaturgy from other forms of magic that may be more concerned with spiritual or symbolic meanings.
Tools and rituals
Thaumaturgical practices often involve the use of specific tools and rituals designed to channel and direct magical energy. Common tools include wands, staffs, talismans, and ritual knives, each of which serves a particular purpose in the practice of magic. For instance, a wand might be used to direct energy during a ritual, while a talisman could serve as a focal point for the thaumaturge's intent. The creation and consecration of these tools are themselves ritualized processes, often requiring specific materials and astrological timing to ensure their effectiveness.
Rituals in thaumaturgy are typically elaborate and may involve the recitation of incantations, the drawing of protective circles, and the invocation of spirits or deities. These rituals are designed to create a controlled environment in which the thaumaturge can manipulate natural forces according to their will. The complexity of these rituals varies depending on the desired outcome, with more significant or ambitious goals requiring more intricate and time-consuming procedures.
Energy manipulation
At the heart of thaumaturgy is the concept of energy manipulation. Thaumaturges believe that the world is filled with various forms of energy that can be harnessed and directed through magical practices. This energy is often conceptualized as a natural force that permeates the universe; through the use of specific techniques, practitioners believe they can direct it to bring about desired changes in the physical world.
Energy manipulation in thaumaturgy involves both drawing energy from the surrounding environment and directing it toward a specific goal. This process often requires a deep understanding of the natural world, as well as the ability to focus and control one's own mental and spiritual energies. In many traditions, this energy is also linked to the practitioner's life force, meaning that the act of performing thaumaturgy can be physically and spiritually taxing. As a result, practitioners often undergo rigorous training and preparation to build their capacity to manipulate energy effectively and safely.
In esoteric traditions
Hermetic Qabalah
In Hermetic Qabalah, thaumaturgy occupies a significant role as it involves the practical application of mystical principles to influence the physical world. This tradition is deeply rooted in the concept of correspondences, where different elements of the cosmos are seen as interconnected. In the Hermetic tradition, a thaumaturge seeks to manipulate these correspondences to bring about desired changes. The sephiroth on the Tree of Life serve as a map for these interactions, with specific rituals and symbols corresponding to different sephiroth and their associated powers. For example, a ritual focusing on Yesod (the sephirah of the Moon) might involve elements such as silver, the color white, and the invocation of lunar deities to influence matters of intuition, dreams, or the subconscious mind.
The manipulation of these correspondences through ritual is not just symbolic but is believed to produce real effects in the material world. Practitioners use complex rituals that might include the use of sacred geometry, invocations, and the creation of talismans. These practices are believed to align the practitioner with the forces they wish to control, creating a sympathetic connection that enables them to direct these forces effectively. Aleister Crowley's Magick (Book 4) provides an extensive discussion on the use of ritual tools such as the wand, cup, and sword, each of which corresponds to different elements and powers within the Qabalistic system, emphasizing the practical aspect of these tools in thaumaturgic practices.
Alchemy and thaumaturgy
Alchemy and thaumaturgy are often intertwined, particularly in the context of spiritual transformation and the pursuit of enlightenment. Alchemy, with its focus on the transmutation of base metals into gold and the quest for the philosopher's stone, can be seen as a form of thaumaturgy where the practitioner seeks to transform not just physical substances but also the self. This process, known as the Great Work, involves the purification and refinement of both matter and spirit. Thaumaturgy comes into play as the practical aspect of alchemy, where rituals, symbols, and substances are used to facilitate these transformations.
The alchemical process is heavily laden with symbolic meanings, with each stage representing a different phase of transformation. The stages of nigredo (blackening), albedo (whitening), citrinitas (yellowing), and rubedo (reddening) correspond not only to physical changes in the material being worked on but also to stages of spiritual purification and enlightenment. Thaumaturgy, in this context, is the application of these principles to achieve tangible results, whether in the form of creating alchemical elixirs, talismans, or achieving spiritual goals. Crowley also elaborates on these alchemical principles in Magick (Book 4), particularly in his discussions on the symbolic and practical uses of alchemical symbols and processes within magical rituals.
Other esoteric systems
Thaumaturgy also plays a role in various other esoteric systems, where it is often viewed as a means of bridging the gap between the mundane and the divine. In Theosophy, for example, thaumaturgy is seen as part of the esoteric knowledge that allows practitioners to manipulate spiritual and material forces. Theosophical teachings emphasize the unity of all life and the interconnection of the cosmos, with thaumaturgy being a practical tool for engaging with these truths. Rituals and meditative practices are used to align the practitioner's will with higher spiritual forces, enabling them to effect change in the physical world.
In Rosicrucianism, thaumaturgy is similarly regarded as a method of spiritual practice that leads to the mastery of natural and spiritual laws. Rosicrucians believe that through the study of nature and the application of esoteric principles, one can achieve a deep understanding of the cosmos and develop the ability to influence it. This includes the use of rituals, symbols, and sacred texts to bring about spiritual growth and material success.
In the introduction of his translation of the "Spiritual Powers (神通 Jinzū)" chapter of Dōgen's Shōbōgenzō, Carl Bielefeldt refers to the powers developed by adepts of Esoteric Buddhism as belonging to the "thaumaturgical tradition". These powers, known as siddhi or abhijñā, were ascribed to the Buddha and subsequent disciples. Legendary monks like Bodhidharma, Upagupta, Padmasambhava, and others were depicted in popular legends and hagiographical accounts as wielding various supernatural powers.
Misconceptions and modern interpretations
Distinction from theurgy
A common misconception about thaumaturgy is its conflation with theurgy. While both involve the practice of magic, they serve distinct purposes and operate on different principles. Theurgy is primarily concerned with invoking divine or spiritual beings to achieve union with the divine, often for purposes of spiritual ascent or enlightenment. Thaumaturgy, on the other hand, focuses on the manipulation of natural forces to produce tangible effects in the physical world. This distinction is crucial in understanding the differing objectives of these practices: theurgy is inherently religious and mystical, while thaumaturgy is more pragmatic and results-oriented.
Aleister Crowley, in his Magick (Book 4), emphasizes the importance of understanding these differences, noting that while theurgic practices seek to align the practitioner with divine will, thaumaturgy allows the practitioner to exert their will over the material world through the application of esoteric knowledge and ritual.
Modern misunderstandings
In modern times, thaumaturgy is often misunderstood, particularly in popular culture where it is sometimes depicted as synonymous with fantasy magic or "miracle-working" in a religious sense. These portrayals can dilute the rich historical and esoteric significance of thaumaturgy, reducing it to a mere trope of magical fiction. For instance, the term is frequently used in fantasy literature and role-playing games to describe a generic form of magic, without consideration for its historical roots or the complex practices associated with it in esoteric traditions.
This modern misunderstanding is partly due to the broadening of the term "thaumaturgy" in contemporary discourse, where it is often detached from its original context and used more loosely. As a result, the nuanced distinctions between different types of magic, such as thaumaturgy and theurgy, are often overlooked, leading to a homogenized view of magical practices.
In popular culture
The term thaumaturgy is used in various games as a synonym for magic, a particular sub-school (often mechanical) of magic, or as the "science" of magic.
Thaumaturgy is defined as the "science" or "physics" of magic by Isaac Bonewits in his 1971 book Real Magic, a definition he also used in creating an RPG reference called Authentic Thaumaturgy (1978, 1998, 2005).
See also
Sigil; for example, the sigils of the Behenian fixed stars
References
Works cited
External links
Nudge theory
Nudge theory is a concept in behavioral economics, decision making, behavioral policy, social psychology, consumer behavior, and related behavioral sciences that proposes adaptive designs of the decision environment (choice architecture) as ways to influence the behavior and decision-making of groups or individuals. Nudging contrasts with other ways to achieve compliance, such as education, legislation or enforcement.
The nudge concept was popularized in the 2008 book Nudge: Improving Decisions About Health, Wealth, and Happiness, by behavioral economist Richard Thaler and legal scholar Cass Sunstein, two American scholars at the University of Chicago. It has influenced British and American politicians. Several nudge units exist around the world at the national level (UK, Germany, Japan, and others) as well as at the international level (e.g. World Bank, UN, and the European Commission). It is disputed whether "nudge theory" is a recent novel development in behavioral economics or merely a new term for one of many methods for influencing behavior, investigated in the science of behavior analysis.
There have been some controversies regarding the effectiveness of nudges. Maier et al. wrote that, after correcting for the publication bias found by Mertens et al. (2021), there is no evidence that nudging has any effect. "Nudging" is an umbrella term referring to many techniques, and skeptics believe some nudges (e.g. the default effect) can be highly effective while others have little to no effect; they call for future work that shifts away from investigating average effects and focuses on moderators instead. A meta-analysis of all unpublished nudging studies carried out by nudge units, covering over 23 million individuals in the United Kingdom and United States, found support for many nudges, but with substantially weaker effects than those found in published studies. Moreover, some researchers have criticized the "one-nudge-for-all" approach and advocated for more studies and implementations of personalized nudging (based on individual differences), which appears to be substantially more effective, with a more robust and consistent evidence base.
Nudges
Definition
The first formulation of the term nudge and associated principles was developed in cybernetics by James Wilk before 1995 and described by Brunel University academic D. J. Stewart as "the art of the nudge" (sometimes referred to as micronudges). It also drew on methodological influences from clinical psychotherapy tracing back to Gregory Bateson, including contributions from Milton Erickson, Watzlawick, Weakland and Fisch, and Bill O'Hanlon. In this variant, the nudge is a microtargeted design geared toward a specific group of people, irrespective of the scale of intended intervention.
In 2008, Richard Thaler and Cass Sunstein's book Nudge: Improving Decisions About Health, Wealth, and Happiness brought nudge theory to prominence. The authors refer to the influencing of behaviour without coercion as libertarian paternalism and the influencers as choice architects.
Thaler and Sunstein defined a nudge as any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. In this form, drawing on behavioral economics, the nudge is applied more generally in order to influence behaviour.
One of the most frequently cited examples of a nudge is the etching of the image of a housefly into the men's room urinals at Amsterdam's Schiphol Airport, which is intended to "improve the aim." The book also gained a following among US and UK politicians, in the private sector and in public health.
Overview
A nudge makes it more likely that an individual will make a particular choice, or behave in a particular way, by altering the environment so that automatic cognitive processes are triggered to favour the desired outcome.
An individual's behaviour is not always in alignment with their intentions (a discrepancy known as a value-action gap). Humans are not fully rational beings: people often do things that are not in their own self-interest, even when they are aware of it. For example, when hungry, people who diet often underestimate their ability to lose weight, and their intentions to eat healthily can be temporarily weakened until they are satiated.
Nobel laureate Daniel Kahneman describes two distinct systems of information processing that help explain why people sometimes act against their own self-interest: System 1 is fast, automatic, and highly susceptible to environmental influences; System 2 processing is slow, reflective, and takes into account explicit goals and intentions. When situations are overly complex or overwhelming for an individual's cognitive capacity, or when an individual faces time constraints or other pressures, System 1 processing takes over decision-making. System 1 processing relies on various judgmental heuristics, which make for faster decisions but can also lead to suboptimal ones. Indeed, Thaler and Sunstein trace maladaptive behaviour to situations in which System 1 processing overrides an individual's explicit values and goals. It is well documented that habitual behaviour is resistant to change without a disruption to the environmental cues that trigger that behaviour.
Nudging techniques aim to use judgmental heuristics to the advantage of the party that is creating the set of choices. In other words, a nudge alters the environment so that when heuristic, or System 1, decision-making is used, the resulting choice will be the most positive or desired outcome. An example of such a nudge is switching the placement of junk food in a store, so that fruit and other healthy options are located next to the cash register, while junk food is relocated to another part of the store.
Techniques
Nudges are small changes in the environment that are easy and inexpensive to implement. In head-to-head comparisons, randomized experiments have shown that nudges can sometimes motivate behavior change more effectively than paying people. Several different techniques exist for nudging, including defaults, social-proof heuristics, and increasing the salience of the desired option.
A default option is the option that a person automatically receives by doing nothing. People are more likely to choose a particular option if it is the default. For example, Pichert and Katsikopoulos (2008) found that a greater number of consumers chose the renewable-energy option for electricity when it was offered as the default. Similarly, the default options given to mobile-app developers in advertising networks can significantly impact consumers' privacy.
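To make the mechanics concrete, the following minimal Python sketch shows how a default option can be encoded in software choice architecture. It is purely illustrative: the EnrollmentForm type, the tariff names, and the enroll function are assumptions of this example, not part of the Pichert and Katsikopoulos study or of any real enrollment system.

```python
# Illustrative sketch only: the form, tariff names, and enroll() are
# hypothetical, not drawn from any cited study or real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrollmentForm:
    tariffs: tuple = ("renewable", "conventional")
    default: str = "renewable"  # the nudge: doing nothing yields this option

def enroll(form: EnrollmentForm, explicit_choice: Optional[str] = None) -> str:
    """Return the tariff a customer ends up on.

    Customers who make no active choice receive the default,
    which is exactly the behaviour the default effect exploits.
    """
    if explicit_choice in form.tariffs:
        return explicit_choice
    return form.default

form = EnrollmentForm()
print(enroll(form))                  # "renewable": no action taken
print(enroll(form, "conventional"))  # "conventional": an active opt-out
```

The design point is that the outcome of inaction is itself a choice made by the designer: changing the single default field changes what most non-choosing users end up with, without removing any option.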
A social-proof heuristic refers to the tendency of people to look at the behavior of others to help guide their own behavior. Studies have found some success in using social-proof heuristics to nudge people to make healthier food choices.
When people's attention is drawn toward a particular option, that option will become more salient and they will be more likely to choose it. As an example, in snack shops at train stations in the Netherlands, consumers purchased more fruit and healthy snack options when they were relocated next to the cash register. Since then, other similar studies have been made regarding the placement of healthier food options close to the checkout counter and the effect on the consuming behavior of the customers, and this is now considered an effective and well-accepted nudge.
Application
Behavioral insights and nudges are currently used in many countries around the world.
Government
There are various notable examples of government applications of nudge theory.
During their terms, both U.K. Prime Minister David Cameron and U.S. President Barack Obama sought to employ nudge theory to advance domestic policy goals in their respective countries. In 2009, the United States appointed Cass Sunstein, who helped develop the theory, as administrator of the Office of Information and Regulatory Affairs. In 2010, the British Behavioural Insights Team, or "Nudge Unit," was established at the British Cabinet Office and headed by psychologist David Halpern.
In Australia, the state Government of New South Wales established a Nudge Unit of its own in 2012. In 2016, the federal government followed suit, forming the Behavioural Economics Team of Australia (BETA) as the "central unit for applying behavioural insights...to public policy."
In 2020, the British government of Boris Johnson decided to rely on nudge theory to fight the coronavirus pandemic, with Chief Scientific Adviser Patrick Vallance seeking to encourage “herd immunity” with this strategy.
Business
Nudge theory has also been applied to business management and corporate culture.
For instance, nudge theory is applied to health, safety, and environment (HSE) with the primary goal of achieving a "zero accident culture." The concept is also used as a key component in much human-resources software.
Particular forerunners in the application of nudge theory in corporate settings are top Silicon Valley companies. These companies are using nudges in various forms to increase productivity and happiness of employees. Recently, more companies are gaining interest in using what is called "nudge management" to improve the productivity of their white-collar workers.
Healthcare
Nudge theory has also been used in various ways to help healthcare professionals make more deliberate decisions in numerous areas. For example, nudging has been used to improve hand hygiene among healthcare workers and thereby decrease the number of healthcare-associated infections. It has also been used to make fluid administration a more thought-out decision in intensive care units, with the intention of reducing well-known complications of fluid overload.
The mandatory display of inspectors' hygiene reports for eateries as a public "nudge" has received mixed responses in different countries. A recent meta-analytic review of hygiene ratings across North America, Europe, Asia, and Oceania has shown that inspector ratings (usually a smiley or a letter grade) are useful at times, but not informative enough for consumers.
Fundraising
Nudge theory can also be applied to fundraising, helping to increase donor contributions, encourage repeat donations from the same individual, and entice new donors to give.
There are some simple strategies used when applying nudge theory to this area. The first is to make donating easy: default settings that automatically enroll a donor in continuous giving, or that prompt them to give at regular intervals, encourage individuals to keep giving. The second strategy is to make giving more enticing, which can include increasing a person's motivation to give through rewards, personalized messages, or a focus on their interests. Personalized messages, small thank-you gifts, and demonstrations of the impact one's donation can have on others have been shown to be effective in increasing donations. Another helpful strategy is social influence, as people are strongly shaped by group norms: by making donors visible to the public and increasing their identifiability, other individuals become more inclined to give as they conform to the social norms around them, and using peer effects has been shown to increase donations. Finally, timing matters: many studies have demonstrated that there are specific times when individuals are more likely to give, for example during holidays.
Although many nudging techniques have been useful in increasing donations and donors, many scholars question the ethics of using them on the population. Ruehle et al. (2020) state that one must always consider an individual's autonomy when designing nudges for a fundraising campaign: the power of others behind messaging and potentially intrusive prompting can cause concern and may be seen as manipulating donors' autonomy.
Artificial intelligence
Nudges are used at many levels in AI algorithms, for example in recommender systems, and their consequences are still being investigated. Two articles that appeared in Minds & Machines in 2018 addressed the relation between nudges and artificial intelligence, explaining how persuasion and psychometrics can be used by personalised targeting algorithms to influence individual and collective behaviour, sometimes also in unintended ways.
In 2020 an article in AI & Society addressed the use of this technology in Algorithmic Regulation.
A piece published in the Harvard Business Review in 2021 was among the first articles to use the term "algorithmic nudging" (see also algorithmic management). The author stresses: "Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior — in other words, learning from their personalized data and altering their choices in some subtle way."
While the concept builds on the work by University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein, "due to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers’ behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals’ decisions and behaviors at large scale. These algorithms can be adjusted in real-time, making the approach even more effective."
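As a rough illustration of what such real-time, personalized adjustment can look like, the sketch below implements a toy epsilon-greedy bandit in Python that learns which nudge variant draws the best response from each user segment. Everything in it (the variant names, the segments, the reward signal) is an assumption of this example; it does not describe any particular company's system or the systems discussed in the cited articles.

```python
# Toy sketch of "algorithmic nudging" via an epsilon-greedy bandit.
# Variant names, segments, and the reward signal are hypothetical.
import random
from collections import defaultdict

VARIANTS = ["social_proof", "default_optin", "salience_banner"]
EPSILON = 0.1  # fraction of decisions spent exploring a random variant

counts = defaultdict(int)     # (segment, variant) -> times shown
rewards = defaultdict(float)  # (segment, variant) -> summed responses

def average_response(segment: str, variant: str) -> float:
    shown = counts[(segment, variant)]
    return rewards[(segment, variant)] / shown if shown else 0.0

def choose_variant(segment: str) -> str:
    """Pick the nudge variant to show a user from this segment."""
    if random.random() < EPSILON:
        return random.choice(VARIANTS)  # explore
    return max(VARIANTS, key=lambda v: average_response(segment, v))  # exploit

def record_response(segment: str, variant: str, responded: bool) -> None:
    """Feed observed behaviour back in, adjusting future choices in real time."""
    counts[(segment, variant)] += 1
    rewards[(segment, variant)] += 1.0 if responded else 0.0

# Usage: show a nudge, observe the user's behaviour, update the estimates.
variant = choose_variant("frequent_shopper")
record_response("frequent_shopper", variant, responded=True)
```

The loop of choose, observe, and update is what gives algorithmic nudging its "adjusted in real time" character: each user's response immediately shifts which variant the next similar user is shown.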
Tourism
One concern raised by researchers in enjoyment-focused contexts such as tourism is the gap between attitude, intention, and behaviour that arises because tourists seek pleasure. Nevertheless, several pieces of empirical evidence in the tourism literature suggest that nudges are highly effective in reducing the burden of tourists' activities on the environment. For instance, tourists consumed more ethical foods, selected more sustainable hotels, reused towels and bed linen during hotel stays, increased their intentions to reduce their energy consumption, and adopted voluntary carbon offsetting more often, among many other examples.
Education
Nudges in education are techniques used to subtly guide students towards making better choices and achieving their academic goals. These nudges are based on the principles of behavioral economics and psychology, particularly the concept of dual process theory. This theory suggests that there are two systems of thinking: System 1, which is automatic and instinctual, and System 2, which is reflective and deliberate. Nudges aim to influence behavior by targeting System 1 processes, such as habits and automatic responses, to help students overcome common obstacles like procrastination, lack of motivation, or poor study habits. By designing nudges that align with students' goals and cognitive processes, educators can effectively support students in reaching their full potential and improving their academic performance.
Nudging in education: promises and challenges
Similar to nudging in other areas, nudging in education aims to help individuals achieve desired behaviors they may struggle with because of habit or lack of motivation. For students, this could mean meeting deadlines, paying attention in class, or staying organized. Some promising examples include sending text reminders to parents to increase home literacy activities and providing information about famous scientists' struggles to improve student grades. However, challenges remain. It is unclear whether nudges lead to long-lasting changes or how they work over time once removed. Additionally, it is essential to ensure that nudges align with educational principles and have a positive impact on students. More research is needed to understand how nudges influence behavior and cognitive processes in education.
While nudging shows potential in education, questions remain about its long-term effectiveness and how it fits within educational principles. Nudges should not only focus on end goals but also consider the cognitive processes and behaviors they influence. By understanding these aspects, educators can ensure that nudges promote positive educational practices and help students develop lasting habits. However, the implementation of nudging in education remains limited, highlighting the need for further exploration and development in this area.
Behavioral economics concepts commonly used in education
Critique
The evidence that nudging has any effect had been criticized as "limited," so Mertens and colleagues (2021) produced a comprehensive meta-analysis. They found that nudging is effective, but that there is moderate publication bias. Maier and colleagues later computed that, after correcting for this publication bias appropriately, there is no evidence that nudging has any effect.
Tammy Boyce, from the public health foundation The King's Fund, has said: "We need to move away from short-term, politically motivated initiatives such as the 'nudging people' idea, which are not based on any good evidence and don't help people make long-term behaviour changes." Likewise, Mols and colleagues (2015), acknowledge nudges may at times be useful but argue that covert nudges offer limited scope for securing lasting behavior change.
Cass Sunstein has responded to criticism at length in his 2016 book, The Ethics of Influence: Government in the Age of Behavioral Science, making the case in favor of nudging, against charges that nudges diminish autonomy, threaten dignity, violate liberties, or reduce welfare. He previously defended nudge theory in his 2014 book Why Nudge?: The Politics of Libertarian Paternalism by arguing that choice architecture is inevitable and that some form of paternalism cannot be avoided.
Ethicists have debated nudge theory rigorously, with charges levelled by various participants in the debate from Bovens (2009) to Goodwin (2012). Wilkinson, for example, charges that nudges are manipulative, while others such as Yeung (2012) question their scientific credibility.
Public opinion on the ethicality of nudges has also been shown to be susceptible to “partisan nudge bias.” Research from David Tannenbaum, Craig R. Fox, and Todd Rogers (2017) found that adults and policymakers in the United States believed behavioral policies to be more ethical when they aligned with their own political leanings. Conversely, people took these same mechanisms to be more unethical when they differed from their politics. The researchers also found that nudges are not inherently partisan: when evaluating behavioral policies absent of political cues, people across the political spectrum were alike in their assessments.
When considering the future designers that would be creating these nudges, a study by Willermark and Islind (2022) showed that more than 50% of design students have positive attitudes towards the implementation of nudges as a form of choice architecture. The participants argued that "many people benefit from getting a little nudge", while about 40% have ambivalent or negative attitudes towards the concept stating that "We simply should not change the path of people’s choices".
Some, such as Hausman and Welch (2010) as well as Roberts (2018) and Mrkva (2021) have inquired whether nudging should be permissible on grounds of distributive justice. Though Roberts (2018) argued that nudges do not benefit vulnerable, low-income individuals as much as individuals who are less vulnerable, Mrkva's research suggests that nudges benefit low-income and low-SES people most, if anything increasing distributive justice and reducing the disparity between those with high and low financial literacy. This research suggests that in situations where consumers lack knowledge regarding their choices and are therefore more prone to choosing the wrong one, the implementation of 'good nudges' can be ethically justified. The same study also states that nudges have the potential to "increase firm profits while decreasing consumer welfare."
Lepenies and Malecka (2015) have questioned whether nudges are compatible with the rule of law. Similarly, legal scholars have discussed the role of nudges and the law.
Behavioral economists such as Bob Sugden have pointed out that the underlying normative benchmark of nudging is still homo economicus, despite the proponents' claim to the contrary.
It has been remarked that nudging is also a euphemism for psychological manipulation as practiced in social engineering.
An anticipation of, and simultaneously an implicit criticism of, nudge theory can be found in the works of the Hungarian social psychologists Ferenc Mérei and László Garai, who emphasize the active participation of the nudge's target.
The authors of a book titled Neuroliberalism: Behavioural Government in the Twenty-First Century (2017), argue that, while there is much value and diversity in behavioural approaches to government, there are significant ethical issues, including the danger of the neurological sciences being co-opted to the needs of neo-liberal economics.
See also
Choice architecture
Commitment device
Constructal Law - Design evolution in nature, animate and inanimate
Dark pattern
Default effect
Libertarian paternalism
List of cognitive biases
Negarchy
Psychohistory (fictional)
Thinking, Fast and Slow
Race to the Top
References
Further reading
Sociocultural evolution
Sociocultural evolution, sociocultural evolutionism or social evolution are theories of sociobiology and cultural evolution that describe how societies and culture change over time. Whereas sociocultural development traces processes that tend to increase the complexity of a society or culture, sociocultural evolution also considers processes that can lead to decreases in complexity (degeneration) or that can produce variation or proliferation without any seemingly significant changes in complexity (cladogenesis). Sociocultural evolution is "the process by which structural reorganization is affected through time, eventually producing a form or structure that is qualitatively different from the ancestral form".
Most of the 19th-century and some 20th-century approaches to socioculture aimed to provide models for the evolution of humankind as a whole, arguing that different societies have reached different stages of social development. The most comprehensive attempt to develop a general theory of social evolution centering on the development of sociocultural systems, the work of Talcott Parsons (1902–1979), operated on a scale which included a theory of world history. Another attempt, on a less systematic scale, originated from the 1970s with the world-systems approach of Immanuel Wallerstein (1930-2019) and his followers.
More recent approaches focus on changes specific to individual societies and reject the idea that cultures differ primarily according to how far each one has moved along some presumed linear scale of social progress. Most modern archaeologists and cultural anthropologists work within the frameworks of neoevolutionism, sociobiology, and modernization theory.
Introduction
Anthropologists and sociologists often assume that human beings have natural social tendencies but that particular human social behaviours have non-genetic causes and dynamics (i.e. people learn them in a social environment and through social interaction).
Societies exist in complex social environments (for example: with differing natural resources and constraints) and adapt themselves to these environments. It is thus inevitable that all societies change.
Specific theories of social or cultural evolution often attempt to explain differences between coeval societies by positing that different societies have reached different stages of development. Although such theories typically provide models for understanding the relationship between technologies, social structure or the values of a society, they vary as to the extent to which they describe specific mechanisms of variation and change.
While the history of evolutionary thinking with regard to humans can be traced back at least to Aristotle and other Greek philosophers, early theories of sociocultural evolution, associated with Auguste Comte (1798–1857), Herbert Spencer (1820–1903) and Lewis Henry Morgan (1818–1881), developed simultaneously with, but independently of, the work of Charles Darwin (1809–1882) and were popular from late in the 19th century to the end of World War I. The 19th-century unilineal evolution theories claimed that societies start out in a primitive state and gradually become more civilized over time; they equated the culture and technology of Western civilization with progress. Some forms of early sociocultural-evolution theories (mainly unilineal ones) have led to much-criticised theories like social Darwinism and scientific racism, sometimes used in the past by European imperial powers to justify existing policies of colonialism and slavery and to justify new policies such as eugenics.
Most 19th-century and some 20th-century approaches aimed to provide models for the evolution of humankind as a single entity. However, most 20th-century approaches, such as multilineal evolution, focused on changes specific to individual societies. Moreover, they rejected directional change (i.e. orthogenetic, teleological or progressive change). Most archaeologists work within the framework of multilineal evolution. Other contemporary approaches to social change include neoevolutionism, sociobiology, dual inheritance theory, modernisation theory and postindustrial theory.
In his seminal 1976 book The Selfish Gene, Richard Dawkins wrote that "there are some examples of cultural evolution in birds and monkeys, but ... it is our own species that really shows what cultural evolution can do".
Stadial theory
Enlightenment and later thinkers often speculated that societies progressed through stages: in other words, they saw history as stadial. While expecting humankind to show increasing development, theorists looked for what determined the course of human history. Georg Wilhelm Friedrich Hegel (1770–1831), for example, saw social development as an inevitable process. It was assumed that societies start out primitive, perhaps in a state of nature, and could progress toward something resembling industrial Europe.
While earlier authors such as Michel de Montaigne (1533–1592) had discussed how societies change through time, the Scottish Enlightenment of the 18th century proved key in the development of the idea of sociocultural evolution. In relation to Scotland's union with England in 1707, several Scottish thinkers pondered the relationship between progress and the affluence brought about by increased trade with England. They understood the changes Scotland was undergoing as involving transition from an agricultural to a mercantile society. In "conjectural histories", authors such as Adam Ferguson (1723–1816), John Millar (1735–1801) and Adam Smith (1723–1790) argued that societies all pass through a series of four stages: hunting and gathering, pastoralism and nomadism, agriculture, and finally a stage of commerce.
Philosophical concepts of progress, such as that of Hegel, developed as well during this period. In France, authors such as Claude Adrien Helvétius (1715–1771) and other philosophes were influenced by the Scottish tradition. Later thinkers such as Comte de Saint-Simon (1760–1825) developed these ideas. Auguste Comte (1798–1857) in particular presented a coherent view of social progress and a new discipline to study it: sociology.
These developments took place in a context of wider processes. The first process was colonialism. Although imperial powers settled most differences of opinion with their colonial subjects through force, increased awareness of non-Western peoples raised new questions for European scholars about the nature of society and of culture. Similarly, effective colonial administration required some degree of understanding of other cultures. Emerging theories of sociocultural evolution allowed Europeans to organise their new knowledge in a way that reflected and justified their increasing political and economic domination of others: such systems saw colonised people as less evolved, and colonising people as more evolved. Modern civilization (understood as Western civilization) appeared to be the result of steady progress from a state of barbarism, a notion common to many thinkers of the Enlightenment, including Voltaire (1694–1778).
The second process was the Industrial Revolution and the rise of capitalism, which together allowed and promoted continual revolutions in the means of production. Emerging theories of sociocultural evolution reflected a belief that the changes in Europe brought by the Industrial Revolution and capitalism were improvements. Industrialisation, combined with the intense political change brought about by the French Revolution of 1789 and the U.S. Constitution, which paved the way for the dominance of democracy, forced European thinkers to reconsider some of their assumptions about how society was organised.
Eventually, in the 19th century three major classical theories of social and historical change emerged:
sociocultural evolutionism
the social cycle theory
the Marxist theory of historical materialism.
These theories had a common factor: they all agreed that the history of humanity pursues a certain fixed path, most likely that of social progress. Thus, each past event is not only chronologically but also causally tied to present and future events. The theories postulated that by recreating the sequence of those events, sociology could discover the "laws" of history.
Sociocultural evolutionism and the idea of progress
While sociocultural evolutionists agree that an evolution-like process leads to social progress, classical social evolutionists have developed many different theories, known as theories of unilineal evolution. Sociocultural evolutionism became the prevailing theory of early sociocultural anthropology and social commentary, and is associated with scholars like Auguste Comte, Edward Burnett Tylor, Lewis Henry Morgan, Benjamin Kidd, L. T. Hobhouse and Herbert Spencer. Such stage models and linear models of progress had a great influence not only on future evolutionary approaches in the social sciences and humanities, but also shaped public, scholarly, and scientific discourse surrounding the rising individualism and population thinking. Sociocultural evolutionism attempted to formalise social thinking along scientific lines, with added influence from the biological theory of evolution. If organisms could develop over time according to discernible, deterministic laws, then it seemed reasonable that societies could as well. Human society was compared to a biological organism, and social-science equivalents of concepts like variation, natural selection, and inheritance were introduced as factors resulting in the progress of societies. The idea of progress led to that of fixed "stages" through which human societies progress, usually numbering three (savagery, barbarism, and civilization) but sometimes many more. At that time, anthropology was rising as a new scientific discipline, separating from the traditional views of "primitive" cultures that were usually based on religious views.
Already in the 18th century, some authors began to theorize on the evolution of humans. Montesquieu (1689–1755) wrote about the relationship laws have with climate in particular and the environment in general, specifically how different climatic conditions cause certain characteristics to be common among different peoples. He attributed the development of laws, the presence or absence of civil liberty, differences in morality, and the whole development of different cultures to the climate of the respective people, concluding that the environment determines whether and how a people farms the land, which in turn determines how their society is built and their culture is constituted, or, in Montesquieu's words, the "general spirit of a nation". Jean-Jacques Rousseau (1712–1778) likewise presented a conjectural stage-model of human sociocultural evolution: first, humans lived solitarily and only grouped when mating or raising children. Later, men and women lived together and shared childcare, thus building families, followed by tribes as the result of inter-family interactions, which lived in "the happiest and the most lasting epoch" of human history, before the corruption of civil society degenerated the species again in a developmental stage-process. In the late 18th century, the Marquis de Condorcet (1743–1794) listed ten stages, or "epochs", each advancing the rights of man and perfecting the human race.
Erasmus Darwin (1731–1802), Charles Darwin's grandfather, was an enormously influential natural philosopher, physiologist and poet whose remarkably insightful ideas included a statement of transformism and the interconnectedness of all forms of life. His wide-ranging works also advance a theory of cultural transformation: his famous The Temple of Nature is subtitled 'the Origin of Society'. Rather than proposing in detail a strict transformation of humanity between different stages, this work dwells on Erasmus Darwin's evolutionary mechanism: he does not explain each stage one by one, trusting his theory of universal organic development, as articulated in the Zoonomia, to illustrate cultural development as well. Erasmus Darwin therefore flits with abandon through his chronology: Priestman notes that it jumps from the emergence of life onto land, the development of opposable thumbs, and the origin of sexual reproduction directly to modern historical events.
Another, more complex theorist was Richard Payne Knight (1751–1824), an influential amateur archaeologist and universal theologian. Knight's The Progress of Civil Society: A Didactic Poem in Six Books (1796) fits precisely into the tradition of triumphant historical stages, beginning with Lucretius and reaching Adam Smith, but only for the first four books. In his final books, Knight grapples with the French Revolution and wealthy decadence. Confronted with these twin issues, Knight's theory ascribes progress to conflict: 'partial discord lends its aid, to tie the complex knots of general harmony'. Competition in Knight's mechanism spurs development from any one stage to the next: the dialectic of class, land and gender creates growth. Thus, Knight conceptualised a theory of history founded in inevitable racial conflict, with Greece representing 'freedom' and Egypt 'cold inactive stupor'. Buffon, Linnaeus, Camper and Monboddo variously forwarded diverse arguments about racial hierarchy, grounded in early theories of species change, though many thought that environmental changes could create dramatic changes in form without permanently altering the species or causing species transformation. However, their arguments still bear on race: Rousseau, Buffon and Monboddo cited orangutans as evidence of an earlier prelinguistic human type, and Monboddo even insisted that orangutans and certain African and South Asian races were identical.
Besides Erasmus Darwin's, the other pre-eminent scientific text advancing a theory of cultural transformation was by Robert Chambers (1802–1871). Chambers was a Scottish evolutionary thinker and philosopher who, though he was perceived then, as now, as scientifically inadequate and was criticized by prominent contemporaries, is important because he was so widely read. There are records of everyone from Queen Victoria to individual dockworkers enjoying his Vestiges of the Natural History of Creation (1844), including future generations of scientists. That the Vestiges did not establish itself as the scientific cutting edge is precisely the point: its influence means it was both the concept of evolution the Victorian public was most likely to encounter and the scientific presupposition laid earliest in the minds of bright young scholars.
Chambers propounded a 'principle of development' whereby everything evolved by the same mechanism and towards higher-order structure or meaning. In his theory, life advanced through different 'classes', and within each class animals began at the lowest form and then advanced to more complex forms in the same class. In short, the progress of animals was like the development of a foetus. More than an indistinct analogy, this parallel between embryology and species development had the status of a genuine causal mechanism in Chambers' theory: more advanced species developed longer as embryos into all their complexity. Motivated by this comparison, Chambers ascribed development to the 'laws of creation', though he also supposed that the whole development of species was in some way preordained: it was just that the preordination of the creator acted through establishing those laws. This is similar to Spencer's concept of development, discussed below. Thus Chambers believed in a sophisticated theory of progress driven by a developmental analogy.
In the mid-19th century, a "revolution in ideas about the antiquity of the human species" took place "which paralleled, but was to some extent independent of, the Darwinian revolution in biology." Especially in geology, archaeology, and anthropology, scholars began to compare "primitive" cultures to past societies and "saw their level of technology as parallel with that of Stone Age cultures, and thus used these peoples as models for the early stages of human evolution." A developmental model of the evolution of the mind, culture, and society was the result, paralleling the evolution of the human species: "Modern savages [sic] became, in effect, living fossils left behind by the march of progress, relics of the Paleolithic still lingering on into the present." Classical social evolutionism is most closely associated with the 19th-century writings of Auguste Comte and of Herbert Spencer (coiner of the phrase "survival of the fittest"). In many ways, Spencer's theory of "cosmic evolution" has much more in common with the works of Jean-Baptiste Lamarck and Auguste Comte than with the contemporary works of Charles Darwin. Spencer also developed and published his theories several years earlier than Darwin. In regard to social institutions, however, there is a good case that Spencer's writings might be classified as social evolutionism. Although he wrote that societies over time progressed, and that progress was accomplished through competition, he stressed that the individual rather than the collectivity is the unit of analysis that evolves; that, in other words, evolution takes place through natural selection and that it affects social as well as biological phenomena. Nonetheless, the publication of Darwin's works proved a boon to the proponents of sociocultural evolution, who saw the ideas of biological evolution as an attractive explanation for many questions about the development of society.
Both Spencer and Comte view society as a kind of organism subject to a process of growth: from simplicity to complexity, from chaos to order, from generalisation to specialisation, from flexibility to organisation. They agree that this process of societal growth can be divided into certain stages with a beginning and an eventual end, and that the growth is in fact social progress: each newer, more evolved society is "better". Thus progressivism became one of the basic ideas underlying the theory of sociocultural evolutionism.
However, Spencer's theories were more complex than just a romp up the great chain of being. Spencer based his arguments on an analogy between the evolution of societies and the ontogeny of an animal. Accordingly, he searched for "general principles of development and structure" or "fundamental principles of organization", rather than simply ascribing progress between social stages to the direct intervention of some beneficent deity. Moreover, he accepted that these conditions are "far less specific, far more modifiable, far more dependent on conditions that are variable": in short, that they are a messy biological process.
Though Spencer's theories transcended the label of 'stagism' and appreciated biological complexity, they still accepted a strongly fixed direction and morality to natural development. For Spencer, interference with the natural process of evolution was dangerous and had to be avoided at all costs. Such views were naturally coupled to the pressing political and economic questions of the time. Spencer clearly thought society's evolution brought about a racial hierarchy with Caucasians at the top and Africans at the bottom. This notion is deeply linked to the colonial projects European powers were pursuing at the time, and to the idea of European superiority used paternalistically to justify those projects. The influential German zoologist Ernst Haeckel even wrote that 'natural men are closer to the higher vertebrates than highly civilized Europeans', implying not just a racial hierarchy but a civilizational one. Likewise, Spencer's evolutionary argument advanced a theory of statehood: 'until spontaneously fulfilled a public want should not be fulfilled at all' sums up Spencer's notion of limited government and the free operation of market forces.
This is not to suggest that stagism was useless or entirely motivated by colonialism and racism. Stagist theories were first proposed in contexts where the competing epistemologies were largely static views of the world. Hence "progress" had in some sense to be invented, conceptually: the idea that human society would move through stages was a triumphant invention. Moreover, stages were not always static entities. In Buffon's theories, for example, it was possible to regress between stages, and physiological changes represented species reversibly adapting to their environment rather than irreversibly transforming.
In addition to progressivism, economic analyses influenced classical social evolutionism. Adam Smith (1723–1790), who held a deeply evolutionary view of human society, identified the growth of freedom as the driving force in a process of stadial societal development. According to him, all societies pass successively through four stages: the earliest humans lived as hunter-gatherers, followed by pastoralists and nomads, after which society evolved to agriculturalists and ultimately reached the stage of commerce. With its strong emphasis on specialisation and the increased profits stemming from a division of labour, Smith's thinking also exerted some direct influence on Darwin himself. Both in Darwin's theory of the evolution of species and in Smith's accounts of political economy, competition between selfishly functioning units plays an important and even dominating role. Similarly occupied with economic concerns, Thomas R. Malthus (1766–1834) argued that, given the strength of the sex drive inherent in all animals, populations tend to grow geometrically, while the means of subsistence can at best grow arithmetically; population growth therefore quickly outstrips economic growth, causing hunger, poverty, and misery. For Malthus, this "struggle for existence" was far from being a consequence of economic structures or social orders: it was an inevitable natural law.
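Malthus's contrast can be restated compactly in modern notation; the symbols below are our own choice for illustration, not Malthus's. With population $P_t$ growing geometrically at rate $r > 0$ and subsistence $S_t$ growing at best arithmetically by a constant increment $c > 0$,

$$P_t = P_0 (1 + r)^t, \qquad S_t = S_0 + c\,t,$$

the ratio $P_t / S_t$ grows without bound as $t$ increases, so unchecked population growth must eventually outstrip the means of subsistence.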
Auguste Comte, known as "the father of sociology", formulated the law of three stages: human development progresses from the theological stage, in which nature was mythically conceived and man sought the explanation of natural phenomena from supernatural beings; through the metaphysical stage, in which nature was conceived of as the result of obscure forces and man sought the explanation of natural phenomena from them; to the final positive stage, in which all abstract and obscure forces are discarded and natural phenomena are explained by their constant relationships. This progress is driven by the development of the human mind and by the increasing application of thought, reasoning and logic to the understanding of the world. Comte saw the science-valuing society as the highest, most developed type of human organization.
Herbert Spencer, who argued against government intervention because he believed that society should evolve toward more individual freedom, followed Lamarck in his evolutionary thinking, in that he believed that humans adapt over time to their surroundings. He differentiated between two phases of development as regards societies' internal regulation: "military" and "industrial" societies. The earlier (and more primitive) military society has the goal of conquest and defense, is centralised, economically self-sufficient and collectivistic, puts the good of the group over the good of the individual, uses compulsion, force and repression, and rewards loyalty, obedience and discipline. The industrial society, in contrast, has the goal of production and trade, is decentralised, interconnected with other societies via economic relations, works through voluntary cooperation and individual self-restraint, treats the good of the individual as the highest value, regulates social life via voluntary relations, and values initiative, independence and innovation ("Herbert Spencer". Sociological Theorists Page). The transition from the military to the industrial society is the outcome of steady evolutionary processes within the society. Spencer "imagined a kind of feedback loop between mental and social evolution: the higher the mental powers the greater the complexity of the society that the individuals could create; the more complex the society, the greater the stimulus it provided for further mental development. Everything cohered to make progress inevitable or to weed out those who did not keep up."
Regardless of how scholars of Spencer interpret his relation to Darwin, Spencer became an incredibly popular figure in the 1870s, particularly in the United States. Authors such as Edward L. Youmans, William Graham Sumner, John Fiske, John W. Burgess, Lester Frank Ward, Lewis H. Morgan (1818–1881) and other thinkers of the Gilded Age all developed theories of social evolutionism as a result of their exposure to Spencer as well as to Darwin.
In his 1877 classic Ancient Society, Lewis H. Morgan, an anthropologist whose ideas have had much impact on sociology, differentiated between three eras: savagery, barbarism and civilization, divided by technological inventions: fire, the bow, and pottery in the savage era; the domestication of animals, agriculture, and metalworking in the barbarian era; and the alphabet and writing in the civilization era. Thus Morgan drew a link between social progress and technological progress. Morgan viewed technological progress as a force behind social progress, and held that any social change—in social institutions, organizations or ideologies—has its beginnings in technological change (Morgan, Lewis H. (1877). "Chapter III: Ratio of Human Progress". Ancient Society). Morgan's theories were popularized by Friedrich Engels, who based his famous work The Origin of the Family, Private Property and the State on them. For Engels and other Marxists this theory was important, as it supported their conviction that materialistic factors—economic and technological—are decisive in shaping the fate of humanity.
Edward Burnett Tylor (1832–1917), a pioneer of anthropology, focused on the evolution of culture worldwide, noting that culture is an important part of every society and that it is also subject to a process of evolution. He believed that societies were at different stages of cultural development and that the purpose of anthropology was to reconstruct the evolution of culture, from primitive beginnings to the modern state.
Anthropologists Sir E.B. Tylor in England and Lewis Henry Morgan in the United States worked with data from indigenous people, who (they claimed) represented earlier stages of cultural evolution that gave insight into the process and progression of evolution of culture. Morgan had a significant influence on Karl Marx and on Friedrich Engels, who developed a theory of sociocultural evolution in which the internal contradictions in society generated a series of escalating stages that ended in a socialist society (see Marxism). Tylor and Morgan elaborated the theory of unilinear evolution, specifying criteria for categorising cultures according to their standing within a fixed system of growth of humanity as a whole and examining the modes and mechanisms of this growth. Theirs was often a concern with culture in general, not with individual cultures.
Their analysis of cross-cultural data was based on three assumptions:
contemporary societies may be classified and ranked as more "primitive" or more "civilized"
there are a determinate number of stages between "primitive" and "civilized" (e.g. band, tribe, chiefdom, and state)
all societies progress through these stages in the same sequence, but at different rates
Theorists usually measured progression (that is, the difference between one stage and the next) in terms of increasing social complexity (including class differentiation and a complex division of labour), or an increase in intellectual, theological, and aesthetic sophistication. These 19th-century ethnologists used these principles primarily to explain differences in religious beliefs and kinship terminologies among various societies.
Lester Frank Ward (1841–1913), sometimes referred to as the "father" of American sociology, rejected many of Spencer's theories regarding the evolution of societies. Ward, who was also a botanist and a paleontologist, believed that the law of evolution functioned much differently in human societies than it did in the plant and animal kingdoms, and theorized that the "law of nature" had been superseded by the "law of the mind". He stressed that humans, driven by emotions, create goals for themselves and strive to realize them (most effectively with the modern scientific method) whereas there is no such intelligence and awareness guiding the non-human world. Plants and animals adapt to nature; man shapes nature. While Spencer believed that competition and "survival of the fittest" benefited human society and sociocultural evolution, Ward regarded competition as a destructive force, pointing out that all human institutions, traditions and laws were tools invented by the mind of man and that that mind designed them, like all tools, to "meet and checkmate" the unrestrained competition of natural forces. Ward agreed with Spencer that authoritarian governments repress the talents of the individual, but he believed that modern democratic societies, which minimized the role of religion and maximized that of science, could effectively support the individual in his or her attempt to fully utilize their talents and achieve happiness. He believed that the evolutionary processes have four stages:
First comes cosmogenesis, creation and evolution of the world.
Then, when life arises, there is biogenesis.
Development of humanity leads to anthropogenesis, which is influenced by the human mind.
Finally there arrives sociogenesis, which is the science of shaping the evolutionary process itself to optimize progress, human happiness and individual self-actualization.
Ward regarded modern societies as superior to "primitive" societies (one need only look to the impact of medical science on health and lifespan) and shared theories of white supremacy. Though he supported the Out-of-Africa theory of human evolution, he did not believe that all races and social classes were equal in talent. When a Negro rapes a white woman, Ward declared, he is impelled not only by lust but also by the instinctive drive to improve his own race (Ibid., p. 166; https://archive.org/details/racehistoryofide0000goss_r1r7/page/166/mode/2up?q=lester&view=theater). Ward did not think that evolutionary progress was inevitable and he feared the degeneration of societies and cultures, which he saw as very evident in the historical record. Ward also did not favor the radical reshaping of society as proposed by the supporters of the eugenics movement or by the followers of Karl Marx; like Comte, Ward believed that sociology was the most complex of the sciences and that true sociogenesis was impossible without considerable research and experimentation.
Émile Durkheim, another of the "fathers" of sociology, developed a dichotomous view of social progress. His key concept was social solidarity, as he defined social evolution in terms of progressing from mechanical solidarity to organic solidarity. In mechanical solidarity, people are self-sufficient, there is little integration, and thus there is the need for the use of force and repression to keep society together. In organic solidarity, people are much more integrated and interdependent, and specialisation and cooperation are extensive. Progress from mechanical to organic solidarity is based firstly on population growth and increasing population density, secondly on increasing "moral density" (the development of more complex social interactions) and thirdly on increasing specialisation in the workplace. To Durkheim, the most important factor in social progress is the division of labour. This was later used in the mid-1900s by the economist Ester Boserup (1910–1999) to attempt to discount some aspects of Malthusian theory.
Ferdinand Tönnies (1855–1936) describes evolution as the development from informal society, where people have many liberties and there are few laws and obligations, to modern, formal, rational society, dominated by conventions and laws, where people are restricted from acting as they wish. He also notes a tendency toward standardisation and unification, as all smaller societies are absorbed into a single, large, modern society. Thus Tönnies can be said to describe part of the process known today as globalization. Tönnies was also one of the first sociologists to claim that the evolution of society is not necessarily going in the right direction, that social progress is not perfect, and that it can even be called a regression, as the newer, more evolved societies are obtained only at a high cost, resulting in decreasing satisfaction of the individuals making up that society. Tönnies' work became the foundation of neoevolutionism.
Although Max Weber is not usually counted as a sociocultural evolutionist, his theory of tripartite classification of authority can be viewed as an evolutionary theory as well. Weber distinguishes three ideal types of political leadership, domination and authority:
charismatic domination
traditional domination (patriarchs, patrimonialism, feudalism)
legal (rational) domination (modern law and state, bureaucracy)
Weber also notes that legal domination is the most advanced, and that societies evolve from having mostly traditional and charismatic authorities to mostly rational and legal ones.
Critique and impact on modern theories
The early 20th century inaugurated a period of systematic critical examination and rejection of the sweeping generalisations of the unilineal theories of sociocultural evolution. Cultural anthropologists such as Franz Boas (1858–1942), along with his students, including Ruth Benedict and Margaret Mead, are regarded as the leaders of anthropology's rejection of classical social evolutionism.
However, the school of Boas ignored some of the complexity in evolutionary theories that emerged outside Herbert Spencer's influence. Charles Darwin's On the Origin of Species gave a mechanistic account of the origins and development of animals, quite apart from Spencer's theories of inevitable human development through stages. Consequently, many scholars, relying on deep cultural analogies, developed understandings of how cultures evolve that were more sophisticated than the theories in Herbert Spencer's tradition. Walter Bagehot (1872) applied selection and inheritance to the development of human political institutions. Samuel Alexander (1892) discussed the natural selection of moral principles in society. William James (1880) considered the 'natural selection' of ideas in learning and scientific development; in fact, he identified a 'remarkable parallel […] between the facts of social evolution on the one hand, and of zoological evolution as expounded by Mr Darwin on the other'. Charles Sanders Peirce (1898) even proposed that the laws of nature exist in their current form because they have evolved over time. Darwin himself, in Chapter 5 of The Descent of Man, proposed that human moral sentiments were subject to group selection:
"A tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to aid one another, and to sacrifice themselves for the common good, would be victorious over most other tribes; and this would be natural selection."
Through the mechanism of imitation, cultures as well as individuals could be subject to natural selection.
While these theories involved evolution applied to social questions, except for Darwin's group selection the theories reviewed above did not advance a precise understanding of how Darwin's mechanism extended and applied to cultures, beyond a vague appeal to competition. Ritchie's Darwinism and Politics (1889) breaks this trend, holding that "language and social institutions make it possible to transmit experience quite independently of the continuity of race." Hence Ritchie saw cultural evolution as a process that could operate independently of, and on different scales to, the evolution of species, and gave it precise underpinnings: he was 'extending its range', in his own words, to ideas, cultures and institutions.
Thorstein Veblen, around the same time, came to a similar insight: that humans adapt to their social environment, but the social environment in turn also evolves. Veblen's mechanism for human progress was the evolution of human intentionality: Veblen labelled man 'a creature of habit' and thought that habits were 'mentally digested' from those who influenced him. In short, as Hodgson and Knudsen point out, Veblen thinks:
"the changing institutions in their turn make for a further selection of individuals endowed with the fittest temperament, and a further adaptation of individual temperament and habits to the changing environment through the formation of new institutions."
Thus, Veblen represented an extension of Ritchie's theories, where evolution operates at multiple levels, to a sophisticated appreciation of how each level interacts with the other.
This complexity notwithstanding, Boas and Benedict used sophisticated ethnography and more rigorous empirical methods to argue that Spencer, Tylor, and Morgan's theories were speculative and systematically misrepresented ethnographic data. Theories regarding "stages" of evolution were especially criticised as illusions. Additionally, they rejected the distinction between "primitive" and "civilized" (or "modern"), pointing out that so-called primitive contemporary societies have just as much history, and were just as evolved, as so-called civilized societies. They therefore argued that any attempt to use this theory to reconstruct the histories of non-literate (i.e. leaving no historical documents) peoples is entirely speculative and unscientific.
They observed that the postulated progression, which typically ended with a stage of civilization identical to that of modern Europe, is ethnocentric. They also pointed out that the theory assumes that societies are clearly bounded and distinct, when in fact cultural traits and forms often cross social boundaries and diffuse among many different societies (and are thus an important mechanism of change). Boas in his culture-history approach focused on anthropological fieldwork in an attempt to identify factual processes instead of what he criticized as speculative stages of growth. His approach greatly influenced American anthropology in the first half of the 20th century, and marked a retreat from high-level generalization and from "systems building".
Later critics observed that the assumption of firmly bounded societies was proposed precisely at the time when European powers were colonising non-Western societies, and was thus self-serving. Many anthropologists and social theorists now consider unilineal cultural and social evolution a Western myth seldom based on solid empirical grounds. Critical theorists argue that notions of social evolution are simply justifications for power by the élites of society. Finally, the devastating World Wars that occurred between 1914 and 1945 crippled Europe's self-confidence. After millions of deaths, genocide, and the destruction of Europe's industrial infrastructure, the idea of progress seemed dubious at best.
Thus modern sociocultural evolutionism rejects most of classical social evolutionism due to various theoretical problems:
The theory was deeply ethnocentric—it makes heavy value judgments about different societies, with Western civilization seen as the most valuable.
It assumed all cultures follow the same path or progression and have the same goals.
It equated civilization with material culture (technology, cities, etc.)
Because social evolution was posited as a scientific theory, it was often used to support unjust and often racist social practices – particularly colonialism, slavery, and the unequal economic conditions present within industrialized Europe. Social Darwinism is especially criticised, as it purportedly led to some philosophies used by the Nazis.
Max Weber, disenchantment, and critical theory
Weber's major works in economic sociology and the sociology of religion dealt with the rationalization, secularisation, and so-called "disenchantment" which he associated with the rise of capitalism and modernity. In sociology, rationalization is the process whereby an increasing number of social actions become based on considerations of teleological efficiency or calculation rather than on motivations derived from morality, emotion, custom, or tradition. Rather than referring to what is genuinely "rational" or "logical", rationalization refers to a relentless quest for goals that might actually function to the detriment of a society. Rationalization is an ambivalent aspect of modernity, manifested especially in Western society – in the behaviour of the capitalist market, in rational administration in the state and bureaucracy, in the extension of modern science, and in the expansion of modern technology.
Weber's thought regarding the rationalizing and secularizing tendencies of modern Western society (sometimes described as the "Weber Thesis") would blend with Marxism to facilitate critical theory, particularly in the work of thinkers such as Jürgen Habermas (born 1929). Critical theorists, as antipositivists, are critical of the idea of a hierarchy of sciences or societies, particularly with respect to the sociological positivism originally set forth by Comte. Jürgen Habermas has critiqued the concept of pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself. For theorists such as Zygmunt Bauman (1925–2017), rationalization as a manifestation of modernity may be most closely and regrettably associated with the events of the Holocaust.
Modern theories
When the critique of classical social evolutionism became widely accepted, modern anthropological and sociological approaches changed accordingly. Modern theories are careful to avoid unsourced, ethnocentric speculation, comparisons, and value judgments, generally regarding individual societies as existing within their own historical contexts. These conditions provided the context for new theories such as cultural relativism and multilineal evolution.
In the 1920s and 1930s, Gordon Childe revolutionized the study of cultural evolutionism. He produced a comprehensive account of prehistory that provided scholars with evidence for African and Asian cultural transmission into Europe. He combated scientific racism by finding the tools and artifacts of indigenous peoples from Africa and Asia and showing how they influenced the technology of European culture. Evidence from his excavations countered the idea of Aryan supremacy and superiority. Adopting "Kossinna's basic concept of the archaeological culture and his identification of such cultures as the remains of prehistoric peoples" and combining it with the detailed chronologies of European prehistory developed by Gustaf Oscar Montelius, Childe argued that each society needed to be delineated individually on the basis of its constituent artefacts, which were indicative of their practical and social function. Childe explained cultural evolution by his theory of divergence with modifications of convergence. He postulated that different cultures form separate methods that meet different needs, but that when two cultures were in contact they developed similar adaptations, solving similar problems. Rejecting Spencer's theory of parallel cultural evolution, Childe found that interactions between cultures contributed to the convergence of similar aspects most often attributed to one culture. Childe placed emphasis on human culture as a social construct rather than a product of environmental or technological contexts. Childe coined the terms "Neolithic Revolution" and "Urban Revolution", which are still used today in the branch of prehistoric anthropology.
In 1941 anthropologist Robert Redfield wrote about a shift from 'folk society' to 'urban society'. By the 1940s cultural anthropologists such as Leslie White and Julian Steward sought to revive an evolutionary model on a more scientific basis, and succeeded in establishing an approach known as neoevolutionism. White rejected the opposition between "primitive" and "modern" societies but did argue that societies could be distinguished based on the amount of energy they harnessed, and that increased energy allowed for greater social differentiation (White's law). Steward on the other hand rejected the 19th-century notion of progress, and instead called attention to the Darwinian notion of "adaptation", arguing that all societies had to adapt to their environment in some way.
The anthropologists Marshall Sahlins and Elman Service prepared an edited volume, Evolution and Culture, in which they attempted to synthesise White's and Steward's approaches. Other anthropologists, building on or responding to work by White and Steward, developed theories of cultural ecology and ecological anthropology. The most prominent examples are Peter Vayda and Roy Rappaport. By the late 1950s, students of Steward such as Eric Wolf and Sidney Mintz turned away from cultural ecology to Marxism, World Systems Theory, Dependency theory and Marvin Harris's Cultural materialism.
Today most anthropologists reject 19th-century notions of progress and the three assumptions of unilineal evolution. Following Steward, they take seriously the relationship between a culture and its environment to explain different aspects of a culture. But most modern cultural anthropologists have adopted a general systems approach, examining cultures as emergent systems and arguing that one must consider the whole social environment, which includes political and economic relations among cultures. As a result of simplistic notions of "progressive evolution", more modern, complex cultural evolution theories (such as Dual Inheritance Theory, discussed below) receive little attention in the social sciences, having given way in some cases to a series of more humanist approaches. Some reject the entirety of evolutionary thinking and look instead at historical contingencies, contacts with other cultures, and the operation of cultural symbol systems. In the area of development studies, authors such as Amartya Sen have developed an understanding of 'development' and 'human flourishing' that also question more simplistic notions of progress, while retaining much of their original inspiration.
Neoevolutionism
Neoevolutionism was the first in a series of modern multilineal evolution theories. It emerged in the 1930s, was extensively developed in the period following the Second World War, and was incorporated into both anthropology and sociology in the 1960s. It bases its theories on empirical evidence from areas of archaeology, palaeontology, and historiography and tries to eliminate any references to systems of values, be they moral or cultural, instead trying to remain objective and simply descriptive.
While 19th-century evolutionism explained how culture develops by giving general principles of its evolutionary process, it was dismissed by the Historical Particularists as unscientific in the early 20th century. It was the neo-evolutionary thinkers who brought back evolutionary thought and developed it to be acceptable to contemporary anthropology.
Neo-evolutionism discards many ideas of classical social evolutionism, notably that of social progress, so dominant in previous evolution-related sociological theories. Neo-evolutionism also discards the determinism argument and introduces probability, arguing that accidents and free will greatly affect the process of social evolution. It also supports counterfactual history, asking "what if" and considering different possible paths that social evolution may take or might have taken, and thus allows for the fact that various cultures may develop in different ways, some skipping entire stages that others have passed through. Neo-evolutionism stresses the importance of empirical evidence: while 19th-century evolutionism used value judgments and assumptions to interpret data, neo-evolutionism relies on measurable information for analysing the process of sociocultural evolution.
Leslie White, author of The Evolution of Culture: The Development of Civilization to the Fall of Rome (1959), attempted to create a theory explaining the entire history of humanity. The most important factor in his theory is technology. Social systems are determined by technological systems, wrote White in his book, echoing the earlier theory of Lewis Henry Morgan. He proposes a society's energy consumption as a measure of its advancement, and differentiates between five stages of human development. In the first, people use the energy of their own muscles. In the second, they use the energy of domesticated animals. In the third, they use the energy of plants (referring here to the agricultural revolution). In the fourth, they learn to use the energy of natural resources: coal, oil, gas. In the fifth, they harness nuclear energy. White introduced a formula, P = E·T, where E is a measure of the energy consumed and T the efficiency of the technical factors utilising that energy. This theory is similar to the Russian astronomer Nikolai Kardashev's later theory of the Kardashev scale.
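White's formula lends itself to a simple illustration. The Python sketch below computes P = E·T for a handful of societies loosely matching White's five stages; the labels and the energy and efficiency figures are invented placeholders for the sake of the example, not White's own data:

    # White's measure of cultural development, P = E * T: E is the energy
    # harnessed (e.g. per capita per year) and T the efficiency of the
    # technology putting that energy to work. All figures are hypothetical.
    societies = {
        "human muscle only":    {"E": 1.0,   "T": 0.20},
        "domesticated animals": {"E": 4.0,   "T": 0.30},
        "agricultural":         {"E": 12.0,  "T": 0.40},
        "fossil fuels":         {"E": 100.0, "T": 0.50},
    }

    for name, s in societies.items():
        print(f"{name}: P = {s['E'] * s['T']:.1f}")

On this reading, advancement can rise either by harnessing more energy (a larger E) or by using the same energy more efficiently (a larger T), which is why White's five stages track the adoption of new energy sources.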
Julian Steward, author of Theory of Culture Change: The Methodology of Multilinear Evolution (1955, reprinted 1979), created the theory of "multilinear" evolution, which examined the way in which societies adapted to their environment. This approach was more nuanced than White's theory of "unilinear evolution". Steward rejected the 19th-century notion of progress and instead called attention to the Darwinian notion of "adaptation", arguing that all societies had to adapt to their environment in some way. He argued that different adaptations could be studied through the examination of the specific resources a society exploited, the technology the society relied on to exploit these resources, and the organization of human labour. He further argued that different environments and technologies would require different kinds of adaptations, and that as the resource base or technology changed, so too would a culture. In other words, cultures do not change according to some inner logic, but rather in terms of a changing relationship with a changing environment. Cultures therefore would not pass through the same stages in the same order as they changed; rather, they would change in varying ways and directions. He called his theory "multilineal evolution". He questioned the possibility of creating a social theory encompassing the entire evolution of humanity; however, he argued that anthropologists are not limited to describing specific existing cultures, and believed that it is possible to create theories analysing typical common culture, representative of specific eras or regions. As the decisive factors determining the development of a given culture he pointed to technology and economics, while noting that there are secondary factors, such as political systems, ideologies and religion. All these factors push the evolution of a given society in several directions at the same time; hence the application of the term "multilinear" to his theory of evolution.
Marshall Sahlins, co-editor with Elman Service of Evolution and Culture (1960), divided the evolution of societies into 'general' and 'specific'. General evolution is the tendency of cultural and social systems to increase in complexity, organization and adaptiveness to environment. However, as the various cultures are not isolated, there is interaction and a diffusion of their qualities (like technological inventions). This leads cultures to develop in different ways (specific evolution), as various elements are introduced to them in different combinations and at different stages of evolution.
In his Power and Privilege (1966) and Human Societies: An Introduction to Macrosociology (1974), Gerhard Lenski expands on the works of Leslie White and Lewis Henry Morgan, developing the ecological-evolutionary theory. He views technological progress as the most basic factor in the evolution of societies and cultures. Unlike White, who defined technology as the ability to create and utilise energy, Lenski focuses on information—its amount and uses. The more information and knowledge (especially allowing the shaping of the natural environment) a given society has, the more advanced it is. He distinguishes four stages of human development, based on advances in the history of communication. In the first stage, information is passed by genes. In the second, when humans gain sentience, they can learn and pass on information through experience. In the third, humans start using signs and develop logic. In the fourth, they can create symbols and develop language and writing. Advancements in the technology of communication translate into advancements in the economic system and political system, distribution of goods, social inequality and other spheres of social life. He also differentiates societies based on their level of technology, communication and economy: (1) hunters and gatherers, (2) agricultural, (3) industrial, and (4) special (like fishing societies).
Talcott Parsons, author of Societies: Evolutionary and Comparative Perspectives (1966) and The System of Modern Societies (1971), divided evolution into four subprocesses: (1) division, which creates functional subsystems from the main system; (2) adaptation, where those systems evolve into more efficient versions; (3) inclusion of elements previously excluded from the given systems; and (4) generalization of values, increasing the legitimization of the ever more complex system. He illustrates these processes across four stages of evolution: (I) primitive or foraging, (II) archaic agricultural, (III) classical or "historic" in his terminology, using formalized and universalizing theories about reality, and (IV) modern empirical cultures. However, these divisions in Parsons' theory are the more formal ways in which the evolutionary process is conceptualized, and should not be mistaken for Parsons' actual theory. Parsons develops a theory in which he tries to reveal the complexity of the processes which take form between two points of necessity: the first is the cultural "necessity", given through the values-system of each evolving community; the other is the environmental necessity, reflected most directly in the material realities of the basic production system and in the relative capacity of each industrial-economic level at each window of time. Generally, Parsons highlights that the dynamics and directions of these processes are shaped by the cultural imperative embodied in the cultural heritage, and only secondarily by sheer "economic" conditions.
Michel Foucault's much-misunderstood concepts, such as biopower, biopolitics and power-knowledge, have been cited as breaking with the traditional conception of man as a cultural animal. Foucault regards both "cultural animal" and "human nature" as misleading abstractions that lead to a non-critical exemption of man, under which almost anything can be justified when discussing social processes or natural (social) phenomena. Foucault argues that these complex processes are interrelated and kept difficult to study precisely so that the 'truths' they sustain cannot be toppled or disrupted. For Foucault, the many modern concepts and practices that attempt to uncover "the truth" about human beings (whether psychologically, sexually, religiously or spiritually) actually create the very types of people they purport to discover. Because rigorous pursuit of such questions requires trained "specialists", knowledge codes and know-how, inquiry is "put off" or delayed, which makes any such study not only a taboo subject but one deliberately ignored. He cites the concept of 'truth' within many human cultures, and the ever-flowing dynamics between truth, power, and knowledge (Foucault uses the term "regimes of truth"), as a complex that flows with such ease that the concept of 'truth' becomes impervious to any further rational investigation. Some of the West's most powerful social institutions are powerful for a reason: not because they exhibit structures that inhibit investigation, or because it is illegal to investigate their historical foundations, but because the very notion of "legitimacy", which Foucault cites as an example of "truth", functions as a foundationalist claim to historical accuracy. Foucault argues that systems such as medicine, prisons (CRS Report for Congress, Federal Prison Industries, 2007) and religion, as well as groundbreaking works on more abstract theoretical issues of power, are suspended or buried into oblivion. He cites as further examples the 'scientific study' of population biology and population genetics, both instances of this kind of "biopower" over the vast majority of the human population, giving the newly founded political population its 'politics' or polity. With the advent of biology and genetics teamed together as new scientific innovations, knowledge regarding truth belongs to the realm of experts who will never divulge their secrets openly, while the bulk of the population do not know their own biology or genetics; this is done for them by the experts.
This functions as a truth-ignorance mechanism: the "subjugated knowledges", those that have been both written out of history and submerged within it in a masked form, produce what we now know as truth. Foucault calls them "knowledges from below" and a "historical knowledge of struggles". Genealogy, Foucault suggests, is a way of getting at these knowledges and struggles: "they are about the insurrection of knowledges." Foucault tries to show, with the added dimension of the "milieu" (a term derived from Newtonian mechanics), how from the 17th century onward, with the development of the biological and physical sciences, this milieu became interwoven into the political, social and biological relationships of men, as the concept of work was imposed upon the industrial population. Foucault also uses the term Umwelt, borrowed from Jakob von Uexküll, meaning the environment within. Technology, production, cartography, and the production of nation states and government made the efficiency of the body politic, law, heredity and consanguinity not only sound genuine and beyond historical origin and foundation; they could be turned into 'exact truth', whereby the individual and the societal body are not only subjugated and nullified but made dependent upon them. Foucault does not deny that genetic or biological study is accurate or truthful; what he means is that the notions of these newly discovered sciences were extended to whole populations as an exercise in "regime change".
Foucault argues that in the Middle Ages and the period of canon law, under the geocentric model later superseded by heliocentrism, the position of the law of right (exclusive right, or in its correct legal term sui generis) lay in the divine right of kings and absolute monarchy: in this earlier incarnation, truth and the rule of political sovereignty were considered absolute and were unquestioned by political philosophy (monarchs, popes and emperors). However, Foucault noticed that this Pharaonic version of political power was transformed, and that with the 18th-century emergence of capitalism and liberal democracy these terms began to be "democratized". The modern Pharaonic version, represented by the president, the monarch, the pope and the prime minister, became propagandized versions or symbolic agents of power, all aimed at a newly discovered phenomenon: the population. As symbolic agents of power, they require the mass population to sacrifice itself in the name of the newly formed voting franchise we now call democracy. This was all turned on its head (when the medieval rulers were thrown out and replaced by a more exact apparatus now called the state) when the human sciences suddenly discovered: "The set of mechanisms through which the basic biological features of the human species became an object of a political strategy and took on board the fundamental facts that humans were now a biological species."
Sociobiology
Sociobiology departs perhaps the furthest from classical social evolutionism. It was introduced by Edward O. Wilson in his 1975 book Sociobiology: The New Synthesis, following his adaptation of evolutionary theory to the social sciences. Wilson pioneered the attempt to explain the evolutionary mechanics behind social behaviours such as altruism, aggression, and nurturance. In doing so, Wilson sparked one of the greatest scientific controversies of the 20th century by introducing and rejuvenating neo-Darwinian modes of thinking in many social sciences and humanities, provoking reactions ranging from fundamental opposition (not only from social scientists and humanists, but also from Darwinists who saw it as "excessively simplistic in its approach") to calls for a radical restructuring of the respective disciplines on an evolutionary basis.
The current theory of evolution, the modern evolutionary synthesis (or neo-Darwinism), explains that the evolution of species occurs through a combination of Darwin's mechanism of natural selection, Gregor Mendel's theory of genetics as the basis for biological inheritance, and mathematical population genetics. Essentially, the modern synthesis connected two important discoveries: the units of evolution (genes) and the main mechanism of evolution (selection).
Due to its close reliance on biology, sociobiology is often considered a branch of biology, although it uses techniques from a plethora of sciences, including ethology, evolutionary biology, zoology, archaeology, population genetics, and many others. Within the study of human societies, sociobiology is closely related to the fields of human behavioral ecology and evolutionary psychology.
Sociobiology has remained highly controversial, as it contends that genes explain specific human behaviours, although sociobiologists describe this role as a very complex and often unpredictable interaction between nature and nurture. The most notable critics of the view that genes play a direct role in human behaviour have been the biologists Richard Lewontin, Steven Rose, and Stephen Jay Gould. Given the convergence of much of sociobiology's claims with right-wing politics, this approach has seen severe opposition both with regard to its research results and to its basic tenets; this has led even Wilson himself to revisit his claims and state his opposition to some elements of modern sociobiology.
Since the rise of evolutionary psychology, another school of thought, Dual Inheritance Theory, has emerged in the past 25 years, applying the mathematical standards of population genetics to modeling the adaptive and selective principles of culture. This school of thought was pioneered by Robert Boyd at UCLA and Peter Richerson at UC Davis and expanded by William Wimsatt, among others. Boyd and Richerson's book Culture and the Evolutionary Process (1985) was a highly mathematical description of cultural change, later published in a more accessible form in Not by Genes Alone (2004). In Boyd and Richerson's view, cultural evolution, operating on socially learned information, exists on a separate but co-evolutionary track from genetic evolution, and while the two are related, cultural evolution is more dynamic, rapid, and influential on human society than genetic evolution. Dual Inheritance Theory has the benefit of providing unifying territory for a "nature and nurture" paradigm, and it accounts more accurately for phenomena in evolutionary theory as applied to culture, such as randomness effects (drift), concentration dependency, the "fidelity" of evolving information systems, and lateral transmission through communication. Nicholas Christakis also advances similar ideas about "evolutionary sociology" in his 2019 book, Blueprint: The Evolutionary Origins of a Good Society, emphasizing the relevance of underlying evolutionary forces that have helped to shape all societies, whatever their cultural differences.
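Because Dual Inheritance Theory borrows its formal machinery from population genetics, the flavour of its models can be conveyed in a few lines of code. The following Python sketch simulates neutral "drift" of a cultural trait under unbiased social learning; it is a minimal, hypothetical illustration in the Wright–Fisher style, not Boyd and Richerson's actual model, and all parameter values are arbitrary:

    import random

    # Wright-Fisher-style drift for a neutral cultural trait: each
    # generation, every one of n individuals adopts the trait with
    # probability equal to its current frequency (unbiased copying).
    # With no selection, the frequency wanders randomly until the trait
    # fixes (frequency 1.0) or is lost (frequency 0.0).
    def cultural_drift(n=100, freq=0.5, generations=500, seed=42):
        rng = random.Random(seed)
        for gen in range(generations):
            if freq in (0.0, 1.0):        # trait fixed or lost
                return gen, freq
            adopters = sum(rng.random() < freq for _ in range(n))
            freq = adopters / n
        return generations, freq

    gens, final = cultural_drift()
    print(f"after {gens} generations the trait frequency is {final:.2f}")

As in genetic drift, smaller populations reach fixation or loss of the trait more quickly, one of the "randomness effects" the theory is designed to capture.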
Theory of modernization
Theories of modernization are closely related to dependency theory and development theory. While they were developed and popularized in the 1950s and 1960s, their ideological and epistemic ancestors can be traced back to at least the early 20th century. Progressivist historians and social scientists of that era, building on Darwinian ideas, held that the roots of economic success in the US were to be found in its population structure, which, as an immigrant society, was composed of the strongest and fittest individuals of their respective countries of origin; they thereby began to supply the national myth of US-American manifest destiny with evolutionary reasoning. Explicitly and implicitly, the US became the yardstick of modernisation, and other societies could be measured in the extent of their modernity by how closely they adhered to the US-American example. Modernization theories combine the previous theories of sociocultural evolution with practical experience and empirical research, especially from the era of decolonization. The theory states that:
Western countries are the most developed, and the rest of the world (mostly former colonies) is in the earlier stages of development, and will eventually reach the same level as the Western world.
Development stages go from the traditional societies to developed ones.
Third World countries have fallen behind in their social progress and need to be directed on their way to becoming more advanced.
Developing from classical social evolutionism, the theory of modernization stresses the modernization factor: many societies are simply trying (or need) to emulate the most successful societies and cultures. It also states that doing so is possible, thus supporting the concept of social engineering and the idea that developed countries can and should help those less developed, directly or indirectly.
Among the scientists who contributed much to this theory is Walt Rostow, who in The Stages of Economic Growth: A Non-Communist Manifesto (1960) concentrates on the economic side of modernization, trying to show the factors needed for a country to reach the path to modernization in his Rostovian take-off model. David Apter concentrated on the political system and the history of democracy, researching the connection between democracy, good governance, efficiency and modernization. David McClelland (The Achieving Society, 1967) approached the subject from the psychological perspective with his motivation theory, arguing that modernization cannot happen until a given society values innovation, success and free enterprise. Alex Inkeles (Becoming Modern, 1974) similarly creates a model of the modern personality, which needs to be independent, active, interested in public policies and cultural matters, open to new experiences, rational, and able to create long-term plans for the future. Some works of Jürgen Habermas are also connected with this subfield.
The theory of modernization has been subject to some criticism similar to that levied against classical social evolutionism, especially for being too ethnocentric, one-sided and focused on the Western world and its culture.
Contemporary perspectives
Political perspectives
The Cold War period was marked by rivalry between two superpowers, both of which considered themselves to be the most highly evolved cultures on the planet. The USSR painted itself as a socialist society which had emerged from class struggle, destined to reach the state of communism, while sociologists in the United States (such as Talcott Parsons) argued that the freedom and prosperity of the United States were proof of a higher level of sociocultural evolution of its culture and society. At the same time, decolonization created newly independent countries that sought to become more developed, a model of progress and industrialization which was itself a form of sociocultural evolution.
Technological perspectives
Many argue that the next stage of sociocultural evolution consists of a merger with technology, especially information-processing technology. Several cumulative major evolutionary transitions have transformed life through key innovations in information storage and replication, including RNA, DNA, multicellularity, and also language and culture as inter-human information processing systems. In this sense it can be argued that the carbon-based biosphere has generated a system (human society) capable of creating technology that will result in a comparable evolutionary transition. "Digital information has reached a similar magnitude to information in the biosphere. It increases exponentially, exhibits high-fidelity replication, evolves through differential fitness, is expressed through artificial intelligence (AI), and has facility for virtually limitless recombination. Like previous evolutionary transitions, the potential symbiosis between biological and digital information will reach a critical point where these codes could compete via natural selection. Alternatively, this fusion could create a higher-level superorganism employing a low-conflict division of labor in performing informational tasks...humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels, ...most transactions on the stock market are executed by automated trading algorithms, and our electric grids are in the hands of artificial intelligence. With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".
Anthropological perspectives
Current political theories of the new tribalists consciously mimic ecology and the life-ways of indigenous peoples, augmenting them with modern sciences. Ecoregional Democracy attempts to confine the "shifting groups", or tribes, within "more or less clear boundaries" that a society inherits from the surrounding ecology, to the borders of a naturally occurring ecoregion. Progress can proceed by competition between but not within tribes, and it is limited by ecological borders or by Natural Capitalism incentives which attempt to mimic the pressure of natural selection on a human society by forcing it to adapt consciously to scarce energy or materials. Gaians argue that societies evolve deterministically to play a role in the ecology of their biosphere, or else die off as failures due to competition from more efficient societies exploiting nature's leverage.
Thus, some have appealed to theories of sociocultural evolution to assert that optimizing the ecology and the social harmony of closely knit groups is more desirable or necessary than the progression to "civilization." A 2002 poll of experts on Nearctic and Neotropic indigenous peoples (reported in Harper's magazine) revealed that all of them would have preferred to be a typical New World person in the year 1491, prior to any European contact, rather than a typical European of that time. This approach has been criticised by pointing out that there are a number of historical examples of indigenous peoples doing severe environmental damage (such as the deforestation of Easter Island and the extinction of mammoths in North America) and that proponents of the goal have been trapped by the European stereotype of the noble savage.
The role of war in the development of states and societies
Particularly since the end of the Cold War, a growing number of scholars in the social sciences and humanities have come to complement the more presentist neo-evolutionary research with studies of the more distant past and its human inhabitants. A key element in many of these analyses and theories is warfare, which Robert L. Carneiro called the "prime mover in the origin of the state". He theorizes that, given the limited availability of natural resources, societies compete against each other, with the losing group either moving out of the area now dominated by the victorious one or, if the area is circumscribed by an ocean or a mountain range and re-settlement is thus impossible, being subjugated or killed. Societies thus become larger and larger but, facing the constant threat of extinction or assimilation, are also forced to become more complex in their internal organisation, both to remain competitive and to administer a growing territory and a larger population.
Carneiro's ideas have inspired a great deal of subsequent research into the role of war in the process of political, social, and cultural evolution. One example is Ian Morris, who argues that, given the right geographic conditions, war not only drove much of human culture by integrating societies and increasing material well-being, but paradoxically also made the world much less violent. Large-scale states, says Morris, evolved because only they provided enough stability, both internally and externally, to survive the constant conflicts that characterise the early history of smaller states, and the possibility of war will continue to force humans to invent and evolve. War drove human societies to adapt in a step-wise process, each development in military technology either requiring or leading to comparable developments in politics and society.
Many of the underlying assumptions of Morris's thinking can be traced back in some form or another not only to Carneiro but also to Jared Diamond, particularly his 1997 book Guns, Germs, and Steel. Diamond, who explicitly opposes racist evolutionary tales, argues that the ultimate explanation of the differing development of humans on different continents lies in the presence or absence of domesticable plants and animals, as well as in the fact that the east-west orientation of Eurasia made migration within similar climates much easier than the north-south orientation of Africa and the Americas. Nevertheless, he also stresses the importance of conflict and warfare as a proximate explanation for how Europeans managed to conquer much of the world, given that societies which fail to innovate will "tend to be eliminated by competing societies".
Similarly, Charles Tilly argues that coercion and warfare drove the political, social, and technological change which, after centuries of great variation among states, led the European states ultimately to converge on the national state: "War wove the European network of national states, and preparation for war created the internal structures of states within it." He describes how war became more expensive and complex with the introduction of gunpowder and large armies, and thus required significantly larger states to provide the capital and manpower to sustain them, states which at the same time were forced to develop new means of extraction and administration.
However, Norman Yoffee has criticised such theorists who, on the basis of general evolutionary frameworks, formulate theories of the origins of states and their evolution. He claimed that, in no small part due to the prominence of neoevolutionary explanations that sort different societies into types in order to compare them and their progress both to one another and to modern ethnographic examples, while focusing mostly on political systems and on a despotic élite holding a territorial state together by force, "much of what has been said of the earliest states, both in the professional literature as well as in popular writings, is not only factually wrong but also is implausible in the logic of social evolutionary theory".
See also
Accelerating change
Biocultural evolution
Clash of Civilizations
Critical juncture theory
Cultural diversity
Cultural evolution
Cultural materialism
Cultural neuroscience
Cultural selection theory
Diffusion of innovations
Dual inheritance theory
Economic determinism
Edward Burnett Tylor
Evolutionary anthropology
Environmental racism
Extended order
Franz Boas
Futures studies
Historicism
Institutional memory
Julian Steward
Leslie White
Lewis Henry Morgan
Memetics
Moral progress
Neoevolutionism
Neuroculture
Origin of language
Origin of speech
Origins of society
Population dynamics
Punctuated equilibrium
Rationalization (sociology)
Raciolinguistics
Reformism
Social Darwinism
Social cycle theory
Social dynamics
Social implications of the theory of evolution
Societal collapse
Sociocultural system
Social progress
Symbolic culture
Technological evolution
References
Cited sources
Sztompka, Piotr (2002). Socjologia. Znak.
Bibliography
The Philosophy of Positivism
Robert Carneiro, Evolutionism in Cultural Anthropology: A Critical History. Westview Press, Boulder, CO, 2003.
Jared Diamond, The World until Yesterday: What Can We Learn from Traditional Societies?, Penguin Books, 2012.
Evans-Pritchard, Sir Edward, A History of Anthropological Thought, 1981, Basic Books, Inc., New York.
Graber, Robert B., A Scientific Model of Social and Cultural Evolution, 1995, Thomas Jefferson University Press, Kirksville, MO.
Harris, Marvin, The Rise of Anthropological Theory: A History of Theories of Culture, 1968, Thomas Y. Crowell, New York.
Hatch, Elvin, Theories of Man and Culture, 1973, Columbia University Press, New York.
Hays, H. R., From Ape to Angel: An Informal History of Social Anthropology, 1965, Alfred A. Knopf, New York.
Johnson, Allen W. and Earle, Timothy, The Evolution of Human Societies: From Foraging Group to Agrarian State, 1987, Stanford University Press.
Kaplan, David and Manners, Robert, Culture Theory, 1972, Waveland Press, Inc., Prospect Heights, Illinois.
Kuklick, Henrika, The Savage Within: The Social History of British Anthropology, 1885–1945, 1991, Cambridge University Press, Cambridge.
McGilchrist, Iain, The Master and His Emissary: The Divided Brain and the Making of the Western World, 2009, Yale University Press, US and London.
Mesoudi, A. (2007). Using the methods of experimental social psychology to study cultural evolution. Journal of Social, Evolutionary & Cultural Psychology, 1(2), 35–58.
Mesoudi, A., Cultural Evolution: How Darwinian Theory Can Explain Human Culture and Synthesize the Social Sciences, 2011, University of Chicago Press.
Morgan, John Henry, In the Beginning: The Paleolithic Origins of Religious Consciousness, 2007, Cloverdale Books, South Bend.
Raoul Naroll and William T. Divale. 1976. Natural Selection in Cultural Evolution: Warfare versus Peaceful Diffusion. American Ethnologist 3: 97–128.
Segal, Daniel (2000). "'Western Civ' and the Staging of History in American Higher Education". The American Historical Review, Vol. 105, No. 3 (Jun., 2000), pp. 770–805.
Seymour-Smith, Charlotte, Macmillan Dictionary of Anthropology, 1986, Macmillan, New York.
Stocking Jr., George W., Race, Culture, and Evolution: Essays in the History of Anthropology, 1968, The Free Press, New York.
Stocking Jr., George W., After Tylor: British Social Anthropology 1888–1951, 1995, The University of Wisconsin Press.
Stocking, George, Victorian Anthropology, Free Press, 1991.
Sztompka, Piotr, The Sociology of Social Change, Blackwell Publishers, 1994.
Trigger, Bruce, Sociocultural Evolution: Calculation and Contingency (New Perspectives on the Past), Blackwell Publishers, 1998.
Winthrop, Robert H., Dictionary of Concepts in Cultural Anthropology, 1991, Greenwood Press, New York.
Readings from an evolutionary anthropological perspective
Two special issues on the evolution of culture:
Evolutionary Anthropology: Issues, News, and Reviews Volume 12, Issue 2, Pages 57–108 (April 2003)
The evolution of culture: New perspectives and evidence (p 57–60) Charles H. Janson, Eric A. Smith
Making space for traditions (p 61–70) Dorothy Fragaszy
Traditions in monkeys (p 71–81) Susan Perry, Joseph H. Manson
Is culture a golden barrier between human and chimpanzee? (p 82–91) Christophe Boesch
Cultural panthropology (p 92–105) Andrew Whiten, Victoria Horner, Sarah Marshall-Pescini
The fossil record – Human and nonhuman (p 106–108) Eric Delson
Evolutionary Anthropology: Issues, News, and Reviews Volume 12, Issue 3, Pages 109–159 (2003)
On stony ground: Lithic technology, human evolution, and the emergence of culture (p 109–122) Robert Foley, Marta Mirazón Lahr
The evolution of cultural evolution (p 123–135) Joseph Henrich, Richard McElreath
The adaptive nature of culture (p 136–149) Michael S. Alvard
Do animals have culture? (p 150–159) Kevin N. Laland, William Hoppitt
External links
Sociocultural evolution on Principia Cybernetica Web
Classical Sociological Theory: Comte and Spencer
Secular Cycles and Millennial Trends
Reflective writing
Reflective writing is an analytical practice in which the writer describes a real or imaginary scene, event, interaction, passing thought, or memory and adds a personal reflection on its meaning. Many reflective writers keep in mind questions such as "What did I notice?", "How has this changed me?" or "What might I have done differently?" when reflecting.
Thus, in reflective writing, the focus is on writing that is not merely descriptive. The writer revisits the scene to note details and emotions, reflect on meaning, examine what went well or revealed a need for additional learning, and relate what transpired to the rest of life.
According to Kara Taczak, "Reflection is a mode of inquiry: a deliberate way of systematically recalling writing experiences to reframe the current writing situation."
The more someone reflectively writes, the more likely they are to reflect in their everyday life regularly, think outside the box, and challenge accepted practices.
Background
When writing reflectively, a writer attempts to convey their own thought process. Therefore, reflective writing is one of the more personal styles of writing as the writer is clearly inserted into the work. This style of writing invites both the reader and the writer to introspect and examine their own thoughts and beliefs, and gives the writer and the reader a closer relationship.
Reflective writing tends to consist of description, or explaining the event and its context; interpretation, or how the experience challenged existing opinions; and outcome, or how the experience contributed to personal or professional development.
Most reflective writing is written in first person, as it speaks to the writer's personal experience, but often it is supplemented with third person in academic works as the writer must support their perspective with outside evidence.
Reflective writing is usually a style that must be learned and practiced. Most novice writers are not reflective initially and must progress from imitative writing to their own style of genuine, critical reflection.
Kathleen Blake Yancey notes that reflection "is the dialectical process by which we develop and achieve, first, specific goals for learning; second, strategies for reaching those goals; and third, means of determining whether or not we have met those goals or other goals."
The concepts of reflection and reflective writing are social constructs prevalent in academic literature, and in different contexts, their meanings have different interpretations.
Characteristics of reflective writing
The main characteristics of reflective writing include:
Reflection: The writer reflects on the issue (that is, the topic they are writing about) and considers how their own experience and points of view might influence their response. This helps the writer learn about themselves as well as contribute to a better final product that considers biases.
Evidence: The writer considers and cites different perspectives and evidence to provide a truly comprehensive reflection. "Evidence" can mean either academic evidence or the writer's own reflections and experiences, depending on whether the piece of reflection is personal or academic.
Clarity: The writer must be clear and cohesive. As reflective writing takes the reader through both the writer's own thoughts and sometimes other outside perspectives, unity and readability are crucial to ensure the reader does not get lost between points of view.
If the reflection is written for academia—that is, it is not a personal reflection or journal—additional features include:
Theory: An academic reflection will integrate theories and other academic works to explain the reflection. For example, a writer might say: "Smith's theory of social engagement might explain why I reacted the way I did."
Learning outcomes: An academic reflection will include commentary on how the writer learned from the experience, what they would have done differently, or how their perspectives or opinions have changed as a result of the experience.
Reflective writing in academia
Reflective writing is regularly used in academic settings, as it helps students think about how they think and allows them to think beyond the literal meaning of their writing or thinking. In other words, it is a form of metacognition. Proper reflective writing is heavily influenced by metacognition, which allows for better self-reflection and lets the writer take the material beyond its literal meaning. Reflective writing can be seen as a metacognitive genre that heavily influences literacy narrative assignments due to the increased reflective thinking it asks of students. Students can consciously and unconsciously analyze their experiences and interactions through this assessment tool. It is frequently assigned to postsecondary students and is particularly useful to students and practitioners in composition, education, and health-related fields, as it helps them reflect on their practice. Typical academic reflective writings include portfolios, summaries, and journals. Reflective writing is not limited to academic writing, as it often takes many different forms: sometimes it is used in stand-alone assessment tasks, and other times it is incorporated into other tasks such as essays.
Reflective writing in education systems aids in adapting students' "knowledge in waiting" into "knowledge in practice", encouraging students to analyze previous professional experiences and apply them to future situations. This type of analytical skill is advantageous for students pursuing professions with recurring unpredictable situations and allows students to be better prepared for the workplace.
Evidence shows that reflective writing is a good way to increase empathy in medical students. Another study showed that students who were assigned reflective writing during a camp developed greater self-awareness, had a better understanding of their goals, and were better able to recognize their personal development.
Studies have also found that students who take part in critical reflective assignments use them to release pent-up emotions, making critical reflection a way to seek cathartic relief.
Reflective writing is useful to improve collaboration, as it makes writers aware of how they sound when they voice their thoughts and opinions to others. Additionally, it is an important part of the reflective learning cycle, which includes planning, acting, observing, and reflecting.
Students can be hesitant to write reflectively as it requires them to not just consider but actively cite things they typically would hide or ignore in academic writing, like their anxieties and shortcomings.
Reflective writing in academic settings is sometimes criticized, as concerns exist regarding its effectiveness. Reflective writing assignments are often weighted low in a course's grade calculations, and among a crowded workload, students can see them as an afterthought. It has also been argued that reflective writing assignments are only assigned as "busy work", as they are low maintenance and relatively easy to grade. Additionally, because students know they will be graded on their reflection, it might be written in an inauthentic way.
Nonetheless, reflective writing is becoming increasingly important in education, as reflecting on completed work helps students see room for improvement.
Benefits of reflective writing
There are many benefits to reflective writing, including increased self-awareness of personal writing techniques, improved critical analysis, and a greater ability to examine and understand social, cultural, and political issues that involve language.
Within professions, reflective writing can be used as a therapeutic form of expression, especially useful in stress-filled professions.
Within a classroom setting, the addition of reflective writing assignments can help improve intellectual thinking by introducing assignments that encourage a deeper relationship between the individual and their writing. The introduction of reflective assignments in classroom settings further aids in student retention of information being discussed in the classroom.
See also
Narration
Storytelling
Writing therapy
References
External links
Medical humanities
Writing
Rhetorical situation
A rhetorical situation is an event that consists of an issue, an audience, and a set of constraints. A rhetorical situation arises from a given context or exigence. An article by Lloyd Bitzer introduced the model of the rhetorical situation in 1968, which was later challenged and modified by Richard E. Vatz (1973) and Scott Consigny (1974). More recent scholarship has further redefined the model to include more expansive views of rhetorical operations and ecologies.
Theoretical development
In the twentieth century, three influential texts concerning the rhetorical situation were published: Lloyd Bitzer's "The Rhetorical Situation", Richard E. Vatz's "The Myth of the Rhetorical Situation", and Scott Consigny's "Rhetoric and Its Situations". Bitzer argues that a situation determines and brings about rhetoric; Vatz proposes that rhetoric creates "situations" by making issues salient; and Consigny explores the rhetor as an artist of rhetoric, creating salience through a knowledge of commonplaces.
Bitzer's definition
Lloyd Bitzer began the conversation in his 1968 piece titled "The Rhetorical Situation". Bitzer wrote that rhetorical discourse is called into existence by situation. He defined the rhetorical situation as "a complex of persons, events, objects, and relations presenting an actual or potential exigence which can be completely or partially removed if discourse, introduced into the situation, can so constrain human decision or action as to bring about the significant modification of the exigence." With any rhetorical discourse, a prior rhetorical situation exists. The rhetorical situation dictates the significant physical and verbal responses as well as the sorts of observations to be made. An example of this would be an activist speaking out on climate change as an apparent global problem. The situation, thus, calls for the activist to use and respond with rhetorical discourse on the climate change issue. In other words, rhetorical meaning is brought about by events. Bitzer especially focuses on the sense of timing (kairos) needed to speak about a situation in a way that can best remedy the exigence.
Three constituent parts make up any rhetorical situation.
The first constituent part is the exigence, or a problem existing in the world. Exigence is rhetorical when it can be affected and changed by human interaction, and when it is capable of positive modification through the act of persuasion. A rhetorical exigence may be strong, unique, or important, or it may be weak, common, or trivial.
The second constituent part is audience. Rhetorical discourse promotes change through influencing an audience's decision and actions.
The third constituent part is the set of constraints. Constraints may be the persons, events, objects, and relations that limit decisions and action. Theorists influenced by Marx would additionally discuss ideological constraints, which produce unconscious limitations for subjects in society, including the social constraints of gender, class, and race. The speaker brings about a new set of constraints through the image of his or her personal character (ethos), logical proofs (logos), and use of emotion (pathos).
Critical responses
Vatz's challenge
An important critique of Bitzer's theory came in 1973 from Richard E. Vatz. Vatz believes that rhetoric defines a situation, because the context and choices of events could be forever described, but the persuader or influencer or rhetor must select which events to make part of the agenda. Choosing certain events and not others, and deciding their relative value or importance, creates a certain presence, or salience. Vatz quotes Chaïm Perelman: "By the very fact of selecting certain elements and presenting them to the audience, their importance and pertinency to the discussion are implied. Indeed such a choice endows these elements with a presence..."
In essence, Vatz claims that the definitive elements of rhetorical efforts are the struggle to create for a chosen audience saliences or agendas, and this creation is then followed by the struggle to infuse the selected situation or facts with meaning or significance. What are we persuaded to talk about? What are we persuaded it means or signifies? These questions are the relevant ones to understand persuasion, not "What does the situation make us talk about?" or "What does it intrinsically mean?". Situations that do not physically make us attend to them are avoided and reflect the significance of subjectivity in framing socio-political realities. Vatz believes that situations are created, for example, when an activist sets an agenda to focus on climate change, thus creating a "rhetorical situation" (a situation determined by rhetoric). The activist (rhetor) enjoys more agency because they are not "controlled" by a situation, but creates the situation by making it salient in language. Vatz emphasizes the social construction of the situation as opposed to Bitzer's realism or objectivism.
While both positions have been widely recognized, Vatz has acknowledged that his piece is less cited than Bitzer's. Vatz admits, while maintaining that audience acceptance is neither dispositive for measuring validity nor predictive of future audience acceptance, that "more articles and professionals in our field cite his situational perspective than my rhetorical perspective." Despite Vatz's criticism, Bitzer's objectivism is clear and easily taught as a method. Vatz claims that portraying rhetoric as situation-based vitiates rhetoric as an important field, while portraying rhetoric as the cause of what people see as pressing situations enhances its significance as a field of study.
Consigny's challenge
Another response to Bitzer and Vatz came from Scott Consigny. Consigny believes that Bitzer's theory gives a rhetorical situation proper particularities, but "misconstrues the situation as being thereby determinate and determining," and that Vatz's theory gives the rhetor a correct character but does not correctly account for limits of a rhetor's ability.
Instead, he proposes the idea of rhetoric as an art. Consigny argues that rhetoric gives the means by which a rhetor can engage with a situation by meeting two conditions.
The first condition is integrity. Consigny argues that the rhetor must possess multiple opinions with the ability to solve problems through those opinions.
The second condition is receptivity. Consigny argues that the rhetor cannot create problems at will, but becomes engaged with particular situations.
Consigny finds that rhetoric which meets the two conditions should be interpreted as an art of topics or commonplaces. Taking after classical rhetoricians, he explains the topic as an instrument and a situation for the rhetor, allowing the rhetor to engage creatively with the situation. As a challenge to both Bitzer and Vatz, Consigny claims that Bitzer has a one-dimensional theory by dismissing the notion of topic as instrument, and that Vatz wrongly allows the rhetor to create problems willfully while ignoring the topic as situation. The intersection of topic as instrument and topic as realm gives the situation both meaning (as a perceptive formal device) and context (as material significance). Consigny concludes:
The real question in rhetorical theory is not whether the situation or the rhetor is "dominant," but the extent, in each case, to which the rhetor can discover and control indeterminate matter, using his art of topics to make sense of what would otherwise remain simply absurd.
Other critical responses
Flower and Hayes
In their 1980 article, "The Cognition of Discovery: Defining a Rhetorical Problem", Linda Flower and John R. Hayes expand upon Bitzer's definition of the rhetorical situation. In studying the cognitive processes that induce discovery, Flower and Hayes propose the model of the rhetorical problem. The rhetorical problem consists of two elements: the rhetorical situation (exigence and audience), and the writer's goals involving the reader, persona, meaning, and text. The rhetorical problem model explains how a writer responds to and negotiates a rhetorical situation while addressing and representing his or her goals for a given text.
Biesecker
In response to both Bitzer and Vatz, Barbara Biesecker challenges the idea of the rhetorical situation in her 1989 article "Rethinking the Rhetorical Situation from Within the Thematic of Différance". Biesecker critiques both Bitzer's claim that rhetoric originates from the situation and Vatz's claim that the rhetoric itself creates its own situation. Rather, she proposes a deconstruction of rhetorical analysis, specifically through the lens of Jacques Derrida's thematic of différance. In addition to questioning proposed views of the speaker and the situation, this lens also challenges the view of the audience as a unified, rational concept. Taken together, Biesecker suggests that the thematic of différance allows us to see the rhetorical situation as an event that does not simply convince audiences to believe or act in a certain way or represent the claims put forth by a static speaker or situation. Rather, she argues, this deconstruction reveals the ability of the rhetorical situation to actually create provisional identities and social relationships through articulation.
Garrett and Xiao
In their 1993 article, Mary Garrett and Xiaosui Xiao apply Bitzer's rhetorical situation model to the response of the Chinese public to the Opium Wars of the 19th century. Garrett and Xiao propose three major changes to the existing theory of the rhetorical situation:
Elevating the audience as a defining factor of rhetorical situation, rather than the speaker, because of its role in deciding exigency, kairos ("fittingness"), and constraints.
Recognizing the power of discourse traditions within a given culture to influence the audience's perceptions, exigency, kairos, and constraints.
Emphasizing the interactive and dialectical nature of the rhetorical situation.
Rhetorical ecology
Theories leading up to rhetorical ecology
Coe
The first time the concepts of rhetoric and ecology were explored in relation to one another was in 1975 by Richard Coe. In his article, "Eco-Logic for the Composition Classroom", Coe offers up eco-logic as an alternative to traditional analytical logic used in rhetoric and composition studies. The contrast between the two is that analytical logic breaks down wholes into smaller parts to examine them, while eco-logic examines the whole as itself. His primary proof in favor of this type of thinking and approach to rhetoric and composition is that the meaning of the written or spoken word is relative to the context in which it is written or spoken.
Cooper
A more explicit link between rhetoric and ecology was drawn in 1986 by Marilyn Cooper in her article titled "The Ecology of Writing". With an acute focus on the composition classroom, Cooper critiques the notion of writing as a primarily cognitive function, positing that it ignores important social aspects of the writing process. She also argues that a simply contextual perspective of writing is insufficient; rather, an ecological view of writing extends past the immediate context of a writer and their text to examine the systems that the writer is a part of with other writers. Cooper suggests five different systems that are all intricately interwoven in the actual act of writing: ideas, purposes, interpersonal interactions, cultural norms, and textual forms. Cooper illustrates this ecological model using the metaphor of a web, in that something that impacts one system will inevitably impact all the systems. Cooper also addresses the significant rhetorical concern of audience, claiming that the ecological model improves views of audience because it implies real communication with a real audience, as opposed to an imagined audience or generalized other. For Cooper, the ecological model allows us to look at people who interact through writing and the systems making up the act of writing itself.
Edbauer's rhetorical ecology
In a 2005 article, "Unframing Models of Public Distribution: From Rhetorical Situation to Rhetorical Ecologies", Jenny Edbauer argued for an understanding of the rhetorical situation beyond the three traditional elements of audience, exigence, and constraints. Edbauer argues that the rhetorical situation lies within larger networks of meaning, or "ecologies". A shift from "rhetorical situations" to "rhetorical ecologies" takes into account the complex, overlapping, and constantly shifting nature of audience, exigence, and constraints, as well as the distribution of public rhetorics. Edbauer argues that viewing rhetorical situations as ecologies shows us that "public rhetorics do not only exist in the elements of their situations, but also in the radius of their neighboring events."
Challenges to rhetorical ecology
Jones
In 2021, Madison Jones published an article titled "A Counterhistory of Rhetorical Ecologies", challenging the rhetorical ecology framework. In the article, he explicitly acknowledges that he is not writing off the theory as something inherently bad; rather, he is observing complications within it and offering up creative new perspectives on the topic. He begins by outlining the various environmental, colonial, and nuclear issues that arise when the metaphor of ecology is invoked. Tying this back to rhetoric, he argues that spatiotemporal issues within the idea of rhetorical ecology (i.e., issues that are related to the location and timing of a rhetorical event) are directly linked back to these historical realities interwoven into the larger idea of ecology. He suggests the framework of field histories as a way to acknowledge the complicated history of the field of ecology as it is used rhetorically. He particularly focuses on the need to employ place-based and community-engaged research to better understand the history of the discipline and work toward shaping a better future.
Other recent theories
Gallagher
John R. Gallagher's 2015 article, "The Rhetorical Template", addresses the rhetorical situation in relation to "Web 2.0" and the templates of social networking sites, such as Facebook. Gallagher defines these Web 2.0 templates as "prefabricated designs that allow writers to create a coherent text." Gallagher contends that rhetorical templates offer a new approach to making meaning within new exigency. Rhetorical templates function within the constraints of the genre, but also affect the exigence and purpose by shaping how the text is written and read.
Use in teaching writing
The rhetorical situation is a component of some first-year college writing courses, wherein students learn about the rhetorical situation, rhetorical analysis, and awareness of the features they must respond to from their rhetorical situation(s). In this context, the rhetorical situation is taught in several parts:
Writer: the author, speaker, or other generator of the rhetoric under examination
Exigence: the reason the author is writing about their particular subject and why they are writing about it at this moment
Purpose: what the writer wants from the audience
Audience: the intended (and sometimes unintended) recipients of the writer's message
Genre: how the topic is presented by the writer to the audience
Subject: the topic that the writer is discussing
Context: describes the author, where and when the rhetoric is being created and/or received, etc.
Constraints: all of the elements that can limit or alter the message's efficacy; this is sometimes grouped together with context
Some scholars, such as Douglas Downs and Elizabeth Wardle, have criticized the use of the rhetorical situation as a core component of first-year writing courses, arguing that it would be better to teach students about writing and writing studies than to teach them how to write by responding to rhetorical situations. In response, others (such as Tara Boyce) have noted that both approaches appear to have limitations and that challenges remain regardless of approach. Boyce writes that "students, though aware of and capable of implementing [rhetorical] awareness into writing practices, most often do not. The question remains in how to adapt pedagogy to achieve this awareness and how to measure that achievement."
References
Rhetoric
Writing
Humanistic psychology
Humanistic psychology is a psychological perspective that arose in the mid-20th century in answer to two theories: Sigmund Freud's psychoanalytic theory and B. F. Skinner's behaviorism. Thus, Abraham Maslow established the need for a "third force" in psychology. The school of thought of humanistic psychology gained traction due to Maslow in the 1950s.
Some elements of humanistic psychology are
to understand people, ourselves and others holistically (as wholes greater than the sums of their parts)
to acknowledge the relevance and significance of the full life history of an individual
to acknowledge the importance of intentionality in human existence
to recognize the importance of an end goal of life for a healthy person
Humanistic psychology also acknowledges spiritual aspiration as an integral part of the psyche. It is linked to the emerging field of transpersonal psychology.
Primarily, humanistic therapy encourages a self-awareness and reflexivity that helps the client change their state of mind and behavior from one set of reactions to a healthier one with more productive and thoughtful actions. Essentially, this approach allows the merging of mindfulness and behavioral therapy, with positive social support.
In an article from the Association for Humanistic Psychology, humanistic therapy is described as offering a "crucial opportunity to lead our troubled culture back to its own healthy path. More than any other therapy, Humanistic-Existential therapy models democracy. It imposes ideologies of others upon the client less than other therapeutic practices. Freedom to choose is maximized. We validate our clients' human potential."
In the 20th century, humanistic psychology was referred to as the "third force" in psychology, distinct from earlier, less humanistic approaches of psychoanalysis and behaviorism.
Its principal professional organizations in the US are the Association for Humanistic Psychology and the Society for Humanistic Psychology (Division 32 of the American Psychological Association). In Britain, there is the UK Association for Humanistic Psychology Practitioners.
Differences with psychoanalytic theory and behaviorism
Through disagreement with the predominant theories at the time, developed by Freud and Skinner, Maslow was able to formulate the main points of humanistic theory.
Maslow had the following criticisms of the two main theories at the time:
Freud's theory was deterministic, meaning that it attributed the behavior of people to unconscious desires.
Freud and Skinner's theories focused on individuals with mental conflicts (pathological) rather than all individuals.
The other two theories focused too much on the negative traits of human beings, rather than focusing on the positive power Maslow believed individuals to have.
As a result, when Maslow developed his theory, he decided to focus on the conscious (rather than the unconscious) and decided to develop a new theory to explain how all individuals could reach their highest potential.
Origins
One of humanistic psychology's early sources was the work of Carl Rogers, who was strongly influenced by Otto Rank, who broke with Freud in the mid-1920s. Rogers' focus was to ensure that the developmental processes led to healthier, if not more creative, personality functioning. The term 'actualizing tendency' was also coined by Rogers, and was a concept that eventually led Abraham Maslow to study self-actualization as one of the needs of humans. Rogers and Maslow introduced this positive, humanistic psychology in response to what they viewed as the overly pessimistic view of psychoanalysis.
The other sources of inspiration include the philosophies of existentialism and phenomenology.
Conceptual origins
Whilst the origins of humanistic psychology date back to the early 1960s, the origins of humanism itself date back to the classical civilizations of China, Greece, and Rome, whose values were renewed in the European Renaissance.
The modern humanistic approach has its roots in phenomenological and existentialist thought (see Kierkegaard, Nietzsche, Heidegger, Merleau-Ponty and Sartre). Eastern philosophy and psychology also play a central role in humanistic psychology, as well as Judeo-Christian philosophies of personalism, as each shares similar concerns about the nature of human existence and consciousness.
For further information on influential figures in personalism, see: Emmanuel Mounier, Gabriel Marcel, Denis de Rougemont, Jacques Maritain, Martin Buber, Emmanuel Levinas, Max Scheler and Karol Wojtyla.
Behaviorism grew out of Ivan Pavlov's work with the conditioned reflex and laid the foundations for academic psychology in the United States associated with the names of John B. Watson and B. F. Skinner. Abraham Maslow gave behaviorism the name "the first force", a force which systematically excluded the subjective data of consciousness and much information bearing on the complexity of the human personality and its development. Behavioral theory continued to develop to account for both simple and complex human behavior through theorists such as Arthur Staats, Stephen Hayes, and other post-Skinnerian researchers. Clinical behavior analysis continues to be widely employed in the treatment of anxiety disorders, mood disorders, and even personality disorders.
The "second force" arose out of Freudian psychoanalysis, which were composed by psychologists like Alfred Adler, Erik Erikson, Carl Jung, Erich Fromm, Karen Horney, Melanie Klein, Harry Stack Sullivan, and Sigmund Freud himself. Maslow then emphasized the necessity of a "third force" (even though he did not use the term), saying that "it is as if Freud supplied us the sick half of psychology and we must now fill it out with the healthy half", as a critical review towards the cold and distant approach of the psychoanalysis and its deterministic way of viewing the human being.
In the late 1950s, psychologists interested in uniquely human issues, such as the self, self-actualization, health, hope, love, creativity, nature, being, becoming, individuality, and meaning (that is, a concrete understanding of human existence), included Abraham Maslow, Carl Rogers, and Clark Moustakas, who sought to found a professional association dedicated to a psychology focused on these features of human capital demanded by post-industrial society.
The humanistic psychology perspective is summarized by five core principles or postulates of humanistic psychology first articulated in an article written by James Bugental in 1964 and adapted by Tom Greening, psychologist and long-time editor of the Journal of Humanistic Psychology. The five basic principles of humanistic psychology are:
Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.
Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.
Human beings are aware and are aware of being aware—i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.
Human beings have the ability to make choices and therefore have responsibility.
Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value, and creativity.
While humanistic psychology is a specific division within the American Psychological Association (Division 32), humanistic psychology is not so much a discipline within psychology as a perspective on the human condition that informs psychological research and practice.
Practical origins
WWII created practical pressures on military psychologists: they had more patients to see and care for than time or resources permitted. The origins of group therapy lie here. Eric Berne's progression of books shows this transition out of what we might call the pragmatic psychology of WWII into his later innovation, Transactional Analysis, one of the most influential forms of humanistic popular psychology of the late 1960s and 1970s. Even though transactional analysis was considered a unique methodology, it was challenged after Berne's death.
Orientation to scientific research
Humanistic psychologists generally do not believe that we will understand human consciousness and behavior through mainstream scientific research. The objection that humanistic psychologists have to traditional research methods is that they are derived from and suited for the physical sciences and not especially appropriate to studying the complexities and nuances of human meaning-making.
However, humanistic psychology has involved scientific research of human behavior since its inception. For example:
Abraham Maslow proposed many of his theories of human growth in the form of testable hypotheses, and he encouraged scientists to put them to the test.
Shortly after the founding of the American Association of Humanistic Psychology, its president, psychologist Sidney Jourard, began his column by declaring that "research" is a priority. "Humanistic Psychology will be best served if it is undergirded with research that seeks to throw light on the qualities of man that are uniquely human" (emphasis added)
In May 1966, the AAHP released a newsletter editorial that confirmed the humanistic psychologist's "allegiance to meaningfulness in the selection of problems for study and of research procedures, and an opposition to a primary emphasis on objectivity at the expense of significance." This underscored the importance of research to humanistic psychologists as well as their interest in special forms of human science investigation.
Likewise, in 1980, the American Psychological Association's publication for humanistic psychology (Division 32 of APA) ran an article titled "What makes research humanistic?" As Donald Polkinghorne notes, "Humanistic theory does not propose that human action is completely independent of the environment or the mechanical and organic orders of the body, but it does suggest that, within the limits of experienced meanings, persons as unities can choose to act in ways not determined by prior events...and this is the theory we seek to test through our research" (p. 3).
A human science view is not opposed to quantitative methods, but, following Edmund Husserl:
favors letting the methods be derived from the subject matter and not uncritically adopting the methods of natural science, and
advocates for methodological pluralism. Consequently, much of the subject matter of psychology lends itself to qualitative approaches (e.g. the lived experience of grief), and quantitative methods are mainly appropriate when something can be counted without leveling the phenomena (e.g. the length of time spent crying).
Research has remained part of the humanistic psychology agenda, though with more of a holistic than reductionistic focus. Specific humanistic research methods evolved in the decades following the formation of the humanistic psychology movement.
Development of the field
Saybrook Conference
In November 1964 key figures in the movement gathered at Old Saybrook (CT) for the first invitational conference on Humanistic psychology. The meeting was a co-operation between the Association for Humanistic Psychology (AHP), which sponsored the conference, the Hazen Foundation, which provided financing, and Wesleyan University, which hosted the meeting. In addition to the founding figures of Humanistic psychology; Abraham Maslow, Rollo May, James Bugental and Carl Rogers, the meeting attracted several academic profiles from the humanistic disciplines, including: Gordon Allport, George Kelly, Clark Moustakas, Gardner Murphy, Henry Murray, Robert W. White, Charlotte Bühler, Floyd Matson, Jacques Barzun, and René Dubos. Robert Knapp was chairman and Henry Murray gave the keynote address.
Among the intentions of the participants was to formulate a new vision for psychology that, in their view, took into consideration a more complete image of the person than the image presented by the current trends of Behaviorism and Freudian psychology. According to Aanstoos, Serlin & Greening the participants took issue with the positivistic trend in mainstream psychology at the time. The conference has been described as a historic event that was important for the academic status of Humanistic psychology and its future aspirations.
Major theorists
Several key theorists are considered to have prepared the ground for humanistic psychology, including Otto Rank, Abraham Maslow, Carl Rogers and Rollo May. This section provides a brief summary of individual contributions to the theory.
Abraham Maslow: In regards to humanistic theory, Maslow developed a hierarchy of needs, a pyramid stating that individuals must first have their physiological needs met, then safety, then love, then self-esteem, and lastly self-actualization. People who have met their self-actualization needs are self-aware, caring, and wise, and their interests are problem-centered. He theorized that self-actualizing people continuously strive, think broadly, and focus on broader problems. He also believed, however, that only 1% of people actually achieve self-actualization.
Carl Rogers: Rogers built upon Maslow's theory and argued that the process of self-actualization is nurtured in a growth promoting climate. Two conditions are required in order for a climate to be a self-actualizing growth promoting climate: the individual must be able to be their genuine self, and as the individual expresses their true self, they must be accepted by others.
Counseling and therapy
The aim of humanistic therapy is usually to help the client develop a stronger and healthier sense of self, also called self-actualization. Humanistic therapy attempts to teach clients that they have potential for self-fulfillment. This type of therapy is insight-based, meaning that the therapist attempts to provide the client with insights about their inner conflicts.
Approaches
Humanistic psychology includes several approaches to counseling and therapy. Among the earliest approaches we find the developmental theory of Abraham Maslow, emphasizing a hierarchy of needs and motivations; the existential psychology of Rollo May acknowledging human choice and the tragic aspects of human existence; and the person-centered or client-centered therapy of Carl Rogers, which is centered on the client's capacity for self-direction and understanding of his or her own development. Client-centered therapy is non-directive; the therapist listens to the client without judgement, allowing the client to come to insights by themselves. The therapist should ensure that all of the client's feelings are being considered and that the therapist has a firm grasp on the concerns of the client while ensuring that there is an air of acceptance and warmth. Client-centered therapists engage in active listening during therapy sessions.
A therapist cannot be completely non-directive; however, a nonjudgmental, accepting environment that provides unconditional positive regard will encourage feelings of acceptance and value.
Existential psychotherapies, an application of humanistic psychology, apply existential philosophy, which emphasizes the idea that humans have the freedom to make sense of their lives. They are free to define themselves and do whatever it is they want to do. This is a type of humanistic therapy that forces the client to explore the meaning of their life, as well as its purpose. There is a conflict between having freedoms and having limitations. Examples of limitations include genetics, culture, and many other factors. Existential therapy involves trying to resolve this conflict.
Another approach to humanistic counseling and therapy is Gestalt therapy, which puts a focus on the here and now, especially as an opportunity to look past any preconceived notions and focus on how the present is affected by the past. Role playing also plays a large role in Gestalt therapy and allows for a true expression of feelings that may not have been shared in other circumstances. In Gestalt therapy, non-verbal cues are an important indicator of how the client may actually be feeling, despite the feelings expressed.
Also part of the range of humanistic psychotherapy are concepts from depth therapy, holistic health, encounter groups, sensitivity training, marital and family therapies, body work, the existential psychotherapy of Medard Boss, and positive psychology.
Empathy and self-help
Empathy is one of the most important features of humanistic therapy. This idea focuses on the therapist's ability to see the world through the eyes of the client. Without this, therapists can be forced to apply an external frame of reference where the therapist is no longer understanding the actions and thoughts of the client as the client would, but strictly as a therapist, which defeats the purpose of humanistic therapy. Alongside empathy, unconditional positive regard is one of the key elements of humanistic psychology. Unconditional positive regard refers to the care that the therapist needs to have for the client. This ensures that the therapist does not become the authority figure in the relationship, allowing for a more open flow of information as well as a kinder relationship between the two. A therapist practicing humanistic therapy needs to show a willingness to listen and ensure the comfort of the patient, where genuine feelings may be shared but are not forced upon someone. Marshall Rosenberg, one of Carl Rogers' students, emphasizes empathy in the relationship in his concept of Nonviolent Communication.
Self-help is also part of humanistic psychology: Sheila Ernst and Lucy Goodison have described using some of the main humanistic approaches in self-help groups. Humanistic psychology is applicable to self-help because it is oriented towards changing the way a person thinks; one can only improve once one decides to change one's way of thinking about oneself, once one decides to help oneself. Co-counselling, an approach based purely on self-help, is regarded as coming from humanistic psychology as well. Humanistic theory has had a strong influence on other forms of popular therapy, including Harvey Jackins' Re-evaluation Counselling and the work of Carl Rogers, including that of his student Eugene Gendlin (see Focusing), as well as on the development of Humanistic Psychodrama by Hans-Werner Gessmann since the 1980s.
Ideal and real selves
The ideal self and real self involve understanding the issues that arise from having an idea of what you wish you were as a person and having that not match who you actually are as a person (incongruence). The ideal self is what a person believes should be done, as well as what their core values are. The real self is what is actually played out in life. Through humanistic therapy, an understanding of the present allows clients to add positive experiences to their real self-concept. The goal is to have the two concepts of self become congruent. Rogers believed that a real relationship occurs in therapy only when the therapist is able to be congruent. It is much easier to trust someone who is willing to share feelings openly, even if it may not be what the client always wants; this allows the therapist to foster a strong relationship.
Non-pathological
Humanistic psychology tends to look beyond the medical model of psychology in order to open up a non-pathologizing view of the person. This usually implies that the therapist downplays the pathological aspects of a person's life in favour of the healthy aspects. Humanistic psychology tries to be a science of human experience, focusing on the actual lived experience of persons. Therefore, a key ingredient is the actual meeting of therapist and client and the possibilities for dialogue to ensue between them. The role of the therapist is to create an environment where the client can freely express any thoughts or feelings; he does not suggest topics for conversation nor does he guide the conversation in any way. The therapist also does not analyze or interpret the client's behavior or any information the client shares. The role of the therapist is to provide empathy and to listen attentively to the client.
Societal applications
Social change
While personal transformation may be the primary focus of most humanistic psychologists, many also investigate pressing social, cultural, and gender issues. In an academic anthology from 2018, British psychologist Richard House and his co-editors wrote, "From its very outset, Humanistic Psychology has engaged fulsomely and fearlessly with the social, cultural and political, in a way that much of mainstream scientific, 'positivistic' psychology has sought to avoid".
Some of the earliest writers who were associated with and inspired by psychological humanism explored socio-political topics. For example:
Alfred Adler argued that achieving a sense of community feeling is essential to human development.
Medard Boss defined health as an openness to the world, and unhealth as anything in the psyche or society that blocked or constricted that openness.
Erich Fromm argued that the totalitarian impulse is rooted in people's fear of the uncertainties and responsibilities of freedom – and that the way to overcome that fear is to dare to live life fully and compassionately.
R. D. Laing analyzed the political nature of "normal", everyday experience.
Rollo May said that people have lost their values in the modern world, and that their health and humanity depends on having the courage to forge new values appropriate to the challenges of the present.
Wilhelm Reich argued that psychological problems are often caused by sexual repression, and that the latter is influenced by social and political conditions – which can and should be changed.
Carl Rogers came to believe that political life did not have to consist of an endless series of winner-take-all battles, that it could and should consist of an ongoing dialogue among all parties. If such dialogue were characterized by respect among the parties and authentic speaking by each party, compassionate understanding and – ultimately – mutually acceptable solutions could be reached.
Virginia Satir was convinced that her approach to family therapy would enable individuals to expand their consciousness, become less fearful, and bring communities, cultures, and nations together.
Relevant work was not confined to these pioneer thinkers. In 1978, members of the Association for Humanistic Psychology (AHP) embarked on a three-year effort to explore how the principles of humanistic psychology could be used to further the process of positive social and political change. The effort included a "12-Hour Political Party", held in San Francisco in 1980, where nearly 1,400 attendees discussed presentations by such non-traditional social thinkers as Ecotopia author Ernest Callenbach, Aquarian Conspiracy author Marilyn Ferguson, Person/Planet author Theodore Roszak, and New Age Politics author Mark Satin. The emergent perspective was summarized in a manifesto by AHP President George Leonard. It proffered such ideas as moving to a slow-growth or no-growth economy, decentralizing and "deprofessionalizing" society, and teaching social and emotional competencies in order to provide a foundation for more humane public policies and a healthier culture.
There have been many other attempts to articulate humanistic-psychology-oriented approaches to social change. For example, in 1979 psychologist Kenneth Lux and economist Mark A. Lutz called for a new economics based on humanistic psychology rather than utilitarianism. Also in 1979, California state legislator John Vasconcellos published a book calling for the integration of liberal politics and humanistic-psychological insight. From 1979 to 1983 the New World Alliance, a U.S. political organization based in Washington, D.C., attempted to inject humanistic-psychology ideas into political thinking and processes; sponsors of its newsletter included Vasconcellos and Carl Rogers.
In 1989 Maureen O'Hara, who had worked with both Carl Rogers and Paulo Freire, pointed to a convergence between the two thinkers. According to O'Hara, both focus on developing critical consciousness of situations which oppress and dehumanize. Throughout the 1980s and 1990s, Institute of Noetic Sciences president Willis Harman argued that significant social change cannot occur without significant consciousness change. In the 21st century, influenced by humanistic psychology, people such as Edmund Bourne, Joanna Macy, and Marshall Rosenberg continued to apply psychological insights to social and political issues.
In addition to its uses in thinking about social change, humanistic psychology is considered to be the main theoretical and methodological source of humanistic social work.
Social work
After psychotherapy, social work is the most important beneficiary of humanistic psychology's theory and methodology. These theories have produced a deep reform of modern social work practice and theory, leading, among other things, to the emergence of a particular theory and methodology: humanistic social work. Most values and principles of humanistic social work practice, described by Malcolm Payne in his book Humanistic Social Work: Core Principles in Practice, directly originate from humanistic psychological theory and humanistic psychotherapy practice, namely creativity in human life and practice, developing self and spirituality, developing security and resilience, accountability, and flexibility and complexity in human life and practice.
Furthermore, the representation and approach of the client (as a human being) and of the social issue (as a human issue) in social work are made from the humanistic psychology position. According to Petru Stefaroi, the way this humanistic representation and approach of the client and their personality is realized constitutes, in fact, the theoretical-axiological and methodological foundation of humanistic social work.
In setting goals and intervention activities to solve social and human problems, critical terms and categories of humanistic psychology and psychotherapy prevail, such as: self-actualization, human potential, holistic approach, human being, free will, subjectivity, human experience, self-determination/development, spirituality, creativity, positive thinking, client-centered and context-centered approach/intervention, empathy, personal growth, and empowerment. Humanistic psychology has been utilised as a framework for theorizing the African philosophy of Ubuntu in social work practice. In addition, humanistic social work calls for the pursuit of social justice, holistic service provision, technological innovation and stewardship, and dialogue and cooperation, as well as professional care and peer support during the COVID-19 pandemic.
Creativity in corporations
Humanistic psychology's emphasis on creativity and wholeness created a foundation for new approaches to human capital in the workplace stressing creativity and the relevance of emotional interactions. Previously, the connotations of "creativity" were reserved for, and primarily restricted to, working artists. In the 1980s, with increasing numbers of people working in the cognitive-cultural economy, creativity came to be seen as a useful commodity and competitive edge for international brands. This led to corporate in-service creativity trainings for employees, led pre-eminently by Ned Herrmann at G.E. in the late 1970s.
See also
References
Further reading
Arnold, Kyle (2014). Behind the Mirror: Reflective Listening and its Tain in the Work of Carl Rogers. The Humanistic Psychologist, 42(4), 354–369.
Bendeck Sotillos, S. (Ed.). (2013). Psychology and the Perennial Philosophy: Studies in Comparative Religion. Bloomington, IN: World Wisdom.
Bugental, J. F. T. (Ed.). (1967). Challenges of humanistic psychology. New York, NY: McGraw-Hill.
Bugental, J. F. T. (1964). "The Third Force in Psychology". Journal of Humanistic Psychology, 4(1), 19–25.
Buhler, C., & Allen, M. (1972). Introduction to humanistic psychology. Monterey CA: Brooks/Cole Pub. Co.
Chiang, H. -M., & Maslow, A. H. (1977). The healthy personality (Second ed.). New York, NY: D. Van Nostrand Co.
DeCarvalho, R. J. (1991). The founders of humanistic psychology. New York, NY: Praeger Publishers.
Frick, W. B. (1989). Humanistic psychology: Conversations with Abraham Maslow, Gardner Murphy, Carl Rogers. Bristol, IN: Wyndham Hall Press. (Original work published 1971)
Fromm, E. (1955). The sane society. Oxford, England: Rinehart & Co.
Gessmann, H.-W. (2012). Humanistic Psychology and Humanistic Psychodrama (Гуманистическая психология и гуманистическая психодрама). Moscow. jurpsy.ru/lib/books/id/25808.php
Gunn, Jacqueline Simon; Arnold, Kyle; Freeman, Erica (2015). The Dynamic Self Searching for Growth and Authenticity: Karen Horney's Contribution to Humanistic Psychology. The Forum of the American Academy of Psychoanalysis and Dynamic Psychiatry, 59(2), 20–23.
Kress, Oliver (1993). "A new approach to cognitive development: ontogenesis and the process of initiation". Evolution and Cognition 2(4): 319-332.
Maddi, S. R., & Costa, P. T. (1972). Humanism in personology: Allport, Maslow, and Murray. Chicago, IL: Aldine·Atherton.
Misiak, H., & Sexton, V. S. J. A. (1973). Phenomenological, existential, and humanistic psychologies: A historical survey. New York, NY: Grune & Stratton.
Moss, D. (1999). Humanistic and transpersonal psychology: A historical and biographical sourcebook. Westport, CT: Greenwood Press.
Moustakas, C. E. (1956). The self: Explorations in personal growth. Harper & Row.
Murphy, G. (1958). Human potentialities. New York, NY: Basic Books.
Nevill, D. D. (1977). Humanistic psychology: New frontiers. New York, NY: Gardner Press.
Otto, H. A. (1968). Human potentialities: The challenge and the promise. St. Louis, MO: WH Green.
Rogers, C. R., Lyon, H. C. Jr., & Tausch, R. (2013). On Becoming an Effective Teacher: Person-centered teaching, psychology, philosophy, and dialogues with Carl R. Rogers and Harold Lyon. London: Routledge.
Rowan, John (2001). Ordinary Ecstasy: The Dialectics of Humanistic Psychology (3rd ed.). Brunner-Routledge.
Schneider, K., Bugental, J. F. T., & Pierson, J. F. (2001). The handbook of humanistic psychology: Leading edges in theory, research, and practice. London: SAGE.
Schneider, K.J., ed (2008). Existential-integrative Psychotherapy: Guideposts to the Core of Practice. New York: Routledge.
Severin, F. T. (1973). Discovering man in psychology: A humanistic approach. New York, NY: McGraw-Hill.
Singh, J. (1979). The humanistic view of man. New Delhi, India: Indian Institute of Public Administration.
Sutich, A. J., & Vich, M. A. (Eds.). (1969). Readings in humanistic psychology. New York, NY: Free Press.
Welch, I., Tate, G., & Richards, F. (Eds.). (1978). Humanistic psychology: A source book. Buffalo, NY: Prometheus Books.
Zucker, R. A., Rabin, A. I., Aronoff, J., & Frank, S. (Eds.). (1992). Personality structure in the life course. New York, NY: Springer.
External links
What Is Humanistic Psychology?
Association for Humanistic Psychology
Society for Humanistic Psychology, Division 32 of the American Psychological Association
University of West Georgia's Humanistic Psychology Program
All about Humanistic Psychology
Existential therapy
Community education
Community education, also known as Community-Based Education, Community Learning & Development, or Development Education, refers to an organization's programs to promote learning and social development work with individuals and groups in their communities, using a range of formal and informal methods. A common defining feature is that programmes and activities are developed in dialogue with communities and participants. The purpose of community learning and development is to develop the capacity of individuals and groups of all ages, through their actions and the capacity of communities, to improve their quality of life. Central to this is their ability to participate in democratic processes.
Community education encompasses all those occupations and approaches concerned with running education and development programmes within local communities, rather than within educational institutions such as schools, colleges and universities. The latter is known as the formal education system, whereas community education is sometimes called informal education. Community education has long been critical of aspects of the formal education system for failing large sections of the population in all countries, and has had a particular concern for taking learning and development opportunities out to poorer areas, although it can be provided more broadly.
Job titles are myriad, and employers include public authorities and voluntary or non-governmental organisations, funded by the state and by independent grant-making bodies. Schools, colleges and universities may also support community learning and development through outreach work within communities. The community schools movement has been a strong proponent of this since the 1960s. Some universities and colleges have run outreach adult education programmes within local communities for decades. Since the 1970s the prefix 'community' has also been adopted by several other occupations, from youth workers and health workers to planners and architects, who work with more disadvantaged groups and communities and have been influenced by community education and community development approaches.
Community educators have over many years developed a range of skills and approaches for working within local communities, in particular with disadvantaged people. These include less formal educational methods, community organising and group-work skills. Since the 1960s and 1970s, through the various anti-poverty programmes in both developed and developing countries, practitioners have been influenced by structural analyses of the causes of disadvantage and poverty, that is, inequalities in the distribution of wealth, income, land and especially political power, and by the need to mobilise people power to effect social change. Hence the influence of educators such as Paulo Freire, for whom this work was also about politicising the poor.
In the history of community education and community learning and development, the UK has played a significant role by hosting the two main international bodies representing community education and community development: the International Community Education Association (ICEA), for many years based at the Community Education Development Centre (CEDC) in Coventry, UK, and the International Association for Community Development (IACD), which still has its headquarters in Scotland. ICEA and CEDC have now closed. In the 1990s there was some thought as to whether these two bodies might merge. The term community learning and development has not been taken up widely in other countries, although community learning and development approaches are recognised internationally. These methods and approaches have been acknowledged as significant for local social, economic, cultural, environmental and political development by such organisations as the UN, WHO, OECD, World Bank, Council of Europe and EU.
Definition
Community education is often used interchangeably with adult education in Europe, particularly in the United Kingdom. For example, night schools and classes in village halls or community centres have been key opportunities for learning in communities outside of traditional school. Community education bridges the gap between adult education, lifelong learning and community development. John Rennie, former director of the Community Education Development Centre in Coventry, wrote that there are five tenets defining community education: (1) the best solutions come from collective knowledge and shared experiences involving the community, (2) education is a lifelong activity, (3) use of a variety of resources, (4) each person has a contribution to make, and (5) a sense of citizenship. In the 1960s and 1970s, the UK saw an increase in community development and community action organisations, which had the potential to blur the lines between the responsibilities of adult education and community development. However, poverty and social disadvantage emphasised the need for adult education opportunities, and a community approach supported the need to meet individual circumstances and to understand barriers to learning. Ian Martin, Honorary Fellow, Community and Society at the University of Edinburgh, has argued that community education "will allow genuinely alternative and democratic agendas to emerge at the local level." The 1996 UNESCO report Learning: The Treasure Within promoted learning throughout life as benefiting both society and individuals, because it allows them to respond to the changing labour market and social landscape.
History
Community development and planning became more of a priority after the Second World War and the decolonisation of states in Asia, Africa, Oceania and the Caribbean. Community development was first significantly promoted by the United Nations (UN) in the 1950s as a way to develop the socioeconomic prospects of low-income countries by supporting education, housing and healthcare infrastructure. The UN established a Regional and Community Development Division and a Community Development and Organization Section. In the 1970s, a shift in adult education saw practitioners experiment with more informal outreach work within local communities. The International Association for Community Development (IACD) was established in 1953 in the United States, and has since gone on to represent community development at the UN and to partner with organisations in the United States, UK, Canada, Hungary, Hong Kong, Australia, Europe, New Zealand, Nigeria, India, the Philippines, Georgia, Ireland and Kenya. The IACD includes community education as a way in which community development can empower people within their communities.
In the UK
England and Wales
In July 1917 the British government, under Lloyd George, established the Ministry of Reconstruction. This governmental department aimed to address a number of political and social areas, including employment, housing and industrial relations. In 1919, the Ministry of Reconstruction Adult Education Committee (AEC) published its Final Report, in which it argued that adult education was a "permanent national necessity." The AEC was chaired by A.L. Smith, and its members included the historian and social critic R.H. Tawney. Tawney believed that adult education was a democratic, bottom-up process that acts as a space for individuals to challenge and change their community. In the Final Report the need for adult education is described in terms of individuals' desire for "adequate opportunities for self-expression and the cultivation of their personal powers and interests." The 1919 Final Report identified a number of challenges that education might help to address, including international cooperation, gender equality, maintaining democracy, and employment and the quality of work.
The UK underwent a reform in social welfare during and after the inter-war period. Community centres were built in newly established suburban housing estates under the 1936 Housing Act and the 1937 Physical Training and Recreation Act, and the Education Act 1944 introduced the Youth and Community Service. In the Ministry of Education pamphlet A Guide to the Education System of England and Wales (1945) it is stated that:
A 1944 booklet entitled Citizen Centres for Adult Education, by the Education Settlements Association (see Settlement movement), posits adult education as vital to the "social reconstruction" of post-war Britain. One of the main challenges identified in the booklet is the provision of centres; it states that "the primary function of any local citizen centre should be the progressive development of the individual as a member of a free society, through mental training, the encouragement of self-effort, and the exercise of personal responsibility."
During the 1960s, Britain experienced increased poverty. In a 1965 survey entitled The Poor and the Poorest, Peter Townsend and Brian Abel-Smith measured poverty as the rate of people receiving National Assistance and, from this, found that approximately 14% of British people were living in poverty. The British social researcher Richard Titmuss published his book Income Distribution and Social Change in 1962, arguing that the wealth divide between classes was much wider than shown in official statistics. In 1965 the Seebohm Committee was established to investigate and review the work of social services in Britain. The subsequent Seebohm Report, published in 1968, recommended greater integration between social care services and other health and social welfare services, particularly proposing the creation of a single family services department. The Seebohm Committee's work bolstered interest in community work, which was seen as a way to facilitate plans for social change. The Seebohm Report argued that, in order to prevent delinquency, social work should be involved in encouraging positive community values and empowering people to help themselves. In 1965 a study group, chaired by Dame Eileen Younghusband and funded by the Calouste Gulbenkian Foundation, investigated the role of community work in Britain and how best to train community workers. The study group published its findings in 1968, defining community work "as a means of giving life to local democracy" and describing it as important for coordinating and developing "services within and among organisations in a local community." In response to concern about poverty and social inequalities in Britain, Prime Minister Harold Wilson introduced the National Community Development Projects (CDPs) in 1969. This subsequently influenced the creation of the Urban Aid Programme, which allocated grants to local authorities to support education, housing, and social care organisations. Martin Loney described the CDPs as "the story of Britain's largest ever government funded social experiment." The rationale of the CDPs, and of similar American projects such as the Community Action Programmes for Juvenile Delinquency, was that social issues were local and caused by individual pathology.
CDPs were established in 12 cities and towns: Coventry, Liverpool, Southwark, Glyncorrwg, Batley, Birmingham, Canning Town, Cumbria, Newcastle, Oldham, Paisley and North Shields. The aim was for researchers to identify local issues and work alongside the local community to provide and evaluate different methods of intervention. A number of reports were published following the CDPs' research, particularly by the North Tyneside CDP; these include Whatever Happened to Council Housing (1976), Gilding the Ghetto (1979) and Costs of Industrial Change (1981).
In 1973, Adult Education: A Plan for Development, also known as the Russell Report, was published by the Department of Education. The Russell Committee was chaired by Sir Lionel Russell and had been established by the Labour Government in 1969; however, after the 1970 election the Conservative Government under Edward Heath came to power, with Margaret Thatcher as Education Secretary, and this may have affected the approach of the Committee. The report recognised that there was increased demand for adult education and that "modest" investment could benefit adult education greatly by making better use of existing resources. The General Statement of the Russell Report explained:
The Committee expressed the view that adult learning should be directed by the learner's individual needs, e.g. for vocational reasons, for employment or to upskill in a job. The Report outlined ten important recommendations: (1) establishment of a Development Council for Adult Education for England and Wales; (2) ongoing partnership between statutory and voluntary bodies in providing adult learning; (3) guidance from the Secretary of State, in accordance with the Education Act 1944, on how Local Education Authorities (LEAs) should provide adult learning; (4) an increase in the number of full-time staff employed in adult learning, with appropriate career and salary structures; (5) access to qualifications at all levels for adult learners; (6) targeted provision for "disadvantaged adults"; (7) an increase in accommodation and premises available for adult learning; (8) maintenance of the funding structure for universities; (9) funding of the Workers' Educational Association (WEA) by LEAs and the Department for Education; and (10) greater opportunities for residential adult education. The Russell Committee was obliged to focus on non-vocational adult education. The Report supported use of the Direct Grant, with funds specifically stipulated for adult learning bodies. The Committee conducted research for the Report and recommended that programmes for learning should be developed specifically for disadvantaged individuals. 'Disadvantaged' is defined in the Russell Report in terms of "the extent to which integration into society" is influenced by physical or mental health, poverty or social deprivation, lack of basic education, learning impairment or language barriers.
The Advisory Council for Adult and Continuing Education (ACACE) was established in 1977, following the Russell Report, and was chaired by the British sociologist Richard Hoggart until 1983. The Council established committees related to national educational policies and to conceptualising continuing education, for example integrating higher education and vocational training. In 1979, the ACACE carried out a survey of adult learners' access to higher education and concluded that "recurrent post-secondary education could be established without heavy new expenditure, especially on capital projects. The basis of the system is there." In another 1979 paper, entitled Towards Continuing Education: a discussion paper, the ACACE argued that adult education should include vocational training under the Employment Acts and the Education Acts. The ACACE defined adult education as a social policy concept, meaning that it would address issues relating to social change and the economy, and Naomi McIntosh argued that the Council helped to change people's attitudes about adult education.
In 1987, National Vocational Qualifications (NVQs) were introduced in England, Wales and Northern Ireland as a framework to standardise vocational qualifications. This followed the creation of the National Council for Vocational Qualifications (NCVQ), consisting of members appointed by the Secretary of State for Education and Employment. The Council aimed to accredit qualifications and assign levels to them within the NVQ framework. Criticisms of the framework, however, included reduced flexibility for learners, excessive bureaucracy and the expense of new assessment procedures. In his book Russell and After: The Politics of Adult Learning (1969-1997), Peter Clyne argues that "by concentrating on vocational qualifications and work-related skills and knowledge, the NCVQ was moving against the flow of the conclusions and recommendations of the Russell Committee and ACACE. A potentially damaging rift was being created between different forms of adult learning."
In February 1998 the UK Government published a Green Paper entitled The Learning Age: a renaissance for a new Britain, presented by the then Secretary of State for Education and Employment, David Blunkett. The paper describes learning as "contributing to social cohesion" and states that it "fosters a sense of belonging, responsibility and identity." The paper also proposes setting up an Adult and Community Learning Fund "to sustain and encourage new schemes locally that help men and women gain access to education, including literacy and numeracy."
National Training Organisations (NTOs) worked in partnership across education with the government and third sector to identify skill shortages, develop occupational standards, provide advice on training and communicate between partners. PAULO was an NTO for community-based learning and development established in January 2000. PAULO was concerned with the educational needs not only of learners but also of staff and their training, focusing on: appropriate community venues; prioritising voluntary learning; emphasising links between learning, individual and collective action and citizenship; promoting social inclusion and equality; and widening participation in lifelong learning.
The Learning and Skills Act 2000 established the Learning and Skills Council (LSC) to ensure provision of education and training for young people and adults. Local LSCs were also established to guide local education authorities in providing adult and community learning opportunities. The Adult Learning Inspectorate (ALI) was a non-departmental public body established under the 2000 Act and headed by Chief Inspector David Sherlock. The Office for Standards in Education (Ofsted), established in 1992, took over responsibility for inspecting adult learning and community education in 2007, replacing the ALI.
The Institute for Employment Studies, alongside the National Institute of Adult Continuing Education (NIACE), published Adult Learning in England: a Review in 2000, giving an account of the services involved in providing community education. NIACE was an educational charity, founded in 1921 to promote adult learning in England and Wales, before it became part of the Learning and Work Institute in 2016. The main agencies and services identified in the 2000 review are listed below in their current iterations:
Ministerial Departments e.g. the Department for Education
Jobcentre Plus (as part of the Department for Work and Pensions)
Local Education Authorities (LEAs)
Education and Skills Funding Agency
Voluntary and charity organisations
The National Careers Service
The Open University (OU)
BBC Schools
Employers
Trade Unions
Adult education is devolved to the UK's nations, and within England to some regional authorities. Devolution deals in England were established under the Apprenticeships, Skills, Children and Learning Act 2009. Devolved authorities are responsible for allocating the Adult Education Budget (AEB) and meeting the needs of local employers. Between 2018 and 2019, adult education functions were transferred to certain mayoral combined authorities (MCAs) under the Local Democracy, Economic Development and Construction Act 2009. For 2022 to 2023, the Department for Education had devolved approximately 60% of the AEB to 9 MCAs and the Mayor of London. The regional authorities were: Cambridgeshire and Peterborough, the Greater London Authority, Greater Manchester, Liverpool City Region, North of Tyne, South Yorkshire, Tees Valley, West Midlands, West of England, and West Yorkshire. Between August and December 2022, under the Sunak Government, the Department for Education established devolution deals with Cornwall, the East Midlands, Norfolk, the North East, Suffolk, and York and North Yorkshire.
In January 2021, the Department for Education published the White Paper "Skills for Jobs: Lifelong Learning for Opportunity and Growth", which aimed to "strengthen links between employers and further education providers." In 2020 the then Prime Minister Boris Johnson delivered a speech on the Lifetime Skills Guarantee in which he stated that the education system in England "will move to a system where every student will have a flexible lifelong loan entitlement to four years of post-18 education – and suddenly, with that four year entitlement, and with the same funding mechanism, you bring universities and FE closer together." The White Paper made recommendations aimed at delivering the Lifetime Skills Guarantee by implementing a flexible Lifelong Loan Entitlement "to the equivalent of four years post-18 education from 2025." A major theme of the Paper was the role of employers working with education providers, and it recommends developing 'Local Skills Improvement Plans' to match skills with the needs of the labour market, covering technical skills as well as improving English, maths and digital skills. The Paper proposed £2.5 billion for a 'National Skills Fund' to upskill and reskill adult learners. Statutory guidance for Local Skills Improvement Plans was published in October 2022, in reference to the Skills and Post-16 Education Act 2022; it states that the Plans should set out key priorities, represent the needs of employers, and show how to address employers' skill needs in partnership with local education services.
Welsh policy
In 2012, the Welsh Government published guidance for providers of adult education and adult learners entitled "Education for Sustainable Development and Global Citizenship: A common understanding for the adult and community learning sector." The report encouraged partnership between Adult and Community Learning (ACL) practitioners and Education for Sustainable Development and Global Citizenship (ESDGC):
ESDGC is defined as supporting individuals to understand issues around climate change, food provision, biodiversity, international wars, terrorism and poverty.
The current adult and community learning strategy in Wales was published in 2017. The policy states its vision as "a Wales where learning is at the core of all we do; where participation in learning is encouraged and rewarded; and where people have equal opportunities to gain the skills for life and work that they need to prosper." The focus is on supporting adults with essential skills such as communication, ESOL, numeracy, digital skills, and employability skills.
Scotland
Community education in Scotland was established after the publication in 1975 of the Alexander Report, entitled Adult Education: the Challenge of Change and chaired by Sir Kenneth Alexander (Scottish Education Department (1975) Adult Education: the challenge of change. Report by a Committee of Inquiry (the Alexander Report). Edinburgh: HMSO). The Alexander Report encouraged merging adult education, youth work and community development into one service. The Report referred to adult education as "voluntary leisure time courses" which "have no specific vocational purpose and which are voluntarily attended by a student in the time when he is not engaged in his normal daily occupation." Recommendations also included prioritising areas of multiple deprivation to serve disadvantaged communities, making better use of colleges, and the establishment of a Scottish Council for Community Education. The publication of the Report coincided with local government reform in Scotland, which saw the creation of new, larger local authorities under the Local Government (Scotland) Act 1973. As a result of the Report, adult education, community development and youth work were redefined as the Community Education Service, but many adult educators and youth workers did not change their professional approach. A report about training was developed by the Scottish Community Education Council in 1984.
The Scottish Community Education Council (SCEC) was established in 1982, first chaired by the Scottish academic and activist Baroness Elizabeth Carnegy. The SCEC went on to be chaired by Ralph Wilson, Dorothy Dalton, Esther Robertson and Charlie McConnell until 2002. Training for Change was a report published after the SCEC created a second working party on training, chaired by Geoffrey Drought, in 1984; it defined community education as "purposive developmental and educational programmes and structures which afford opportunities for individual and collective growth and change throughout life" (Scottish Community Education Council (1984). Training for Change. Edinburgh: SCEC). The Report focused on the need to provide flexible community education training and recommended improving the quality of fieldwork practice and supervision for community educators. The 1980s saw the expansion of community education projects in some of the largest Scottish local authority areas, for example Strathclyde, Lothian and Tayside. In 1979, the Adult Learning Project (ALP) was established, introducing a number of learning initiatives in the Gorgie Dalry area of Edinburgh. The ALP was financed by the Scottish Education Department and Lothian Regional Council as part of the Urban Aid project. The ALP developed learning opportunities by conducting secondary-source investigation into the local area; conducting primary-source investigation by making contact with people in the community; finding co-investigators by recruiting volunteers from the public; building codifications, which involved codifying themes from the findings to understand objectives and actions for the project to meet the needs of the community; and, lastly, developing appropriate learning opportunities (Campbell, Luke (November 2019). The Adult Learning Project in the Age of Austerity (PDF). E.S.R.E.A conference, Network Access, Learning Careers and Identity. Accessed: 10 August 2024). In 1989, the Community Education Validation and Endorsement Group (CeVe) was created with the remit to develop guidelines for the validation of community education training; the competencies it developed included engaging appropriately with local communities, empowering individuals and groups, and gathering and using evaluative data to improve and develop programmes.
In 1998, a working group chaired by Douglas Osler was created to explore community education in Scotland, working alongside the Convention of Scottish Local Authorities (COSLA). COSLA was established in 1975 as a national association of Scottish councils and defines itself as "the voice of Local Government in Scotland." The Osler Report, entitled "Communities: Change through Learning" and published in November 1998, aimed to address "long-term confusion between community education as a way of working and community education as an amalgam of the 3 fields." Gordon Mackie et al. (2011) argue that the Osler Committee approached community education under the vision of New Labour's Third Way, seeing it as a technique rather than as one means of a service to the community. The Osler Report recommended that local authorities should develop community learning plans that met the needs of communities and also contributed to the Government's aims for social inclusion, lifelong learning and active citizenship.
Following the Osler Report, the Scottish Executive published "Working and Learning Together to build stronger communities" in 2004, in which the community education service was renamed 'Community Learning and Development' (CLD). It set out three national priorities for CLD: (1) achievement through learning for adults, by providing learning opportunities to improve core skills of literacy, numeracy, communications, working with others, problem-solving and information and communications technology (ICT); (2) achievement through learning for young people; and (3) achievement through building community capacity. Structurally, practice changed from a Community Education Service to local and regional CLD Partnerships. Partnerships, in the 2004 report, are made up of local authority services (e.g. education, social work, community services, environmental protection, housing, arts and leisure, and libraries), as well as agency and voluntary partners representing particular issues (e.g. tenants' organisations, local youth and community councils, and equalities groups). The Scottish Executive also established a Ministerial Advisory Committee to conduct the Community Education Training Review (CETR), resulting in "Empowered to Practice: The future of community learning and development training in Scotland" in 2003. This report concluded that there should be a stronger disciplinary system for practitioners, which might be achieved by greater management in the service. The recommendations from this report were taken further with the creation of a Short Life Task Group (SLTG) in 2004, chaired by Professor Ted Milburn.
The SLTG was given the remit to advise Ministers "regarding the establishment of a practitioner-led body responsible for validation, endorsement, accreditation and registration for community learning and development, with enhanced capacity, building upon the work of CeVe (Community Education Validation and Endorsement)." Findings from the SLTG resulted in the publication of the Milburn Report, entitled "Strengthening Standards: Improving the Quality of Community Learning and Development Service Delivery", in January 2006. Its main distinction was the recommendation to treat practice as a profession funded by the government. The proposed professional body would have "independent status" and responsibilities including developing a qualifications framework for continuing professional development (CPD), and registering practitioners so as to distinguish between those qualified in CLD or Community Education and those who are not. Ministers agreed to establish a CLD Standards Council, and an interim Council was established in 2007. In February 2008, the Cabinet Secretary for Education and Lifelong Learning directed the CLD Standards Council to:
Deliver a professional approvals structure for qualifications, courses and development for CLD practitioners
Consider and establish a registration system for practitioners
Develop and establish a model of CPD and training for practitioners
Development of the CLD Standards Council was officially completed in December 2008.
The most recent review of Community Learning and Development (CLD) in Scotland was commissioned by the then Minister for Higher and Further Education and Veterans, Graeme Dey, in December 2023 and led by independent reviewer Kate Still. The report, entitled "Learning: For All. For Life. A report from the Independent Review of Community Learning and Development (CLD)", was published in July 2024. Emphasis was placed on "the extent to which CLD is contributing to delivering positive outcomes in line with Scottish Government priorities, including examination of the respective roles and responsibilities of those involved." The review made a number of recommendations in six key areas: (1) leadership and structures, (2) overarching policy narratives, (3) focus on delivery, (4) budgets and funding, (5) developing the workforce and standards, and (6) demonstrating impact. Recommendations under the first key area included establishing a joint CLD Strategic Leadership Group between the Scottish Government and COSLA, improving consistency within local authority structures, and regular reporting to the Scottish Government. The second key area recommended developing a clear and cohesive policy narrative on lifelong learning. The focus-on-delivery recommendations emphasised the importance of establishing a detailed, prioritised and timed delivery plan and tackling the "current ESOL crisis". The fourth key area recommended reassessing the current balance of spending, and the fifth recommended developing a CLD Workforce Plan. Lastly, demonstrating impact involved recommendations such as funding Scotland's participation in the OECD International Survey of Adult Skills (PIAAC) and creating an annual celebration of CLD success.
CLD is described by the Scottish Government as:
The main statutory basis for CLD is currently provided by The Requirements for Community Learning and Development (Scotland) Regulations 2013. The requirements legally apply only to local authorities, but are intended for all those who partner with local authorities in working towards shared outcomes for CLD. The policy includes strategic guidance on how local authorities can develop partnership working, identify a range of partners, develop activities to deliver CLD outcomes, and improve performance. Practitioners with relevant experience can work in CLD without formal qualifications, but local authorities often require practitioners to complete an approved professional qualification. Undergraduate and postgraduate qualifications in the subject are offered at the following Scottish universities: University of Dundee, The University of Edinburgh, University of Glasgow, University of the Highlands and Islands and University of the West of Scotland.
Republic of Ireland
In the 1960s, there was limited government investment in community education for adults. A committee chaired by Con Murphy, appointed in 1969 by Brian Lenihan, Minister for Education, defined adult education as facilities for adults outside of full-time school education to "learn whatever they need to learn at any period of their lives." The report, entitled Adult Education in Ireland and known as the Murphy Report, was, like the Russell Report, published in 1973. It went on to provide five features of adult education that must be met for the term to apply: (1) it must be "purposefully educative", meaning the learner must be motivated to learn; (2) it must be "systematic", to reach agreed learning outcomes; (3) it must last for longer than a single session; (4) it must be an alternative to self-directed learning, requiring some tuition; and (5) it must be "continuously evaluated or assessed and reinforced". The Murphy Report outlined 22 recommendations to develop adult education in Ireland, which included the need for better understanding of the literacy challenges facing adults.
In 1969 the non-governmental organisation Aos Oideachais Náisiúnta Trí Aontú Saorálach, or The Irish National Association of Adult Education (AONTAS), was created by a group of individuals interested in community adult learning. The idea for AONTAS was first formed by Liam Carey, of the Dublin Institute of Adult Education, after he delivered a seminar about adult education in Ireland in May 1968. Following this, a committee was established with the remit to set up a National Association of Adult Education. AONTAS was formally created in May 1969 as what Carey described as a "think tank for adult educators". In 1974, following the Murphy Report, the Archdiocese of Dublin established the first adult literacy service in Ireland, the Dublin Literacy Scheme, as part of the Dublin Institute of Adult Education. Demand for the Dublin Literacy Scheme led AONTAS to form a separate organisation focused specifically on adult literacy, the National Adult Literacy Agency (NALA). The NALA constitution, amended in 1984, defined its aim as "To advance the means of promoting adult literacy in Ireland, where literacy is taken as an integral part of adult basic education and adult continuing education."
In 1981 the Irish Minister for Education established a review of lifelong learning chaired by Ivor Kenny; the resulting report, published in 1984, became known as the Kenny Report. It argued for the importance of a structured adult education system that met the needs of all adults, including those with fundamental basic needs. However, only two of the report's recommendations were implemented: ad hoc Adult Education Boards were established in Vocational Education Committees, and the Adult Literacy and Community Education budgets were created. Despite this, the Department of Education and Science described the Kenny Report as having:
In 2000, the Government of Ireland Department of Education and Science published Learning for Life: White Paper on Adult Education. The White Paper encouraged lifelong learning that takes account of individual personal, cultural, social and economic needs, and emphasised the importance of adult education in targeting marginalised communities. The White Paper defined adult education as "systematic learning undertaken by adults who return to learning having concluded initial education or training" and identified six priority areas: (1) consciousness raising, to promote personal and collective development; (2) citizenship, to promote social responsibility; (3) cohesion, to empower people who are most disadvantaged in society; (4) competitiveness, to develop a skilled workforce; (5) cultural development, to promote adult education as a way to enhance community culture; and (6) community development, to develop a sense of collective purpose. The White Paper prioritised the development of community education, for example by offering part-time courses and addressing gaps in the education of those with low levels of formal education.
During the 2000s, the Irish Government pursued a Lifelong Learning Strategy as a result of the White Paper. Subsequent initiatives included the Back to Education Initiative (BTEI) and the Adult Education Guidance Initiative (AEGI). The BTEI provides young people and adults who have not achieved the Leaving Certificate from secondary school the opportunity to study part-time for free. The AEGI provides free support around further education and training for all adults, but prioritises people not in employment. The National Framework of Qualifications (NFQ) was established in Ireland in 2003 by the National Qualifications Authority of Ireland as a way to standardise training and qualifications across all educational institutions and providers. The NFQ was used in the National Skills Strategy published in 2007 by the Expert Group on Future Skills Needs, which recommended that literacy and basic skills be integrated into educational programmes. Ireland's National Skills Strategy 2025 is the most current, published by the Government of Ireland Department of Education and Skills, and includes the objective that "people across Ireland will engage more in lifelong learning." To meet this need, the Strategy cites continuing to develop further education programmes including adult literacy, the BTEI, community education, community training centres and ESOL.
Theoretical underpinnings
Theories of community
German sociologist Ferdinand Tönnies coined the terms 'Gemeinschaft' and 'Gesellschaft' in 1887 to draw a distinction between community and civil society. 'Gemeinschaft' refers to smaller, neighbourly, close-knit communities, whereas 'Gesellschaft' refers to larger, market-driven, individualistic societies. French sociologist Émile Durkheim worried about the disintegration of community under modernity and social change, arguing that people might lose traditional familial and social bonds as they prioritised work and economic competition. The American urban sociologist Robert E. Park also made a distinction between geographical areas and communities: Park believed that rural communities were those with greater interaction within a small group of close-knit individuals, while urban communities were less personal and more individualistic. Some community interventions are geographically targeted; for example, in Europe communities of need may be identified from databases such as EU-SILC (European Union Statistics on Income and Living Conditions) or, in Scotland, the Scottish Index of Multiple Deprivation. Dave Beck and Rod Purcell have criticised this approach, based on statistics and geography, because they believe "these are artificial constructs that are labelled as communities, with the expectation that the people who live there do (or should) behave as if they were a functioning community."
Social Capital
The term 'social capital' is said to have been first used by Lyda Hanifan in his 1916 article 'The rural school community centre' and explained further in his 1920 book The Community Centre. In the 1916 article, Hanifan defines social capital as "that in life which tends to make these tangible substances count for most in the daily lives of a people, namely, goodwill, fellowship, mutual sympathy and social intercourse among a group of individuals and families who make up a social unit, the rural community, whose logical center is the school." The American-Canadian writer and urbanist Jane Jacobs defined social capital as "people who have forged neighbourhood networks" in her 1961 critique of urban planning, The Death and Life of Great American Cities. However, the term is most associated with the French sociologist Pierre Bourdieu and his work during the 1980s. Bourdieu regarded social capital as something belonging to an individual by virtue of their social status and power. In Bourdieu and Wacquant's 1992 book An Invitation to Reflexive Sociology, social capital is defined as "the sum of the resources, actual or virtual, that accrue to an individual or a group by virtue of possessing a durable network of more or less institutionalized relationships of mutual acquaintance and recognition." Bourdieu linked social capital with cultural capital, which he described as built up through generations, and held that social and cultural capital, alongside economic capital, contribute to inequality and deprivation. In Bourdieu's definition of social capital there is room for inequality, as people with the most advantageous social networks get ahead of others in terms of access to economic and cultural resources.
American sociologist James Coleman also understood social capital in terms of social relationships, but Coleman believed social capital to be a collective asset that benefits individuals as a group. For example, Coleman cites how a neighbourhood watch group benefits a neighbourhood as a whole, because it helps to lower crime in an area, even benefiting those who are not part of the group. In 2000 American political scientist Robert Putnam published his book Bowling Alone: The Collapse and Revival of American Community, in which he argues that there has been a decline of social capital in the United States since 1950. Putnam's work is credited with bringing the term social capital into popular vernacular, and he defined it as a public good. Putnam describes social capital as the connections among individuals, that is, "social networks and the norms of reciprocity and trustworthiness that arise from them". Putnam argues that Americans have increasingly become disengaged from community involvement and more distrustful of the government, and he uses data from the General Social Survey showing falling membership of civic organisations as evidence of a decline in social capital.
Some researchers have split social capital into three forms:
Bonding - long-lasting social bonds between individuals who share similar experiences. For example, family and friends.
Bridging - relationships between individuals who differ in social identity or geography but share an ethnicity, interest, or ideology, for example.
Linking - relationships between individuals of differing status and power. For example, users of a service or government officials.
Psychosocial Theories
Paul Hoggett and Chris Miller (2000) argue that the emotional life of individuals is often ignored in community development, and they encourage greater reflexivity from practitioners and communities. Reflexivity refers to the practice of examining one's own beliefs and judgements and how these may affect one's practice. Canadian-American psychologist Albert Bandura's theory of self-efficacy looks at the ways in which individual behaviour is influenced by specific situations; Bandura defined self-efficacy as an individual's belief that they will be able to "exercise influence over events that affect their lives." Marilyn Taylor argues that if an individual has low self-efficacy, they will be less likely to engage in collective action.
Group dynamics and working together are common components of community work (education and/or development). American psychologist Bruce Wayne Tuckman published his article 'Developmental sequence in small groups' in 1965, in which he produced a model of group development. Tuckman identified four stages of group development: (1) forming, when people come together and start initial discussions; (2) storming, when the group identifies group positions and engages in conflict resolution; (3) norming, when the group agrees to work towards a shared goal; and (4) performing, when the group is achieving goals and able to engage in decision-making together. Understanding group dynamics can provide good insight into learning styles, skill sets and personality traits.
Theories of State and Power
Gramsci and Cultural Hegemony
Italian Marxist philosopher Antonio Gramsci developed the theory of cultural hegemony, which argued that capitalism and the ruling class use cultural institutions in society to maintain wealth and power (Gramsci, Antonio (1971) Selections from the Prison Notebooks of Antonio Gramsci, New York, International Publishers). Gramsci believed that capitalist societies were made up of two overlapping divisions: 'political society', which rules by force, and 'civil society', which rules by consent. Gramsci's civil society existed in the public sphere, where community groups and political parties were only allowed to form by leave of the ruling class; and because the public sphere is where ideas and beliefs are articulated, the culture produced was that of the ruling class's hegemony. Cultural institutions can include the education system, and Beck and Purcell explain hegemony by writing that "the complicated network of institutions and organisations found in civil society and the state work together in a way that maintains the status quo; it keeps the powerful, powerful. [...] the education system teaches people their place within society, stratifying people for particular roles and rewarding particular forms of knowledge and behaviour." Joseph A. Buttigieg argues that the role of education lies at the centre of Gramsci's concept of hegemony. Gramsci saw adult education as a challenge against the state, and Peter Mayo argues that Gramsci saw lifelong learning as counter-hegemonic because any site, including workplaces, could be used to educate the lower classes.
Freire and Critical Pedagogy
Brazilian philosopher Paulo Freire published his book Pedagogy of the Oppressed in 1970. Freire emphasised the importance of the political context in which community development takes place, and he offered a radical approach to practice. Freire saw society as an interplay of inequality between labour and capital, the wealthy and the poor, the oppressed and the oppressor. The Freirean approach aimed to challenge the thinking of both practitioners and learners, and the social relationships that make up education. Freire argued that there were two types of education: (1) banking education, which domesticates and placates people to conform to societal expectations, and (2) problem-posing education, which empowers people to think critically and make change. This idea is now generally known as critical pedagogy.
Foucault and Power
French social theorist Michel Foucault believed there were two forms of power: empirical and theoretical. Empirical power is power that is well established, traced through the historical articulation of power in society, whereas theoretical power relates to the rudimentary nature of power as a universal concept. In his 1975 book Discipline and Punish: The Birth of the Prison, Foucault introduced his theory of power and argued that it was the mechanisms of power that controlled individuals, for example through technologies of surveillance. Foucault believed that power operated through individuals rather than simply imposing on them, and that for power to be sustained there need to be "willing subjects." British social theorist Steven Lukes developed Foucault's theory to argue that community education can provide people with the ability to govern themselves outside of the state.
Wisconsin Model
A philosophical base for developing Community Education programs is provided through the five components of the Wisconsin Model of Community Education. The model provides a process framework for local school districts to implement or strengthen community education. A set of Community Education Principles was developed by Larry Horyna and Larry Decker for the National Coalition for Community Education in 1991. These include:
Self-determination: Local people are in the best position to identify community needs and wants. Parents, as children's first and most important teachers, have both a right and a responsibility to be involved in their children's education.
Self-help: People are best served when their capacity to help themselves is encouraged and enhanced. When people assume ever-increasing responsibility for their own well-being, they acquire independence rather than dependence.
Leadership Development: The identification, development, and use of the leadership capacities of local citizens are prerequisites for ongoing self-help and community improvement efforts.
Localization: Services, programs, events, and other community involvement opportunities that are brought closest to where people live have the greatest potential for a high level of public participation. Whenever possible, these activities should be decentralized to locations of easy public access.
Integrated Delivery of Services: Organizations and agencies that operate for the public good can use their limited resources, meet their own goals, and better serve the public by establishing close working relationships with other organizations and agencies with related purposes.
Maximum Use of Resources: The physical, financial, and human resources of every community should be interconnected and used to their fullest if the diverse needs and interests of the community are to be met.
Inclusiveness: The segregation or isolation of people by age, income, sex, race, ethnicity, religion, or other factors inhibits the full development of the community. Community programs, activities, and services should involve the broadest possible cross section of community residents.
Responsiveness: Public institutions have a responsibility to develop programs and services that respond to the continually changing needs and interests of their constituents.
Lifelong Learning: Learning begins at birth and continues until death. Formal and informal learning opportunities should be available to residents of all ages in a wide variety of community settings.
Challenges
Social change
Community education can take the form of social change, and this can raise challenges because it may run against the mainstream or tradition. Organisations or services that disrupt the status quo can face having their funding cut, or they can become captured by the status quo because they must meet, for example, service-level or management outcomes. Changes in learner demographics and the economy can also affect community education. For example, western societies have ageing populations, and adult educators may need to change their priorities to address the resulting skills gap, particularly in digital skills.
Ethical issues
Working within the community can bring about ethical issues, for example when considering the ends served by any intervention, the community at which an intervention is aimed, the ways in which success will be measured, and the intended or unintended consequences. Power dynamics and role boundaries can be an issue for community workers, who may be caught between those who hold power and people who want change and more say over their social situation and community.
Implicit bias
Community educators may find they need to address any implicit bias they hold regarding what issues a community faces and what it needs. French Marxist theorist Guy Debord was a founding member of the Situationist International in the 1950s, which established the idea of psychogeography. Debord defined psychogeography as "The study of the specific effects of the geographical environment, consciously organised or not, on the emotions and behaviour of individuals." Rod Purcell argues that psychogeography is a way for community workers not only to understand a community, but also to empower people to think critically about their community.
Learner Engagement
UNESCO's fourth Global Report on Adult Learning and Education, published in 2019, found that learner engagement is lower among "vulnerable and disadvantaged" communities. The report states that, in poorer rural areas, many women have no access to education, and it identifies "migrants and refugees, older adults, adults with disabilities, those living in rural areas, and adults with low prior educational attainment" as those facing the greatest barriers to learning.
Funding
Community education, particularly provision of adult learning, can face significant challenges around investment, as governmental funding is limited in many countries. In the UK, spending on adult education in 2024-25 is 25 per cent lower than in 2010-11.
Participatory democracy
Youth participation
In countries with democratic governments, people are encouraged to vote for someone to represent them. Interest in politics among younger generations is dwindling, and this could have a negative effect on democracies and political systems in years to come. Community learning and development has the potential to encourage young people to become more interested in politics and to help them influence decisions that affect their lives.
In many parts of the world, youth parliament-style organisations have been set up to allow young people to debate issues that affect them and others in their community. Young people engage with these organisations voluntarily and are sometimes elected through a democratic system of voting. Young people are at the heart of these organisations and are usually involved in their management and development. The majority of these organisations are facilitated and staffed by workers trained in community learning and development; however, the staff role is mainly to facilitate and be supportive, not intrusive.
These organisations allow young people to gain a voice, influence decision makers who affect their lives and provide them with a sense of self-worth and a place in society.
In the United Kingdom, examples of these organisations include the United Kingdom Youth Parliament (UKYP); in Scotland, the Scottish Youth Parliament (SYP); in Wales the Children & Young People's Assembly for Wales; and in Northern Ireland, the Northern Ireland Youth Forum. In Canada, examples include Youth Parliament of Manitoba (YPM), Saskatchewan Youth Parliament (SYP), TUXIS Parliament of Alberta (TUXIS), and British Columbia Youth Parliament (BCYP).
Parental participation
Cultural divides and deficit thinking create mutual distrust between marginalized parents and schools, which in turn creates barriers to the active involvement of marginalized parents in the education of their children. Research also shows that parents of high socio-economic status play an active and direct role in the education of their children and are more likely to influence school policies that affect their children's schooling, whereas parents of low socio-economic status play indirect roles and are less likely to influence such policies. The gap in educational involvement between parents of higher and lower socio-economic status results in a more personalized education that caters to the needs of children from higher socio-economic backgrounds, and in more alienating and generic education systems and policies for students from lower socio-economic backgrounds.
The following practices are necessary for parent and community participation in children's education to be effective: students come to school healthy and ready to learn; parents assist schools with financial and/or material support; there is frequent communication between parents and school authorities; parents have meaningful authority in the schools; and parents assist in the teaching of their children. Parents' home-based educational involvement, such as creating an enabling learning environment at home, helping children with their assignments, helping children develop cognitive and other school skills, and motivating children to do well in school, supports student success. Research shows that multimodal and effective migrant parental involvement in children's education increases these students' test scores and predicts strong student success even after academic ability and socio-economic status are taken into consideration.
School officials' racial and class stereotypes, biases and attitudes regarding parental involvement hinder school officials from involving parents as partners in the education of their children. Bureaucracies in public education systems also hinder parents from advocating for changes that would benefit their children. Formally organized parental associations in schools that seek to increase parental involvement often ignore the cultural and socio-economic needs of minorities, thereby contributing to the barriers to parental involvement, especially for marginalized parents. Research shows that a high number of marginalized parents do not actively engage in their children's schooling. There is also a wide gap between the rhetoric of best parental-involvement practices and actual practice. Effective parental involvement in children's education involves parenting, communication, volunteering, home tutoring, involvement in decision-making, and collaboration with the community; it treats school officials and parents as partners in the education of children.
See also
Adult education
Community development
Lifelong learning
Youth Bank
References
External links
Further reading
Educational stages
Adult education
Education in the United Kingdom
Ad hoc

Ad hoc is a Latin phrase meaning literally for this. In English, it typically signifies a solution designed for a specific purpose, problem, or task rather than a generalized solution adaptable to collateral instances (compare with a priori).
Common examples include ad hoc committees and commissions created at the national or international level for a specific task, and the term is often used to describe arbitration (ad hoc arbitration). In other fields, the term could refer to a military unit created under special circumstances (see task force), a handcrafted network protocol (e.g., ad hoc network), a temporary collaboration among geographically linked franchise locations (of a given national brand) to issue advertising coupons, or a purpose-specific equation in mathematics or science.
Ad hoc can also function as an adjective describing temporary, provisional, or improvised methods to deal with a particular problem, the tendency of which has given rise to the noun adhocism. This concept highlights the flexibility and adaptability often required in problem-solving across various domains.
In everyday language, "ad hoc" is sometimes used informally to describe improvised or makeshift solutions, emphasizing their temporary nature and specific applicability to immediate circumstances.
Styling
Style guides disagree on whether Latin phrases like ad hoc should be italicized. The trend is not to use italics. For example, The Chicago Manual of Style recommends that familiar Latin phrases listed in Webster's Dictionary, including "ad hoc", not be italicized.
Hypothesis
In science and philosophy, ad hoc means the addition of extraneous hypotheses to a theory to save it from being falsified. Ad hoc hypotheses compensate for anomalies not anticipated by the theory in its unmodified form.
Scientists are often skeptical of scientific theories that rely on frequent, unsupported adjustments to sustain them. Ad hoc hypotheses are often characteristic of pseudo-scientific subjects such as homeopathy.
In the military
In the military, ad hoc units are created during unpredictable situations, when the cooperation between different units is suddenly needed for fast action, or from remnants of previous units which have been overrun or otherwise whittled down.
In governance
In national and sub-national governance, ad hoc bodies may be established to deal with specific problems not easily accommodated by the current structure of governance or to address multi-faceted issues spanning several areas of governance. In the UK and other Commonwealth countries, ad hoc Royal Commissions may be set up to address specific questions as directed by parliament.
In diplomacy
In diplomacy, a government may appoint diplomats as special envoys who serve on an ad hoc basis, since such envoys' offices may not be retained by a future government or may exist only for the duration of a relevant cause.
Networking
The term ad hoc networking typically refers to a system of network elements that combine to form a network requiring little or no planning.
See also
Ad hoc testing
Ad infinitum
Ad libitum
Adhocracy
Democracy
Heuristic
House rule
Russell's teapot
Inductive reasoning
Confirmation bias
Cherry picking
References
Further reading
External links
Latin words and phrases
Shared Socioeconomic Pathways

Shared Socioeconomic Pathways (SSPs) are climate change scenarios of projected socioeconomic global changes up to 2100, as defined in the IPCC Sixth Assessment Report on climate change in 2021. They are used to derive greenhouse gas emissions scenarios with different climate policies. The SSPs provide narratives describing alternative socio-economic developments. These storylines qualitatively describe the logic relating elements of the narratives to each other. In terms of quantitative elements, they provide data accompanying the scenarios on national population, urbanization and GDP (per capita). The SSPs can be quantified with various Integrated Assessment Models (IAMs) to explore possible future pathways with regard to both socioeconomic and climate outcomes.
The five scenarios are:
SSP1: Sustainability ("Taking the Green Road")
SSP2: "Middle of the Road"
SSP3: Regional Rivalry ("A Rocky Road")
SSP4: Inequality ("A Road Divided")
SSP5: Fossil-fueled Development ("Taking the Highway")
There are also ongoing efforts to downscale European shared socioeconomic pathways (SSPs) for agricultural and food systems, combining them with representative concentration pathways (RCPs) to produce regionally specific, alternative socioeconomic and climate scenarios.
Descriptions of the SSPs
SSP1: Sustainability (Taking the Green Road)
"The world shifts gradually, but pervasively, toward a more sustainable path, emphasizing more inclusive development that respects predicted environmental boundaries. Management of the global commons slowly improves, educational and health investments accelerate the demographic transition, and the emphasis on economic growth shifts toward a broader emphasis on human well-being. Driven by an increasing commitment to achieving development goals, inequality is reduced both across and within countries. Consumption is oriented toward low material growth and lower resource and energy intensity."
SSP2: Middle of the road
"The world follows a path in which social, economic, and technological trends do not shift markedly from historical patterns. Development and income growth proceeds unevenly, with some countries making relatively good progress while others fall short of expectations. Global and national institutions work toward but make slow progress in achieving sustainable development goals. Environmental systems experience degradation, although there are some improvements and overall the intensity of resource and energy use declines. Global population growth is moderate and levels off in the second half of the century. Income inequality persists or improves only slowly and challenges to reducing vulnerability to societal and environmental changes remain."
SSP3: Regional rivalry (A Rocky Road)
"A resurgent nationalism, concerns about competitiveness and security, and regional conflicts push countries to increasingly focus on domestic or, at most, regional issues. Policies shift over time to become increasingly oriented toward national and regional security issues. Countries focus on achieving energy and food security goals within their own regions at the expense of broader-based development. Investments in education and technological development decline. Economic development is slow, consumption is material-intensive, and inequalities persist or worsen over time. Population growth is low in industrialized and high in developing countries. A low international priority for addressing environmental concerns leads to strong environmental degradation in some regions."
SSP4: Inequality (A Road Divided)
"Highly unequal investments in human capital, combined with increasing disparities in economic opportunity and political power, lead to increasing inequalities and stratification both across and within countries. Over time, a gap widens between an internationally-connected society that contributes to knowledge- and capital-intensive sectors of the global economy, and a fragmented collection of lower-income, poorly educated societies that work in a labor intensive, low-tech economy. Social cohesion degrades and conflict and unrest become increasingly common. Technology development is high in the high-tech economy and sectors. The globally connected energy sector diversifies, with investments in both carbon-intensive fuels like coal and unconventional oil, but also low-carbon energy sources. Environmental policies focus on local issues around middle and high income areas."
SSP5: Fossil-Fueled Development (Taking the Highway)
"This world places increasing faith in competitive markets, innovation and participatory societies to produce rapid technological progress and development of human capital as the path to sustainable development. Global markets are increasingly integrated. There are also strong investments in health, education, and institutions to enhance human and social capital. At the same time, the push for economic and social development is coupled with the exploitation of abundant fossil fuel resources and the adoption of resource and energy intensive lifestyles around the world. All these factors lead to rapid growth of the global economy, while global population peaks and declines in the 21st century. Local environmental problems like air pollution are successfully managed. There is faith in the ability to effectively manage social and ecological systems, including by geo-engineering if necessary."
SSP temperature projections from the IPCC Sixth Assessment Report
The IPCC Sixth Assessment Report assessed the projected temperature outcomes of a set of five scenarios that are based on the framework of the SSPs. The names of these scenarios combine the SSP on which they are based (SSP1-SSP5) with the expected level of radiative forcing in the year 2100 (1.9 to 8.5 W/m2), giving names of the form SSPx-y: SSP1-1.9, SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5. No scenario in this core set is based on SSP4.
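As a minimal illustration of this naming convention (the code, its function name, and variable names are hypothetical and not from the report; the five scenario pairs are the AR6 core set named above):

```python
# Sketch of the SSPx-y convention: an SSP family number is combined with
# the scenario's assumed 2100 radiative forcing level in W/m2.
core_set = [(1, 1.9), (1, 2.6), (2, 4.5), (3, 7.0), (5, 8.5)]

def scenario_name(ssp_family: int, forcing: float) -> str:
    """Build a scenario label such as 'SSP1-1.9' from its two components."""
    return f"SSP{ssp_family}-{forcing}"

names = [scenario_name(ssp, f) for ssp, f in core_set]
print(names)  # ['SSP1-1.9', 'SSP1-2.6', 'SSP2-4.5', 'SSP3-7.0', 'SSP5-8.5']
# Note that no member of this core set is based on SSP4.
```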
See also
Climate change scenario
Coupled Model Intercomparison Project
Representative Concentration Pathway
Special Report on Emissions Scenarios (published in 2000)
References
Sources
Riahi, K., et al. (2017). "The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview". Global Environmental Change, 42, 153–168.
Climate change assessment and attribution
Futures studies
Intergovernmental Panel on Climate Change
Environmental sociology

Environmental sociology is the study of interactions between societies and their natural environment. The field emphasizes the social factors that influence environmental resource management and cause environmental issues, the processes by which these environmental problems become socially constructed and defined as social issues, and societal responses to these problems.
Environmental sociology emerged as a subfield of sociology in the late 1970s in response to the emergence of the environmental movement in the 1960s. It represents a relatively new area of inquiry focusing on an extension of earlier sociology through inclusion of physical context as related to social factors.
Definition
Environmental sociology is typically defined as the sociological study of socio-environmental interactions, although this definition immediately presents the problem of integrating human cultures with the rest of the environment. Different aspects of human interaction with the natural environment are studied by environmental sociologists including population and demography, organizations and institutions, science and technology, health and illness, consumption and sustainability practices, culture and identity, and social inequality and environmental justice. Although the focus of the field is the relationship between society and environment in general, environmental sociologists typically place special emphasis on studying the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. In addition, considerable attention is paid to the social processes by which certain environmental conditions become socially defined as problems. Most research in environmental sociology examines contemporary societies.
History
Environmental sociology emerged as a coherent subfield of inquiry after the environmental movement of the 1960s and early 1970s. The works of William R. Catton, Jr. and Riley Dunlap, among others, challenged the constricted anthropocentrism of classical sociology. In the late 1970s, they called for a new holistic, or systems, perspective, which led to a marked shift in the field's focus. Since the 1970s, general sociology has noticeably transformed to include environmental forces in social explanations. Environmental sociology has now solidified as a respected, interdisciplinary field of study in academia.
Concepts
Existential dualism
The duality of the human condition rests with cultural uniqueness and evolutionary traits. From one perspective, humans are embedded in the ecosphere and co-evolved alongside other species. Humans share the same basic ecological dependencies as other inhabitants of nature. From the other perspective, humans are distinguished from other species because of their innovative capacities, distinct cultures, and varied institutions. Human creations have the power to independently manipulate, destroy, and transcend the limits of the natural environment.
According to Buttel (2004), there are five major traditions in environmental sociology today: the treadmill of production and other eco-Marxisms, ecological modernization and other sociologies of environmental reform, cultural-environmental sociologies, neo-Malthusianisms, and the new ecological paradigm. In practice, this means five different theories of what to blame for environmental degradation, i.e., what to research or consider as important. These ideas are listed below in the order in which they were invented. Ideas that emerged later built on earlier ideas, and contradicted them.
Neo-Malthusianism
Works such as Hardin's "Tragedy of the Commons" (1968) reformulated Malthusian thought about abstract population increases causing famines into a model of individual selfishness at larger scales causing degradation of common pool resources such as the air, water, the oceans, or general environmental conditions. Hardin offered privatization of resources or government regulation as solutions to environmental degradation caused by tragedy of the commons conditions. Many other sociologists shared this view of solutions well into the 1970s (see Ophuls). This view has since been critiqued by many, notably the political scientist Elinor Ostrom and the economists Amartya Sen and Ester Boserup.
Even though much of mainstream journalism treats Malthusianism as the only view of environmentalism, most sociologists disagree with it, since social-organizational causes of environmental degradation are better demonstrated than abstract population growth or selfishness per se. As an example of this critique, Ostrom argues in her book Governing the Commons: The Evolution of Institutions for Collective Action (1990) that instead of self-interest always causing degradation, it can sometimes motivate people to take care of their common property resources. To do this they must change the basic organizational rules of resource use. Her research provides evidence for sustainable resource management systems around common pool resources that have lasted for centuries in some areas of the world.
Amartya Sen argues in his book Poverty and Famines: An Essay on Entitlement and Deprivation (1981) that population expansion fails to cause famines or degradation as Malthusians and Neo-Malthusians claim. Instead, in documented cases, a lack of political entitlement to resources that exist in abundance causes famines in some populations. He documents how famines can occur even in the midst of plenty or in the context of low populations. He argues that famines (and environmental degradation) occur only in non-functioning democracies or unrepresentative states.
Ester Boserup argues in her book The Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure (1965) from inductive, empirical case analysis that Malthus's more deductive conception of a presumed one-to-one relationship with agricultural scale and population is actually reversed. Instead of agricultural technology and scale determining and limiting population as Malthus attempted to argue, Boserup argued the world is full of cases of the direct opposite: that population changes and expands agricultural methods.
Eco-Marxist scholar Allan Schnaiberg (below) argues against Malthusianism on the grounds that under larger capitalist economies, environmental degradation shifted from localized, population-based degradation to degradation organizationally caused by capitalist political economies. He gives the example of the organized degradation of rainforest areas, where states and capitalists push people off the land before it is degraded by organizational means. Thus, many authors are critical of Malthusianism, from sociologists (Schnaiberg) to economists (Sen and Boserup) to political scientists (Ostrom), and all focus on how a country's social organization of its extraction can degrade the environment independently of abstract population.
New Ecological Paradigm
In the 1970s, the New Ecological Paradigm (NEP) conception critiqued the claimed lack of human-environmental focus in the classical sociologists and the sociological priorities their followers created. This was critiqued as the Human Exemptionalism Paradigm (HEP). The HEP viewpoint claims that human-environmental relationships are unimportant sociologically because humans are 'exempt' from environmental forces via cultural change. This view was shaped by the dominant Western worldview of the time and by sociology's desire to establish itself as an independent discipline against the then-popular racist biological environmental determinism, in which environment was everything. In the HEP view, human dominance was felt to be justified by the uniqueness of culture, argued to be more adaptable than biological traits. Furthermore, culture has the capacity to accumulate and innovate, making it capable of solving all natural problems. Therefore, as humans were not conceived of as governed by natural conditions, they were felt to have complete control of their own destiny. Any potential limitation posed by the natural world was felt to be surpassable by human ingenuity. Research proceeded accordingly without environmental analysis.
In the 1970s, sociological scholars Riley Dunlap and William R. Catton, Jr. began recognizing the limits of what would be termed the Human Exemptionalism Paradigm. Catton and Dunlap (1978) suggested a new perspective that took environmental variables into full account. They coined a new theoretical outlook for sociology, the New Ecological Paradigm, with assumptions contrary to HEP.
The NEP recognizes the innovative capacity of humans, but says that humans remain ecologically interdependent, as are other species. The NEP notes the power of social and cultural forces but does not profess social determinism. Instead, humans are impacted by the cause, effect, and feedback loops of ecosystems. The Earth has a finite level of natural resources and waste repositories. Thus, the biophysical environment can impose constraints on human activity. Catton and Dunlap discussed a few harbingers of this NEP in 'hybridized' theorizing about topics that were neither exclusively social nor environmental explanations of environmental conditions. The NEP was additionally a critique of Malthusian views of the 1960s and 1970s.
Dunlap and Catton's work immediately received a critique from Buttel, who argued to the contrary that classical sociological foundations could be found for environmental sociology, particularly in Weber's work on ancient "agrarian civilizations" and in Durkheim's view of the division of labor as built on a material premise of specialization in response to material scarcity. This environmental aspect of Durkheim has been discussed by Schnaiberg (1971) as well.
Treadmill of Production Theory
The Treadmill of Production is a theory coined and popularized by Schnaiberg as a way to account for the increase in U.S. environmental degradation after World War II. At its simplest, this theory states that the more products or commodities are created, the more resources are used and the greater the environmental impact. The treadmill is a metaphor for being caught in a cycle of continuous growth that never stops, demanding more resources and as a result causing more environmental damage.
Eco-Marxism
In the middle of the HEP/NEP debate, Neo-Marxist ideas of conflict sociology were applied to environmental conflicts. Some sociologists wanted to stretch Marxist ideas of social conflict to analyze environmental social movements from a Marxist materialist framework instead of interpreting them as a cultural "New Social Movement" separate from material concerns. "Eco-Marxism" was thus developed by applying Neo-Marxist conflict theory's concept of the relative autonomy of the state to environmental conflict.
Two people following this school were James O'Connor (The Fiscal Crisis of the State, 1973) and later Allan Schnaiberg.
Later, a different trend developed in eco-Marxism via the attention brought to the importance of metabolic analysis in Marx's thought by John Bellamy Foster. Contrary to previous assumptions that classical theorists in sociology had all fallen within a Human Exemptionalist Paradigm, Foster argued that Marx's materialism led him to theorize labor as the metabolic process between humanity and the rest of nature. In the Promethean interpretations of Marx that Foster critiques, there was an assumption that Marx's analysis was very similar to the anthropocentric views critiqued by early environmental sociologists. Instead, Foster argued that Marx himself was concerned about the metabolic rift generated by capitalist society's social metabolism, particularly in industrial agriculture: Marx had identified an "irreparable rift in the interdependent process of social metabolism", created by capitalist agriculture, that was destroying the productivity of the land and creating wastes in urban sites that failed to be reintegrated into the land, thus simultaneously leading toward the destruction of urban workers' health. Reviewing the contribution of this thread of eco-Marxism to current environmental sociology, Pellow and Brehm conclude, "The metabolic rift is a productive development in the field because it connects current research to classical theory and links sociology with an interdisciplinary array of scientific literatures focused on ecosystem dynamics."
Foster emphasized that his argument presupposed the "magisterial work" of Paul Burkett, who had developed a closely related "red-green" perspective rooted in a direct examination of Marx's value theory. Burkett and Foster proceeded to write a number of articles together on Marx's ecological conceptions, reflecting their shared perspective.
More recently, Jason W. Moore, inspired by Burkett's value-analytical approach to Marx's ecology and arguing that Foster's work did not in itself go far enough, has sought to integrate the notion of metabolic rift with world systems theory, incorporating Marxian value-related conceptions. For Moore, the modern world-system is a capitalist world-ecology, joining the accumulation of capital, the pursuit of power, and the production of nature in dialectical unity. Central to Moore's perspective is a philosophical re-reading of Marx's value theory, through which abstract social labor and abstract social nature are dialectically bound. Moore argues that the emergent law of value, from the sixteenth century, was evident in the extraordinary shift in the scale, scope, and speed of environmental change. What took premodern civilizations centuries to achieve—such as the deforestation of Europe in the medieval era—capitalism realized in mere decades. This world-historical rupture, argues Moore, can be explained through a law of value that regards labor productivity as the decisive metric of wealth and power in the modern world. From this standpoint, the genius of capitalist development has been to appropriate uncommodified natures—including uncommodified human natures—as a means of advancing labor productivity in the commodity system.
Societal-environment dialectic
In 1975, the highly influential work of Allan Schnaiberg transfigured environmental sociology, proposing a societal-environmental dialectic, though within the 'neo-Marxist' framework of the relative autonomy of the state as well. This conflictual concept has overwhelming political salience. First, the economic synthesis states that the desire for economic expansion will prevail over ecological concerns. Policy will decide to maximize immediate economic growth at the expense of environmental disruption. Secondly, the managed scarcity synthesis concludes that governments will attempt to control only the most dire of environmental problems to prevent health and economic disasters. This will give the appearance that governments act more environmentally consciously than they really do. Third, the ecological synthesis generates a hypothetical case where environmental degradation is so severe that political forces would respond with sustainable policies. The driving factor would be economic damage caused by environmental degradation. The economic engine would be based on renewable resources at this point. Production and consumption methods would adhere to sustainability regulations.
These conflict-based syntheses have several potential outcomes. One is that the most powerful economic and political forces will preserve the status quo and bolster their dominance. Historically, this is the most common occurrence. Another potential outcome is for contending powerful parties to fall into a stalemate. Lastly, tumultuous social events may result that redistribute economic and political resources.
Schnaiberg's The Environment: From Surplus to Scarcity (1980) was a major contribution to this theme of a societal-environmental dialectic.
Ecological modernization and reflexive modernization
By the 1980s, a critique of eco-Marxism was in the offing, given empirical data from countries (mostly in Western Europe like the Netherlands, Western Germany and somewhat the United Kingdom) that were attempting to wed environmental protection with economic growth instead of seeing them as separate. This was done through both state and capital restructuring. Major proponents of this school of research are Arthur P.J. Mol and Gert Spaargaren. Popular examples of ecological modernization would be "cradle to cradle" production cycles, industrial ecology, large-scale organic agriculture, biomimicry, permaculture, agroecology and certain strands of sustainable development—all implying that economic growth is possible if that growth is well organized with the environment in mind.
Reflexive modernization
The many volumes of the German sociologist Ulrich Beck argued, from the late 1980s, that our risk society is potentially being transformed by the environmental social movements of the world into structural change, without rejecting the benefits of modernization and industrialization. This is leading to a form of 'reflexive modernization', with a world of reduced risk and a better modernization process in economics, politics, and scientific practices, as these are made less beholden to a cycle of protecting risk from correction (which he calls the state's organized irresponsibility): politics creates ecodisasters, then claims responsibility in an accident, yet nothing is corrected because doing so would challenge the very structure of the economy and the private dominance of development. Beck's idea of reflexive modernization looks forward to how the ecological and social crises of the late 20th century are leading toward transformations of the whole political and economic system's institutions, making them more "rational" with ecology in mind.
Neo-Liberalism
Neo-liberalism encompasses deregulation and free-market capitalism and aims at reducing government spending. These neo-liberal policies greatly affect the subject matter of environmental sociology. Since neo-liberalism involves deregulation and less government involvement, it leads to the commodification and privatization of unowned, state-owned, or common property resources. Diana Liverman and Silvina Vilas note that this results in payments for environmental services; deregulation and cuts in public expenditure for environmental management; the opening up of trade and investment; and the transfer of environmental management to local or nongovernmental institutions. The privatization of these resources has impacts on society, the economy, and the environment. An example that has greatly affected society is the privatization of water.
Social construction of the environment
Additionally in the 1980s, with the rise of postmodernism in the western academy and the appreciation of discourse as a form of power, some sociologists turned to analyzing environmental claims as a form of social construction more than a 'material' requirement. Proponents of this school include John A. Hannigan, particularly in Environmental Sociology: A Social Constructionist Perspective (1995). Hannigan argues for a 'soft constructionism' (environmental problems are materially real though they require social construction to be noticed) over a 'hard constructionism' (the claim that environmental problems are entirely social constructs).
Although there was sometimes acrimonious debate between the constructivist and realist "camps" within environmental sociology in the 1990s, the two sides have found considerable common ground as both increasingly accept that while most environmental problems have a material reality they nonetheless become known only via human processes such as scientific knowledge, activists' efforts, and media attention. In other words, most environmental problems have a real ontological status despite our knowledge/awareness of them stemming from social processes, processes by which various conditions are constructed as problems by scientists, activists, media and other social actors. Correspondingly, environmental problems must all be understood via social processes, despite any material basis they may have external to humans. This interactiveness is now broadly accepted, but many aspects of the debate continue in contemporary research in the field.
Events
Modern environmentalism
United States
The 1960s built strong cultural momentum for environmental causes, giving birth to the modern environmental movement and prompting broad interest among sociologists in analyzing the movement. Widespread green consciousness moved vertically within society, resulting in a series of policy changes across many states in the U.S. and Europe in the 1970s. In the United States, this period was known as the "Environmental Decade", with the creation of the United States Environmental Protection Agency and the passing of the Endangered Species Act, the Clean Water Act, and amendments to the Clean Air Act. Earth Day of 1970, celebrated by millions of participants, represented the modern age of environmental thought. The environmental movement continued with incidents such as Love Canal.
Historical studies
While the current mode of thought expressed in environmental sociology was not prevalent until the 1970s, its application is now used in the analysis of ancient peoples. Societies including Easter Island, the Anasazi, and the Mayans were argued to have ended abruptly, largely due to poor environmental management. Later work, however, has challenged this as the exclusive cause (see the biologically trained Jared Diamond's Collapse (2005), or more recent work on Easter Island). The collapse of the Mayans sent a historic message that even advanced cultures are vulnerable to ecological suicide, though Diamond now argues it was less a suicide than a climate change that led to a lack of ability to adapt, and a lack of elite willingness to adapt even when faced much earlier with the signs of nearing ecological problems. At the same time, societal successes for Diamond included New Guinea and Tikopia island, whose inhabitants have lived sustainably for 46,000 years.
John Dryzek et al. argue in Green States and Social Movements: Environmentalism in the United States, United Kingdom, Germany, and Norway (2003) that there may be a common global green environmental social movement, though its specific outcomes are nationalist, falling into four 'ideal types' of interaction between environmental movements and state power. They use as their case studies environmental social movements and state interaction from Norway, the United Kingdom, the United States, and Germany. They analyze the past 30 years of environmentalism and the different outcomes that the green movement has taken in different state contexts and cultures.
Recently, and roughly in temporal order below, much longer-term comparative historical studies of environmental degradation have been produced by sociologists. There are two general trends: many employ world systems theory, analyzing environmental issues over long periods of time and space, while others employ comparative historical methods. Some utilize both methods simultaneously, sometimes without reference to world systems theory (like Whitaker, see below).
Stephen G. Bunker (d. 2005) and Paul S. Ciccantell collaborated on two books from a world-systems theory view, following commodity chains through history of the modern world system, charting the changing importance of space, time, and scale of extraction and how these variables influenced the shape and location of the main nodes of the world economy over the past 500 years. Their view of the world was grounded in extraction economies and the politics of different states that seek to dominate the world's resources and each other through gaining hegemonic control of major resources or restructuring global flows in them to benefit their locations.
The three-volume work of environmental world-systems theory by Sing C. Chew analyzed how "Nature and Culture" interact over long periods of time, starting with World Ecological Degradation (2001). In later books, Chew argued that there were three "Dark Ages" in world environmental history, characterized by periods of state collapse and reorientation in the world economy, in which more localist frameworks of community, economy, and identity came to dominate nature/culture relationships after state-facilitated environmental destruction delegitimized other forms. Thus recreated communities were founded in these so-called 'Dark Ages', novel religions were popularized, and, perhaps most importantly to him, the environment had several centuries to recover from previous destruction. Chew argues that modern green politics and bioregionalism are the start of a similar movement of the present day, potentially leading to wholesale system transformation. Therefore, we may be on the edge of yet another global "dark age" which on many levels is bright rather than dark, since he argues for human community returning with environmental healing as empires collapse.
More case-oriented studies were conducted by the historical environmental sociologist Mark D. Whitaker, who analyzed China, Japan, and Europe over 2,500 years in his book Ecological Revolution (2009). He argued that instead of environmental movements being "New Social Movements" peculiar to current societies, environmental movements are very old, being expressed via religious movements in the past (or in the present, as in ecotheology) that begin to focus on material concerns of health, local ecology, and economic protest against state policy and its extractions. He argues that past and present are very similar: we have participated in a tragic common civilizational process of environmental degradation, economic consolidation, and lack of political representation for many millennia, with predictable outcomes. He argues that a form of bioregionalism, the bioregional state, is required to deal with political corruption connected to environmental degradation in present or past societies.
After looking at the world history of environmental degradation from very different methods, both sociologists Sing Chew and Mark D. Whitaker came to similar conclusions and are proponents of (different forms of) bioregionalism.
Related journals
Among the key journals in this field are:
Environmental Sociology
Human Ecology
Human Ecology Review
Nature and Culture
Organization & Environment
Population and Environment
Rural Sociology
Society and Natural Resources
See also
Bibliography of sociology
Ecological anthropology
Ecological design
Ecological economics
Ecological modernization theory
Enactivism
Environmental design
Environmental design and planning
Environmental economics
Environmental policy
Environmental racism
Environmental racism in Europe
Environmental social science
Ethnoecology
Political ecology
Sociology of architecture
Sociology of disaster
Climate change
References
Notes
Dunlap, Riley E., Frederick H. Buttel, Peter Dickens, and August Gijswijt (eds.) 2002. Sociological Theory and the Environment: Classical Foundations, Contemporary Insights. Rowman & Littlefield.
Dunlap, Riley E., and William Michelson (eds.) 2002. Handbook of Environmental Sociology. Greenwood Press.
Freudenburg, William R., and Robert Gramling. 1989. "The Emergence of Environmental Sociology: Contributions of Riley E. Dunlap and William R. Catton, Jr.", Sociological Inquiry 59(4): 439–452
Harper, Charles. 2004. Environment and Society: Human Perspectives on Environmental Issues. Upper Saddle River, New Jersey: Pearson Education, Inc.
Humphrey, Craig R., and Frederick H. Buttel. 1982.Environment, Energy, and Society. Belmont, California: Wadsworth Publishing Company.
Humphrey, Craig R., Tammy L. Lewis and Frederick H. Buttel. 2002. Environment, Energy and Society: A New Synthesis. Belmont, California: Wadsworth/Thompson Learning.
Mehta, Michael, and Eric Ouellet. 1995. Environmental Sociology: Theory and Practice, Toronto: Captus Press.
Redclift, Michael, and Graham Woodgate, eds. 1997. International Handbook of Environmental Sociology. Edward Elgar.
Schnaiberg, Allan. 1980. The Environment: From Surplus to Scarcity. New York: Oxford University Press.
Further reading
Hannigan, John, "Environmental Sociology", Routledge, 2014.
Zehner, Ozzie, Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism, University of Nebraska Press, 2012. An environmental sociology text forming a critique of energy production and green consumerism.
External links
ASA Section on Environment and Technology
ESA Environment & Society Research Network
ISA Research Committee on Environment and Society (RC24)
Canadian Sociological Association (CSA) Environment Research Cluster
Critical literacy

Critical literacy is the ability to find embedded discrimination in media. It is practiced by analyzing the messages that promote prejudiced power relationships, which occur naturally in media and written material and otherwise go unnoticed: the reader goes beyond the author's words and examines how the author has conveyed ideas about society's norms, to determine whether these ideas contain racial or gender inequality.
Overview
Critical literacy is an instructional approach that advocates the adoption of "critical" perspectives toward text. Critical literacy means actively analyzing texts, and it includes strategies for what proponents describe as uncovering underlying messages. The purpose of critical literacy is to create a self-awareness of the topic at hand. There are several different theoretical perspectives on critical literacy that have produced different pedagogical approaches. These approaches share the basic premise that literacy requires consumers of text to adopt a critical and questioning approach.
When students examine the writer's message for bias, they are practicing critical literacy. This skill of actively engaging with the text can be used to help students become more perceptive and socially aware people who do not receive the messages around them from media, books, and images without first taking apart the text and relating its messages back to their own personal life experiences. Thus by getting students to question the power structures in their society, critical literacy teaches them how to dispute these written and oral views regarding issues of equality so that they may combat the social injustices against marginalized groups in their communities.
According to proponents of critical literacy, the practice is not a means of attaining literacy in the sense of improving the ability to understand words, syntax, etc. Rather, students are able to look critically at what they are being taught and to relate what they are learning to their own situation. This means they create deeper meaning rather than studying content only.
Critical literacy has become a popular approach to teaching English to students in some English-speaking countries, including Canada, Australia, New Zealand, and the UK.
For post-structuralist practitioners of critical literacy, the definition of this practice can be quite malleable, but usually involves a search for discourses and representations, and reasons why certain discourses are included in or omitted from a text.
Two major theoretical perspectives within the field of critical literacy are the Neo-Marxist/Freirean and the Australian. These approaches overlap in many ways and do not necessarily represent competing views, but they approach the subject matter differently.
Relationship to critical thinking
While critical literacy and critical thinking involve similar steps and may overlap, they are not interchangeable. Critical thinking is done when one troubleshoots problems and solves them through a process involving logic and mental analysis. Critical thinking focuses on ensuring that one's arguments are sufficiently supported by evidence and free of unclear or deceptive presentation. Thus, critical thinking attempts to understand the outside world and recognizes that there are other arguments beyond one's own by evaluating the reasoning behind such arguments, but it does not go beyond revealing a loaded claim.
To make sense of the biases embedded within these claims first uncovered by critical thinking, critical literacy goes beyond identifying the problem to also analyzing the power dynamics that create the written or oral texts of society and then questioning their claims. Therefore, critical literacy examines the language and wording of politics within these texts and how politics uses certain aspects of grammar to convey its intended meaning. Practicing critical literacy lets students challenge both the author of the text in addition to the social and historical contexts in which the text was produced.
In addition to print sources, critical literacy also evaluates media and technology by looking at who owns these forms of information as well as to whom they are writing and their goal in creating these various texts. Students look at the underlying information being communicated in literature, popular and online media, and journalism in the hopes of taking social action.
History
Critical literacy practices grew out of the social justice pedagogy of Brazilian educator and theorist Paulo Freire, described in his 1967 Education as the Practice of Freedom and his 1968 Pedagogy of the Oppressed. Freirean critical literacy is conceived as a means of empowering populations against oppression and coercion, frequently seen as enacted by corporations or governments. Freirean critical literacy starts with the desire to balance social inequities and address societal problems caused by abuse of power – it is an analysis with an agenda. It proceeds from this philosophical basis to examine, analyze, and deconstruct texts.
Critical literacy was later established more prominently with Donaldo Macedo in 1987. In his 1968 book, Pedagogy of the Oppressed, Paulo Freire writes that individuals who are oppressed by those in positions of power are initially afraid to have freedom since they have internalized the rules of their oppressors and the consequences of not abiding by these rules. Thus, despite their internal desire for freedom, they continue to live in what Freire calls the "fear of freedom", following a pre-set prescription of behaviors that meet their oppressors' approval. In order to understand the actual nature of their oppression, Freire states that their education must teach them to understand that their reality can be changed and with it, their oppression.
This perspective is reflected in the works of Peter McLaren, Henry Giroux, and Jean Anyon, among many others. The Freirean perspective on critical literacy is strongly represented in critical pedagogy.
Critical pedagogy seeks to fight oppression by changing the way schools teach. From this emerges critical literacy, which states that by working to comprehend the way in which texts are written and presented, one may understand the political, social, and economic environments in which the text was formed as well as be able to identify hidden ideologies within such texts.
Other philosophical approaches to critical literacy, while sharing many of the ideas of Neo-Marxist/Freirean critical literacy, may be viewed as a less overtly politicized expansion on these ideas. Critical literacy helps teachers as well as students to explore the relationship between theoretical framework and its practical implications.
Factors
Freire includes several basic factors in his formulation of critical literacy. The first step involves bringing awareness, or "consciousness" as Freire terms it, both to those who are mistreated and to those who bring about this mistreatment by promoting unfair ideologies through politics and other positions of power, such as schools and government. Freire and Macedo hold that written texts represent information built on previous schemas about the world; the mistreated often are not conscious that they are oppressed, viewing their poverty or marginalization as a natural part of life. Accepting their hardship, they do not know the steps that would end their oppression.
The second factor of critical literacy seeks to transform the way in which schools teach. Ira Shor writes that critical literacy can be used to reveal one's subjective beliefs about the world by causing people to question their personal assumptions through words. Able to be tailored to work with diverse ideas relating to feminism or neo-Marxism, critical literacy presents students with different ways of thinking about their self-development by challenging them to consider differing perspectives on issues rather than settle for cultural norms and the status quo. The goal of this is to lead students to promote social action within their community to change unjust structures.
It is accomplished through advocating honest dialogue between the teacher and students, in which both parties learn together through critical discussion of important issues, rather than following a banking model of education, a traditional method of teaching that treats students as empty containers to be filled by teachers whose primary roles are to lecture and pass on information that students must receive and recite during tests. Freire was not a proponent of the banking model because he believed that, rather than creating conscious knowledge within students, it perpetuated oppression.
When teachers facilitate discussion between students regarding the controversial issues that pertain to them and their society, this honest dialogue acts as a bridge to allow students to question the social inequalities in their own communities and the underlying hierarchies that govern these prejudices. Honest dialogue between instructor and student leads students to the third factor: critical reflection of how they can apply the knowledge they have discovered through dialogue to their own life situations in order to take concrete actions to change society and right injustices.
Teaching critical literacy
By teaching critical literacy, teachers can help students take action by expanding their mindsets to better understand the perspectives of other overlooked groups in society and thus grow in appreciation for those who have a different culture and language than they do.
Teachers can adapt the teaching of critical literacy to their classrooms by encouraging students to read analytically and challenge the social norms found in texts. They can form their own ideas to dispute the text and write a response to oppose, or support, its claims.
Teachers can let students research a social justice topic that they are interested in, which can lead to students taking personal responsibility for social change in their communities. Having students dissect different texts from various sources and authors in order to uncover the authors' biases, resulting from their ingrained ideas of norms, is another method for developing the skill of critical literacy, as is having students rewrite passages they read from the viewpoints and circumstances of oppressed minority groups. Reading a multitude of different texts, or additional readings that accompany a text, can also help students practice critical literacy. One example of a modality that can aid students with their critical literacy skills is film, which can be used in a variety of classes, including history, science, and literature. By utilizing a film or another visual modality, students can engage with the content in a way they would not in a traditional lesson. Visual modalities like graphic novels give students a better chance to understand and create meaning behind the information they are given, which in turn allows students to provide more evidence and theories behind the information.
Students' growth in critical consciousness through their writing reminds teaching practitioners, policy-makers, and teacher educators to innovate in their classrooms and to empower language learners with teaching methodologies different from those they are accustomed to.
Student skills
Critical literacy allows students to develop their ability to understand the messages found in online articles and other sources of media such as news stations or journalism through careful analysis of the text and how the text is presented.
Critical literacy teaches students how to identify discrimination within institutions of power and then to question these power dynamics when they appear in written and oral texts so that students may comprehend why certain topics such as racial slurs are controversial in society. Teachers help foster students' higher order thinking through in-class discussions about these social topics in what is known as a dialogic environment. Here, the traditional banking model of teaching is replaced by teachers giving students a chance to openly express their ideas and thoughts on the issues being taught in class.
Thirdly, critical literacy aids the growth of reading skills by allowing students to actively relate various texts to other texts to determine whether the overall messages promote or discourage the marginalization of minority groups. Younger children can also learn to practice critical literacy by having a teacher read picture books out loud to them as the children learn to examine what messages the images and paragraphs in the picture books convey. By encouraging students to find ways these social issues relate to their own personal lives, students' minds are expanded to see cultural and racial differences as a positive thing.
Lastly, critical literacy prepares students to recognize the importance of language in the formation of politics, social hierarchy, race, and power because the way in which phrases are worded can impact the overall message. This also appears in the realm of education as schools and teachers must determine whether they will teach and request that students use only the standard academic dialect in class or allow them to continue using the dialect they learned in the home. Critical literacy causes students to rethink which variation of language they speak since the standard dialect is the prevalent one and contains more power.
See also
Allan Luke
Colin Lankshear
Critical reading
Culture jamming
Discourse
Henry Jenkins
Information literacy
Intertextuality
Mashup
Media literacy
Meme
Memetics
Participatory culture
Paulo Freire
Popular culture
Postpositivism
Scenario planning
Semiotics
Transmediation
Visual literacy
References
Further reading
Lankshear, C. & McLaren, P. (Eds.) (1993). Critical literacy: Radical and postmodernist perspectives. Albany: State University of New York Press.
Luke, C. (1995). Media and cultural studies. In P. Freebody, S. Muspratt and A. Luke (Eds.), Constructing critical literacies. Cresskill, New Jersey: Hampton Press.
New London Group. (1996). A Pedagogy of Multiliteracies: Designing Social Futures. Harvard Educational Review, 66, 1.
External links
IRA Critical Literacy Resources - The International Reading Association index page for critical literacy resources.
Critical Literacy NZ describes critical literacy in New Zealand, which, in line with Australia, is beginning to adopt this practice.
Critical Literacy Guide for teachers in the Australian state of Tasmania.
Read-Write-Think Lesson Plan
Reading (process)
Learning to read
Pedagogy
Critical pedagogy
Literacy
Ethnolinguistics

Ethnolinguistics (sometimes called cultural linguistics) is an area of anthropological linguistics that studies the relationship between a language or group of languages and the cultural behavior of the people who speak those languages.
It examines how different cultures conceptualize and categorize their experiences, such as spatial orientation and environmental phenomena. Ethnolinguistics incorporates methods like ethnosemantics, which analyzes how people classify and label their world, and componential analysis, which dissects semantic features of terms to understand cultural meanings. The field intersects with cultural linguistics to investigate how language encodes cultural schemas and metaphors, influencing areas such as intercultural communication and language learning.
Examples
Ethnolinguists study the way perception and conceptualization influences language and show how that is linked to different cultures and societies. An example is how spatial orientation is expressed in various cultures.
For example, in many societies, words for the cardinal directions east and west are derived from terms for sunrise/sunset. The nomenclature for cardinal directions of Inuit speakers of Greenland, however, is based on geographical landmarks such as the river system and one's position on the coast. Similarly, the Yurok lack the idea of cardinal directions; they orient themselves with respect to their principal geographic feature, the Klamath River.
Cultural linguistics
Cultural Linguistics is a related branch of linguistics that explores the relationship between language and cultural conceptualisations. Cultural Linguistics draws on and expands the theoretical and analytical advancements in cognitive science (including complexity science and distributed cognition) and anthropology. Cultural Linguistics examines how various features of human languages encode cultural conceptualisations, including cultural schemas, cultural categories, and cultural metaphors. In Cultural Linguistics, language is viewed as deeply entrenched in the group-level, cultural cognition of communities of speakers. Thus far, the approach of Cultural Linguistics has been adopted in several areas of applied linguistic research, including intercultural communication, second language learning, Teaching English as an International Language, and World Englishes.
Ethnosemantics
Ethnosemantics, also called ethnoscience and cognitive anthropology, is a method of ethnographic research and ethnolinguistics that focuses on semantics by examining how people categorize words in their language. Ethnosemantics studies the way people label and classify the cultural, social, and environmental phenomena in their world and analyzes the semantic categories these classifications create, in order to understand the cultural meanings behind the way people describe things in their world.
Ethnosemantics as a method relies on Franz Boas' theory of cultural relativity, as well as the theory of linguistic relativity. The use of cultural relativity in ethnosemantic analysis serves to focus analyses on individual cultures and their own language terms, rather than using ethnosemantics to create overarching theories of culture and how language affects culture.
Methods and examples
In order to perform ethnosemantic analysis, all of the words in a language that are used for a particular subject are gathered by the researcher and are used to create a model of how those words relate to one another. Anthropologists who utilize ethnosemantics to create these models believe that they are a representation of how speakers of a particular language think about the topic being described.
For example, in her book The Anthropology of Language: An Introduction to Linguistic Anthropology, Harriet Ottenheimer uses the concept of plants and how dandelions are categorized to explain how ethnosemantics can be used to examine the differences in how cultures think about certain topics. In her example, Ottenheimer describes how the topic "plants" can be divided into the two categories "lettuce" and "weeds". Ethnosemantics can help anthropologists to discover whether a particular culture categorizes "dandelions" as a "lettuce" or a "weed", and using this information can discover something about how that culture thinks about plants.
In one section of Oscar Lewis' La Vida, he includes the transcript of an interview with a Puerto Rican woman in which she discusses a prostitute's social world. Using ethnosemantics, the speaker's statements about the people in that social circle and their behavior can be analyzed in order to understand how she perceives and conceptualizes her social world. The first step in this analysis is to identify and map out all of the social categories or social identities the speaker identified. Once the social categories have been mapped, the next steps are to attempt to define the precise meaning of each category, examine how the speaker describes the relationship of categories, and analyze how she evaluates the characteristics of the people who are grouped in those social categories.
The speaker in this example identified three basic social categories (the rich, the law, and the poor) and characterized those people in the higher categories of "rich" and "law" as bad people. The poor are further divided into those with disreputable positions and those with reputable positions. The speaker characterizes the disreputable poor generally as dishonest and corrupt, but presents herself as one of the few exceptions. This analysis of the speaker's description of her social circle thus allows for an understanding of how she perceives the world around her and the people in it.
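The mapping step described above can be made concrete with a small data structure. The following Python sketch is purely illustrative: it encodes only the categories, subdivisions, and evaluations named in this example, and the field names ("evaluation", "subcategories", "exceptions") are hypothetical conveniences rather than part of Lewis's method or any standard ethnosemantic toolkit.

```python
# Illustrative sketch: the speaker's social taxonomy from the La Vida
# example, recorded as a nested mapping. Only details mentioned in the
# text are encoded; a real ethnosemantic analysis would be far richer.

social_categories = {
    "the rich": {"evaluation": "bad people", "subcategories": {}},
    "the law": {"evaluation": "bad people", "subcategories": {}},
    "the poor": {
        "evaluation": None,  # evaluated at the subcategory level
        "subcategories": {
            "disreputable positions": {
                "evaluation": "generally dishonest and corrupt",
                "exceptions": ["the speaker herself"],
                "subcategories": {},
            },
            "reputable positions": {
                "evaluation": None,
                "exceptions": [],
                "subcategories": {},
            },
        },
    },
}

def list_categories(tree: dict, depth: int = 0) -> None:
    """Print the category map as an indented outline."""
    for name, info in tree.items():
        print("  " * depth + name)
        list_categories(info["subcategories"], depth + 1)

list_categories(social_categories)
```

Once the categories are laid out this way, the later steps (defining each category's precise meaning, examining relationships between categories, and analyzing the speaker's evaluations) amount to filling in and comparing the recorded attributes.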
Componential analysis
The method of componential analysis in ethnosemantic analysis is used to describe the criteria people use to classify concepts by analyzing their semantic features. For example, the word "man" can be analyzed into the semantic features "male," "mature," and "human"; "woman" can be analyzed into "female," "mature," and "human"; "girl" can be analyzed into "female," "immature," and "human"; and "bull" can be analyzed into "male," "mature," and "bovine." By using this method, the features of words in a category can be examined to form hypotheses about the significant meaning and identifying features of words in that category.
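This idea can be sketched minimally in Python by treating each term as a set of semantic features, so that contrasts between terms fall out as set differences. The feature inventory below is just the one used in the example above; the function and its output format are hypothetical.

```python
# Componential analysis sketch: terms decomposed into semantic features.
FEATURES = {
    "man": {"male", "mature", "human"},
    "woman": {"female", "mature", "human"},
    "girl": {"female", "immature", "human"},
    "bull": {"male", "mature", "bovine"},
}

def contrast(term_a: str, term_b: str) -> dict:
    """Return the features two terms share and those that distinguish them."""
    a, b = FEATURES[term_a], FEATURES[term_b]
    return {
        "shared": a & b,
        "only_" + term_a: a - b,
        "only_" + term_b: b - a,
    }

# "man" and "woman" share "mature" and "human" and contrast on male/female;
# "man" and "bull" contrast only on human/bovine.
print(contrast("man", "woman"))
print(contrast("man", "bull"))
```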
See also
Anthropological linguistics
Associative group analysis
Evolutionary psychology of language
Linguistic anthropology
Ecolinguistics
Wilhelm von Humboldt
References
Sources
Bartmiński, Jerzy. Aspects of Cognitive Ethnolinguistics. Sheffield and Oakville, CT: Equinox, 2009/2012.
Mathiot, Madeleine (ed.), Ethnolinguistics: Boas, Sapir, and Whorf Revisited, The Hague: Mouton, 1979, 323 p.
Bouquiaux, Luc, Linguistique et ethnolinguistique : anthologie d'articles parus entre 1961 et 2003, Louvain and Dudley, MA: Peeters, 2004, 466 p.
Jourdan, Christine, and Claire Lefebvre (eds.), "L'ethnolinguistique", Anthropologie et sociétés, vol. 23, no. 3, 1999, pp. 5–173.
Pottier, Bernard, L'ethnolinguistique, Didier, 1970, 130 p.
Trabant, Jürgen, Humboldt ou le sens du langage, Liège: Mardaga, 1992.
Trabant, Jürgen, Traditions de Humboldt, (German edition 1990), French edition, Paris: Maison des sciences de l'homme, 1999.
Trabant, Jürgen, Mithridates im Paradies: Kleine Geschichte des Sprachdenkens, München: Beck, 2003.
Trabant, Jürgen, 'L'antinomie linguistique: quelques enjeux politiques', Politiques & Usages de la Langue en Europe, ed. Michael Werner, Condé-sur-Noireau: Collection du Ciera, Dialogiques, Éditions de la Maison des sciences de l'homme, 2007.
Trabant, Jürgen, Was ist Sprache?, München: Beck, 2008.
Vocabulaire européen des philosophies: Dictionnaire des intraduisibles, ed. Barbara Cassin, Paris: Robert, 2004.
Whorf, Benjamin Lee, Language, Thought and Reality: Selected Writings (1956), ed. John B. Carroll, Cambridge, Massachusetts: M.I.T. Press, 1984.
Wierzbicka, Anna, Semantics, Culture, and Cognition: Universal Human Concepts in Culture-Specific Configurations, New York, Oxford University Press, 1992.
Wierzbicka, Anna, Understanding Cultures through their Key Words, Oxford: Oxford University Press, 1997.
Wierzbicka, Anna, Emotions across Languages and Cultures, Cambridge: Cambridge University Press, 1999.
Wierzbicka, Anna, Semantics: Primes and Universals (1996), Oxford: Oxford University Press, 2004.
Wierzbicka, Anna, Experience, Evidence & Sense: The Hidden Cultural Legacy of English, Oxford: Oxford University Press, 2010.
External links
Cultural Linguistics
Applied Cultural Linguistics
The Jurgen Trabant Wilhelm von Humboldt Lectures (7hrs)
Farzad Sharifian, publications
Cultural Linguistics: A new multidisciplinary field of research
Anthropology
Sociolinguistics
Participatory action research

Participatory action research (PAR) is an approach to action research emphasizing participation and action by members of communities affected by that research. It seeks to understand the world by trying to change it, collaboratively and following reflection. PAR emphasizes collective inquiry and experimentation grounded in experience and social history. Within a PAR process, "communities of inquiry and action evolve and address questions and issues that are significant for those who participate as co-researchers". PAR contrasts with mainstream research methods, which emphasize controlled experimentation, statistical analysis, and reproducibility of findings.
PAR practitioners make a concerted effort to integrate three basic aspects of their work: participation (life in society and democracy), action (engagement with experience and history), and research (soundness in thought and the growth of knowledge). "Action unites, organically, with research" and collective processes of self-investigation. The way each component is actually understood and the relative emphasis it receives varies nonetheless from one PAR theory and practice to another. This means that PAR is not a monolithic body of ideas and methods but rather a pluralistic orientation to knowledge making and social change.
Overview
In the UK and North America, the work of Kurt Lewin and the Tavistock Institute in the 1940s has been influential. However, alternative traditions of PAR begin with processes that include more bottom-up organising and popular education than Lewin envisaged.
PAR has multiple progenitors and resists definition. It is a broad tradition of collective self-experimentation backed up by evidential reasoning, fact-finding and learning. All formulations of PAR have in common the idea that research and action must be done 'with' people and not 'on' or 'for' people. It counters scientism by promoting the grounding of knowledge in human agency and social history (as in much of political economy). Inquiry based on PAR principles makes sense of the world through collective efforts to transform it, as opposed to simply observing and studying human behaviour and people's views about reality, in the hope that meaningful change will eventually emerge.
PAR draws on a wide range of influences, both among those with professional training and those who draw on their life experience and those of their ancestors. Many draw on the work of Paulo Freire, new thinking on adult education research, the Civil Rights Movement, South Asian social movements such as the Bhumi Sena, and key initiatives such as the Participatory Research Network created in 1978 and based in New Delhi. "It has benefited from an interdisciplinary development drawing its theoretical strength from adult education, sociology, political economy, community psychology, community development, feminist studies, critical psychology, organizational development and more". The Colombian sociologist Orlando Fals Borda and others organized the first explicitly PAR conference in Cartagena, Colombia, in 1977. Based on his research with peasant groups in rural Boyacá and with other underserved groups, Fals Borda called for the 'community action' component to be incorporated into the research plans of traditionally trained researchers. His recommendations to researchers committed to the struggle for justice and greater democracy in all spheres, including the business of science, are useful for all researchers and echo the teaching from many schools of research:
"Do not monopolise your knowledge nor impose arrogantly your techniques, but respect and combine your skills with the knowledge of the researched or grassroots communities, taking them as full partners and co-researchers. Do not trust elitist versions of history and science which respond to dominant interests, but be receptive to counter-narratives and try to recapture them. Do not depend solely on your culture to interpret facts, but recover local values, traits, beliefs, and arts for action by and with the research organisations. Do not impose your own ponderous scientific style for communicating results, but diffuse and share what you have learned together with the people, in a manner that is wholly understandable and even literary and pleasant, for science should not be necessarily a mystery nor a monopoly of experts and intellectuals."
PAR can be thought of as a guiding paradigm to influence and democratize knowledge making, and to ground it in real community needs and learning. Knowledge production controlled by elites can sometimes further oppress marginalized populations. PAR can be a way of overcoming the ineffectiveness and elitism of conventional schooling and science, and the negative effects of market forces and industry on the workplace, community life and sustainable livelihoods.
Fundamentally, PAR pushes against the notion that experiential distance is required for objectivity in scientific and sociological research. Instead, PAR values embodied knowledge beyond "gated communities" of scholarship, bridging academia and social movements such that research and advocacy — often thought to be mutually exclusive — become intertwined. Rather than be confined by academia, participatory settings are believed to have "social value," confronting epistemological gaps that may deepen ruts of inequality and injustice.
These principles and the ongoing evolution of PAR have had a lasting legacy in fields ranging from problem solving in the workplace to community development and sustainable livelihoods, education, public health, feminist research, civic engagement and criminal justice. It is important to note that these contributions are subject to many tensions and debates on key issues such as the role of clinical psychology, critical social thinking and the pragmatic concerns of organizational learning in PAR theory and practice. Labels used to define each approach (PAR, critical PAR, action research, psychosociology, sociotechnical analysis, etc.) reflect these tensions and point to major differences that may outweigh the similarities. While a common denominator, the combination of participation, action and research reflects the fragile unity of traditions whose diverse ideological and organizational contexts kept them separate and largely ignorant of one another for several decades.
The following review focuses on traditions that incorporate the three pillars of PAR. Closely related approaches that overlap but do not bring the three components together are left out. Applied research, for instance, is not necessarily committed to participatory principles and may be initiated and controlled mostly by experts, with the implication that 'human subjects' are not invited to play a key role in science building and the framing of the research questions. As in mainstream science, this process "regards people as sources of information, as having bits of isolated knowledge, but they are neither expected nor apparently assumed able to analyze a given social reality". PAR also differs from participatory inquiry or collaborative research, contributions to knowledge that may not involve direct engagement with transformative action and social history. PAR, in contrast, has evolved from the work of activists more concerned with empowering marginalized peoples than with generating academic knowledge for its own sake. Lastly, given its commitment to the research process, PAR overlaps but is not synonymous with action learning, action reflection learning (ARL), participatory development and community development—recognized forms of problem solving and capacity building that may be carried out with no immediate concern for research and the advancement of knowledge.
Organizational life
Action research in the workplace took its initial inspiration from Lewin's work on organizational development (and Dewey's emphasis on learning from experience). Lewin's seminal contribution involves a flexible, scientific approach to planned change that proceeds through a spiral of steps, each of which is composed of 'a circle of planning, action, and fact-finding about the result of the action', towards an organizational 'climate' of democratic leadership and responsible participation that promotes critical self-inquiry and collaborative work. These steps inform Lewin's work with basic skill training groups, T-groups where community leaders and group facilitators use feedback, problem solving, role play and cognitive aids (lectures, handouts, film) to gain insights into themselves, others and groups with a view to 'unfreezing' and changing their mindsets, attitudes and behaviours.
Lewin's understanding of action-research coincides with key ideas and practices developed at the influential Tavistock Institute (created in 1947) in the UK and National Training Laboratories (NTL) in the US. An important offshoot of Tavistock thinking and practice is the sociotechnical systems perspective on workplace dynamics, guided by the idea that greater productivity or efficiency does not hinge on improved technology alone. Improvements in organizational life call instead for the interaction and 'joint optimization' of the social and technical components of workplace activity. In this perspective, the best match between the social and technical factors of organized work lies in principles of 'responsible group autonomy' and industrial democracy, as opposed to deskilling and top-down bureaucracy guided by Taylor's scientific management and linear chain of command.
NTL played a central role in the evolution of experiential learning and the application of behavioral science to improving organizations. Process consultation, team building, conflict management, and workplace group democracy and autonomy have become recurrent themes in the prolific body of literature and practice known as organizational development (OD). As with 'action science', OD is a response to calls for planned change and 'rational social management' involving a normative human relations movement and approach to worklife in capital-dominated economies. Its principal goal is to enhance an organization's performance and the worklife experience, with the assistance of a consultant, a change agent or catalyst that helps the sponsoring organization define and solve its own problems, introduce new forms of leadership and change organizational culture and learning. Diagnostic and capacity-building activities are informed, to varying degrees, by psychology, the behavioural sciences, organizational studies, or theories of leadership and social innovation. Appreciative Inquiry (AI), for instance, is an offshoot of PAR based on positive psychology. Rigorous data gathering or fact-finding methods may be used to support the inquiry process and group thinking and planning. On the whole, however, science tends to be a means, not an end. Workplace and organizational learning interventions are first and foremost problem-based, action-oriented and client-centred.
Psychosociology
Tavistock broke new ground in other ways, by meshing general medicine and psychiatry with Freudian and Jungian psychology and the social sciences to help the British army face various human resource problems. This gave rise to a field of scholarly research and professional intervention loosely known as psychosociology, particularly influential in France (CIRFIP). Several schools of thought and 'social clinical' practice belong to this tradition, all of which are critical of the experimental and expert mindset of social psychology. Most formulations of psychosociology share with OD a commitment to the relative autonomy and active participation of individuals and groups coping with problems of self-realization and goal effectiveness within larger organizations and institutions. In addition to this humanistic and democratic agenda, psychosociology uses concepts of psychoanalytic inspiration to address interpersonal relations and the interplay between self and group. It acknowledges the role of the unconscious in social behaviour and collective representations and the inevitable expression of transference and countertransference—language and behaviour that redirect unspoken feelings and anxieties to other people or physical objects taking part in the action inquiry.
The works of Balint, Jaques, and Bion are turning points in the formative years of psychosociology. Commonly cited authors in France include Amado, Barus-Michel, Dubost, Enriquez, Lévy, Gaulejac, and Giust-Desprairies. Different schools of thought and practice include Mendel's action research framed in a 'sociopsychoanalytic' perspective and Dejours's psychodynamics of work, with its emphasis on work-induced suffering and defence mechanisms. Lapassade and Lourau's 'socianalytic' interventions focus rather on institutions viewed as systems that dismantle and recompose norms and rules of social interaction over time, a perspective that builds on the principles of institutional analysis and psychotherapy. Anzieu and Martin's work on group psychoanalysis and theory of the collective 'skin-ego' is generally considered as the most faithful to the Freudian tradition. Key differences between these schools and the methods they use stem from the weight they assign to the analyst's expertise in making sense of group behaviour and views and also the social aspects of group behaviour and affect. Another issue is the extent to which the intervention is critical of broader institutional and social systems. The use of psychoanalytic concepts and the relative weight of effort dedicated to research, training and action also vary.
Applications
Community development and sustainable livelihoods
PAR emerged in the postwar years as an important contribution to intervention and self-transformation within groups, organizations and communities. It has left a singular mark on the field of rural and community development, especially in the Global South. Tools and concepts for doing research with people, including "barefoot scientists" and grassroots "organic intellectuals" (see Gramsci), are now promoted and implemented by many international development agencies, researchers, consultants, civil society and local community organizations around the world. This has resulted in countless experiments in diagnostic assessment, scenario planning and project evaluation in areas ranging from fisheries and mining to forestry, plant breeding, agriculture, farming systems research and extension, watershed management, resource mapping, environmental conflict and natural resource management, land rights, appropriate technology, local economic development, communication, tourism, leadership for sustainability, biodiversity and climate change. This prolific literature includes the many insights and methodological creativity of participatory monitoring, participatory rural appraisal (PRA) and participatory learning and action (PLA) and all action-oriented studies of local, indigenous or traditional knowledge.
On the whole, PAR applications in these fields are committed to problem solving and adaptation to nature at the household or community level, using friendly methods of scientific thinking and experimentation adapted to support rural participation and sustainable livelihoods.
Literacy, education and youth
In education, PAR practitioners inspired by the ideas of critical pedagogy and adult education are firmly committed to the politics of emancipatory action formulated by Freire, with a focus on dialogical reflection and action as means to overcome relations of domination and subordination between oppressors and the oppressed, colonizers and the colonized. The approach implies that "the silenced are not just incidental to the curiosity of the researcher but are the masters of inquiry into the underlying causes of the events in their world". Although a researcher and a sociologist, Fals Borda also had a profound distrust of conventional academia and great confidence in popular knowledge, sentiments that have had a lasting impact on the history of PAR, particularly in the fields of development, literacy, counterhegemonic education as well as youth engagement on issues ranging from violence to criminality, racial or sexual discrimination, educational justice, healthcare and the environment. When youth are included as research partners in the PAR process, it is referred to as Youth Participatory Action Research, or YPAR.
Community-based participatory research and service-learning are more recent attempts to reconnect academic interests with education and community development. The Global Alliance on Community-Engaged Research is a promising effort to "use knowledge and community-university partnership strategies for democratic social and environmental change and justice, particularly among the most vulnerable people and places of the world." It calls for the active involvement of community members and researchers in all phases of the action inquiry process, from defining relevant research questions and topics to designing and implementing the investigation, sharing the available resources, acknowledging community-based expertise, and making the results accessible and understandable to community members and the broader public. Service learning or education is a closely related endeavour designed to encourage students to actively apply knowledge and skills to local situations, in response to local needs and with the active involvement of community members. Many online or printed guides now show how students and faculty can engage in community-based participatory research and meet academic standards at the same time.
Collaborative research in education is community-based research where pre-university teachers are the community and scientific knowledge is built on top of teachers' own interpretation of their experience and reality, with or without immediate engagement in transformative action.
Public health
PAR has made important inroads in the field of public health, in areas such as disaster relief, community-based rehabilitation, public health genomics, accident prevention, hospital care and drug prevention.
Because of its link to radical democratic struggles of the Civil Rights Movement and other social movements in South Asia and Latin America (see above), some established elites see PAR as a threat to their authority. An international alliance of university-based participatory researchers, the ICPHR, omits the word "Action", preferring the less controversial term "participatory research".
Photovoice is one of the strategies used in PAR and is especially useful in the public health domain. Keeping in mind the purpose of PAR, which is to benefit communities, Photovoice allows this to happen through the medium of photography. Photovoice considers helping community issues and problems reach policy makers as its primary goal.
Occupational health and safety
Participatory programs within the workplace involve employees at all levels of a workplace organization, from management to front-line staff, in the design and implementation of health and safety interventions. Some research has shown that interventions are most successful when front-line employees have a fundamental role in designing them. Success through participatory programs may be due to a number of factors, including better identification of potential barriers and facilitators, greater willingness to accept interventions than when they are imposed strictly from upper management, and enhanced buy-in to intervention design, resulting in greater sustainability through promotion and acceptance. When designing an intervention, employees are able to incorporate lifestyle and other behavioral influences into solution activities that go beyond the immediate workplace.
Feminism and gender
Feminist research and women's development theory also contributed to rethinking the role of scholarship in challenging existing regimes of power, using qualitative and interpretive methods that emphasize subjectivity and self-inquiry rather than the quantitative approach of mainstream science. Like most research in the 1970s and 1980s, PAR remained androcentric. In 1987, Patricia Maguire critiqued this male-centered participatory research, arguing that "rarely have feminist and participatory action researchers acknowledged each other with mutually important contributions to the journey." Given that PAR aims to give equitable opportunity for diverse and marginalized voices to be heard, engaging gender minorities is an integral pillar of PAR's tenets. In addition to gender minorities, PAR must consider points of intersecting oppressions individuals may experience. After Maguire published Traveling Companions: Feminism, Teaching, And Action Research, PAR began to extend toward not only feminism, but also intersectionality through Black Feminist Thought and Critical Race Theory (CRT). Today, applying an intersectional feminist lens to PAR is crucial to recognize the social categories, such as race, class, ability, gender, and sexuality, that construct individuals' power relations and lived experiences. PAR seeks to recognize the deeply complex condition of human living. Therefore, framing PAR's qualitative study methodologies through an intersectional feminist lens mobilizes all experiences – regardless of various social categories and oppressions – as legitimate sources of knowledge.
Neurodiversity
Neurodiversity has contributed to scholarship by including neurodivergent populations within research, asking neurodivergent adults to get involved in discussing the various stages of the scientific methodology, which allows them to provide a better understanding of the research priorities within these communities. This research can challenge ableist structures within academia, where neurodivergence is often assumed to be inferior to neurotypicality; promote neurodivergent individuals as active collaborators, thus involving them in knowledge generation; and help ensure that theories of human cognition account for strengths and weaknesses, together with lived experiences.
Civic engagement and ICT
Novel approaches to PAR in the public sphere help scale up the engaged inquiry process beyond small group dynamics. Touraine and others thus propose a 'sociology of intervention' involving the creation of artificial spaces for movement activists and non-activists to debate issues of public concern. Citizen science is another recent move to expand the scope of PAR, to include broader 'communities of interest' and citizens committed to enhancing knowledge in particular fields. In this approach to collaborative inquiry, research is actively assisted by volunteers who form an active public or network of contributing individuals. Efforts to promote public participation in the works of science owe a lot to the revolution in information and communications technology (ICT). Web 2.0 applications support virtual community interactivity and the development of user-driven content and social media, without restricted access or controlled implementation. They extend principles of open-source governance to democratic institutions, allowing citizens to actively engage in wiki-based processes of virtual journalism, public debate and policy development. Although few and far between, experiments in open politics can thus make use of ICT and the mechanics of e-democracy to facilitate communications on a large scale, towards achieving decisions that best serve the public interest.
In the same spirit, discursive or deliberative democracy calls for public discussion, transparency and pluralism in political decision-making, lawmaking and institutional life. Fact-finding and the outputs of science are made accessible to participants and may be subject to extensive media coverage, scientific peer review, deliberative opinion polling and adversarial presentations of competing arguments and predictive claims. The citizens' jury methodology is interesting in this regard. It involves people selected at random from a local or national population who are provided opportunities to question 'witnesses' and collectively form a 'judgment' on the issue at hand.
ICTs, open politics and deliberative democracy usher in new strategies to engage governments, scientists, civil society organizations and interested citizens in policy-related discussions of science and technology. These trends represent an invitation to explore novel ways of doing PAR on a broader scale.
Criminal justice
Compared to other fields, PAR frameworks in criminal justice are relatively new. But growing support for community-based alternatives to the criminal justice system has sparked interest in PAR in criminological settings. Participatory action research in criminal justice includes system-impacted people themselves in research and advocacy conducted by academics or other experts. Because system-impacted people hold experiential knowledge of the conditions and practices of the justice system, they may be able to more effectively expose and articulate problems with that system. Many people who have been incarcerated are also able to share with researchers facets of the justice system that are invisible to the outside world or are difficult to understand without first-hand experience. Proponents of PAR in criminal justice believe that including those most impacted by the justice system in research is crucial because the presence of these individuals precludes the possibility of misunderstanding or compounding harms of the justice system in that research.
Participants in PAR may also hold knowledge or education in more traditional academic fields, like law, policy or government that can inform criminological research. But PAR in criminology bridges the epistemological gap between knowledge gained through academia and through lived experience, connecting research to justice reform.
Ethics
Given the often delicate power balances between researchers and participants in PAR, there have been calls for a code of ethics to guide the relationship between researchers and participants in a variety of PAR fields. Norms in research ethics involving humans include respect for the autonomy of individuals and groups to deliberate about a decision and act on it. This principle is usually expressed through the free, informed and ongoing consent of those participating in research (or those representing them in the case of persons lacking the capacity to decide). Another mainstream principle is the welfare of participants who should not be exposed to any unfavourable balance of benefits and risks with participation in research aimed at the advancement of knowledge, especially those that are serious and probable. Since privacy is a factor that contributes to people's welfare, confidentiality obtained through the collection and use of data that are anonymous (e.g. survey data) or anonymized tends to be the norm. Finally, the principle of justice—equal treatment and concern for fairness and equity—calls for measures of appropriate inclusion and mechanisms to address conflicts of interests.
While the choice of appropriate norms of ethical conduct is rarely an either/or question, PAR implies a different understanding of what consent, welfare and justice entail. For one thing the people involved are not mere 'subjects' or 'participants'. They act instead as key partners in an inquiry process that may take place outside the walls of academic or corporate science. As Canada's Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans suggests, PAR requires that the terms and conditions of the collaborative process be set out in a research agreement or protocol based on mutual understanding of the project goals and objectives between the parties, subject to preliminary discussions and negotiations. Unlike individual consent forms, these terms of reference (ToR) may acknowledge collective rights, interests and mutual obligations. While they are legalistic in their genesis, they are usually based on interpersonal relationships and a history of trust rather than the language of legal forms and contracts.
Another implication of PAR ethics is that partners must protect themselves and each other against potential risks, by mitigating the negative consequences of their collaborative work and pursuing the welfare of all parties concerned. This does not preclude battles against dominant interests. Given their commitment to social justice and transformative action, some PAR projects may be critical of existing social structures and struggle against the policies and interests of individuals, groups and institutions accountable for their actions, creating circumstances of danger. Public-facing action can also be dangerous for some marginalized populations, such as survivors of domestic violence.
In some fields of PAR it is believed that an ethics of participation should go beyond avoidance of harm. For participatory settings that engage with marginalized or oppressed populations, including criminal justice, PAR can be mobilized to actively support individuals. An "ethic of empowerment" encourages researchers to consider participants as standing on equal epistemological footing, with equal say in research decisions. Within this ethical framework, PAR does not just effect change in the world but also directly improves the lives of the research participants. An "ethic of empowerment" may require a systemic shift in the way researchers view and talk about oppressed communities — often as degenerate or helpless. If not practiced in a way that actively considers the knowledge of participants, PAR can become manipulative. Participatory settings in which participants are tokenized or serve only as sources of information without joint power in decision-making processes can exploit rather than empower.
By definition, PAR is always a step into the unknown, raising new questions and creating new risks over time. Given its emergent properties and responsiveness to social context and needs, PAR cannot limit discussions and decisions about ethics to the design and proposal phase. Norms of ethical conduct and their implications may have to be revisited as the project unfolds. This has implications, both in resources and practice, for the ability to subject the research to true ethical oversight in the way that traditional research has come to be regulated.
Challenges
PAR offers a long history of experimentation with evidence-based and people-based inquiry, a groundbreaking alternative to mainstream positive science. As with positivism, the approach creates many challenges as well as debates on what counts as participation, action and research. Differences in theoretical commitments (Lewinian, Habermasian, Freirean, psychoanalytic, feminist, etc.) and methodological inclinations (quantitative, qualitative, mixed) are numerous and profound. This is not necessarily a problem, given the pluralistic value system built into PAR. Ways to better answer questions pertaining to PAR's relationship with science and social history are nonetheless key to its future.
One critical question concerns the problem-solving orientation of engaged inquiry—the rational means-ends focus of most PAR experiments as they affect organizational performance or material livelihoods, for instance. In the clinical perspective of French psychosociology, a pragmatic orientation to inquiry neglects forms of understanding and consciousness that are not strictly instrumental and rational. PAR must pay equal attention to the interconnections of self-awareness, the unconscious and life in society.
Another issue, more widely debated, is scale—how to address broad-based systems of power and issues of complexity, especially those of another development on a global scale. How can PAR develop a macro-orientation to democratic dialogue and meet challenges of the 21st Century, by joining movements to support justice and solidarity on both local and global scales? By keeping things closely tied to local group dynamics, PAR runs the risk of substituting small-scale participation for genuine democracy and fails to develop strategies for social transformation on all levels. Given its political implications, community-based action research and its consensus ethos have been known to fall prey to powerful stakeholders and serve as Trojan horses to bring global and environmental restructuring processes directly to local settings, bypassing legitimate institutional buffers and obscuring diverging interests and the exercise of power during the process. Cooptation can lead to highly manipulated outcomes. Against this criticism, others argue that, given the right circumstances, it is possible to build institutional arrangements for joint learning and action across regional and national borders that can have impacts on citizen action, national policies and global discourses.
The role of science and scholarship in PAR is another source of difference. In the Lewinian tradition, "there is nothing so practical as a good theory". Accordingly, the scientific logic of developing theory, forming and testing hypotheses, gathering measurable data and interpreting the results plays a central role. While more clinically oriented, psychosociology in France also emphasizes the distinctive role of formal research and academic work, beyond problem solving in specific contexts. Many PAR practitioners critical of mainstream science and its overemphasis on quantitative data also point out that research based on qualitative methods may be theoretically-informed and rigorous in its own way. In other traditions, however, PAR keeps great distance from both academic and corporate science. Given their emphasis on pluralism and living knowledge, many practitioners of grassroots inquiry are critical of grand theory and advanced methods for collaborative inquiry, to the point of abandoning the word "research" altogether, as in participatory action learning. Others equate research with any involvement in reflexive practice aimed at assessing problems and evaluating project or program results against group expectations. As a result, inquiry methods tend to be soft and theory remains absent or underdeveloped. Practical and theoretical efforts to overcome this ambivalence towards scholarly activity are nonetheless emerging.
See also
Community organizing
Cooperative inquiry
Participatory design
Participatory monitoring
References
Further reading
Action Research, Sage
Organizational Development Series, Addison-Wesley Business & Economics
Educational Action Research
International Journal of Action Research, Rainer Hampp Verlag
Journal of Applied Behavioral Science, Sage
Journal of Organizational Change Management
Management Learning, Sage
Participatory Learning and Action, IIED
Progress in Community Health Partnerships: Research, Education, and Action, Johns Hopkins University Press
Systems Practice and Action Research, Springer, ISSN 1573-9295 (online)
Research methods
Citizen science models
Dehumanization

Dehumanization is the denial of full humanity in others along with the cruelty and suffering that accompany it. A practical definition refers to it as the viewing and the treatment of other people as though they lack the mental capacities that are commonly attributed to human beings. In this definition, every act or thought that regards a person as "less than" human is dehumanization.
Dehumanization is one form of incitement to genocide. It has also been used to justify war, judicial and extrajudicial killing, slavery, the confiscation of property, denial of suffrage and other rights, and to attack enemies or political opponents.
Conceptualizations
Behaviorally, dehumanization describes a disposition towards others that debases the others' individuality by either portraying it as an "individual" species or by portraying it as an "individual" object (e.g., someone who acts inhumanely towards humans). As a process, dehumanization may be understood as the opposite of personification, a figure of speech in which inanimate objects or abstractions are endowed with human qualities; dehumanization then is the disendowment of these same qualities or a reduction to abstraction.
In almost all contexts, dehumanization is used pejoratively along with a disruption of social norms, with the former applying to the actor(s) of behavioral dehumanization and the latter applying to the action(s) or processes of dehumanization. For instance, there is dehumanization for those who are perceived as lacking in culture or civility, which are concepts that are believed to distinguish humans from animals. Social norms define humane behavior and reflexively define what is outside of humane behavior or inhumane. Dehumanization differs from inhumane behaviors or processes in its breadth to propose competing social norms. It is an action of dehumanization as the old norms are depreciated to the competing new norms, which then redefine the action of dehumanization. If the new norms lose acceptance, then the action remains one of dehumanization. The definition of dehumanization remains in a reflexive state of a type-token ambiguity relative to both individual and societal scales.
In biological terms, dehumanization can be described as an introduced species marginalizing the human species, or an introduced person/process that debases other people inhumanely.
In political science and jurisprudence, the act of dehumanization is the inferential alienation of human rights or denaturalization of natural rights, a definition contingent upon presiding international law rather than social norms limited by human geography. In this context, a specialty within species does not need to constitute global citizenship or its inalienable rights; the human genome inherits both.
It is theorized that dehumanization takes on two forms: animalistic dehumanization, which is employed on a mostly intergroup basis; and mechanistic dehumanization, which is employed on a mostly interpersonal basis. Dehumanization can occur discursively (e.g., idiomatic language that likens individual human beings to non-human animals, verbal abuse, erasing one's voice from discourse), symbolically (e.g., imagery), or physically (e.g., chattel slavery, physical abuse, refusing eye contact). Dehumanization often ignores the target's individuality (i.e., the creative and exciting aspects of their personality) and can hinder one from feeling empathy or correctly understanding a stigmatized group.
Dehumanization may be carried out by a social institution (such as a state, school, or family), interpersonally, or even within oneself. Dehumanization can be unintentional, especially upon individuals, as with some types of de facto racism. State-organized dehumanization has historically been directed against perceived political, racial, ethnic, national, or religious minority groups. Other minoritized and marginalized individuals and groups (based on sexual orientation, gender, disability, class, or some other organizing principle) are also susceptible to various forms of dehumanization. The concept of dehumanization has received empirical attention in the psychological literature. It is conceptually related to infrahumanization, delegitimization, moral exclusion, and objectification. Dehumanization occurs across several domains; it is facilitated by status, power, and social connection; and results in behaviors like exclusion, violence, and support for violence against others.
"Dehumanisation is viewed as a central component to intergroup violence because it is frequently the most important precursor to moral exclusion, the process by which stigmatized groups are placed outside the boundary in which moral values, rules, and considerations of fairness apply."
David Livingstone Smith, director and founder of The Human Nature Project at the University of New England, argues that historically, human beings have been dehumanizing one another for thousands of years. In his work "The Paradoxes of Dehumanization", Smith proposes that dehumanization simultaneously regards people as human and subhuman. This paradox comes to light, as Smith identifies, because the reason people are dehumanized is so their human attributes can be taken advantage of.
Humanness
In Herbert Kelman's work on dehumanization, humanness has two features: "identity" (i.e., a perception of the person "as an individual, independent and distinguishable from others, capable of making choices") and "community" (i.e., a perception of the person as "part of an interconnected network of individuals who care for each other"). When a target's agency and embeddedness in a community are denied, they no longer elicit compassion or other moral responses and may suffer violence.
Objectification
Psychologist Barbara Fredrickson and Tomi-Ann Roberts argued that the sexual objectification of women extends beyond pornography (which emphasizes women's bodies over their uniquely human mental and emotional characteristics) to society generally. There is a normative emphasis on female appearance that causes women to take a third-person perspective on their bodies. The psychological distance women may feel from their bodies might cause them to dehumanize themselves. Some research has indicated that women and men exhibit a "sexual body part recognition bias", in which women's sexual body parts are better recognized when presented in isolation than in their entire bodies. In contrast, men's sexual body parts are better recognized in the context of their entire bodies than in isolation. Men who dehumanize women as either animals or objects are more liable to rape and sexually harass women and display more negative attitudes toward female rape victims.
Philosopher Martha Nussbaum identified seven components of sexual objectification: instrumentality, denial of autonomy, inertness, fungibility, violability, ownership, and denial of subjectivity.
In this context, instrumentality refers to when the objectified person is used as an instrument for the objectifier's benefit. Denial of autonomy occurs when the objectifier underestimates the objectified and denies their capabilities. In the case of inertness, the objectified is treated as if they are lazy and indolent. Fungibility brands the objectified as easily replaceable. Violability is when the objectifier does not respect the objectified person's personal space or boundaries. Ownership is when the objectified is seen as another person's property. Lastly, denial of subjectivity is a lack of sympathy for the objectified, or the dismissal of the notion that the objectified has feelings. These seven components lead the objectifier to view, and therefore treat, the objectified disrespectfully.
History
Native Americans
Native Americans were dehumanized as "merciless Indian savages" in the United States Declaration of Independence. Following the Wounded Knee massacre in December 1890, author L. Frank Baum wrote:

The Pioneer has before declared that our only safety depends upon the total extermination [sic] of the Indians. Having wronged them for centuries we had better, in order to protect our civilization, follow it up by one more wrong and wipe these untamed and untamable creatures from the face of the earth. In this lies safety for our settlers and the soldiers who are under incompetent commands. Otherwise, we may expect future years to be as full of trouble with the redskins as those have been in the past.

In Martin Luther King Jr.'s book on civil rights, Why We Can't Wait, he wrote:
Our nation was born in genocide when it embraced the doctrine that the original American, the Indian, was an inferior race. Even before there were large numbers of Negroes on our shores, the scar of racial hatred had already disfigured colonial society. From the sixteenth century forward, blood flowed in battles over racial supremacy. We are perhaps the only nation which tried as a matter of national policy to wipe out its indigenous population. Moreover, we elevated that tragic experience into a noble crusade. Indeed, even today we have not permitted ourselves to reject or to feel remorse for this shameful episode. Our literature, our films, our drama, our folklore all exalt it.
King was an active supporter of the Native American rights movement, in which he saw parallels with his own leadership of the civil rights movement. Both movements aimed to overturn the dehumanizing attitudes held by members of the public at large against them.
Causes and facilitating factors
Several lines of psychological research relate to the concept of dehumanization. Infrahumanization suggests that individuals think of and treat outgroup members as "less human" and more like animals; while Austrian ethnologist Irenäus Eibl-Eibesfeldt uses the term pseudo-speciation, a term that he borrowed from the psychoanalyst Erik Erikson, to imply that the dehumanized person or persons are regarded as not members of the human species. Specifically, individuals associate secondary emotions (which are seen as uniquely human) more with the ingroup than with the outgroup. Primary emotions (those experienced by all sentient beings, whether human or other animals) are found to be more associated with the outgroup. Dehumanization is intrinsically connected with violence. Often, one cannot do serious injury to another without first dehumanizing him or her in one's mind (as a form of rationalization). Military training is, among other things, systematic desensitization and dehumanization of the enemy, and military personnel may find it psychologically necessary to refer to the enemy as an animal or other non-human beings. Lt. Col. Dave Grossman has shown that without such desensitization it would be difficult, if not impossible, for one human to kill another human, even in combat or under threat to their own lives.
According to Daniel Bar-Tal, delegitimization is the "categorization of groups into extreme negative social categories which are excluded from human groups that are considered as acting within the limits of acceptable norms and values".
Moral exclusion occurs when outgroups are subject to a different set of moral values, rules, and fairness than are used in social relations with ingroup members. When individuals dehumanize others, they no longer experience distress when they treat them poorly. Moral exclusion is used to explain extreme behaviors like genocide, harsh immigration policies, and eugenics, but it can also happen on a more regular, everyday discriminatory level. In laboratory studies, people who are portrayed as lacking human qualities are treated in a particularly harsh and violent manner.
Dehumanized perception occurs when a subject experiences low frequencies of activation within their social cognition neural network, which includes areas such as the superior temporal sulcus (STS) and the medial prefrontal cortex (mPFC). A 2001 study by psychologists Chris and Uta Frith suggests that social interaction critically depends on this neural network, and that subjects tend to dehumanize those they perceive as disgust-inducing, leading to social disengagement. Tasks involving social cognition typically activate this neural network, whereas disgust-inducing targets elicit reduced activation and patterns of dehumanization. "Besides manipulations of target persons, manipulations of social goals validate this prediction: Inferring preference, a mental-state inference, significantly increases mPFC and STS activity to these otherwise dehumanized targets." A 2007 study by Harris, McClure, van den Bos, Cohen, and Fiske suggests that a person's choice to dehumanize another person is due to decreased neural activity towards the projected target. This decreased neural activity is identified as low medial prefrontal cortex activation, which is associated with perceiving social information.
While social distance from the outgroup target is a necessary condition for dehumanization, some research suggests that this alone is insufficient. Psychological research has identified high status, power, and social connection as additional factors. Members of high-status groups more often associate humanity with the ingroup than the outgroup, while members of low-status groups exhibit no differences in associations with humanity. Thus, having a high status makes one more likely to dehumanize others. Low-status groups are more associated with human nature traits (e.g., warmth, emotionalism) than uniquely human characteristics, implying that they are closer to animals than humans because these traits are typical of humans but can be seen in other species. In addition, another line of work found that individuals in a position of power were more likely to objectify their subordinates, treating them as a means to one's end rather than focusing on their essentially human qualities. Finally, social connection—thinking about a close other or being in the actual presence of a close other—enables dehumanization by reducing the attribution of human mental states, increasing support for treating targets like animals, and increasing willingness to endorse harsh interrogation tactics. This is counterintuitive because social connection has documented personal health and well-being benefits but appears to impair intergroup relations.
Neuroimaging studies have discovered that the medial prefrontal cortex—a brain region distinctively involved in attributing mental states to others—shows diminished activation to extremely dehumanized targets (i.e., those rated, according to the stereotype content model, as low-warmth and low-competence, such as drug addicts or homeless people).
Race and ethnicity
Racist dehumanization entails that groups and individuals are understood as less than fully human by virtue of their race.
Dehumanization often occurs as a result of intergroup conflict. Ethnic and racial others are often represented as animals in popular culture and scholarship. There is evidence that this representation persists in the American context with African Americans implicitly associated with apes. To the extent that an individual has this dehumanizing implicit association, they are more likely to support violence against African Americans (e.g., jury decisions to execute defendants). Historically, dehumanization is frequently connected to genocidal conflicts in that ideologies before and during the conflict depict victims as subhuman (e.g., rodents). Immigrants may also be dehumanized in this manner.
In 1901, the six Australian colonies assented to federation, creating the modern nation state of Australia and its government. Section 51 (xxvi) excluded Aboriginals from the groups protected by special laws, and section 127 excluded Aboriginals from population counts. The Commonwealth Franchise Act 1902 categorically denied Aboriginals the right to vote. Indigenous Australians were not allowed the social security benefits (e.g., aged pensions and maternity allowances) which were provided to others. Aboriginals in rural areas were discriminated against and controlled as to where and how they could marry, work, live, and their movements.
In the U.S., African Americans were dehumanized by being classified as non-human primates. A California police officer who was also involved in the Rodney King beating described a dispute between an American Black couple as "something right out of Gorillas in the Mist". Franz Boas and Charles Darwin hypothesized an evolutionary hierarchy among primates, in which monkeys and apes were the least evolved, followed by "savage and deformed" anthropoids, a category applied to people of African ancestry, with Caucasians ranked as the most developed.
Language
Language has been used as an essential tool in the process of dehumanizing others. Examples of dehumanizing language when referring to a person or group of people may include animal, cockroach, rat, vermin, monster, ape, snake, infestation, parasite, alien, savage, and subhuman. Other examples can include racist, sexist, and other derogatory forms of language. The use of dehumanizing language can influence others to view a targeted group as less human or less deserving of humane treatment.
In Unit 731, an Imperial Japanese biological and chemical warfare research facility, brutal experiments were conducted on humans, whom the researchers referred to as 'maruta' (丸太), meaning logs. Yoshio Shinozuka, a Japanese army medic who performed several vivisections at the facility, said, "We called the victims 'logs.' We didn't want to think of them as people. We didn't want to admit that we were taking lives. So we convinced ourselves that what we were doing was like cutting down a tree."
Words such as migrant, immigrant, and expatriate are assigned to foreigners based on their social status and wealth, rather than ability, achievements, or political alignment. Expatriate is a word used to describe the privileged, often light-skinned people newly residing in an area, and it carries connotations that suggest ability, wealth, and trust. Meanwhile, the word immigrant is used to describe people coming to a new location to reside and carries a much less desirable connotation.
The word "immigrant" is sometimes paired with "illegal", which harbors a profoundly derogatory connotation. Misuse of these terms—they are often used inaccurately—to describe the other, can alter the perception of a group as a whole in a negative way. Ryan Eller, the executive director of the immigrant advocacy group Define American, expressed the problem this way:
A series of language examinations found a direct relation between homophobic epithets and social cognitive distancing towards a group of homosexuals, a form of dehumanization. These epithets (e.g., faggot) were thought to function as dehumanizing labels because they tended to act as markers of deviance. One pair of studies found that subjects were more likely to associate malignant language with homosexuals, and that such language associations increased the physical distance between the subject and the homosexual target. This indicated that homophobic epithets could encourage dehumanization and cognitive and physical distancing in ways that other forms of malignant language do not. Another study involved a computational linguistic analysis of dehumanizing language regarding LGBTQ individuals and groups in the New York Times from 1986 to 2015. The study used previous psychological research on dehumanization to identify four language categories: (1) negative evaluations of a target group, (2) denial of agency, (3) moral disgust, and (4) likening members of the target group to non-human entities (e.g., machines, animals, vermin). The study revealed that LGBTQ people overall have been increasingly humanized over time; however, they were humanized less frequently than the New York Times' in-group identifier, American.
Aliza Luft notes that the role that dehumanizing language and propaganda play in violence and genocide is far less significant than other factors such as obedience to authority and peer pressure.
Property takeover
Property scholars define dehumanization as "the failure to recognize an individual's or group's humanity." Dehumanization often occurs alongside property confiscation. When a property takeover is coupled with dehumanization, the result is a dignity taking. There are several examples of dignity takings involving dehumanization.
From its founding, the United States repeatedly engaged in dignity takings from Native American populations, taking indigenous land in an "undeniably horrific, violent, and tragic record" of genocide and ethnocide. As recently as 2013, the degradation of a mountain sacred to the Hopi people—by spraying its peak with artificial snow made from wastewater—constituted another dignity taking by the U.S. Forest Service.
The 1921 Tulsa race massacre also constituted a dignity taking involving dehumanization. White rioters dehumanized African Americans by attacking, looting, and destroying homes and businesses in Greenwood, a predominantly Black neighborhood known as "Black Wall Street".
During the Holocaust, mass genocide—a severe form of dehumanization—accompanied the destruction and taking of Jewish property. This constituted a dignity taking.
Jewish settlers in the West Bank have been criticized for dehumanizing Palestinians and for land grabs tied to illegal settlements. These illegal settlement activities involve systemic settler violence against Palestinians, military orders, and state-sanctioned support. These actions force Palestinians to gradually give up their land and farming activities, gradually choking their sources of dignified income. Israeli soldiers sometimes actively participate in violence against civilians or look on from the sidelines.
Undocumented workers in the United States have also been subject to dehumanizing dignity takings when employers treat them as machines instead of people to justify dangerous working conditions. When harsh conditions lead to bodily injury or death, the property destroyed is the physical body.
Media-driven dehumanization
The propaganda model of Edward S. Herman and Noam Chomsky argues that corporate media are able to carry out large-scale, successful dehumanization campaigns when they promote the goals (profit-making) that the corporations are contractually obliged to maximize. State media are also capable of carrying out dehumanization campaigns, whether in democracies or dictatorships, which are pervasive enough that the population cannot avoid the dehumanizing memes.
War propaganda
National leaders use dehumanizing propaganda to sway public opinion in favor of the military elite's agenda or cause and to repel criticism and proper oversight. The George W. Bush administration used dehumanizing rhetoric describing Arabs and Muslims collectively as backwards, violent fanatics who "hate us for our freedom" to justify its invasions of Afghanistan and Iraq and covert CIA operations in the Middle East and Africa. Media propaganda portrayed Arabs as a "monolithic evil" in the perception of the unwitting American public, employing news, magazine stories, television, and popular culture to portray all Muslims as Arab and all Arabs as violent terrorists who must be feared, fought, and destroyed. Racism was also deployed by portraying all Arabs as dark-skinned and thus racially inferior and untrustworthy.
Non-state actors
Non-state actors—terrorists in particular—have also resorted to dehumanization to further their cause. The 1960s terrorist group Weather Underground advocated violence against any authority figure and used the "police are pigs" meme to convince members that they were not harming human beings but merely killing wild animals. Likewise, rhetorical statements such as "terrorists are just scum" are acts of dehumanization.
In science, medicine, and technology
Relatively recent history has seen the relationship between dehumanization and science result in unethical scientific research. The Tuskegee syphilis experiment, Unit 731, and Nazi human experimentation on Jewish people are three such examples. In the first, African Americans with syphilis were recruited to participate in a study about the course of the disease. Even when a treatment and a cure were eventually developed, they were withheld from the African-American participants so that researchers could continue their study. Similarly, Nazi scientists during the Holocaust conducted horrific experiments on Jewish people, and Shiro Ishii's Unit 731 did the same to Chinese, Russian, Mongolian, American, and other captives. Both were justified in the name of research and progress, indicative of the far-reaching effects of a culture of dehumanization. When this research came to light, efforts were made to protect future research participants, and institutional review boards now exist to safeguard individuals from being exploited by scientists.
In a medical context, some dehumanizing practices have become more acceptable. While the dissection of human cadavers was seen as dehumanizing in the Dark Ages (see history of anatomy), the value of dissections as a training aid is such that they are now more widely accepted. Dehumanization has been associated with modern medicine generally and has explicitly been suggested as a coping mechanism for doctors who work with patients at the end of life. Researchers have identified six potential causes of dehumanization in medicine: deindividuating practices, impaired patient agency, dissimilarity (causes which do not facilitate the delivery of medical treatment), mechanization, empathy reduction, and moral disengagement (which could be argued to facilitate the delivery of medical treatment).
In some US states, legislation requires that a woman view ultrasound images of her fetus before having an abortion. Critics of the law argue that merely seeing an image of the fetus humanizes it and biases women against abortion. Similarly, a recent study showed that subtle humanization of medical patients appears to improve care for these patients. Radiologists evaluating X-rays reported more details to patients and expressed more empathy when a photo of the patient's face accompanied the X-rays. It appears that the inclusion of the photos counteracts the dehumanization of the medical process.
Dehumanization has applications outside traditional social contexts. Anthropomorphism (i.e., perceiving mental and physical capacities that reflect humans in nonhuman entities) is the inverse of dehumanization. Waytz, Epley, and Cacioppo suggest that the inverse of the factors that facilitate dehumanization (e.g., high status, power, and social connection) should promote anthropomorphism. That is, a low status, socially disconnected person without power should be more likely to attribute human qualities to pets or inanimate objects than a high-status, high-power, socially connected person.
Researchers have found that engaging in violent video game play diminishes perceptions of both one's own humanity and the humanity of the players who are targets of the game violence. While the players are dehumanized, the video game characters are often anthropomorphized.
Dehumanization has occurred historically under the pretense of "progress in the name of science". During the 1904 Louisiana Purchase Exposition, human zoos exhibited several natives from independent tribes worldwide, most notably a young Congolese man, Ota Benga. Benga's imprisonment was put on display as a public service showcasing "a degraded and degenerate race". During this period, religion was still the driving force behind many political and scientific activities, and eugenics was widely supported among the most notable U.S. scientific communities, political figures, and industrial elites. After Benga was relocated to New York in 1906, public outcry led to the permanent ban and closure of human zoos in the United States.
In philosophy
Danish philosopher Søren Kierkegaard articulated his opposition to dehumanization in his teachings and interpretations of Christian theology. In his book Works of Love, he wrote that his understanding was that "to love one's neighbor means equality… your neighbor is every man… he is your neighbor on the basis of equality with you before God; but this equality absolutely every man has, and he has it absolutely."
In art
Spanish romantic painter Francisco Goya often depicted the atrocities of war and brutal violence, conveying the process of dehumanization. In the romantic period of painting, martyrdom art was most often a means of deifying the oppressed and tormented, and it was common to depict evil personalities performing these acts; Goya, however, broke convention by dehumanizing the martyr figures themselves: "...one would not know whom the painting depicts, so determinedly has Goya reduced his subjects from martyrs to meat".
Eclectic approach
Eclectic approach is a method of language education that combines various approaches and methodologies to teach language depending on the aims of the lesson and the abilities of the learners. Different teaching methods are borrowed and adapted to suit the requirements of the learners. It breaks the monotony of the class.
In addition, it is a conceptual approach that does not merely include one paradigm or a single set of assumptions. Instead, eclecticism adheres to or is constituted from several theories, styles, and ideas in order to gain a thorough insight into the subject, and draws upon different theories in different cases. 'Eclecticism' is common in many fields of study, such as psychology, martial arts, philosophy, teaching, religion, and drama.
Approaches and methods
There are varied approaches and methods used for language teaching. In the eclectic approach, the teacher can choose from these different methods and approaches:
Grammar-translation Method: It is a method of teaching languages by which students learn grammatical rules and then apply those rules by translating between the target language and the native language.
Direct Method: In this method the teacher refrains from using the students' native language. The target language is directly used for teaching all four skills—listening, speaking, reading and writing.
Structural-situational Approach: In this approach, the teacher teaches language through a careful selection, gradation and presentation of vocabulary items and structures through situation-based activities.
Audio-lingual/Audio-visual Method: In this style of teaching, students are taught through a system of reinforcement. Here new words and grammar are directly taught without using the students' native language. However, unlike the direct method, the audio-lingual method does not focus on vocabulary. Instead, the teacher focuses on grammar through drill and practice.
Bilingual Method: The word 'bilingual' means the ability to speak two languages fluently. In bilingual method, the teacher teaches the language by giving mother tongue equivalents of the words or sentences. This method was developed by C.J. Dodson.
Communicative Language Teaching: This approach lays emphasis on the oral method of teaching. It aims to develop communicative competence in students.
Total-Physical Response: It is based on the theory that memory is enhanced through association with physical response.
The Silent Way: In this method the teacher uses a combination of silence and gestures to focus students' attention. It was developed by Caleb Gattegno.
Advantages
The teacher has more flexibility.
No aspect of language skill is ignored.
There is variety in the classroom.
Classroom atmosphere is dynamic.
Such programs not only develop teachers' skills with an improved recognition of and respect for cross-cultural and multilingual classroom settings, but also encourage students' pride in their heritage, language, communication preferences, and self-identity.
One method can compensate for the weaknesses of another.
Multiple intelligences in the classroom are better developed.
It accommodates students' different learning styles.
External links
Eclectic approach to teaching language, by Masum Billah
Constructive developmental framework
The constructive developmental framework (CDF) is a theoretical framework for epistemological and psychological assessment of adults. The framework is based on empirical developmental research showing that an individual's perception of reality is an actively constructed "world of their own", unique to them and which they continue to develop over their lifespan.
CDF was developed by Otto Laske based on the work of Robert Kegan and Michael Basseches, Laske's teachers at Harvard University. The CDF methodology involves three separate instruments that respectively measure a person's social–emotional stage, cognitive level of development, and psychological profile. It provides three complementary perspectives on individual clients as well as teams. These constructs are designed to probe how an individual and/or group constructs the real world conceptually, and how closely an individual's present thinking approaches the complexity of the real world.
Overview
The methodology of CDF is grounded in empirical research on positive adult development which began under Lawrence Kohlberg in the 1960s and was continued by Robert Kegan (1982, 1994), Michael Basseches (1984), and Otto Laske (1998, 2006, 2009, 2015, 2018). Laske (1998, 2009) introduced concepts from Georg Wilhelm Friedrich Hegel's philosophy and the Frankfurt School into the framework, making a strict differentiation between social–emotional and cognitive development.
Kegan (1982) described five stages of development, of which the latter four are progressively attained only in adulthood. Basseches (1984) showed that adults potentially transcend formal logical thinking by way of dialectical thinking, in four phases, measurable by a fluidity index. Both Kegan's and Basseches' findings were updated and refined by Laske in 2005 and 2008 respectively. In 2008 and 2015, Laske proposed that dialectical thought forms are an instantiation of Roy Bhaskar's four moments of dialectic (MELD; Bhaskar 1993), and that these ontological moments form a sequence M→E→L→D that underlies individual cognitive development (Laske 2015), providing a basis for a dialectical cognitive science as well as a cognitively oriented management science. Based on the concept of 'dialogical dialectic', Laske stressed the need for a dialogical, in contrast to a monological, social science. The CDF methodology involves three separate instruments that respectively measure a person's social–emotional stage ('what should I do and for whom?'), cognitive level of development ('what can I know and what therefore are my options?'), and psychological profile ('how am I doing right now?'). The first two tools (ED, CD) provide an epistemological, the third (NP) a psychological, perspective on a person or team. See the list of references below.
In CDF, social-emotional, cognitive, and psychological assessment are arrived at separately, as follows:
A person's social-emotional profile addresses the question "What should I do and for whom?"; it is evaluated based on a semi-structured 1-hour interview in terms of "stages" (created by Kegan-Lahey in 1988, refined by Laske 2005).
A person's cognitive profile addresses the question "What can I know and what consequently are my options?"; it is evaluated based on a semi-structured 1-hour interview in terms of "dialectical thought forms" and the fluidity of their use during the interview or in a written text (Basseches 1984; refined by Laske 2008).
A person's psychological profile addresses the question "How am I presently doing?"; it is evaluated based on Morris Aderman's Need-Press Questionnaire (NP) grounded in Henry Murray's theory of personality (Aderman 1970).
In CDF, each of these profiles by itself is considered a pure abstraction since it is only in their togetherness that the "hidden dimensions of a person's consciousness" can be empirically understood and made the basis of an intervention. Importantly, a CDF intervention requires dialectical thinking, in contrast to purely logical thinking as used in positivistic research. For this reason, CDF is a model of dialogical, not monological, research.
Social–emotional development
Stages of adult development
According to the developmental psychologist Robert Kegan, a person's self-concept evolves in a series of stages through their lifetime. Such evolution is driven alternately by two main motivations: that of being autonomous and that of belonging to a group. Human beings are "controlled" by these motivations in the sense that they do not have influence on them but are rather defined by them. Additionally, these motivations are in conflict and their relationship develops over a lifespan.
Kegan describes 5 stages of development, of which the latter 4 are progressively attained in adulthood, although only a small proportion of adults reach the fourth stage and beyond:
Stage 1: Purely impulse or reflex-driven (infancy and early childhood).
Stage 2: The person's sense of self is ruled by their needs and wishes. The needs and wishes of others are relevant only to the extent that they support those of the person. Effectively the person and others inhabit two "separate worlds" (childhood to adolescence).
Stage 3: The person's sense of self is socially determined, based on the real or imagined expectations of others (post-adolescence).
Stage 4: The person's sense of self is determined by a set of values that they have authored for themselves (rarely achieved, only in adulthood).
Stage 5: The person's sense of self is no longer bound to any particular aspect of themselves or their history, and they are free to allow themselves to focus on the flow of their lives.
CDF refers to such stages as "social–emotional" in that they relate to the way a person makes meaning of their experience in the social world. CDF holds that people are rarely precisely at a single stage but more accurately are distributed over a range where they are subject to the conflicting influences of a higher and a lower stage.
Assessing the social–emotional profile of a person
The social–emotional profile of person is assessed by means of an interview, referred to as the "subject–object" interview. In the interview, the interviewer offers prompts such as "success", "change", "control", "limits", "frustration", and "risk" and invites the interviewee to describe meaningful experiences under those headings. The interviewer serves as a listener, whose role is to focus the attention of the interviewee onto their own thoughts and feelings.
The interview is scored by identifying excerpts of speech that indicate a particular stage or sub-stage. Relevant sections are chosen from the transcript of the interview and analyzed for indications of the stage of development. The most frequent sub-stage revealed by the scoring is described as the interviewee's "center of gravity". Stages scored below the center of gravity are described as "risk" (of regression), while stages scored above it are described as "potential" (for development). The distribution of scores is summarized by a "risk–clarity–potential" index (RCP) that can be used to characterize the nature of the developmental challenges facing a person.
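As a rough illustration of this scoring logic (not the official CDF scoring protocol), the RCP summary can be sketched in a few lines of Python. The numeric sub-stage codes and the function name are invented for the example; only the mode-as-center-of-gravity logic follows the description above.

from collections import Counter

def rcp_summary(stage_scores):
    """Summarize scored interview excerpts into a risk–clarity–potential triple.

    The most frequent sub-stage is taken as the 'center of gravity' (clarity);
    scores below it count toward risk, scores above it toward potential.
    """
    counts = Counter(stage_scores)
    center = counts.most_common(1)[0][0]  # modal sub-stage
    return {
        "center_of_gravity": center,
        "risk": sum(n for s, n in counts.items() if s < center),
        "clarity": counts[center],
        "potential": sum(n for s, n in counts.items() if s > center),
    }

# Example: fifteen scored excerpts from a single interview (hypothetical codes)
print(rcp_summary([2.8, 3.0, 3.0, 3.0, 3.0, 3.2, 3.2, 3.2, 3.0, 3.5, 3.5, 3.0, 3.0, 2.8, 3.2]))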
Cognitive development
Eras of adult cognitive development
According to Jean Piaget, thinking develops in 4 stages from childhood to young adulthood. Piaget named these stages sensory-motor, pre-operational, concrete-operational, and formal-operational. Development of formal-operational thinking is considered to continue until approximately the 25th year of life. Subsequent researchers have concentrated on the now famous question of Kohlberg: "Is there a life after 25?" In CDF, the development of post formal-operational thinking in an adult is indicated primarily by the strength of dialectical thinking measured in thought form use fluidity.
Following Bhaskar (1993), in CDF, human thinking is seen as developing in four sequential phases or 'eras', termed 'common sense', 'understanding', 'reason' and finally 'practical wisdom'. The first three phases of thinking development can be related to the different thinking systems put forward by the philosophers Locke, Kant and Hegel. Each phase includes and transcends the thinking system of the previous phase. The final phase of 'practical wisdom' loops back to a higher form of 'common sense' in that it constitutes sophisticated thinking that has become second nature and is therefore effortless. In contrast to other adult development researchers such as Fischer and Commons, Laske describes post-formal cognitive development in terms of the use and co-ordination of dialectical thought forms and thought form constellations which were described by Basseches as mental schemata.
Four classes of dialectical thought forms
Dialectical thinking has its roots in Greek classical philosophy but is also found in ancient Hindu and Buddhist philosophy, and relates to the search for truth through reasoned argument. It finds its foremost expression in the work of the German philosopher Georg Hegel. Essentially, dialectics is viewed as the system by which human thought attempts to capture the nature of reality. Building on Bhaskar and Basseches, CDF uses a framework for dialectical thinking based on the idea that everything in reality is transient and composed of contradictions, part of a larger whole, related in some way to everything else, and subject to sudden transformation. This framework therefore distinguishes dialectical thinking in terms of four classes of dialectical thought forms that can be said to define reality:
Process (P) – constant change; emergence from absence: this class of thought forms describes how things or systems emerge, evolve and disappear;
Context (C) – stable structures: this class of thought forms describes how things are part of the structure of a larger, stable, organized whole. The contextualization of parts within a whole gives rise to different perspectives or points of view;
Relationship (R) – unity in diversity; totality: this class of thought forms describes how things (which are all part of a larger whole) are related and the nature of their common ground;
Transformation (T) – balance and evolution including breakdown: this class of thought forms describes how living systems are in constant development and transformation, potentially via a collapse of the previous form of organization, and subject to the influence of human agency.
In addition, CDF distinguishes seven individual thought forms for every class, making a total of 28 thought forms, representing a re-formulation of Basseches' 24 schemata.
The cognitive profile of a person
The cognitive profile describes the thinking tools at a person's disposal and shows the degree to which a person's thinking has developed as indicated by their use of dialectical thought forms in the four classes. The profile is derived by means of a semi-structured interview where the interviewer has the task of eliciting the interviewee's use of thought forms in a conversation about the interviewee's work and workplace. The text of the interview is subsequently analyzed and scored to give a series of mathematical indicators.
According to CDF, thinking that is highly developed is represented by the following features (illustrated in the sketch after this list):
a balanced use of all four classes of dialectical thought forms (P, C, R, T)
a high index of systemic thinking—meaning the use of transformative thought forms (T) and
balanced use of critical and constructive thought forms (P+R) vs. (C+T)
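A minimal sketch of how these indicators might be computed from thought-form tallies follows. The input format, function name, and exact ratio formulas are assumptions made for illustration; the text above specifies only that the systemic thinking index (STI) reflects the weight of transformative (T) thought forms and that balance is assessed across the four classes and between (P+R) and (C+T).

def cognitive_indicators(counts):
    """Compute illustrative CDF cognitive-profile indicators from
    thought-form counts in the four classes P, C, R, T."""
    total = sum(counts[k] for k in "PCRT")
    return {
        # share of transformative thought forms, standing in for the STI
        "sti": counts["T"] / total,
        # min/max ratio across classes as a simple balance measure
        "class_balance": min(counts[k] for k in "PCRT") / max(counts[k] for k in "PCRT"),
        # (P+R) versus (C+T), following the pairing given in the list above
        "critical_vs_constructive": (counts["P"] + counts["R"]) / (counts["C"] + counts["T"]),
    }

# Example: thought-form tallies from one scored transcript (hypothetical)
print(cognitive_indicators({"P": 9, "C": 7, "R": 8, "T": 6}))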
Link between social–emotional development and cognitive development
Social–emotional and cognitive development are often seen as separate lines of development but Laske (2008) proposed that they are linked by "stages of reflective judgment" or "epistemic position", described as the view taken by a person on what constitutes "knowledge" and "truth". Epistemic position defines a person's ability to deal with uncertainty and insecurity in their knowledge of the world and, together with the stage of social–emotional development, reflects the "stance" that a person takes towards the world. Whilst cognitive development provides a person with "tools" for thinking consisting of thought forms derived from both logic and dialectics, the "stance" that a person takes determines whether they apply the thinking tools at their disposal.
Personality
Psychogenic needs and press
CDF employs the theory put forward by psychologist Henry Murray that much of human behavior is determined by the effort to satisfy certain psychological (or "psychogenic") needs, most of which are unconscious. Personality is thus seen as characteristic behavior emerging from the dynamic between a person's pattern of psychogenic needs and the environmental forces acting on that person—termed "press".
The need–press analysis draws on Sigmund Freud's model of the human psyche divided into the components of Id, Ego and Super-ego. In living, a person is subject to the unconscious yearnings of the Id, whilst consciously aspiring to certain ideals imposed by the Super-ego, which itself is influenced by the social context. It is the dynamic balance between the forces of Id and Super-ego and the work environment that determines a person's capacity for work. Imbalances between the social reality of work and a person's ideals lead to frustration, and imbalances between a person's unconscious needs and their ideals lead to a waste of energy or "energy sink."
The personality profile of a person
CDF assessment methodology uses a self-report psychometric questionnaire originated by Henry Murray's student Morris Aderman, called the need–press (NP) inventory.
The questionnaire assesses psychological characteristics in terms of three categories: self-conduct, task focus, and interpersonal perspective, each of them defined by six variables assessed independently. The questionnaire compares a person's current needs with 1) what they would be like in an ideal (moral) world and 2) what they perceive they are offered in actuality (such as a specific cultural environment they are in tune or at odds with). Each category comprises several scales, such as need for control, drive to achieve, and affiliation. Comparisons and interpretations can be made between a person's scores for "Need" and their scores for ideal and actual "Press". Comparisons can also be made between a person's scores and those of the group of people with whom they are working. Finally, NP scores can be linked to developmental scores (ED and CD), whether of an individual or team.
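The comparison of Need scores with ideal and actual Press scores can be sketched as below. The scale names and scores are invented, and the plain subtraction stands in for whatever scoring the actual NP inventory applies; the interpretation follows the earlier section, reading ideal-versus-actual gaps as frustration and need-versus-ideal gaps as an energy sink.

def np_discrepancies(need, ideal_press, actual_press):
    """For each scale, compare Need with ideal and actual Press scores."""
    return {
        scale: {
            "frustration": ideal_press[scale] - actual_press[scale],
            "energy_sink": need[scale] - ideal_press[scale],
        }
        for scale in need
    }

# Hypothetical scales and scores (not the actual NP inventory items)
need = {"control": 7, "achievement": 9, "affiliation": 4}
ideal = {"control": 6, "achievement": 8, "affiliation": 6}
actual = {"control": 3, "achievement": 5, "affiliation": 6}
for scale, gaps in np_discrepancies(need, ideal, actual).items():
    print(scale, gaps)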
Applications
Assessment of work capability
The assessment methodology employed by CDF was created to measure people's capability and capacity for work. The theory of work used by CDF is derived from the work of Elliott Jaques. According to Jaques, work is defined as the application of reflective judgment in order to pursue certain goals within certain time limits. This definition stresses the importance of how decisions are made in a complex world and the time-span within which decisions are carried out. While Jaques offers a strictly cognitive definition of work, CDF views the social–emotional aspects of work as equally important, also including the person's (manager's, CEO's) NP profile.
CDF distinguishes between two kinds of work capability, applied and potential. Applied capability refers to the resources that an individual can already apply in order to carry out work. Potential capability refers to the resources that an individual may be capable of applying in the future. An individual can decide at any time not to apply their potential work capability. Equally circumstances may impede a person from applying their potential capability. Work capability is therefore not the same as the capacity to deliver work but rather defines and limits it.
In CDF, work capacity is measured in terms of the need–press personality profile, whilst applied capability is measured in terms of the "cognitive score" (the proportional use of thinking tools provided by the four classes of thought forms) shown by the cognitive profile, and potential capability is measured in terms of the relationship (epistemological balance) of the cognitive systems thinking index (STI) relative to the social–emotional risk–clarity–potential index (RCP).
Organizational talent management
For Elliott Jaques, human organizations are structured managerially according to levels of accountability. Each level of accountability entails a higher level of complexity in the work required of the role-holder, termed "size of role". Jaques defined the notion of requisite organization, where roles in an organization are hierarchically organized at specific levels of increasing complexity.
The application of CDF as an assessment methodology to measure the "size of person" in terms of their work capability and capacity provides a way forward for talent management systems to match the "size of person" to the "size of role". Progressively more complex roles require progressively higher levels of social–emotional development and cognitive development in the role-holder. In this way requisite organizations can align their human capability architecture with their managerial accountability architecture and design "growth assignments" that facilitate the development of capability for more complex roles.
Coaching
CDF provides a platform for professional coaching, such as in leadership development and management development, in a variety of ways. Firstly, it provides assessment tools from which the coach can construct an integrated model of the coachee, complete with the developmental challenges of the client who is to be helped. Secondly, and in the sense used by Edgar Schein, the use of the assessment tools and the feedback of results by the coach is an act of "process consultation" by which the client may come to understand better the assumptions, values, attitudes and behaviors that are helping or hindering their success. Thirdly, CDF provides tools for deeper and more sophisticated thinking, thereby enabling the client to explore and expand their conceptual landscape of a problem.
CDF distinguishes between behavioral and developmental coaching. The goal of behavioral coaching is to improve the client's actual performance at work, described in CDF terms as their applied capability. In contrast, the goal of developmental coaching is to illuminate and develop the client's current and emergent capabilities for work in the context of their cognitive and social–emotional development.
Self-organization in teams
As shown in the book Dynamic Collaboration: Strengthening Self Organization and Collaborative Intelligence in Teams, by Jan De Visch and Otto Laske (2018), CDF can be a tool for building a dialogical culture in organizations, by which distributed leadership can be realized.
See also
Model of hierarchical complexity
Neo-Piagetian theories of cognitive development
Positive adult development
Literature
Basseches, Michael: Dialectical thinking and adult development. Ablex Publishing, Norwood, NJ 1984.
Bhaskar, Roy: Dialectic. The pulse of freedom. Verso, London & New York 1993.
De Visch, Jan: The vertical dimension. 2010.
De Visch, Jan & Otto Laske: Dynamic collaboration: Strengthening self-organization and collaborative intelligence in teams. 2018.
Hager, August: Persönlichkeitsentwicklung wird messbar: verborgene Dimensionen menschlicher Arbeit entdecken und messen. In: Wirtschaftspsychologie, Nr. 1/2010, pp. 17–23.
Jaques, Elliott: Requisite organization: the CEO's guide to creative structure and leadership. Cason Hall, Arlington, VA 1989.
Jaques, Elliott: The life and behaviour of living organisms. A general theory. Praeger, London 2002.
Kegan, Robert: In over our heads: the mental demands of modern life. Harvard University Press, Cambridge, MA 1994.
Kegan, Robert: The evolving self: problem and process in human development. Harvard University Press, Cambridge, MA 1982.
King, Patricia M. & Kitchener, Karen S.: Developing reflective judgment. Jossey-Bass, San Francisco, CA 1994.
Lahey, L., Souvaine, E., Kegan, R., Goodman, R. & Felix, S.: A guide to the subject-object interview: Its administration and interpretation. Minds at Work, Cambridge, MA 2011.
Laske, Otto E.: Interdevelopmental Institute Blogs. 2018, http://www.interdevelopmentals.org/?page_id=4831.
Laske, Otto E.: Dialectic as core discipline of integral epistemology: Establishing Bhaskar's MELD as the corner stone of professional thinking about human flourishing. Integral Journal of Theory and Practice, vol. 10 no. 2, 2016.
Laske, Otto E.: How Roy Bhaskar extended and deepened the notion of cognitive adult development. Integral Leadership Review, Summer 2016.
Laske, Otto E.: Dialectical thinking for integral leaders: a primer. Integral Publishers, Tucson, AZ 2015.
Laske, Otto E. (ed.): The Constructive Developmental Framework – Arbeitsfähigkeit und Erwachsenenentwicklung. Wirtschaftspsychologie, Nr. 1/2010.
Laske, Otto E.: À la découverte du potentiel humain: Les processus de développement naturel de l'adulte. Interdevelopmental Institute Press, Gloucester, MA 2012.
Laske, Otto E.: Humanpotenziale erkennen, wecken und messen. Handbuch der entwicklungsorientierten Beratung. Bd. 1. Interdevelopmental Institute Press, Medford, MA 2010.
Laske, Otto E.: Measuring hidden dimensions. Foundations of requisite organization. Volume 2. Interdevelopmental Institute Press, Medford, MA 2009.
Laske, Otto E.: Measuring hidden dimensions. The art and science of fully engaging adults. Volume 1. Interdevelopmental Institute Press, Medford, MA 2006.
Laske, Otto E.: Transformative effects of coaching on executives' professional agenda. PsyD dissertation. Bell & Howell Company, Boston, MI 1999.
Ogilvie, Jean: Cognitive development: a new focus in working with leaders. In: Wirtschaftspsychologie, Nr. 1/2010, pp. 70–75.
Schweikert, Simone: CDF als Bildungswerkzeug für Menschen im Zeitalter der Wissensökonomie. In: Wirtschaftspsychologie, Nr. 1/2010, pp. 90–95.
Shannon, Nick: CDF: towards a decision science for organisational human resources? A practitioner's view. In: Wirtschaftspsychologie, Nr. 1/2010, pp. 34–38.
Stewart, John: John Stewart reviews Laske on dialectical thinking. Integral Leadership Review, August 31, 2016.
External links
Interdevelopmental Institute (IDM)
Need–Press Analysis
Inclusion (education)
Inclusion in education refers to ensuring that all students have equal access to equal opportunities for education and learning, and is distinct from educational equality or educational equity. It arose in the context of special education with an individualized education program or 504 plan, and is built on the notion that it is more effective for students with special needs to have this mixed experience, making them more successful in social interactions and leading to further success in life. The philosophy behind the implementation of the inclusion model does not prioritize, but still provides for, the utilization of special classrooms and special schools for the education of students with disabilities. Inclusive education models are brought into force by educational administrators with the intention of moving away from seclusion models of special education to the fullest extent practical, the idea being that inclusion is to the social benefit of general education students and special education students alike, with the more able students serving as peer models and those less able serving as motivation for general education students to learn empathy.
Implementation of these practices varies. Schools most frequently use the inclusion model for select students with mild to moderate special needs. Fully inclusive schools, which are rare, do not separate "general education" and "special education" programs; instead, the school is restructured so that all students learn together.
Inclusive education differs from the 'integration' or 'mainstreaming' model of education, which tended to be concerned chiefly with disability and 'special educational needs' and with the readiness of schools and students for such a change.
A premium is placed upon full participation by students with disabilities and upon respect for their social, civil, and educational rights. Feeling included is not limited to physical and cognitive disabilities, but also encompasses the full range of human diversity with respect to ability, language, culture, gender, age, and other forms of human difference. Richard Wilkinson and Kate Pickett wrote, "student performance and behaviour in educational tasks can be profoundly affected by the way we feel we are seen and judged by others. When we expect to be viewed as inferior, our abilities seem to diminish". This is why the United Nations Sustainable Development Goal 4 recognizes the need for adequate physical infrastructures and the need for safe, inclusive learning environments.
Integration and mainstreaming
Inclusion has different historical roots/background, which may be the integration of students with severe disabilities in the US (who may previously have been excluded from schools or even lived in institutions) or an inclusion model from Canada and the US (e.g., Syracuse University, New York) which is very popular with inclusion teachers who believe in participatory learning, cooperative learning, and inclusive classrooms.
Inclusive education differs from the early work of university professors on integration and mainstreaming (e.g., Education Professor Carol Berrigan of Syracuse University in the 1970s and 1985; Douglas Biklen, Dean of the School of Education through 2011), which was taught throughout the world, including in international seminars in Italy. Mainstreaming (e.g., the Human Policy Press poster: If you thought the wheel was a good idea, you'll like the ramp) tended to be concerned with the "readiness" of all parties for the new coming together of students with significant needs. Thus, integration and mainstreaming were principally concerned with disability and 'special educational needs' (since the children were not in the regular schools) and involved teachers, students, principals, administrators, school boards, and parents changing and becoming 'ready for' students who needed accommodation or new methods of curriculum and instruction (e.g., required federal IEPs – individualized education programs) by the mainstream.
By contrast, inclusion is about the child's right to participate and the school's duty to accept the child, returning to the US Supreme Court's Brown v. Board of Education decision and the new Individuals with Disabilities Education (Improvement) Act (IDEIA). Inclusion rejects the use of special schools or classrooms, which remain popular among large multi-service providers, to separate students with disabilities from students without disabilities. A premium is placed upon full participation by students with disabilities, in contrast to the earlier concept of partial participation in the mainstream, and upon respect for their social, civil, and educational rights. Inclusion gives students with disabilities skills they can use in and out of the classroom.
Fully inclusive schools and general or special education policies
Fully inclusive schools, which are rare, no longer distinguish between "general education" and "special education" programs which refers to the debates and federal initiatives of the 1980s, such as the Community Integration Project and the debates on home schools and special education-regular education classrooms; instead, the school is restructured so that all students learn together. All approaches to inclusive schooling require administrative and managerial changes to move from the traditional approaches to elementary and high school education.
As of 2015, inclusion remains part of school and educational reform initiatives in the US and other parts of the world (e.g., Powell & Lyle, 1997; the move from the least restrictive environment to the most integrated setting). Inclusion is an effort to improve quality in education in the field of disability, has been a common theme in educational reform for decades, and is supported by the UN Convention on the Rights of Persons with Disabilities (UN, 2006). Inclusion has been researched and studied for decades, though lightly reported to the public, with early studies on heterogeneous and homogeneous ability groupings (Stainback & Stainback, 1989), studies of critical friends and inclusion facilitators (e.g., Jorgensen & Tashie, 2000), and a 90% reversal from self-contained to general education (Fried & Jorgensen, 1998), among many others by researchers obtaining doctoral degrees throughout the US.
Classification of students and educational practices
Classification of students by disability is standard in educational systems which use diagnostic, educational and psychological testing, among others. However, inclusion has been associated with its own planning approaches, including MAPS, which Jack Pearpoint still led as of 2015, and person-centred planning with John O'Brien and Connie Lyle O'Brien, who view inclusion as a force for school renewal.
Inclusion has two sub-types: the first is sometimes called regular inclusion or partial inclusion, and the other is full inclusion.
Inclusive practice is not always inclusive but is a form of integration. For example, students with special needs are educated in regular classes for nearly all of the day, or at least for more than half of the day. Whenever possible, the students receive any additional help or special instruction in the general classroom, and the student is treated like a full member of the class. However, most specialized services are provided outside a regular classroom, particularly if these services require special equipment or might be disruptive to the rest of the class (such as speech therapy), and students are pulled out of the regular classroom for these services. In this case, the student occasionally leaves the regular classroom to attend smaller, more intensive instructional sessions in a separate classroom, or to receive other related services, such as speech and language therapy, occupational and/or physical therapy, psychological services, and social work. This approach can be very similar to many mainstreaming practices, and may differ in little more than the educational ideals behind it.
In the "full inclusion" setting, the students with special needs are always educated alongside students without special needs, as the first and desired option while maintaining appropriate supports and services. Some educators say this might be more effective for the students with special needs. At the extreme, full inclusion is the integration of all students, even those that require the most substantial educational and behavioral supports and services to be successful in regular classes and the elimination of special, segregated special education classes. Special education is considered a service, not a place and those services are integrated into the daily routines (See, ecological inventories) and classroom structure, environment, curriculum and strategies and brought to the student, instead of removing the student to meet his or her individual needs. However, this approach to full inclusion is somewhat controversial, and it is not widely understood or applied to date.
Much more commonly, local educational agencies have the responsibility to organize services for children with disabilities. They may provide a variety of settings, from special classrooms to mainstreaming to inclusion, and assign, as teachers and administrators often do, students to the system that seems most likely to help the student achieve his or her individual educational goals. Students with mild or moderate disabilities, as well as disabilities that do not affect academic achievement, such as using a power wheelchair, scooter or other mobility device, are most likely to be fully included; indeed, children with polio or with leg injuries have grown to be leaders and teachers in government and universities, and self-advocates travel across the country and to different parts of the world. However, students with all types of disabilities from all the different disability categories (see also the 2012 book by Michael Wehmeyer from the University of Kansas) have been successfully included in general education classes, working and achieving their individual educational goals in regular school environments and activities.
Alternatives to inclusion programs: school procedures and community development
Students with disabilities who are not included are typically either mainstreamed or segregated.
A mainstreamed student attends some general education classes, typically for less than half the day, and often for less academically rigorous, or if you will, more interesting and career-oriented classes. For example, a young student with significant intellectual disabilities might be mainstreamed for physical education classes, art classes and storybook time, but spend reading and mathematics classes with other students that have similar disabilities ("needs for the same level of academic instruction"). They may have access to a resource room for remediation or enhancement of course content, or for a variety of group and individual meetings and consultations.
A segregated student attends no classes with non-disabled students; disability is a tested category determined before or at school entrance. He or she might attend a special school (termed a residential school) that only enrolls other students with disabilities, or might be placed in a dedicated, self-contained classroom in a school that also enrolls general education students. The latter model of integration, like the 1970s Jowonio School in Syracuse, is often highly valued when combined with teaching such as Montessori education techniques. Home schooling was also a popular alternative among highly educated parents with children with significant disabilities.
Residential schools have been criticized for decades, and the government has been asked repeatedly to keep funds and services in the local districts, including for family support services for parents who may be currently single and raising a child with significant challenges on their own. Children with special needs may already be involved with early childhood education which can have a family support component emphasizing the strengths of the child and family.
Some students may be confined to a hospital due to a medical condition (e.g., cancer treatments) and are thus eligible for tutoring services provided by a school district. Less common alternatives include homeschooling and, particularly in developing countries, exclusion from education.
Legal issues: education law and disability laws
The new anti-discriminatory climate has provided the basis for much change in policy and statute, nationally and internationally. Inclusion has been enshrined at the same time that segregation and discrimination have been rejected. Articulations of the new developments in ways of thinking, in policy and in law include:
The UN Convention on the Rights of the Child (1989) which sets out children's rights in respect of freedom from discrimination and in respect of the representation of their wishes and views.
The Convention against Discrimination in Education of UNESCO prohibits any discrimination, exclusion or segregation in education.
The UNESCO Salamanca Statement (1994) which calls on all governments to give the highest priority to inclusive education.
The UN Convention on the Rights of Persons with Disabilities (2006) which calls on all States Parties to ensure an inclusive education system at all levels.
From the least restrictive to the most integrated setting
For schools in the United States, the federal requirement that students be educated in the least restrictive environment that is a reasonable accommodation encourages the implementation of inclusion for students previously excluded by the school system. However, a critique of the LRE principle, commonly used to guide US schools, indicates that it often places restrictions and segregation on the individuals with the most severe disabilities. By the late 1980s, individuals with significant disabilities and their families and caregivers were already living quality lives in homes and local communities. The US Supreme Court has since ruled in the Olmstead decision (1999) that the new principle is that of the "most integrated setting", as described by the national Consortium for Citizens with Disabilities, which should result in better achievement of national integration and inclusion goals in the 21st century.
Inclusion rates in the world
The proportion of students with disabilities who are included varies by place and by type of disability, but it is relatively common for students with milder disabilities and less common with certain kinds of severe disabilities. In Denmark, 99% of students with learning disabilities like 'dyslexia' are placed in general education classrooms. In the United States, three out of five students with learning disabilities spend the majority of their time in the general education classroom.
Postsecondary statistics (after high school) are kept by universities and government on the success rates of students entering college, and most are eligible for either disability services (e.g., accommodations and aides) or programs on college campuses, such as supported education in psychiatric disabilities or College for Living. The former are fully integrated college degree programs with college and vocational rehabilitation services (e.g., payments for textbooks, readers or translators), while the latter are courses developed along the lines of retirement institutes (e.g., banking for retirees).
Principles and necessary resources
Although once hailed, usually by its opponents, as a way to increase achievement while decreasing costs, full inclusion does not save money, but it is more cost-beneficial and cost-effective. It is not designed to reduce students' needs, and its first priority may not even be to improve academic outcomes; in most cases, it merely moves the special education professionals (now dual-certified for all students in some states) out of "their own special education" classrooms and into a corner of the general classroom, or as otherwise arranged by the "teacher-in-charge" and "administrator-in-charge". To avoid harm to the academic education of students with disabilities, a full panoply of services and resources is required, including:
Adequate supports and services for the student
Well-designed individualized education programs
Professional development for all teachers involved, general and special educators alike
Time for teachers to plan, meet, create, and evaluate the students together
Reduced class size based on the severity of the student needs
Professional skill development in the areas of cooperative learning, peer tutoring, adaptive curriculum
Collaboration between parents or guardians, teachers or paraeducators, specialists, administration, and outside agencies.
Sufficient funding so that schools will be able to develop programs for students based on student need instead of the availability of funding.
Indeed, students with special needs do receive funds from the federal government, by law originally the Education for All Handicapped Children Act of 1975, continuing to the present day with the Individuals with Disabilities Education Improvement Act, which requires its use in the most integrated setting.
In principle, several factors can determine the success of inclusive classrooms:
Family-school partnerships
Collaboration between general and special educators
Well-constructed plans that identify specific accommodations, modifications, and goals for each student
Coordinated planning and communication between "general" and "special needs" staff
Integrated service delivery
Ongoing training and staff development
Leadership of teachers and administrators
By the mid-1980s, school integration leaders in the university sector already had detailed schemas (e.g., curriculum, student days, students with severe disabilities in classrooms), with later developments primarily in assistive technology and communication, school reform and transformation, personal assistance by user-directed aides, and an increasing emphasis on social relationships and cooperative learning. As of 2015, the most important tasks are evaluations of the populations still in special schools, including those who may be deaf-blind, and leadership by inclusion educators, who often do not yet go by that name, in education and community systems.
Differing views of inclusion and integration
However, early advocates of community integration would still recommend greater emphasis on programs related to the sciences, the arts (e.g., exposure), curriculum-integrated field trips, and literature, as opposed to a sole emphasis on community-referenced curriculum. For example, a global citizen studying the environment might be involved with planting a tree ("independent mobility"), going to an arboretum ("social and relational skills"), developing a science project with a group ("contributing ideas and planning"), and having two core modules in the curriculum.
Intervening early in the life of a child with autism spectrum disorder (ASD) can substantially change the child's long-term development and quality of life. The foundation of early integration lies in recognizing and supporting each child's strengths while also addressing their particular challenges.
However, students will need either to continue to secondary school (meeting academic testing standards), or to make arrangements for employment, supported education, or home/day services (transition services), and thus to develop the skills for future life (e.g., academic math skills and calculators; planning and using recipes; leisure skills) in the educational classrooms. Inclusion often involves individuals who otherwise might be at an institution or residential facility.
Today, longitudinal studies follow the outcomes of students with disabilities in classrooms, which include college graduations and quality of life outcomes. To be avoided are negative outcomes that include forms of institutionalization.
Differing views among experts in education
Inclusion in education, especially involving special education, has been a long-standing debate in many schools. Inclusion in this context refers to placing students with special needs in the general classroom for most or all of the school day. The main reason people see this as beneficial is that it reduces social segregation for students. Supporters claim that all of these students' educational needs could be met in a general classroom given proper planning and support services. On the other hand, many people see this as harmful to the education of students with special needs, as they may not receive as much attention and help as they need.
James M. Kauffman and Jeanmarie Badar wrote an article that opens by saying that if inclusion is the main priority, "then special education will one day be looked upon as having gone through a period of shameful neglect of students' needs". The authors argue that the general education classroom is not the appropriate place to give children with special needs an effective education. They back this claim by identifying six mistaken assumptions that people believe, giving reasons why each will not work, and providing alternative ideas. One mistaken assumption they give is that "All students, including those with disabilities, should be expected to meet high standards". The authors respond that each child has their own highest standard and that this outlook should be applied to all children, whether or not they have a disability. They go on to say that special education programs that pull students with disabilities out into separate classrooms and provide them with more attention, more time, and sometimes different assignments are extremely beneficial. The differences in the way students learn are what should be embraced in order to allow them to learn to their highest ability, as their education and understanding of the curriculum are more important than being included in the general classroom at all times.
On the other hand, some recent research suggests that inclusion can be successful if certain steps are taken to help teachers become better educated in how to implement it. Len Barton, a professor of inclusive education at the Institute of Education, University of London, gave a lecture on how inclusion can be beneficial if certain criteria are followed. In the lecture he stated that inclusion is not the one and only answer to improving education, but a stepping stone. The conclusion of his studies lists several criteria teachers need in order to make inclusion work. The first is making the topic of inclusion a central part of educational programs for teachers, in order to emphasize its importance in boosting all students' learning and participation. Barton says another factor is providing disability and equality awareness training to teachers and staff, delivered by trained professionals, in order to increase teachers' understanding of the reasoning behind inclusion.
In 2020, Dr. Chelsea P. Tracy-Bronson of Stockton University conducted a study examining what people at the district level are doing to help inclusion in special education run smoothly. The goal of the study was to identify modern strategies that are being implemented and are working to create an equitable and inclusive education for all students. The study used a qualitative research methodology that examined the views and experiences of seven special education leaders who were implementing successful and equitable inclusion programs. The research proposes that inclusion in special education can be successful when district-level leaders encourage inclusive strategies, challenge the long-standing exclusionary model, and cultivate an environment for teachers and students to grow into and understand the inclusive model. Like Barton's, this study shows that inclusion can be a great tool in creating an equitable and inclusive learning environment for students with special needs.
Garry Hornby combines the two opposing sides into one idea that may help everyone. After analyzing teachers' attitudes and procedures directed at making inclusion work, Hornby concluded that inclusion in the general classroom should depend on the needs of the individual child. The ideas he analyzed focused on including and teaching children with special needs all in the same way, which was not working. When these inclusion models did not work, teachers and administrators became frustrated and developed negative attitudes towards inclusion. However, if individual situations were addressed and a plan was made for each child with special needs, inclusion models would be more effective, as children with very high needs would not spend as much time in the general classroom. This would shift the attention from how to make inclusion work to a focus on effective education and helping students reach their personal goals.
Overall, experts in the field of education have done extensive research on the topic of inclusion with regard to special education and have found substantial data supporting both sides of the debate. As seen, the debate over whether inclusion is the right model for special education has been long-lasting, and there is no telling whether it will ever really be settled.
Common practices in inclusive classrooms
Students in an inclusive classroom are generally placed with their chronological age-mates, regardless of whether the students are working above or below the typical academic level for their age. Also, to encourage a sense of belonging, emphasis is placed on the value of friendships. Teachers often nurture a relationship between a student with special needs and a same-age student without a special educational need. Another common practice is the assignment of a buddy to accompany a student with special needs at all times (for example in the cafeteria, on the playground, on the bus and so on). This is used to show students that a diverse group of people makes up a community, that no one type of student is better than another, and to remove any barriers to friendship that may occur if a student is viewed as "helpless". Such practices reduce the chance for elitism among students in later grades and encourage cooperation among groups.
Teachers use a number of techniques to help build classroom communities:
Using games designed to build community
Involving students in solving problems
Sharing songs and books that teach community
Openly dealing with individual differences by discussion
Assigning classroom jobs that build community
Teaching students to look for ways to help each other
Utilizing physical therapy equipment such as standing frames, so students who typically use wheelchairs can stand when the other students are standing and more actively participate in activities
Encouraging students to take the role of teacher and deliver instruction (e.g. read a portion of a book to a student with severe disabilities)
Focusing on the strength of a student with special needs
Creating classroom checklists
Taking breaks when necessary
Creating an area for children to calm down
Organizing student desks in groups
Creating a safe and welcoming environment
Setting ground rules and sticking with them
Helping establish short-term goals
Designing a multi-faceted curriculum
Communicating regularly with parents and/or caregivers
Seeking support from other special education teachers
Inclusionary practices are commonly utilized by using the following team-teaching models:
One teach, one support:
In this model, the content teacher will deliver the lesson and the special education teacher will assist students with their individual needs and enforce classroom management as needed.
One teach, one observe:
In this model, the teacher with the most experience in the content will deliver the lesson and the other teacher will float or observe. This model is commonly used for data retrieval during IEP observations or Functional Behavior Analysis.
Station teaching (rotational teaching):
In this model, the room is divided into stations in which the students will visit with their small groups. Generally, the content teacher will deliver the lesson in his/her group, and the special education teacher will complete a review or an adapted version of the lesson with the students.
Parallel teaching:
In this model, one half of the class is taught by the content teacher and one half is taught by the special education teacher. Both groups are being taught the same lesson, just in a smaller group.
Alternative teaching:
In this method, the content teacher will teach the lesson to the class, while the special education teacher will teach a small group of students an alternative lesson.
Team teaching (content/support shared 50/50):
Both teachers share the planning, teaching, and supporting equally. This is the traditional method, and often the most successful co-teaching model.
Children with extensive support needs
For children with significant or severe disabilities, the programs may require what are termed health supports (e.g., positioning and lifting; visits to the nurse's clinic), a direct one-to-one aide in the classroom, assistive technology, and an individualized program which may involve the student "partially" (e.g., videos and cards for "visual stimulation"; listening to responses) in the full lesson plan for the "general education student". It may also require the introduction of commonly used teaching techniques (e.g., introductions and building interest in science) that teachers may not otherwise use within a common core class.
Another way to think of health supports is as a range of services that may be needed from specialists, or sometimes generalists, ranging from speech and language, to visual and hearing (sensory impairments), behavioral, learning, orthopedics, autism, deaf-blindness, and traumatic brain injury, according to Virginia Commonwealth University's Dr. Paul Wehman. As Dr. Wehman has indicated, expectations can include postsecondary education, supported employment in competitive sites, and living with family or in other residential places in the community.
In 2005, comprehensive health supports were described in National Goals for Intellectual and Developmental Disabilities as universally available, affordable, and promoting inclusion; as supporting well-informed, freely chosen health care decisions; as culturally competent; as promoting health; and as ensuring well-trained and respectful health care providers. In addition, mental health, behavioral, communication and crisis needs may need to be planned for and addressed.
"Full inclusion" – the idea that all children, including those with severe disabilities, can and should learn in a regular classroom has also taken root in many school systems, and most notably in the province of New Brunswick.
Collaboration among the professions
Inclusion settings allow children with and without disabilities to play and interact every day, even when they are receiving therapeutic services. When a child displays fine motor difficulty, their ability to fully participate in common classroom activities, such as cutting, coloring, and zipping a jacket, may be hindered. While occupational therapists are often called on to assess and implement strategies outside of school, it is frequently left up to classroom teachers to implement strategies in school. Collaborating with occupational therapists helps classroom teachers use intervention strategies, increases their awareness of students' needs within school settings, and enhances their independence in implementing occupational therapy strategies.
As a result of the 1997 re-authorization of the Individuals with Disabilities Education Act (IDEA), greater emphasis has been placed on delivery of related services within inclusive, general education environments (Nolan, 2004). The importance of inclusive, integrated models of service delivery for children with disabilities has been widely researched, indicating positive benefits (Case-Smith & Holland, 2009). In traditional "pull out" service delivery models, children typically work in isolated settings one on one with a therapist; Case-Smith and Holland (2009) argue that children working on skills once or twice a week are "less likely to produce learning that leads to new behaviors and increased competence" (Case-Smith & Holland, 2009, p. 419). In recent years, occupational therapy has shifted from the conventional model of "pull out" therapy to an integrated model in which the therapy takes place within a school or classroom.
Inclusion administrators have been requested to review their personnel to assure mental health personnel for children with mental health needs, vocational rehabilitation linkages for work placements, community linkages for special populations (e.g., "deaf-blind", "autism"), and collaboration among major community agencies for after-school programs and the transition to adulthood. Highly recommended are collaborations with parents, including parent-professional partnerships in areas of cultural and linguistic diversity (e.g., the work of Syracuse University special education scholars Maya Kalyanpur and Beth Harry).
Selection of students for inclusion programs in schools
Educators generally say that some students with special needs are not good candidates for inclusion. Many schools expect a fully included student to be working at or near grade level, but more fundamental requirements exist. First, being included requires that the student be able to attend school. Students who are entirely excluded from school (for example, due to long-term hospitalization), or who are educated outside of schools (for example, due to enrollment in a distance education program), cannot attempt inclusion.
Additionally, some students with special needs are poor candidates for inclusion because of their effect on other students. For example, students with severe behavioral problems, such that they represent a serious physical danger to others, are poor candidates for inclusion, because the school has a duty to provide a safe environment to all students and staff.
Finally, some students are not good candidates for inclusion because the normal activities in a general education classroom will prevent them from learning. For example, a student with severe attention difficulties or extreme sensory processing disorders might be highly distracted or distressed by the presence of other students working at their desks. Inclusion needs to be appropriate to the child's unique needs.
Most students with special needs do not fall into these extreme categories: most do attend school, are not violent, and do not have severe sensory processing disorders.
The students who are most commonly included are those with physical disabilities that have little or no effect on their academic work (diabetes mellitus, epilepsy, food allergies, paralysis), students with all types of mild disabilities, and students whose disabilities require relatively few specialized services.
Bowe says that regular inclusion, but not full inclusion, is a reasonable approach for a significant majority of students with special needs. He also says that for some students, notably those with severe autism spectrum disorders or "mental retardation", as well as many who are deaf or have multiple disabilities, even regular inclusion may not offer an appropriate education. Teachers of students with autism spectrum disorders sometimes use antecedent procedures, delayed contingencies, self-management strategies, peer-mediated interventions, pivotal response training and naturalistic teaching strategies.
Relationship to progressive education
Some advocates of inclusion promote the adoption of progressive education practices. In the progressive education or inclusive classroom, everyone is exposed to a "rich set of activities", and each student does what he or she can do, or what he or she wishes to do and learns whatever comes from that experience. Maria Montessori's schools are sometimes named as an example of inclusive education.
Inclusion requires some changes in how teachers teach, as well as changes in how students with and without special needs interact with and relate to one another. Inclusive education practices frequently rely on active learning, authentic assessment practices, applied curriculum, multi-level instructional approaches, and increased attention to diverse student needs and individualization. Student inclusion often starts with motivation, in order to reach the goal of engagement while in the classroom.
A positive environment cannot be taken for granted; considerable attention from the teachers is therefore required, along with the support of the other children, to ensure a peaceful and happy place for both groups of children.
Relationship to Universal Design for Learning (UDL)
A pedagogical practice that relates to both inclusive education and progressivist thinking is Universal Design for Learning (UDL). This method of teaching advocates for the removal of barriers in the physical and social environments that students of all abilities are within, as this is the main reason why students are unable to engage with the material presented in class. To implement UDL into a classroom, educators must understand not only the needs of their students, but also their abilities, interests, backgrounds, identities, prior knowledge, and their goals. By understanding their students, educators can then move on to using differentiated instruction to allow students to learn in a way that meets their needs; followed by accommodating and modifying programming to allow everyone to equitably and universally access curriculum. One study describes the applicability of UDL, by explaining that "the criteria for assessment of learning goals remain consistent. In effect, the learning endpoint goals stay the same, and it is the ways that student get to that endpoint of learning that is made more diverse. In this way, each student is challenged to learn to his or her own capacity, and is challenged through both multi-level authentic instruction and assessment". In other words, even though students are expressing their knowledge on the content through varied means, and quite possibly through different learning goals, they all inevitably accomplish the same goal, based on their own abilities and understandings.
In implementing UDL through the lens of access for those with exceptionalities, it is important to note what it means to be inclusive. Some classrooms or schools believe that being inclusive means that students with exceptionalities are in the room, without any attention paid to their need for support staff or modified curriculum expectations. Instead, inclusive education should be about teaching every single student and making the learning and teaching equitable, rather than equal. So, to implement UDL for the benefit of all students in the classroom, educators need to think about inclusivity relative to their students and their multifaceted identities – whether that is including materials written by authors of a particular race that happens to be prominent in their class, or creating more open spaces for a student in a wheelchair. Regardless of these changes, all students can benefit from them in one way or another.
Arguments for full inclusion in regular neighborhood schools
Advocates say that even partial non-inclusion is morally unacceptable. Proponents believe that non-inclusion reduces the disabled students' social importance and that maintaining their social visibility is more important than their academic achievement. Proponents say that society accords disabled people less human dignity when they are less visible in general education classrooms. Advocates say that even if typical students are harmed academically by the full inclusion of certain students with exceptionalities, that the non-inclusion of these students would still be morally unacceptable, as advocates believe that the harm to typical students' education is always less important than the social harm caused by making people with disabilities less visible in society.
A second key argument is that everybody benefits from inclusion. Advocates say that there are many children and young people who don't fit in (or feel as though they don't), and that a school that fully includes all disabled students feels welcoming to all. Moreover, at least one author has studied the impact a diversified student body has on the general education population and has concluded that students with 'mental retardation' who spend time among their peers show an increase in social skills and academic proficiency.
Advocates for inclusion say that, in the long term, typical students who are included with special needs students at a very young age develop a heightened sensitivity to the challenges that others face, increased empathy and compassion, and improved leadership skills, which benefits all of society.
A combination of inclusion and pull-out (partial inclusion) services has been shown to be beneficial to students with learning disabilities in the area of reading comprehension, and preferential for the special education teachers delivering the services.
Inclusive education can be beneficial to all students in a class, not just students with special needs. Some research shows that inclusion helps students understand the importance of working together, and fosters a sense of tolerance and empathy among the student body.
Co-design in education
One form of design that heavily involves users in the design process is co-design. Collaborating with the people who have personal experience with the topic at hand, or who will be using the designed product (in this case, curriculum or methods for inclusive learning), results in a more effective product for its users. While most students are capable of learning in current educational settings, implementing co-design can create a more accommodating learning experience. Curriculum designers alone do not have enough relevant experience to design the best-working curriculum and learning strategies, and curricula used in classrooms do not work for every student. Co-designing with teachers, and when possible with students, can therefore create a more inclusive learning experience that benefits all students, not just students with disabilities or "special needs".
The current and most common method in many ways centers on the designers themselves rather than the students, and sometimes even the instructors, as it is the designer who makes the decisions, with testing and user feedback serving as the only form of incorporation. When designers control and limit access to the meanings that are included in the design process, they cut off the chance for improved connections between the design and its user, limiting the design to what the designer deems significant. By leaving the users of the curriculum out of the design process itself, the possibilities for new, innovative ideas become limited to what designers value, and the evolution of curriculum design moves more slowly than students' needs evolve. If the canon of curriculum design evolved to be more collaborative with students, a more personalized and effective learning experience would be more attainable.
There is a level of division between designers and users in which users do not feel equipped to take part in the design phase: "teachers are more comfortable adapting the implementation of materials than viewing themselves as critical users and co-designers of curriculum. Similarly, curriculum designers are more comfortable as the creators of materials rather than as partners with teachers in the design of the enacted curriculum" (Gunckel & Moore, p. 2). Despite this, efforts have been made to include users in the curriculum design process, such as a project described by Kristin L. Gunckel and Felicia M. Moore in which designers brought in teachers as co-designers of a subject to be taught in a high school class. The project served to see how including instructors as co-designers benefits the delivery of the curriculum. Throughout the project the designers were able to get feedback and suggestions from a perspective outside their own experience and could create a more impactful curriculum. From the instructors' perspective, they were better prepared for the content and understood the intent and larger concept behind the materials, allowing them to deliver more goal-oriented lessons to the students. The project showed that co-design of curriculum benefits both designers and teachers, as the teachers noted that the co-designed classes resulted in a positive and effective experience for the class.
Positive effects in regular classrooms
There are many positive effects of inclusion, in which both the students with special needs and the other students in the classroom benefit. Research has shown positive effects for children with disabilities in areas such as reaching individualized education program (IEP) goals, improving communication and social skills, increasing positive peer interactions, many educational outcomes, and post-school adjustment. Positive effects on children without disabilities include the development of positive attitudes and perceptions of persons with disabilities and the enhancement of social status with non-disabled peers. While becoming less discriminatory, children without disabilities who learn in inclusive classrooms also develop communication and leadership skills more rapidly.
Several studies have been done on the effects of inclusion of children with disabilities in general education classrooms. A study on inclusion compared integrated and segregated (special education only) preschool students. The study determined that children in the integrated sites progressed in social skills development while the segregated children actually regressed.
Another study shows the effect of inclusion in grades 2 to 5. The study determined that students with specific learning disabilities made some academic and affective gains at a pace comparable to that of normally achieving students. Students with specific learning disabilities also showed an improvement in self-esteem and, in some cases, improved motivation.
A third study shows how the support of peers in an inclusive classroom can lead to positive effects for children with autism. The study observed typical inclusion classrooms, with students' ages ranging from 7 to 11 years old. The peers were trained on an intervention technique to help their fellow autistic classmates stay on task and focused. The study showed that using peers to intervene instead of classroom teachers helped students with autism reduce off-task behaviors significantly. It also showed that the typical students accepted the student with autism both before and after the intervention techniques were introduced.
Negative accounts of inclusion – student perspectives
Even with inclusive education becoming more popular in both the classroom and in society, there are still some students with exceptionalities who are not reaping the benefits of being in a mainstream classroom. Two recent studies show that there is still work to be done when it comes to putting inclusivity into practice. One researcher studied 371 students from grades 1–6 in two urban and two rural mainstream elementary schools in Ireland that implemented inclusive education. Students were asked through a questionnaire about the social status of their peers – some of whom were on the autism spectrum (autism spectrum disorder, ASD) – in relation to play and work contexts. This was to determine whether these students were accepted or rejected socially in an inclusive education setting. "Results showed that children with ASD experienced significantly lower levels of social acceptance and higher levels of social rejection". This demonstrates that even though there are practices in place that work to include students with exceptionalities, there are still some who are rejected by their peers.
Many placements in mainstream schools with inclusive education are made because the school believes the student is academically able, but rarely is it considered whether the student is socially able to adjust to these circumstances. One research study examined the experiences of students with ASD in inclusive mainstream schools. The 12 students ranged from 11 to 17 years old, with varied symptoms and abilities along the autism spectrum. Results showed that all participants experienced feelings of dread, loneliness, and isolation, while being bullied, misunderstood, and unsupported by their peers and teachers. These feelings of exclusion had an impact on their well-being and demonstrated "that mainstream education is not meeting the needs of all with autism deemed mainstream able; a gap exists between inclusion rhetoric and their lived realities in the classroom". This shows that there is still a need for improvement in the social conditions within inclusive education settings, as many students with exceptionalities are not benefiting from this environment.
Implications
These negative accounts are incredibly important to the understanding of inclusive education as a program and pedagogical method. Though inclusive education aims to universally include and provide equitable education to all students regardless of ability, there is still more that needs to be done. The aforementioned studies show that a key part of inclusive education – or schooling in general – is social relationships and acceptance. Without social relationships, students will feel the very opposite of what inclusivity is meant to evoke. This means that educators and researchers should further inquire about inclusion rates in schools and learn how students feel about this programming; a program meant to help everyone is of little use if, in practice, it does not. Researchers and students with exceptionalities suggest that there be more collaborative assignments for students, as these provide an opportunity for relationships and social skills to develop. Further, the focus for the other students should be on increasing empathy and embracing difference. Besides improving the interactions between students, there is also a need for educators to evoke change. Students with ASD have provided several strategies for improving their quality of education and the interactions that occur in the classroom, with accommodations carried out that relate to their specific needs. Some accommodations include having clear expectations, providing socialization opportunities, offering alternative ways to learn and to express that learning, and limiting sensory distractions or overload in the classroom. Knowing this, students, educators, researchers, and others need to conceptualize and implement the idea of inclusive education as one that treats students with exceptionalities equitably and with respect, based on their strengths, needs, interests, background, identity, and zone of proximal development.
Criticisms of inclusion programs of school districts
Critics of full and partial inclusion include educators, administrators and parents. They argue that full and partial inclusion approaches neglect to acknowledge that most students with significant special needs require individualized instruction or highly controlled environments. Thus, general education classroom teachers are often teaching a curriculum while the special education teacher is remediating instruction at the same time. Similarly, a child with serious inattention problems may be unable to focus in a classroom that contains twenty or more active children. However, with the increasing incidence of disabilities in the student population, this is a circumstance all teachers must contend with, and it is not a direct result of inclusion as a concept.
Full inclusion may be a way for schools to placate parents and the general public, using the word as a phrase to garner attention for what are in fact illusory efforts to educate students with special needs in the general education environment.
At least one study examined the lack of individualized services provided for students with IEPs when placed in an inclusive rather than mainstreamed environment.
Some researchers have maintained that school districts neglect to prepare general education staff for students with special needs, thus preventing any achievement. Moreover, school districts often expound an inclusive philosophy for political reasons and do away with any valuable pull-out services, all on behalf of students who have no say in the matter.
Inclusion is viewed by some as a practice philosophically attractive yet impractical. Studies have not corroborated the proposed advantages of full or partial inclusion. Moreover, "push in" servicing does not allow students with moderate to severe disabilities individualized instruction in a resource room, from which many show considerable benefit in both learning and emotional development.
Parents of disabled students may be cautious about placing their children in an inclusion program because of fears that the children will be ridiculed by other students, or be unable to develop regular life skills in an academic classroom.
Some argue that inclusive schools are not a cost-effective response when compared to cheaper or more effective interventions, such as special education. They argue that special education helps "fix" students with exceptionalities by providing individualized and personalized instruction to meet their unique needs, helping students with special needs adjust as quickly as possible to the mainstream of the school and community. Proponents counter that students with special needs are not fully in the mainstream of student life precisely because they are secluded in special education. Some argue that isolating students with special needs may lower their self-esteem and may reduce their ability to deal with other people; keeping these students in separate classrooms prevents them from sharing the struggles and achievements that they can make together. However, at least one study indicated that mainstreaming in education has long-term benefits for students, as indicated by increased test scores, whereas the benefit of inclusion has not yet been proved.
Broader approach: social and cultural inclusion
As used by UNESCO, inclusion refers to far more than students with special educational needs. It is centered on the inclusion of marginalized groups, such as religious, racial, ethnic, and linguistic minorities, immigrants, girls, the poor, students with disabilities, HIV/AIDS patients, remote populations, and more. In some places, these people are not actively included in education and learning processes. In the U.S. this broader definition is also known as "culturally responsive" education, which differs from the 1980s-1990s cultural diversity and cultural competency approaches, and is promoted among the ten equity assistance centers of the U.S. Department of Education, for example in Region IX (AZ, CA, NV), by the Equity Alliance at ASU. Gloria Ladson-Billings points out that teachers who are culturally responsive know how to base learning experiences on the cultural realities of the child (e.g. home life, community experiences, language background, belief systems). Proponents argue that culturally responsive pedagogy is good for all students because it builds a caring community where everyone's experiences and abilities are valued.
Proponents want to maximize the participation of all learners in the community schools of their choice and to rethink and restructure policies, curricula, cultures and practices in schools and learning environments so that diverse learning needs can be met, whatever the origin or nature of those needs. They say that all students can learn and benefit from education, and that schools should adapt to the physical, social, and cultural needs of students, rather than students adapting to the needs of the school. Proponents believe that individual differences between students are a source of richness and diversity, which should be supported through a wide and flexible range of responses. The challenge of rethinking and restructuring schools to become more culturally responsive calls for a complex systems view of the educational system (see Michael Patton), where one can extend the idea of strength through diversity to all participants in the educational system (e.g. parents, teachers, community members, staff).
Although inclusion is generally associated with elementary and secondary education, it is also applicable in postsecondary education. According to UNESCO, inclusion "is increasingly understood more broadly as a reform that supports and welcomes diversity amongst all learners."
Curriculum
Gender-sensitive curriculum
The notion of a gender-sensitive curriculum acknowledges the current reality of our bi-gender world and attempts to break down socialized learning outcomes that reinforce the notion that girls and boys are good at different things. Research has shown that while girls do struggle more in the areas of math and science, and boys in the area of language arts, this is partly a socialization phenomenon. One key to creating a gender-friendly classroom is "differentiation", which essentially means that teachers plan and deliver their instruction with an awareness of gender and other student differences. Teachers can strategically group students for learning activities by a variety of characteristics so as to maximize individual strengths and contributions. Research has also shown that teachers differ in how they treat girls and boys in the classroom. Gender-sensitive practices necessitate equitable and appropriate attention to all learners. Teacher attention to content is also extremely important: for example, when trying to hold boys' attention, teachers will often use examples that reference classically male roles, perpetuating a gender bias in content.
In addition to a curriculum that recognizes that gender impacts all students and their learning, other gender-sensitive curricula directly engages gender-diversity issues and topics. Some curricular approaches include integrating gender through story problems, writing prompts, readings, art assignments, research projects, and guest lectures that foster spaces for students to articulate their own understandings and beliefs about gender.
LGBTQ-inclusive curriculum
LGBTQ-inclusive curriculum is curriculum that includes positive representations of LGBTQ people, history, and events. It also attempts to integrate these narratives without presenting the LGBTQ experience as separate and fragmented from overarching social narratives, and with attention to how that experience intersects with the ethnic, racial, and other forms of diversity that exist among LGBTQ individuals.
The purpose of an LGBTQ-inclusive curriculum is to ensure that LGBTQ students feel properly represented in curriculum narratives and therefore safer coming to school and more comfortable discussing LGBTQ-related topics. A study by GLSEN examined the impact of LGBTQ-inclusive practices on LGBTQ students' perceptions of safety. The study found that LGBT students in inclusive school settings were much less likely to feel unsafe because of their identities and more likely to perceive their peers as accepting and supportive.
Implementation of LGBTQ-inclusive curriculum involves both curriculum decisions and harnessing teachable moments in the classroom. One study by Snapp et al. showed that teachers often failed to intervene in LGBTQ-bullying.
Other research has suggested that education for healthcare professionals on how to better support LGBTQ patients has benefits for LGBTQ-healthcare service. Education in how to be empathic and conscientious of the needs of LGBTQ patients fits within the larger conversation about culturally-responsive healthcare.
Benefiting in an inclusive environment
"The inclusion of age-appropriate students in a general education classroom, alongside those with and without disability is beneficial to both parties involved. With inclusive education, all students are exposed to the same curriculum, they develop their own individual potential, and participate in the same activities at the same time. Therefore, there is a variety of ways in which learning takes place because students learn differently, at their own pace and by their own style. Effectively, inclusive education provides a nurturing venue where teaching and learning should occur despite pros and cons. It is evident that students with disabilities benefit more in an inclusive atmosphere because they can receive help from their peers with diverse abilities and they compete at the same level due to equal opportunities given." Research on the topic of inclusive education can contribute to the development of existing knowledge in several ways.
See also
European Agency for Special Needs and Inclusive Education
Centre for Studies on Inclusive Education
Post Secondary Transition For High School Students with Disabilities
Mara Sapon-Shevin, student of Douglas Biklen
Discrimination in education
Douglas Biklen
Teaching for social justice
Mainstreaming in education
Special Assistance Program (Australian education)
Circle of friends (disability)
Accord Coalition for inclusivity on the grounds of religion (England and Wales)
Education for All Handicapped Children Act
The Compass Institute Inc – further education and vocational pathways for young people with disabilities
Right to education
Universal access to education
Community integration
Inclusive education in Latin America
Least dangerous assumption
References
Sources
Ainscow M., Booth T. (2003) The Index for Inclusion: Developing Learning & Participation in Schools. Bristol: Center for Studies in Inclusive Education
Thomas, G., & Loxley, A. (2007) Deconstructing Special Education and Constructing Inclusion (2nd Edition). Maidenhead: Open University Press.
Elementary programming for inclusive classrooms
Social development: Promoting Social Development in the Inclusive Classroom
M. Mastropieri, Thomas E. Scruggs. The Inclusive Classroom: Strategies for Effective Instruction
Mary Beth Doyle. The Paraprofessional's Guide to the Inclusive Classroom
Conrad M., & Whitaker T. (1997). Inclusion and the law: A principal's proactive approach. The Clearing House
Jorgensen, C., Schuh, M., & Nisbet, J. (2005). The inclusion facilitator's guide. Baltimore: Paul H. Brookes Publishing Co.
Gunckel, Kristin L., and Felicia M. Moore. "Including Students and Teachers in the Co-Design of the Enacted Curriculum." Eric.ed.gov, 2005, https://files.eric.ed.gov/fulltext/ED498676.pdf.
Further reading
Baglieri, S., & Shapiro, A. (2012). Disability Studies and the Inclusive Classroom. New York, NY: Routledge.
Biklen, D. (2000). Constructing inclusion: Lessons from critical, disability narratives. International Journal of Inclusive Education, 4(4), 337–353.
Biklen, D., & Burke, J. (2006). Presuming competence. Equity & Excellence in Education, 39, 166–175.
Connor, D. (2006). Michael's Story: "I get into so much trouble just by walking":Narrative knowing and life at the intersections of learning disability, race, and class. Equity & Excellence in Education, 39, 154–165.
Davis, L. J. (2010). Constructing normalcy. In L. J. Davis (Ed.), The Disability Studies Reader. (3rd ed.) (pp. 9–28). New York: Routledge.
Erevelles, N. (2011). "Coming out Crip" in inclusive education. Teachers College Record, 113 (10). Retrieved from http://www.tcrecord.org Id Number: 16429
Graham, L., & Slee, R. (2007). An illusory interiority: Interrogating the discourse/s of inclusion. Educational Philosophy and Theory, 40, 277–293.
Kasa-Hendrickson, C. (2005) 'There's no way this kid's retarded': Teachers' optimistic constructions of students' ability. International Journal of Inclusive Education, 9 (1), 55–69.
Kluth, P. 2003. "You're going to love this kid." Teaching students with autism in the inclusive classroom, Baltimore: Brookes.
Knobloch, P. & Harootunian, B. (1989). A classroom is where difference is valued. (pp. 199–209). In: S. Stainback, W. Stainback, & Forest, M., Educating All Students in the Mainstream of Regular Education. Baltimore, MD: Paul H. Brookes.
O'Brien, L. (2006). Being bent over backward: A mother and teacher educator challenges the positioning of her daughter with disabilities. Disability Studies Quarterly, 26 (2).
Porter, L., & Smith, D. (Eds.) (2011). Exploring inclusive educational practices through professional inquiry. Boston, MA: Sense Publishers.
Putnam, J. W. (1993). Cooperative Learning and Strategies for Inclusion: Celebrating Diversity in the Classroom. Baltimore, MD: Paul H. Brookes.
Stainback, S. & Stainback, W. (1996). Inclusion: Guide for Educators. Baltimore, MD: Paul H. Brookes.
Strully, J. & Strully, C. (1984, September). Shawntell & Tanya: A story of friendship. Exceptional Parent, 35–40.
Thomas, G. (2012). A review of thinking and research about inclusive education policy, with suggestions for a new kind of inclusive thinking. British Educational Research Journal, 38 (3), 473–490.
Thompson, B., Wickham, D., Shanks, P., Wegner, J., Ault, M., Reinertson, B. & Guess, D. (n.d., c. 1985). Expanding the circle of inclusion: Integrating young children with severe multiple disabilities into Montessori classrooms. Montessori Life.
Toste, Jessica R. (2015). "The Illusion of Inclusion: How We Are Failing Students with Learning Disabilities". Oath Inc. Website, accessed 11/12/2017.
Wa Munyi, C. (2012). Past and present perceptions towards disability: A historical perspective. Disability Studies Quarterly, 32.
Werts, M.G., Wolery, M., Snyder, E. & Caldwell, N. (1996). Teacher perceptions of the supports critical to the success of inclusion programs. TASH, 21(1): 9-21.
External links
Inclusion in an International Perspective, a webdossier by Education Worldwide, international inclusion information, information for each continent and several countries
Inclusion and Social Justice Articles - A directory of articles on the internet with a specific section on inclusion in education.
UNESCO: Inclusion in education
Inclusive Education Library
Education policy
Special education
Critical pedagogy
Educational environment
Philosophy of education
Educational psychology
Education reform
Accessibility
Segregation
Social inclusion
Research design
Research design refers to the overall strategy utilized to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of a project; a strategy for gathering data and information; and a strategy for producing answers from the data. A strong research design yields valid answers to research questions, while weak designs yield unreliable, imprecise or irrelevant answers.
What is incorporated in the design of a research study depends on the researcher's standpoint regarding the nature of knowledge (see epistemology) and reality (see ontology), often shaped by the disciplinary areas to which the researcher belongs.
The design of a study defines the study type (descriptive, correlational, semi-experimental, experimental, review, meta-analytic) and sub-type (e.g., descriptive-longitudinal case study), research problem, hypotheses, independent and dependent variables, experimental design, and, if applicable, data collection methods and a statistical analysis plan. A research design is a framework that has been created to find answers to research questions.
Design types and sub-types
There are many ways to classify research designs. Nonetheless, the list below offers a number of useful distinctions between possible research designs. A research design is an arrangement of conditions for the collection and analysis of data.
Descriptive (e.g., case-study, naturalistic observation, survey)
Correlational (e.g., case-control study, observational study)
Experimental (e.g., field experiment, controlled experiment, quasi-experiment)
Review (literature review, systematic review)
Meta-analytic (meta-analysis)
Sometimes a distinction is made between "fixed" and "flexible" designs. In some cases, these types coincide with quantitative and qualitative research designs respectively, though this need not be the case. In fixed designs, the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise, it is impossible to know in advance which variables need to be controlled and measured. Often, these variables are measured quantitatively. Flexible designs allow for more freedom during the data collection process. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, the theory might not be available before one starts the research.
Grouping
The choice of how to group participants depends on the research hypothesis and on how the participants are sampled. In a typical experimental study, there will be at least one "experimental" condition (e.g., "treatment") and one "control" condition ("no treatment"), but the appropriate method of grouping may depend on factors such as the duration of measurement phase and participant characteristics:
Cohort study
Cross-sectional study
Cross-sequential study
Longitudinal study
Confirmatory versus exploratory research
Confirmatory research tests a priori hypotheses — outcome predictions that are made before the measurement phase begins. Such a priori hypotheses are usually derived from a theory or the results of previous studies. The advantage of confirmatory research is that the result is more meaningful, in the sense that it is much harder to dismiss a significant result as a coincidence that would not generalize beyond the data set. The reason for this is that in confirmatory research, one ideally strives to reduce the probability of falsely reporting a coincidental result as meaningful. This probability is known as the α-level or the probability of a type I error.
Exploratory research, on the other hand, seeks to generate a posteriori hypotheses by examining a data set and looking for potential relations between variables. It is also possible to have an idea about a relation between variables but to lack knowledge of the direction and strength of the relation. If the researcher does not have any specific hypotheses beforehand, the study is exploratory with respect to the variables in question (although it might be confirmatory for others). The advantage of exploratory research is that it is easier to make new discoveries due to the less stringent methodological restrictions. Here, the researcher does not want to miss a potentially interesting relation and therefore aims to minimize the probability of rejecting a real effect or relation; this probability is sometimes referred to as β, and the associated error is of type II. In other words, a researcher who simply wants to see whether some measured variables could be related would want to increase the chances of finding a significant result by lowering the threshold of what is deemed to be significant.
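In standard hypothesis-testing notation (a general formulation, not taken from any particular study cited here), the two error probabilities and statistical power can be written as:

```latex
\[
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad
\beta = P(\text{retain } H_0 \mid H_0 \text{ false}), \qquad
\text{power} = 1 - \beta,
\]
```

where H_0 is the null hypothesis of no effect or relation. Confirmatory designs prioritize a small α, while exploratory designs tolerate a larger α in order to keep β low, since for a fixed sample size lowering one error probability tends to raise the other.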
Sometimes, a researcher may conduct exploratory research but report it as if it had been confirmatory ('Hypothesizing After the Results are Known', HARKing—see Hypotheses suggested by the data); this is a questionable research practice bordering on fraud.
State problems versus process problems
A distinction can be made between state problems and process problems. State problems aim to answer what the state of a phenomenon is at a given time, while process problems deal with the change of phenomena over time. Examples of state problems are the level of mathematical skills of sixteen-year-old children, the computer skills of the elderly, the depression level of a person, etc. Examples of process problems are the development of mathematical skills from puberty to adulthood, the change in computer skills when people get older, and how depression symptoms change during therapy.
State problems are easier to measure than process problems. State problems just require one measurement of the phenomena of interest, while process problems always require multiple measurements. Research designs such as repeated measurements and longitudinal study are needed to address process problems.
Examples of fixed designs
Experimental research designs
In an experimental design, the researcher actively tries to change the situation, circumstances, or experience of participants (manipulation), which may lead to a change in behavior or outcomes for the participants of the study. The researcher randomly assigns participants to different conditions, measures the variables of interest, and tries to control for confounding variables. Therefore, experiments are often highly fixed even before the data collection starts.
In a good experimental design, a few things are of great importance. First of all, it is necessary to think of the best way to operationalize the variables that will be measured, as well as which statistical methods would be most appropriate to answer the research question. Thus, the researcher should consider what the expectations of the study are as well as how to analyze any potential results. Finally, in an experimental design, the researcher must think of the practical limitations, including the availability of participants as well as how representative the participants are of the target population. It is important to consider each of these factors before beginning the experiment. Additionally, many researchers employ power analysis before they conduct an experiment, in order to determine how large the sample must be to find an effect of a given size with a given design at the desired probability of making a Type I or Type II error. Experimental research designs also give the researcher the advantage of minimizing the resources required.
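As a rough illustration of such a power analysis, the sketch below uses Python's statsmodels library to solve for the per-group sample size of a two-group design analyzed with an independent-samples t-test; the effect size, α, and power values are arbitrary example choices rather than recommendations:

```python
# A minimal a priori power analysis sketch for a two-group experiment,
# using statsmodels' TTestIndPower. The chosen values (a "medium"
# Cohen's d of 0.5, alpha = 0.05, power = 0.80) are illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the per-group sample size needed to detect the assumed effect.
n_per_group = analysis.solve_power(
    effect_size=0.5,  # standardized mean difference (Cohen's d)
    alpha=0.05,       # acceptable probability of a Type I error
    power=0.80,       # desired probability of detecting a true effect (1 - beta)
    ratio=1.0,        # equal sizes for the two groups
)

print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```

Raising the desired power, or shrinking the effect size to be detected, increases the required sample size, which is why these choices must be made before data collection in a fixed design.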
Non-experimental research designs
Non-experimental research designs do not involve a manipulation of the situation, circumstances or experience of the participants. Non-experimental research designs can be broadly classified into three categories. First, in relational designs, a range of variables are measured. These designs are also called correlational studies because correlation data are most often used in the analysis. Since correlation does not imply causation, such studies simply identify co-movements of variables. Correlational designs are helpful in identifying the relation of one variable to another, and in seeing the frequency of co-occurrence in two natural groups (see Correlation and dependence). The second type is comparative research. These designs compare two or more groups on one or more variables, such as the effect of gender on grades. The third type of non-experimental research is a longitudinal design. A longitudinal design examines variables such as performance exhibited by a group or groups over time (see Longitudinal study).
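As a minimal sketch of the relational case, the following Python snippet computes Pearson's r between two measured (not manipulated) variables; the data are invented for illustration, and the coefficient quantifies co-movement only, not causation:

```python
# Pearson correlation between two measured variables in a relational
# (correlational) design. The data here are invented example values.
import numpy as np

hours_studied = np.array([2, 4, 5, 7, 8, 10])
test_scores   = np.array([55, 60, 62, 70, 75, 82])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is Pearson's r for this pair of variables.
r = np.corrcoef(hours_studied, test_scores)[0, 1]
print(f"Pearson's r = {r:.3f}")  # close to +1: strong positive co-movement
```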
Examples of flexible research designs
Case study
Famous case studies include, for example, Freud's accounts of his patients, who were thoroughly analysed and described.
Bell (1999) states "a case study approach is particularly appropriate for individual researchers because it gives an opportunity for one aspect of a problem to be studied in some depth within a limited time scale".
Grounded theory study
Grounded theory research is a systematic research process that works to develop "a process, an action or an interaction about a substantive topic".
See also
Bold hypothesis
Clinical study design
Design of experiments
Grey box completion and validation
Research proposal
Royal Commission on Animal Magnetism
References
design
Etiquette in technology
Etiquette in technology, colloquially referred to as netiquette, is a term used to refer to the unofficial code of policies that encourage good behavior on the Internet which is used to regulate respect and polite behavior on social media platforms, online chatting sites, web forums, and other online engagement websites. The rules of etiquette that apply when communicating over the Internet are different from those applied when communicating in person or by audio (such as telephone) or video call. It is a social code that is used in all places where one can interact with other human beings via the Internet, including text messaging, email, online games, Internet forums, chat rooms, and many more. Although social etiquette in real life is ingrained into our social lives, netiquette is a fairly recent concept.
It can be a challenge to communicate on the Internet without misunderstandings mainly because input from facial expressions and body language is absent in cyberspace. Therefore, several rules, in an attempt to safeguard against these misunderstandings and to discourage unfriendly behavior, are regularly put in place at many websites, and often enforced by moderation by the website's users or administrators.
Netiquette
Netiquette, a colloquial portmanteau of network and etiquette or Internet and etiquette, is a set of social conventions that facilitate interaction over networks, ranging from Usenet and mailing lists to blogs and forums.
Like the network itself, these developing norms remain in a state of flux and vary from community to community. The points most strongly emphasized about Usenet netiquette often include using simple electronic signatures, and avoiding multiposting, cross-posting, off-topic posting, hijacking a discussion thread, and other techniques used to minimize the effort required to read a post or a thread. Similarly, some Usenet guidelines call for use of unabbreviated English while users of instant messaging protocols like SMS occasionally encourage just the opposite, bolstering use of SMS language.
Common rules for e-mail and Usenet such as avoiding flamewars and spam are constant across most mediums and communities. Another rule is to avoid typing in all caps or excessively enlarging script for emphasis, which is considered to be the equivalent of shouting or yelling. Other commonly shared points, such as remembering that one's posts are (or can easily be made) public, are generally intuitively understood by publishers of Web pages and posters to Usenet, although this rule is somewhat flexible depending on the environment. On more private protocols, however, such as e-mail and SMS, some users take the privacy of their posts for granted. One-on-one communications, such as private messages on chat forums and direct SMS, may be considered more private than other such protocols.
A group e-mail sent by Cerner CEO Neal Patterson to managers of a facility in Kansas City concerning "Cerner's declining work ethic" read, in part, "The parking lot is sparsely used at 8 A.M.; likewise at 5 P.M. As managers—you either do not know what your EMPLOYEES are doing, or YOU do not CARE ... In either case, you have a problem and you will fix it or I will replace you." After the e-mail was forwarded to hundreds of other employees, it quickly leaked to the public. On the day that the e-mail was posted to Yahoo!, Cerner's stock price fell by over 22% from a high market capitalization of US$1.5 billion.
Beyond matters of basic courtesy and privacy, e-mail syntax (defined by RFC 2822) allows for different types of recipients. The primary recipient, defined by the To: line, can reasonably be expected to respond, but recipients of carbon copies are not expected to, although they still might. Likewise, misuse of the CC: functions in lieu of traditional mailing lists can result in serious technical issues. In late 2007, employees of the United States Department of Homeland Security used large CC: lists in place of a mailing list to broadcast messages to several hundred users. Misuse of the "reply to all" function caused the number of responses to that message to quickly expand to some two million messages, bringing down their mail server. In cases like this, rules of netiquette have more to do with the efficient sharing of resources—ensuring that the associated technology continues to function—than with more basic etiquette. On Usenet, cross-posting, in which a single copy of a message is posted to multiple groups, is intended to prevent this from happening, but many newsgroups frown on the practice, as it means users must sometimes read many copies of a message in multiple groups.
Due to the large variation between what is considered acceptable behavior in various professional environments and between professional and social networks, codified internal manuals of style can help clarify acceptable limits and boundaries for user behavior. For instance, failure to publish such a guide for e-mail style was cited among the reasons for a NZ$17,000 wrongful dismissal finding against a firm that fired a woman for misuse of bold, colorful, all-caps text in company-wide e-mail traffic.
Netiquette in South Korea
In South Korea, the Korea Internet Safety Commission declared the 'Netizen Ethics Code' on June 15, 2000, and the Ministry of Education prepared the 'Information Communication Ethics Education Guidelines' in early 2001. Subsequently, some middle and high schools started to provide education on netiquette. The basic netiquette education contents of South Korea are as follows. Postings to a noticeboard should be written clearly and concisely, use proper grammar and Korean spelling, and avoid excessive refutation of other people's writings. In e-mails, writers should identify themselves before sending a message. When chatting, users should introduce themselves first, engage in conversation using the honorific title "Nim", and not slander, abuse, or make sarcastic remarks. It is also against etiquette to repeat the same thing over and over again, and users must offer parting salutations when leaving a chat. Finally, users must not engage in sexual harassment, stalking, or the use of expletives.
Digital citizenship
Digital citizenship is how a person should act while using digital technology online and has also been defined as "the ability to participate in society online". The term is often mentioned in relation to Internet safety and netiquette.
The term has been used as early as 1998 and has gone through several changes in description as newer technological advances have changed the method and frequency of how people interact with one another online. Classes on digital citizenship have been taught in some public education systems and some argue that the term can be "measured in terms of economic and political activities online".
Cell phone etiquette
The issue of mobile communication and etiquette has also become an issue of academic interest. The rapid adoption of the device has resulted in the intrusion of telephony into situations where it was previously not used. This has exposed the implicit rules of courtesy and opened them to re-evaluation.
In the education system
Most schools in the United States, Europe and Canada have prohibited mobile phones in the classroom, citing class disruptions and the potential for cheating via text messaging. In the UK, possession of a mobile phone in an examination can result in immediate disqualification from that subject or from all that student's subjects. This still applies even if the mobile phone was not turned on at the time. In New York City, students were banned from taking cell phones to school until 2015; the ban had been debated for several years before legislation was finally passed in 2008.
"Most schools allow students to have cell phones for safety purposes"—a reaction to the Columbine High School massacre (Lipscomb 2007: 50). Apart from emergency situations, most schools don't officially allow students to use cell phones during class time.
In the public sphere
Talking or texting on a cell phone in public may seem a distraction for many individuals. Public phone use falls into two situations: when the user is alone, and when the user is in a group. The main issue for most people arises in a group, where the cell phone becomes a distraction or a barrier to successful socialization among family and friends. In the past few years, society has become less tolerant of cell phone use in public areas, for example on public transportation and in restaurants. This is exemplified by the widespread recognition of campaigns such as Stop Phubbing, which prompted discussion as to how mobile phones should be used in the presence of others. "Some have suggested that mobile phones 'affect every aspect of our personal and professional lives either directly or indirectly'" (Humphrey). Every culture's tolerance of cell phone usage varies; for instance, in Western society cell phones are permissible during free time at schools, whereas in Eastern countries cell phones are strictly prohibited on school property.
Mobile phone use can be an important matter of social discourtesy, such as phones ringing during funerals or weddings, in toilets, cinemas and theatres. Some book shops, libraries, bathrooms, cinemas, doctors' offices and places of worship prohibit their use, so that other patrons will not be disturbed by conversations. Some facilities install signal-jamming equipment to prevent their use. Some new auditoriums have installed wire mesh in the walls to make a Faraday cage, which prevents signal penetration without violating signal jamming laws.
A working group made up of Finnish telephone companies, public transport operators and communications authorities has launched a campaign to remind mobile phone users of courtesy, especially when using mass transit—covering what to talk about on the phone, and how. In particular, the campaign wants to discourage loud mobile phone usage as well as calls regarding sensitive matters.
Trains, particularly those involving long-distance services, often offer a "quiet carriage" where phone use is prohibited, much like the designated non-smoking carriage of the past. In the UK, however, many users tend to ignore this as it is rarely enforced, especially if the other carriages are crowded and they have no choice but to go in the "quiet carriage". In Japan, it is generally considered impolite to talk using a phone on any train; e-mailing is generally the preferred mode of mobile communication. Mobile phone usage on local public transport is also increasingly seen as a nuisance; the Austrian city of Graz, for instance, mandated a total ban of mobile phones on its tram and bus network in 2008 (though texting and emailing are still allowed).
Nancy J. Friedman has spoken widely about landline and cell phone etiquette.
Within social relationships
When critically assessing the family structure, it is important to examine the parent/child negotiations which occur in the household in relation to the increased use of cell phones. Teenagers use their cell phones as a way to negotiate spatial boundaries with their parents (Williams 2005:316). This includes extending curfews in the public space and allowing more freedom for the teenagers when they are outside of the home (Williams 2005:318). More importantly, cell phone etiquette relates to kinship groups and the family as an institution, because cell phones can threaten it through a rapid disconnect within families. Children are often so closely attached to their technological gadgets, and tend to interact with their friends so constantly, that this has a negative impact on their relationship with their parents (Williams 2005:326). Teenagers see themselves as gaining a sense of empowerment from the mobile phone. Cell phone etiquette in the household, viewed from an anthropological perspective, has shown an evolution in the institution of family. The mobile phone has now been integrated into family practices and has perpetuated a wider concern: the fracture between parent and child relationships. Traditional values are disappearing; however, reflexive monitoring is occurring (Williams 2005:320). Through this, parents are becoming friendlier with their children, and critics emphasize that this change is problematic because children should be subject to social control. One form of social control is limiting the time spent interacting with friends, which is difficult to do in today's society because of the rapid growth of cell phone use.
Netiquette vs. cell phone etiquette
Cell phone etiquette is largely dependent on the cultural context and what is deemed to be socially acceptable. For instance, in certain cultures using handheld devices while interacting in a group environment is considered bad manners, whereas in other cultures around the world it may be viewed differently. In addition, cell phone etiquette also encompasses the various types of activities which are occurring and the nature of the messages which are being sent. More importantly, messages of an inappropriate nature can be sent to an individual, and this can lead to problems such as verbal or cyber abuse.
New technology and behavior
One of the biggest obstacles to communication in online settings is the lack of emotional cues. Facial cues dictate the mood and corresponding diction of people in conversations. During phone conversations, tone of voice communicates the emotions of speakers separated at opposite ends of the line. Conversely, in chat rooms, instant messaging apps, texting, and other text-based communication, signals that would indicate a person's emotional state are absent. Because of this, accommodations have been developed, notably the use of emoticons and abbreviations. Emoticons use punctuation marks and symbols to graphically represent facial expressions. For example, a colon and parenthesis can be used to represent a smiling face, indicating happiness or satisfaction. To symbolize laughter, the abbreviation "LOL" (standing for "laughing out loud") developed. Other commonly used abbreviations are "BRB" ("be right back") and "TTYL" ("talk to you later").
Now, as newer modes of communication become increasingly common, apps such as Snapchat are growing to develop platform-specific rules and etiquette. Snapchat lets a user send pictures or videos that disappear after several seconds. Although Snapchat can be used for sexting—sending nude and erotic photos—and was originally compared to Instagram because of its ability to broadcast pictures to many people, it has now become standard to communicate through Snapchat by sending pictures back and forth and using the caption bar for messages. The reply option on Snapchat specifically promotes this behavior, but Snapchat etiquette is not set in stone. Some people use Snapchat specifically for the purpose of communication, while some use it simply to provide a visual update of their day. The newest update to Snapchat, an instant messaging add-on, seems catered to those who use the app to send messages back and forth.
See also
Digital citizen
Eternal September
Restrictions on cell phone use by U.S. drivers
Shotgun email
References
Further reading
Pręgowski, Michał Piotr, "Rediscovering the netiquette: the role of propagated values and personal patterns in defining self-identity of the Internet user ", Observatorio 2009: 354–356. Google Scholar. Web. 15 Dec. 2010.
Null, Christopher "Text Messaging Etiquette: To Text or Not to Text ". PC World 2010. Web. 15 December 2010.
External links
RFC1855: the historical 1995 document at IETF, listing Netiquette guidelines.
ToastMasters on Social Media Etiquette
"A new sort of online protocol", CNET, 1997 (last accessed: 16 March 2019)
The rules of netiquette—Matthew Strawbridge's weblog, 2009
Some FAQ's about Mailing Lists and Mailing List Netiquette
Virginia Shea, Netiquette (online ed.) book
Technology
Internet culture
Westernization
Westernization (or Westernisation, see spelling differences), also Europeanisation or occidentalization (from the Occident), is a process whereby societies come under or adopt what is considered to be Western culture, in areas such as industry, technology, science, education, politics, economics, lifestyle, law, norms, mores, customs, traditions, values, mentality, perceptions, diet, clothing, language, writing system, religion, and philosophy. During colonialism it often involved the spread of Christianity. A related concept is Northernization, which is the consolidation or influence of the Global North.
Westernization has been a growing influence across the world in the last few centuries, with some thinkers assuming Westernization to be the equivalent of modernization, a way of thought that is often debated. The overall process of Westernization is often two-sided in that Western influences and interests themselves are joined with parts of the affected society, at minimum, to become a more Westernized society, with the putative goal of attaining a Western life or some aspects of it, while Western societies are themselves affected by this process and interaction with non-Western groups.
Westernization traces its roots back to Ancient Greece. Later, the Roman Empire took on the first process of Westernization as it was heavily influenced by Greece and created a new culture based on the principles and values of the Ancient Greek society. The Romans emerged with a culture that grew into a new Western identity based on the Greco-Roman society. Westernization can also be compared to acculturation and enculturation. Acculturation is "the process of cultural and psychological change that takes place as a result of contact between cultural groups and their individual members".
After contact, changes in cultural patterns are evident within one or both cultures. Specific to Westernization and the non-Western culture, foreign societies tend to adopt changes in their social systems relative to Western ideology, lifestyle, and physical appearance, along with numerous other aspects, and shifts in culture patterns can be seen to take root as a community becomes acculturated to Western customs and characteristics – in other words, Westernized. The phenomenon of Westernization does not follow any one specific pattern across societies as the degree of adaptation and fusion with Western customs will occur at varying magnitudes within different communities. Specifically, the extent to which domination, destruction, resistance, survival, adaptation, or modification affect a native culture may differ following inter-ethnic contact.
Western world
The West was originally defined as the Western world. A thousand years later, the East-West Schism separated the Catholic Church and Eastern Orthodox Church from each other. The definition of Western changed as the West was influenced by and spread to other nations. Islamic and Byzantine scholars added to the Western canon when their stores of Greek and Roman literature jump-started the Renaissance. The Cold War also reinterpreted the definition of the West by excluding the countries of the former Eastern Bloc. Today, most modern uses of the term refer to the societies in the West and their close genealogical, linguistic, and philosophical descendants. Typically included are those countries whose ethnic identity and dominant culture are derived from Western European culture. Though it shares a similar historical background, the Western world is not a monolithic bloc, as many cultural, linguistic, religious, political, and economic differences exist between Western countries and populations.
Significantly influenced countries
The following countries or regions experienced a significant influence by the process of Westernization:
Armenia: Geographically located in the Caucasus region of West Asia, Armenia's culture has been increasingly influenced by the process of Westernization. Throughout its history, Armenia has been influenced by Western and Eastern civilizations. Armenia became the first state in the world to adopt Christianity as its official religion in 301 AD. The traditional Armenian homeland, composed of Eastern Armenia and Western Armenia, came under the rule of the Roman, Persian, Arab, Ottoman, and Russian empires. The present-day Republic of Armenia gained its independence in 1991, following the collapse of the Soviet Union. Today, the Government of Armenia maintains positive relations with Iran, Russia, and the West, including the United States and the EU. The country participates in various organizations linked to the EU, such as the Eastern Partnership, the Euronest Parliamentary Assembly and is a member of the Council of Europe, the European Political Community, the OSCE, the BSEC, La Francophonie, and NATO's Partnership for Peace and Euro-Atlantic Partnership Council. In 2017, Armenia signed an extensive agreement with the EU; the CEPA agreement further strengthens economic and political ties. Armenia is also a member of various European organisations for sports, education, and cultural events such as UEFA, the European Olympic Committees, and the European Higher Education Area, and participates in the Eurovision Song Contest.
Azerbaijan: Geographically located in the Caucasus mountain range (natural border between Western Asia and Eastern Europe). Azerbaijan borrowed Western traditions mainly as a result of imperial Russian influence, with the Muslim world's first opera and secular democracy being established there before its invasion by the Soviets. Currently, the country participates in various European organizations including the EU's Eastern Partnership, the Council of Europe, and GUAM. It is also a member of European organisations for sports such as UEFA and the European Olympic Committees, and regularly participates in the Eurovision Song Contest. Despite this, the country remains an authoritarian regime with considerable human rights and press freedom issues.
Cape Verde: An insular country in West Africa, Cape Verde has influences of European culture (particularly Portuguese) and, together with the Azores and Madeira (Portugal), and the Canary Islands (Spain), it is part of the archipelagos of Macaronesia. Due to this, the country has shared close diplomatic and cultural relations with both Iberian countries and has even tried to approach Western organizations, like the EU and NATO.
Hong Kong, Macau, and Singapore: Despite their geographical positions in East and Southeast Asia, due to the heavy influence of European (particularly British and Portuguese) heritage and culture, they are at least partially Westernized.
Israel: Although Israel is geographically located in Western Asia, many Western cultural influences were brought to Israel by Jewish settlers from the diaspora, particularly from countries like Canada, France, Germany, the United Kingdom, and the United States. It is a member of the OECD. It is often a member of European organisations for sports and cultural events such as UEFA and Eurovision, which is due in large part to Israel's ouster from their respective Asian counterparts. According to Sammy Smooha, a professor emeritus of sociology at Haifa University, Israel is described as a "hybrid": a modern and developed "semi-Western" state. With time, he acknowledged, Israel will become "more and more Western." But as a result of the ongoing Arab–Israeli conflict, full Westernization will be a slow process in Israel.
Japan, South Korea, and Taiwan: Although they are geographically located in East Asia, the three countries have westernized themselves by adopting democratic forms of government, free market economic systems, major contributions to Western science and technology, and could be described as "hybrid", "semi-Western" states.
Americas: Most countries in the Americas are considered Western countries, largely because most of their peoples are descended from Europeans (Spanish and Portuguese settlers and later immigrants from other European nations), and their societies operate in a highly Westernized way. Most countries in the Americas use English, French, Spanish or Portuguese as their official language. According to the CIA World Factbook, there has also been considerable immigration to South America, particularly to Argentina, Brazil, Chile, and Uruguay, from European nations other than Spain and Portugal (for example, from Germany, Italy, the Netherlands, etc.—see Immigration to Argentina, Immigration to Brazil, Immigration to Chile, and Immigration to Uruguay).
Lebanon: Geographically located in Western Asia, Lebanon is the most Westernized country in the Arab world. In ancient history, Lebanon was ruled by the Hellenistic and Roman empires. Even though it was later ruled by the Caliphate, Lebanon has the highest proportion of Christians in the Arab world, and Christians have dominated the country politically, economically and culturally. Since Lebanon was historically a French mandate, France promoted French culture and European-style education there, and at that time Beirut was known as the "Little Paris of the Middle East". Currently, French is still widely spoken and Lebanon is a member of the Organization of la Francophonie.
Philippines: Geographically located in Southeast Asia, due to heavy influences of European (particularly Spanish) and American cultures in Filipino culture, the country is considered Westernized. Moreover, nearly 90% of the Filipino population practices Christianity.
Thailand: Although Thailand is geographically located in Southeast Asia, through the 18th and 19th centuries Siam faced imperialist pressure from France and the United Kingdom, including many unequal treaties with Western powers and forced concessions of territory; it nevertheless remained the only Southeast Asian country to avoid direct Western colonization. The country Westernized itself: the Siamese system of government was centralized and initially organized into a modern unitary absolute monarchy during the reign of Chulalongkorn, and later became a constitutional monarchy following the Siamese revolution of 1932. In the late 1950s, Thailand became a major ally of the United States, and played a key anti-communist role in the region as a member of SEATO. Currently, Thailand continues to have strong ties to Western countries.
Turkey: Although geographically only 3% of Turkey lies in Europe (East Thrace) and the rest in Western Asia, Turkey is one of the most Westernized Turkic countries. The country has a similar economic system, has a customs union with the European Union in addition to being an official candidate for membership, and is a member of traditional European & Western organisations such as the OECD, the Council of Europe, and NATO. It is also a member of European organisations for sports such as UEFA and the European Olympic Committees, and has participated in the Eurovision Song Contest. Relations between Turkey and Western countries have been deteriorating since the 2010s.
Vietnam: Geographically located in Southeast Asia, Vietnam, due to the influence of French rule, completely abandoned Chữ Hán and Chữ Nôm, which the French government considered backward and a hindrance to the spread of European ideas, and adopted the Latin script (chữ Quốc ngữ). During French rule, a large number of French-style buildings were built in Saigon and Hanoi, earning them the nickname Paris of the East. Christianity (especially Catholicism) has a huge influence in Vietnam. After the partition of Vietnam, South Vietnam was Americanized and North Vietnam was Sovietized. Currently, Vietnam is a member of the Organization of la Francophonie.
Views
Kishore Mahbubani
Kishore Mahbubani's book The Great Convergence: Asia, the West, and the Logic of One World (Public Affairs) is highly optimistic. It proposes that a new global civilization is being created and that the majority of non-Western countries admire and adhere to Western living standards. It argues that this newly emerging global order has to be governed through new policies and attitudes: policymakers all over the world must change their preconceptions and accept that we live in one world, national interests must be balanced with global interests, and power must be shared. Mahbubani urges that only through these actions can we create a world that converges benignly.
Samuel P. Huntington posits a conflict between "the West and the Rest" and offers three general forms of action that non-Western civilizations can take in response to Western countries.
Non-Western countries can attempt to achieve isolation to preserve their own values and protect themselves from Western invasion. He argues that the cost of this action is high and only a few states can pursue it.
According to the theory of "band-wagoning" non-Western countries can join and accept Western values.
Non-Western countries can make an effort to balance Western power through modernization. They can develop economic, and military power and cooperate with other non-Western countries against the West while still preserving their own values and institutions.
Mahbubani counters this argument in his other book, The New Asian Hemisphere: The Irresistible Shift of Global Power to the East. This time, he argues that Western influence is now "unraveling", with Eastern powers such as China arising.
He explains the decline of Western influence, giving reasons for the loss of Western credibility with the rest of the world:
There is an increasing perception that Western countries will prioritize their domestic problems over international issues, despite their spoken and written promises of having global interests and needs.
The West has become increasingly biased and close-minded in their perception of "non-Western" countries such as China, declaring it an "un-free" country for not following a democratic form of government.
The West uses a double standard when dealing with international issues.
As the biggest Eastern populations gain more power, they are moving away from the Western influences they sought after in the past. The "anti-Americanism" sentiment is not temporary, as Westerners like to believe – the change in the Eastern mindset has become far too significant for it to change back.
Samuel P. Huntington
In contrast to territorial delineation, others, like the American political scientist Samuel P. Huntington in The Clash of Civilizations, consider what is "Western" based on religious affiliation, such as deeming the majority-Western-Christian parts of Europe and North America the West, and organizing the rest of the globe into six other civilizations: Latin American, Confucian, Japanese, Islamic, Hindu, and Slavic-Orthodox. Huntington argued that after the end of the Cold War, world politics had moved into a new phase, in which non-Western civilizations were no longer the exploited recipients of Western civilization but had become important actors joining the West in shaping and moving world history. Huntington believed that while the age of ideology had ended, the world had only reverted to a normal state of affairs characterized by cultural conflict. In his thesis, he argued that the primary axis of conflict in the future will be along cultural and religious lines.
Edward Said
In Orientalism Edward Said views Westernization as it occurred in the process of colonization, an exercise of essentializing a "subject race" in order to more effectively dominate them. Said references Arthur Balfour, the British Prime Minister from 1902 to 1905, who regarded the rise of nationalism in Egypt in the late 19th century as counterproductive to a "benevolent" system of occupational rule. Balfour frames his argument in favor of continued rule over the Egyptian people by appealing to England's great "understanding" of Egypt's civilization and purporting that England's cultural strengths complemented and made them natural superiors to Egypt's racial deficiencies. Regarding this claim, Said says, "Knowledge to Balfour means surveying a civilization from its origins to its prime to its decline – and of course, it means being able to...The object of such knowledge is inherently vulnerable to scrutiny; this object is a 'fact' which, if it develops, changes, or otherwise transforms itself...[the civilization] nevertheless is fundamentally, even ontologically stable. To have such knowledge of such a thing is to dominate it." The act of claiming coherent knowledge of a society in effect objectifies and others it into marginalization, making people who are classified into that race as "almost everywhere nearly the same." Said also argues that this relationship to the "inferior" races, in fact, works to also fortify and make coherent what is meant by "the West"; if "The Oriental is irrational, depraved (fallen), childlike, "different..." then "...the European is rational, virtuous, mature, normal." Thus, "the West" acts as a construction in the similar way as does "the Orient" – it is a created notion to justify a particular set of power relations, in this case, the colonization and rule of a foreign country.
Process
Colonization and Europeanization (1400s–1970s)
From the 1400s onward, Europeanization and colonialism spread gradually over much of the world, and European powers controlled different regions during this five-century-long period, colonizing or subjecting the majority of the globe.
Following World War II, Western leaders and academics sought to expand innate liberties and international equality. A period of decolonization began. At the end of the 1960s, most colonies were allowed autonomy. Those new states often adopted some aspects of Western politics such as a constitution, while frequently reacting against Western culture.
In Asia
General reactions to Westernization can include fundamentalism, protectionism, or embrace to varying degrees. Countries such as Korea and China attempted to adopt a system of isolationism but have ultimately incorporated parts of Western culture into their own, often adding original and unique social influences, as exemplified by the introduction of over 1,300 locations of the traditionally Western fast-food chain McDonald's into China. Specific to Taiwan, the industry of bridal photography (see Photography in Taiwan) has been significantly influenced by the Western idea of "love". As examined by author Bonnie Adrian, Taiwanese bridal photos of today provide a striking contrast to past accepted norms: contemporary couples often display great physical affection and are, at times, placed in typically Western settings to augment the modernity, in contrast to the historically prominent relationship between bride and groom, often stoic and distant. Though Western concepts may have initially played a role in creating this cultural shift in Taiwan, the market and desire for bridal photography has not continued without adjustments and social modifications to this Western notion.
Korea
In Korea, the first contact with Westernization came during the Joseon Dynasty, in the 17th century. Every year, the king dispatched a few envoys to China, and while they were staying in Beijing, Western missionaries were there. Through the missionaries, Korean ambassadors were able to adopt Western technology. In the 19th century, Korea started to send ambassadors to foreign countries other than Japan and China. While Korea was slowly being Westernized in the late 19th century, it held to the idea of "Eastern ways and Western frames (東道西器)", meaning that it accepted the Western "bowl", but used it with Eastern principles inside.
Japan
In Japan, the Netherlands continued to play a key role in transmitting Western know-how to the Japanese from the 17th century to the mid-19th century, because the Japanese had only opened their doors to Dutch merchants before US Navy Commodore Matthew Perry's visit in 1853. After Commodore Perry's visit, Japan began to deliberately accept Western culture to the point of hiring Westerners to teach Western customs and traditions to the Japanese starting in the Meiji era. Since then, many Japanese politicians have encouraged the Westernization of Japan with the use of the term Datsu-A Ron, which means the argument for "leaving Asia" or "Good-bye Asia". In Datsu-A Ron, "Westernization" was described as an "unavoidable" but "fruitful" change. In contrast, despite many advances in industrial efficiency, Japan has sustained a culture of strict social hierarchy and limited individualization.
Iran
In Iran, the process of Westernization dates back to the country's attempt to Westernize during the beginning of the 1930s, dictated by Shah Rezā Khan and continued by his son during the Cold War. This agitated the largely conservative Shia Muslim masses of the country and was partly responsible for the 1979 Iranian Revolution.
Turkey
In Turkey, the synchronization process with the West is known as the Tanzimat (reorganization) period. The Ottoman Empire began to change itself according to modern science, practice, and culture, taking some innovations from the West. With the contribution of foreign engineers, the Empire also repaired its old arms systems. Newly founded schools, permanent ambassadors, and privy councils were an essential improvement for the Empire. As a result, Turkey is today one of the most Westernized majority-Muslim nations.
India
India's independence movement took inspiration from Western ideas about democracy and human rights. India's ruling class after independence in 1947 remained somewhat Westernized; India's first Prime Minister, Jawaharlal Nehru, had such a substantial Britishness that he once described himself as "the last Englishman to rule India." In 2014, however, the Bharatiya Janata Party (BJP) won power on the back of perceptions of the ruling class being insufficiently Indian.
Globalization (1970s–present)
Westernization is often regarded as a part of the ongoing process of globalization. This theory proposes that Western thought has led to globalisation, and that globalisation propagates Western culture, leading to a cycle of Westernization. On top of largely Western systems of government such as democracy and constitutionalism, many Western technologies and customs like music, clothing, and cars have been introduced across various parts of the world and copied and created in traditionally non-Western countries.
Westernization has been reversed in some countries following war or regime change. For example: Russia in the aftermath of the Bolshevik Revolution of 1917, mainland China by 1949, Cuba in the aftermath of the Revolution in 1959, and Iran after the 1979 revolution.
The main characteristics are economic (free trade) and political democratisation, combined with the spread of an individualised culture. Often it was regarded as the opposite of the worldwide influence of communism. After the break-up of the USSR in late 1991 and the end of the Cold War, many of its component states and allies nevertheless underwent Westernization, including the privatization of hitherto state-controlled industry.
With debates still going on, the question of whether globalization can be characterized as Westernization can be seen from various angles. Globalization is happening in various domains, ranging from economics and politics to food and culture. To some schools of thought, Westernization is a form of globalization that leads the world to resemble Western powers. Being globalized means taking on positive aspects of the world, but globalization also brings up the debate about being Westernized. Democracy, fast food, and American pop culture can all be examples that are considered Westernization of the world.
According to the "Theory of the Globe scrambled by Social network: a new Sphere of Influence 2.0", published by Jura Gentium (University of Florence), the increasing role of Westernization is characterized by social media. The contrast with Eastern societies that decided to ban American social media platforms (such as Iran and China with Facebook and Twitter) marks a political desire to avoid the Westernization of their own populations and ways of communicating.
Consequences
Due to the colonization of the Americas and Oceania by Europeans, the cultural, ethnic, and linguistic make-up of the Americas and Oceania has been changed. This is most visible in settler colonies such as Australia, Canada, New Zealand, the United States, Argentina, Brazil, Chile, Costa Rica, and Uruguay, where the traditional indigenous population has been predominantly replaced demographically by non-indigenous settlers due to transmitted disease and conflict. This demographic takeover in settler countries has often resulted in the linguistic, social, and cultural marginalisation of indigenous people. Even in countries where large populations of indigenous people remain or the indigenous peoples have mixed (mestizo) considerably with European settlers, such as Mexico, Peru, Panama, Suriname, Ecuador, Bolivia, Venezuela, Belize, Paraguay, South Africa, Colombia, Guatemala, Haiti, Honduras, Guyana, El Salvador, Jamaica, Cuba, and Nicaragua, relative marginalisation still exists.
Linguistic influence
Due to colonization and immigration, the formerly prevalent languages in the Americas, Oceania, and part of South Africa, are now usually Indo-European languages or creoles based on them:
English (Australia, New Zealand, United States, and Canada without mainly French-speaking Quebec); English along with English-based creole languages (Anglophone Africa, Antigua and Barbuda, Bahamas, Barbados, Dominica, Federated States of Micronesia, Fiji, Grenada, Guyana, Hong Kong, India, Jamaica, Kiribati, Marshall Islands, Nauru, Palau, Papua New Guinea, the Philippines, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Samoa, Singapore, Solomon Islands, Sri Lanka, Tonga, Tuvalu, and Trinidad and Tobago).
French (Quebec, New Brunswick and parts of Ontario in Canada and Saint Pierre and Miquelon); French along with French-based creole languages (Francophone Africa, French Guiana, Guadeloupe, Haiti, Vanuatu, Martinique, and Saint-Barthelemy).
Spanish (the Americas, Equatorial Guinea, Western Sahara, and the Philippines).
Portuguese (Brazil, Lusophone Africa, East Timor, Macau, Goa, and other members of the Community of Portuguese Language Countries).
Dutch along with Creole languages (Suriname, Aruba and the Netherlands Antilles).
Afrikaans along with English (parts of South Africa and Namibia).
German, along with creole languages (with Afrikaans in Namibia, and in some areas of the US, such as Pennsylvania (Pennsylvania Dutch)).
Many indigenous languages are on the verge of becoming extinct. Some settler countries have preserved indigenous languages; for example, in New Zealand the Māori language is one of three official languages, the others being English and New Zealand Sign Language. Another example is Ireland, where Irish is the first official language, followed by English as the second official language.
Sports importance in Westernization
The importance of sports partly comes from their connection to Westernization. Edelman and Wilson (2017) explain: “This new system of thought and practices imbued with positive values in the exertion and strategic deployment of the human body, embracing the Anglo-American notion that physical activity was meaningful in and of itself, conducive to values such as learning and character-building. Modern athletics and competitive sports, avatars of this new body culture, elicited largely willing local receptions in North Asia, though there were no doubt isolated cases of coercive foisting better characterized as cultural imperialism.”
See also
References
Further reading
The Limits of Westernization: American and East Asian Intellectuals Create Modernity, 1860-1960 (2019) Routledge, written by Jon Thares Davidann
The Decline of the West (1918), written by Oswald Spengler.
The End of History and the Last Man (1992), written by Francis Fukuyama.
The Clash of Civilizations (1996), written by Samuel P. Huntington.
The Triumph of the West (1985) written by Oxford University historian J.M. Roberts.
Gardels, Nathan (1997) 'Clash of civilizations: modernization without Westernization', The National Times, May/June: 8-10.
Global culture
Cultural assimilation
Cultural geography
Imperialism
Western culture
OODA loop
The OODA loop (observe, orient, decide, act) is a decision-making model developed by United States Air Force Colonel John Boyd. He applied the concept to the combat operations process, often at the operational level during military campaigns. It is often applied to understand commercial operations and learning processes. The approach explains how agility can overcome raw power in dealing with human opponents.
The OODA loop includes continuous collection of feedback and observations. This enables late commitment, which is an important element of agility, in contrast to, for example, the PDCA cycle, which requires early commitment (its first steps are Plan and Do).
The OODA loop has become an important concept in litigation, business, law enforcement, management education, military strategy, cybersecurity, and cyberwarfare. According to Boyd, decision-making occurs in an iterative cycle of "observe, orient, decide, act". An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly and/or more effectively than an opponent, can thereby get inside the opponent's decision cycle and gain the advantage.
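As a rough sketch of the cycle's mechanics (illustrative only: the state variables, update rules, and stopping condition are invented for this example and are not part of Boyd's formulation), the loop can be written as a feedback-driven control loop in which each action changes what is observed next:

```python
import random

def observe(state):
    # Observe: sample noisy information from the environment.
    return state["true_value"] + random.gauss(0, 1)

def orient(beliefs, observation):
    # Orient: fold the new observation into existing beliefs
    # (here, a simple running average stands in for analysis/synthesis).
    beliefs["estimate"] = 0.8 * beliefs["estimate"] + 0.2 * observation
    return beliefs

def decide(beliefs):
    # Decide: choose an action based on the current orientation.
    return "engage" if beliefs["estimate"] > 0 else "hold"

def act(state, action):
    # Act: change the environment; the result feeds the next observation.
    state["true_value"] += -0.5 if action == "engage" else 0.1
    return state

# The loop itself: each pass feeds its outcome back into observation,
# which is what allows commitment to be deferred until late in the cycle.
state, beliefs = {"true_value": 3.0}, {"estimate": 0.0}
for _ in range(20):
    beliefs = orient(beliefs, observe(state))
    state = act(state, decide(beliefs))
print(beliefs["estimate"])
```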
Some scholars are critical of the concept. Aviation historian Michael Hankins, for example, writes that "the OODA loop is vague enough that its defenders and attackers can each see what they want to see in it. For some, the OODA concept’s flexibility is its strength, but for others it becomes so generalized as to lose its usefulness." He concludes that "The OODA loop is merely one way among a myriad of ways of describing intuitive processes of learning and decision making that most people experience daily. It is not incorrect, but neither is it unique or especially profound."
See also
Decision cycle
Double-loop learning
Feedback loop
Improvement cycle
DMAIC
PDCA
Learning cycle
Maneuver warfare
Mental model
Nursing process
Problem solving
Situation awareness
SWOT analysis
United States Army Strategist
References
Bibliography
Boyd, John R., The Essence of Winning and Losing, 28 June 1995, a five-slide set by Boyd.
Greene, Robert, OODA and You
Hillaker, Harry, Code One magazine, "John Boyd, United States Air Force Retired, Father of the F-16", July 1997.
Linger, Henry, Constructing The Infrastructure for the Knowledge Economy: Methods and Tools, Theory and Practice, p. 449
Metayer, Estelle, Decision making: It's all about taking off – and landing safely…, Competia, December 2011
Osinga, Frans, "Science, Strategy and War: The Strategic Theory of John Boyd"
Richards, Chet, Certain to Win: the Strategy of John Boyd, Applied to Business (2004)
Ullman, David G., "OO–OO–OO!" The Sound of a Broken OODA Loop, Crosstalk, April 2007
External links
Archived documents
Video: The OODA Loop and Clausewitzian "Friction"
Bazin, A. (2005). "Boyd's OODA Loop and the Infantry Company Commander". Infantry Magazine.
OODA Loop 2.0: Information, Not Agility, Is Life
OODA Loop in Emergency Preparedness
Feedback
Intelligence analysis
Management cybernetics
Strategy
Systems analysis
United States Air Force
Law of three stages
The law of three stages is an idea developed by Auguste Comte in his work The Course in Positive Philosophy. It states that society as a whole, and each particular science, develops through three mentally conceived stages: (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage.
The progression of the three stages of sociology
(1) The Theological stage refers to the appeal to personified deities. During the earlier stages, people believed that all the phenomena of nature were the creation of the divine or supernatural. People failed to discover the natural causes of various phenomena and hence attributed them to a supernatural or divine power. Comte broke this stage into three sub-stages:
1A. Fetishism – Fetishism was the primary stage of the theological stage of thinking. Throughout this stage, primitive people believe that inanimate objects have living spirits in them, also known as animism. People worship inanimate objects like trees, stones, pieces of wood, volcanic eruptions, etc. Through this practice, people believe that all things root from a supernatural source.
1B. Polytheism – At one point, Fetishism began to bring about doubt in the minds of its believers. As a result, people turned towards polytheism: the explanation of things through the use of many Gods. Primitive people believe that all natural forces are controlled by different Gods; a few examples would be the God of water, God of rain, God of fire, God of air, God of earth, etc.
1C. Monotheism – Monotheism means believing in one God or God in one; attributing all to a single, supreme deity. Primitive people believe a single theistic entity is responsible for the existence of the universe.
(2) The Metaphysical stage is an extension of the theological stage. It refers to explanation by impersonal abstract concepts. People often try to characterize God as an abstract being. They believe that an abstract power or force guides and determines events in the world. Metaphysical thinking discards belief in a concrete God. For example: in Classical Hindu Indian society, the principle of the transmigration of the soul and the conception of rebirth were largely governed by metaphysical thinking.
(3) The Positive stage, also known as the scientific stage, refers to scientific explanation based on observation, experiment, and comparison. Positive explanations rely upon a distinct method, the scientific method, for their justification. Today people attempt to establish cause-and-effect relationships. Positivism is a purely intellectual way of looking at the world; it also emphasizes observation and classification of data and facts. This is the highest, most evolved behavior according to Comte.
Comte, however, was conscious of the fact that the three stages of thinking may or do coexist in the same society or the same mind and may not always be successive.
Comte proposed a hierarchy of the sciences based on historical sequence, with areas of knowledge passing through these stages in order of complexity. The simplest and most remote areas of knowledge—mechanical or physical—are the first to become scientific. These are followed by the more complex sciences, those considered closest to us.
The sciences, then, according to Comte's "law", developed in this order: Mathematics; Astronomy; Physics; Chemistry; Biology; Sociology. A science of society is thus the "Queen science" in Comte's hierarchy as it would be the most fundamentally complex.
Since Comte saw social science as an observation of human behavior and knowledge, his definition of sociology included observing humanity’s development of science itself. Because of this, Comte presented this introspective field of study as the science above all others. Sociology would both complete the body of positive sciences by discussing humanity as the last unstudied scientific field and would link the fields of science together in human history, showing the "intimate interrelation of scientific and social development".
To Comte, the law of three stages made the development of sociology inevitable and necessary. Comte saw the formation of his law as an active use of sociology, but this formation was dependent on other sciences reaching the positive stage; Comte’s three-stage law would not have evidence for a positive stage without the observed progression of other sciences through these three stages. Thus, sociology and its first law of three stages would be developed after other sciences were developed out of the metaphysical stage, with the observation of these developed sciences becoming the scientific evidence used in a positive stage of sociology. This special dependence on other sciences contributed to Comte’s view of sociology being the most complex. It also explains sociology being the last science to be developed.
Comte saw the results of his three-stage law and sociology as not only inevitable but good. In Comte’s eyes, the positive stage was not only the most evolved but also the stage best for mankind. Through the continuous development of positive sciences, Comte hoped that humans would perfect their knowledge of the world and make real progress to improve the welfare of humanity. He acclaimed the positive stage as the "highest accomplishment of the human mind" and as having "natural superiority" over the other, more primitive stages.
Overall, Comte saw his law of three stages as the start of the scientific field of sociology as a positive science. He believed this development was the key to completing positive philosophy and would finally allow humans to study every observable aspect of the universe. For Comte, sociology’s human-centered studies would relate the fields of science to each other as progressions in human history and make positive philosophy one coherent body of knowledge. Comte presented the positive stage as the final state of all sciences, which would allow human knowledge to be perfected, leading to human progress.
Critiques of the law
Historian William Whewell wrote "Mr. Comte's arrangement of the progress of science as successively metaphysical and positive, is contrary to history in fact, and contrary to sound philosophy in principle." The historian of science H. Floris Cohen has made a significant effort to draw the modern eye towards this first debate on the foundations of positivism.
In contrast, within an entry dated early October 1838 Charles Darwin wrote in one of his then private notebooks that "M. Comte's idea of a theological state of science [is a] grand idea."
See also
Antipositivism
Religion of Humanity
Sociological positivism
References
External links
History Guide
Sociocultural evolution theory
Religion and science
Auguste Comte
History of sociology
Power-knowledge
In critical theory, power-knowledge is a term introduced by the French philosopher Michel Foucault. According to Foucault's understanding, power is based on knowledge and makes use of knowledge; on the other hand, power reproduces knowledge by shaping it in accordance with its anonymous intentions. Power creates and recreates its own fields of exercise through knowledge.
The relationship between power and knowledge has always been a central theme in the social sciences.
Foucault's conception
Foucault was an epistemological constructivist and historicist. He was critical of the idea that humans can reach "absolute" knowledge about the world. A fundamental goal in many of Foucault's works is to show how that which has traditionally been considered absolute, universal and true is in fact historically contingent. To Foucault, even the idea of absolute knowledge is a historically contingent idea. This does not, however, lead to epistemological nihilism; rather, Foucault argues that people "always begin anew" when it comes to knowledge.
Foucault incorporated mutuality into his neologism power-knowledge, the most important part of which is the hyphen that links the two aspects of the integrated concept together (and alludes to their inherent inextricability).
In his later works, Foucault suggests that power-knowledge was later replaced in the modern world by the term governmentality, which points to a specific mentality of governance.
Subsequent developments
While for most of the 20th century the term ‘knowledge’ was closely associated with power, in recent decades ‘information’ has become a central term as well. With the growing use of big data, information is increasingly seen as the means to generate useful knowledge and power.
One recently developed model, known as the Volume and Control Model, describes how information is capitalized by global corporations and transformed into economic power. Volume is defined as the informational resources—the amount and diversity of information and the people producing it. Control is the ability to channel the interaction between information and people through two competing mechanisms: popularization (information relevant to most people) and personalization (information relevant to each individual person).
According to this understanding, knowledge is never neutral, as it determines force relations. The notion of power-knowledge is therefore likely to be employed in critical, normative contexts. One example of the implications of power-knowledge is Google’s monopoly of knowledge, its PageRank algorithm, and its inevitable commercial and cultural biases around the world, which are based on the volume and control principles. A recent study shows, for example, the commercial implications of the Google Images algorithm: search results for the term 'beauty' in different languages predominantly yield images of young white women.
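PageRank, mentioned above, is a publicly documented link-analysis algorithm: a page ranks highly when highly ranked pages link to it. As a rough illustration of the kind of computation involved (and not of Google's production system), here is a minimal power-iteration sketch in Python; the toy link graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions.

```python
# Minimal PageRank by power iteration over a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A dangling page spreads its rank evenly to all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))  # "c" collects rank from both "a" and "b"
```

The point of the example is structural: whatever the ranking function rewards, producers of information will compete to supply, which is what gives such algorithms their power-knowledge character.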
See also
Knowledge is power
References
Neologisms
Social philosophy
Michel Foucault
Unified theory of acceptance and use of technology
The unified theory of acceptance and use of technology (UTAUT) is a technology acceptance model formulated by Venkatesh and others in "User acceptance of information technology: Toward a unified view". The UTAUT aims to explain user intentions to use an information system and subsequent usage behavior. The theory holds that there are four key constructs:
1) performance expectancy,
2) effort expectancy,
3) social influence, and
4) facilitating conditions.
The first three are direct determinants of usage intention and behavior, and the fourth is a direct determinant of user behavior. Gender, age, experience, and voluntariness of use are posited to moderate the impact of the four key constructs on usage intention and behavior. The theory was developed through a review and consolidation of the constructs of eight models that earlier research had employed to explain information systems usage behaviour (theory of reasoned action, technology acceptance model, motivational model, theory of planned behavior, a combined theory of planned behavior/technology acceptance model, model of personal computer use, diffusion of innovations theory, and social cognitive theory). Subsequent validation by Venkatesh et al. (2003) of UTAUT in a longitudinal study found it to account for 70% of the variance in Behavioural Intention to Use (BI) and about 50% in actual use.
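In empirical studies, the four constructs are measured with survey scales and the model is usually estimated as a pair of moderated regressions: one for behavioral intention and one for use behavior. The sketch below, in Python, simulates survey data to show that structure end to end; the column names follow the theory, but the simulated data, the single moderator shown (age), and the use of ordinary least squares are illustrative assumptions, not Venkatesh et al.'s instrument or analysis.

```python
# Minimal sketch of the UTAUT regression structure on simulated data.
# All numbers and column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "performance_expectancy": rng.normal(size=n),
    "effort_expectancy": rng.normal(size=n),
    "social_influence": rng.normal(size=n),
    "facilitating_conditions": rng.normal(size=n),
    "age": rng.integers(18, 65, size=n).astype(float),
})
# Simulated outcomes so the example runs end to end.
df["intention"] = (0.5 * df["performance_expectancy"]
                   + 0.3 * df["effort_expectancy"]
                   + 0.2 * df["social_influence"]
                   + rng.normal(scale=0.5, size=n))
df["use_behavior"] = (0.6 * df["intention"]
                      + 0.2 * df["facilitating_conditions"]
                      + rng.normal(scale=0.5, size=n))

# Intention is modeled from the first three constructs, each moderated
# here by age; use behavior from intention plus facilitating conditions.
intention_model = smf.ols(
    "intention ~ performance_expectancy * age"
    " + effort_expectancy * age + social_influence * age",
    data=df).fit()
use_model = smf.ols(
    "use_behavior ~ intention + facilitating_conditions * age",
    data=df).fit()
print(round(intention_model.rsquared, 2), round(use_model.rsquared, 2))
```

In a published UTAUT study the constructs would typically be latent variables estimated from multi-item scales (often via structural equation modeling or PLS) rather than single simulated columns, but the moderated-regression skeleton is the same.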
Application
Koivumäki et al. applied UTAUT to study the perceptions of 243 individuals in northern Finland toward mobile services and technology and found that time spent using the devices did not affect consumer perceptions, but familiarity with the devices and user skills did have an impact.
Eckhardt et al. applied UTAUT to study social influence of workplace referent groups (superiors, colleagues) on intention to adopt technology in 152 German companies and found significant impact of social influence from workplace referents on information technology adoption.
Curtis et al. applied UTAUT to the adoption of social media by 409 United States nonprofit organizations. UTAUT had not been previously applied to the use of social media in public relations. They found that organizations with defined public relations departments are more likely to adopt social media technologies and use them to achieve their organizational goals. Women considered social media to be beneficial, and men exhibited more confidence in actively utilizing social media.
Verhoeven et al. applied UTAUT to study computer use frequency in 714 university freshmen in Belgium and found that UTAUT was also useful in explaining varying frequencies of computer use and differences in information and communication technology skills in secondary school and in the university.
Welch et al. applied UTAUT to study factors contributing to mobile learning adoption among 118 museum staff in England. UTAUT had not previously been applied to the use of just-in-time knowledge interventions to develop technological knowledge within the museum sector. They found that UTAUT was useful in explaining the determinants of mobile learning adoption.
Extension of the theory
Lin and Anol postulated an extended model of UTAUT, including the influence of online social support on network information technology usage. They surveyed 317 undergraduate students in Taiwan regarding their online social support in using instant messaging and found that social influence plays an important role in affecting online social support.
Sykes et al. proposed a model of acceptance with peer support (MAPS), integrating prior research on individual adoption with research on social networks in organizations. They conducted a 3-month-long study of 87 employees in one organization and found that studying social network constructs can aid in understanding new information system use.
Wang, Wu, and Wang added two constructs (perceived playfulness and self-management of learning) to the UTAUT in their study of determinants of acceptance of mobile learning in 370 individuals in Taiwan and found that they were significant determinants of behavioral intention to use mobile learning in all respondents.
Hewitt et al. extended the UTAUT to study the acceptance of autonomous vehicles. Two separate surveys of 57 and 187 individuals in the USA showed that users were less accepting of high autonomy levels and displayed significantly lower intention to use highly autonomous vehicles.
Wang and Wang extended the UTAUT in their study of 343 individuals in Taiwan to determine gender differences in mobile Internet acceptance. They added three constructs – perceived playfulness, perceived value, and palm-sized computer self-efficacy – to UTAUT and chose behavioral intention as a dependent variable. They omitted use behavior, facilitating conditions, and experience. Also, since the devices were used in a voluntary context, and they found that most adopters were ages 20–35, they omitted voluntariness and age. Perceived value had a significant influence on adoption intention, and palm-sized computer self-efficacy played a critical role in predicting mobile Internet acceptance. Perceived playfulness, however, did not have a strong influence on behavioral intention, but this may have been due to service or network communication quality issues during the study.
Cheng-Min Chao developed and empirically tested a model to predict the factors affecting students' behavioral intentions toward using mobile learning (m-learning). The study applied the extended unified theory of acceptance and use of technology (UTAUT) model with the addition of perceived enjoyment, mobile self-efficacy, satisfaction, trust, and perceived risk moderators. The study collected data from 1562 respondents to conduct a cross-sectional study and employed a research model based on multiple technology acceptance theories.
Cimperman et al. developed an extended UTAUT model to analyze the acceptance of home telehealth services among older adults. The extended model has six predictors, which were empirically tested and found effective in predicting the acceptance of such services.
Criticism
Bagozzi critiqued the model and its subsequent extensions, stating that "UTAUT is a well-meaning and thoughtful presentation," but that it presents "a model with 41 independent variables for predicting intentions and at least 8 independent variables for predicting behavior," and that it contributed to the study of technology adoption "reaching a stage of chaos." He proposed instead a unified theory that coheres the "many splinters of knowledge" to explain decision making.
Van Raaij and Schepers criticized the UTAUT as being less parsimonious than the previous Technology Acceptance Model and TAM2 because its high R2 is only achieved when moderating key relationships with up to four variables. They also called the grouping and labeling of items and constructs problematic because a variety of disparate items were combined to reflect a single psychometric construct.
Li suggested that using moderators to artificially achieve high R2 in UTAUT is unnecessary and also impractical for understanding organizational technology adoption, and demonstrated that good predictive power can be achieved even with simple models when proper initial screening procedures are applied. The results provide insights for organizational research design under practical business settings.
See also
Lazy user model
References
Product management
Technological change
Management cybernetics
Lev Vygotsky
Lev Semyonovich Vygotsky (November 17, 1896 – June 11, 1934) was a Russian and Soviet psychologist, best known for his work on psychological development in children and for creating the framework known as cultural-historical activity theory. After his early death, his books and research were banned in the Soviet Union until Joseph Stalin's death in 1953, with a first collection of major texts published in 1956.
His major ideas include:
The Social Origin of Mind: Vygotsky believed that human mental and cognitive abilities are not biologically determined, but instead created and shaped by use of language and tools in the process of interacting and constructing the cultural and social environment.
The Importance of Mediation: He saw mediation as the key to human development, because it leads to the use of cultural tools and becomes a pathway for psychological development through the process of interiorization.
The Zone of Proximal Development: Vygotsky introduced the concept of the ZPD which refers to the gap between a child's current level of development and the level they are capable of reaching with tools provided by others with more knowledge.
The Significance of Play: Vygotsky viewed play as a crucial aspect of children's development, as the best sandbox to build and develop the practice of mediation.
Biography
Lev Simkhovich Vygodsky (his patronymic was later changed to Semyonovich and his surname to Vygotsky for unclear reasons) was born on November 17, 1896, in the town of Orsha in Mogilev Governorate of the Russian Empire (now Belarus) into a non-religious middle-class family of Russian Jewish extraction. His father Simkha Leibovich (also known as Semyon Lvovich) was a banker and his mother was Tsetsilia Moiseevna.
Vygotsky was raised in the city of Gomel, where he was homeschooled until 1911 and then obtained a formal degree with distinction in a private Jewish gymnasium, which allowed him entrance to a university. In 1913, Vygotsky was admitted to the Moscow University by mere ballot through a "Jewish Lottery": at the time a three percent Jewish student quota was administered for entry in Moscow and Saint Petersburg universities. He had an interest in the humanities and social sciences, but at the insistence of his parents he applied to the medical school at Moscow University. During the first semester of study, he transferred to the law school. In parallel, he attended lectures at Shanyavsky Moscow City People's University. Vygotsky's early interests were in the arts and, primarily, in the topics of the history of the Jewish people, the tradition, culture and Jewish identity.
In January 1924, Vygotsky took part in the Second All-Russian Psychoneurological Congress in Petrograd (soon thereafter renamed Leningrad). After the Congress, Vygotsky met with Alexander Luria and with his help received an invitation to become a research fellow at the Psychological Institute in Moscow, which was under the direction of Konstantin Kornilov. Vygotsky moved to Moscow with his new wife, Roza Smekhova, with whom he would have two children. He began his career at the Psychological Institute as a "staff scientist, second class", and also became a secondary school teacher, in a period marked by his interest in the processes of learning and the role of language in learning.
By the end of 1925, Vygotsky completed his dissertation titled "The Psychology of Art", which was not published until the 1960s, and a book titled "Pedagogical Psychology", which apparently drew on lecture notes he prepared in Gomel while he was a psychology instructor at local educational establishments. In the summer of 1925 he made his first and only trip abroad to a London congress on the education of the deaf. Upon return to the Soviet Union, he was hospitalized due to tuberculosis and would remain an invalid and out of work until the end of 1926. His dissertation was accepted as the prerequisite of a scholarly degree, which was awarded to Vygotsky in autumn 1925 in absentia.
After his release from the hospital, Vygotsky did theoretical and methodological work on the crisis in psychology, but never finished the draft of the manuscript and interrupted his work on it around mid-1927. The manuscript was published later with notable editorial interventions and distortions in 1982 and was presented by the editors as one of the most important of Vygotsky's works. In this early manuscript, Vygotsky argued for the formation of a general psychology that could unite the naturalist objectivist strands of psychological science with the more philosophical approaches of Marxist orientation. However, he also harshly criticized those of his colleagues who attempted to build a "Marxist Psychology" as an alternative to the naturalist and philosophical schools. He argued that if one wanted to build a truly Marxist Psychology, there were no shortcuts to be found by merely looking for applicable quotes in the writings of Marx. Rather one should look for a methodology that was in accordance with the Marxian spirit.
From 1926 to 1930, Vygotsky worked on a research program investigating the development of higher psychological functions, i.e. culturally governed forms of the lower psychological functions, such as voluntary attention, selective memory, object-oriented action, and decision making. During this period he gathered a group of collaborators including Alexander Luria, Boris Varshava, Alexei Leontiev, Leonid Zankov, and several others. Vygotsky guided his students in researching this phenomenon from three different perspectives:
The instrumental approach, which aimed to understand the ways humans use objects as mediation aids in memory and reasoning.
A developmental approach, focused on how children acquire higher cognitive functions during development
A culture-historical approach, studying how social and cultural patterns of interaction shape forms of mediation and developmental trajectories
Vygotsky died of a relapse of tuberculosis on June 11, 1934, at the age of 37, in Moscow. One of Vygotsky's last private notebook entries was:
Chronology of the most important events of his life and career
1922-24 - worked in the psychological laboratory which he organized in Gomel Pedagogical College;
January of 1924 - meeting Luria at the II Psychoneurological Congress in Petrograd, moving from Gomel to Moscow, enrolling in graduate school and taking position at the State Institute of Experimental Psychology in Moscow;
July of 1924 - the beginning of work as the head of the sub-department of the education of physically and intellectually disabled children in the department of social and legal protection of minors (SPON);
November of 1924 - during the II Congress of the Social and Legal Protection of Minors in Moscow, a turn of Soviet defectology toward social education was officially announced, and a collection of articles and materials edited by Vygotsky, "Issues of the upbringing of blind, deaf and mentally retarded children", was published;
May 9, 1925 - birth of his first child, daughter Gita;
Summer of 1925 - the only trip abroad: went to London for a defectology conference; on the way passed through Germany, where he met with German psychologists
November 5, 1925 - Vygotsky, in absentia (due to illness), was awarded the title of senior researcher, equivalent to the modern degree of candidate of sciences, for the defense of the dissertation "Psychology of Art". The contract for the publication of The Psychology of Art was signed on November 9, 1925, but the text was published only in 1965;
November 21, 1925 to May 22, 1926 - hospitalization in the Zakharyino sanatorium-type hospital due to tuberculosis; upon discharge qualified as a disabled person until the end of the year;
1926 - Vygotsky's first book, Pedagogical Psychology, was published; writes notes and essays that would be published years later under the title "The Historical Meaning of the Psychological Crisis";
1927 - resumes work at the RANION Institute of Experimental Psychology and in a number of other institutions in Moscow and Leningrad;
September 17, 1927 - approved as a professor by the scientific and pedagogical section of the State Academic Council (SUS);
December 19, 1927 - appointed as the head of the Medical and Pedagogical Station of the Glavsotsvos of the People's Commissariat of Education of the RSFSR, remained in this position until October 1928 (dismissed on his own will);
December 28, 1927 to January 4, 1928 - First All-Russian Pedological Congress, Moscow: Vygotsky works as co-editor of the section on difficult childhood, and also presents two reports: "The development of a difficult child and its study" and "Instrumental method in pedology"; these two articles together with Zankov's report "Principles for the construction of complex programs of an auxiliary school from a pedological point of view" and Luria "On the methodology of instrumental-psychological research" become the first public presentation of "Instrumental Psychology" as a research method associated with the names of Vygotsky and Luria;
1928 - Vygotsky's second book "Pedology of School Age" was published, along with a number of articles establishing "Instrumental Psychology" approach in Russian and English language journals;
December of 1928 - after a conflict with the director of the Institute of Experimental Psychology (GIEP) K. N. Kornilov, the research activities of the Vygotsky-Luria group were curtailed in this organization, and experimental research was transferred to the Academy of Communist Education.
1929 - freelance scientific consultant and head of the psychological laboratories at the Experimental Defectological Institute (the transformed Medical-Pedagogical Station)
Major themes of research
Vygotsky was a pioneering psychologist with interests in extremely diverse fields: his work covered topics such as the origin and the psychology of art, development of higher mental functions, philosophy of science and the methodology of psychological research, the relation between learning and human development, concept formation, interrelation between language and thought development, play as a psychological phenomenon, learning disabilities, and abnormal human development (aka defectology). His philosophical framework includes interpretations of the cognitive role of mediation tools, as well as the re-interpretation of well-known concepts in psychology such as internalization of knowledge. Vygotsky introduced the notion of zone of proximal development, a metaphor capable of describing the potential of human cognitive development.
His most important and widely known contribution is his theory for the development of "higher psychological functions," which emerge through unification of interpersonal connections and actions taken within a given socio-cultural environment (i.e. language, culture, society, and tool-use). It was during this period that he identified the play of young children as their "leading activity", which he understood to be the main source of preschoolers' psychological development, and which he viewed as an expression of an inseparable unity of emotional, volitional, and cognitive development.
While Vygotsky never met Jean Piaget, he had read a number of his works and agreed with some of his perspectives on learning. At some point (around 1929–30), Vygotsky came to disagree with Piaget's understanding of learning and development, and held a different theoretical position on the topic of inner speech: Piaget thought that egocentric speech simply "dissolved away" as children matured, whereas Vygotsky showed that egocentric speech becomes internalized as inner speech, the medium of verbal thought. Piaget only read Vygotsky's work after Vygotsky's death and openly praised him for his discovery of the social origin of children's thoughts, reasoning, and moral judgements.
Cultural-historical theory
The hypothesis put forward by Vygotsky was a paradigm shift in psychology. He was the first to propose that the psychological functions governing the mental, cognitive, and physical actions of the individual are not immutable but have a history of cultural development (both in human history and in each person's life) through the interiorization of cultural tools. Therefore, the process of transformation that takes place as current cultural tools are interiorized becomes the focus of psychological research.
Vygotsky posits the existence of lower and higher mental functions. The latter have social origins and a complex system structure; they are mediated by cultural tools and controlled by the individual. Vygotsky came to the conclusion that consciousness is possible because of the mediated nature of higher psychological functions. Between the stimulus and the reaction of a person (both behavioral and mental), an additional connection arises through a mediating link - a stimulus-means, or a sign. Signs are tools that mediate higher psychological functions and allow one to control one's own behavior. A word can direct attention, create personal meaning, form a concept, and coordinate behavior.
Vygotsky illustrated his idea of mediation with the paradox of Buridan's ass. This problematic situation of choosing between two equal possibilities interested Vygotsky primarily from the point of view of its resolution by a coin flip: delegating the decision to an outside object is an example of using a cultural tool to govern one's own psychological function of volition.
While developing a method for studying higher psychological functions, Vygotsky was guided by the principle of ex ungue leonem and additionally analyzed such phenomena as tying a knot in a handkerchief as a memory aid and counting on one's fingers.
Cultural mediation and internalization
Vygotsky studied child development and the significant roles of cultural mediation and interpersonal communication. He observed how higher mental functions developed through these interactions, and also represented the shared knowledge of a culture. This process is known as internalization. Internalization may be understood in one respect as "knowing how". For example, the practices of riding a bicycle or pouring a cup of milk, initially, are outside and beyond the child. The mastery of the skills needed for performing these practices occurs through the activity of the child within society. A further aspect of internalization is appropriation, in which children take tools and adapt them to personal use, perhaps using them in unique ways. Internalizing the use of a pencil allows the child to use it very much for personal ends rather than drawing exactly what others in society have drawn previously:
Zone of Proximal Development
"Zone of Proximal Development" (ZPD) is a term Vygotsky used to characterize an individual's mental development. He originally defined the ZPD as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers." He used the example of two children in school who originally could solve problems at an eight-year-old developmental level (that is, typical for children who were age 8). After each child received assistance from an adult, one was able to perform at a nine-year-old level and one was able to perform at a twelve-year-old level. He said "This difference between twelve and eight, or between nine and eight, is what we call the zone of proximal development." He further said that the ZPD "defines those functions that have not yet matured but are in the process of maturation, functions that will mature tomorrow but are currently in an embryonic state." The zone is bracketed by the learner's current ability and the ability they can achieve with the aid of an instructor of some capacity.
Scaffolding
According to Vygotsky, through the assistance of a more knowledgeable other, a child is able to learn skills or aspects of a skill that go beyond the child's actual developmental or maturational level. This assistance is defined as 'scaffolding'. The lower limit of the ZPD is the level of skill reached by the child working independently (also referred to as the child's developmental level). The upper limit is the level of potential skill that the child is able to reach with the assistance of a more capable instructor. In this sense, the ZPD provides a prospective view of cognitive development, as opposed to a retrospective view that characterizes development in terms of a child's independent capabilities. The advancement through and attainment of the upper limit of the ZPD is limited by the instructional and scaffolding-related capabilities of the more knowledgeable other (MKO). The MKO is typically assumed to be an older, more experienced teacher or parent, but can often be a learner's peer or someone their junior. The MKO need not even be a person; it can be a machine or book, or another source of visual and/or audio input.
Thinking and Speech
In the last years of his life, Vygotsky paid most of his attention to the study of the relationship between thought and word in the structure of consciousness. This problem was explored in Vygotsky's book, Thinking and Speech, that was published posthumously in 1934. The book was a collection of essays and scholarly papers that Vygotsky wrote during different periods of his thought development. It was edited by his closest associates Kolbanovskii, Zankov, and Shif. The book established the connection between speech and the development of mental concepts and awareness. Vygotsky described silent inner speech as being qualitatively different from verbal external speech, but both equally important. Vygotsky believed inner speech developed from external speech via a gradual process of "internalization" (i.e., transition from the external to the internal), with younger children only really able to "think out loud". He claimed that in its mature form, inner speech would not resemble spoken language as we know it (in particular, being greatly compressed). Hence, thought itself developed socially.
Inner speech, according to Vygotsky, develops through the accumulation of long-term functional and structural changes. It branches off from the child's external speech along with the differentiation of the social and egocentric functions of speech, and, finally, the speech functions acquired by the child become the main functions of his thinking.
In this work, Vygotsky traces the genesis of thinking and speech and points out that the relationship between them is not a constant value.
Legacy
Soviet Union
After Vygotsky's early death, his books and research were banned until Stalin's death in 1953, with a first collection of major texts published in 1956. A small group of his collaborators and students were able to continue his lines of thought in research. The members of the group laid a foundation for the systematic development of Vygotskian psychology in such diverse fields as the psychology of memory (P. Zinchenko), perception, sensation, and movement (Zaporozhets, Asnin, A. N. Leont'ev), personality (Lidiya Bozhovich, Asnin, A. N. Leont'ev), will and volition (Zaporozhets, A. N. Leont'ev, P. Zinchenko, L. Bozhovich, Asnin), psychology of play (G. D. Lukov, Daniil El'konin) and psychology of learning (P. Zinchenko, L. Bozhovich, D. El'konin), as well as the theory of step-by-step formation of mental actions (Pyotr Gal'perin), general psychological activity theory (A. N. Leont'ev) and psychology of action (Zaporozhets). Andrey Puzyrey elaborated the ideas of Vygotsky in respect of psychotherapy and even in the broader context of deliberate psychological intervention (psychotechnique), in general.
United States
Only a couple of Vygotsky's texts were published in English before the translation of Thinking and Speech in 1962. Since then, the majority of his texts have been translated, and his ideas have become influential in some modern educational approaches. The first proponents of Vygotsky in the USA were Michael Cole and James Wertsch. Today, an umbrella term for theoretical framework based on Vygotsky's ideas is "Cultural-historical activity theory" (aka CHAT) or "Activity theory".
Works
Consciousness as a problem in the Psychology of Behavior, 1925
Educational Psychology, 1926
Historical meaning of the crisis in Psychology, unfinished and aborted in 1927
The Problem of the Cultural Development of the Child, 1929
The Fundamental Problems of Defectology, 1929
The Socialist alteration of Man, 1930
Ape, Primitive Man, and Child: Essays in the History of Behaviour, A. R. Luria and L. S. Vygotsky, 1930
Tool and symbol in child development, 1930
Paedology of the Adolescent, 1929-1931
Play and its role in the Mental development of the Child, oral presentation 1933
Thinking and Speech, 1934
The Psychology of Art, 1971 (English translation by MIT Press)
Mind in Society: The Development of Higher Psychological Processes, 1978 (Harvard University Press)
The Collected Works of L. S. Vygotsky, 1987
See also
Cognitivism (learning theory)
Cultural-Historical Activity Theory (CHAT)
Laboratory of Comparative Human Cognition (LCHC)
Leading Activity
Organization Workshop
PsyAnima, Dubna Psychological Journal
Social constructivism
Vygotsky Circle
References
Further reading
Wertsch J. V. (1985). Vygotsky and the social formation of mind. Cambridge, MA: Harvard University Press.
Yaroshevsky M. (1989) Lev Vygotsky. Progress, Moscow
Kozulin A. (1990). Vygotsky's Psychology: A Biography of Ideas. Cambridge, Harvard University Press.
Van der Veer R. & Valsiner J. (1991). Understanding Vygotsky. A quest for synthesis. Oxford, Basil Blackwell.
Holzman L. (1993) Lev Vygotsky: Revolutionary Scientist. Routledge
Van der Veer, R. & Valsiner, J. eds (1994). The Vygotsky Reader. Oxford, Blackwell.
Vygodskaya, G. L., & Lifanova, T. M. (1996). Lev Semyonovich Vygotsky: Zhizn', deyatel'nost', shtrikhi k portretu. Moscow: Smysl. Translated in Vygodskaya, G. L., & Lifanova, T. M. Lev Semenovich Vygotsky, Journal of Russian and East European Psychology, volume 37.
Van der Veer R. (2007). Lev Vygotsky. Continuum Books.
Daniels, H., Wertsch, J. & Cole, M. (Eds.) (2007). The Cambridge Companion to Vygotsky.
Dafermos, M. (2018). Rethinking Cultural-Historical Theory. Singapore, Springer.
Zavershneva, E., & Van der Veer, R. (2018). Vygotsky's notebooks: A selection. Singapore, Springer.
External links
Lev Vygotsky archive, marxists.org: all major works
Annotated bibliography of scholarly histories on Vygotsky, Advances in the History of Psychology, York University
1896 births
1934 deaths
People from Orsha
People from Orshansky Uyezd
Belarusian Jews
Soviet Jews
Belarusian scientists
Cognitive scientists
Cognitive psychologists
Communication theorists
Constructivism (psychological school)
Developmental psychologists
Educational psychologists
Literacy and society theorists
Philosophers of education
Soviet psychologists
Soviet scientists
Spinoza scholars
Spinozists
Jewish Russian scientists
Jewish philosophers
Systems psychologists
Moscow State University alumni
20th-century deaths from tuberculosis
Academic staff of Moscow State University
Imperial Moscow University alumni
20th-century psychologists
Tuberculosis deaths in the Soviet Union
Tuberculosis deaths in Russia
Russian scientists
Community
A community is a social unit (a group of living things) with a shared socially-significant characteristic, such as place, set of norms, culture, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area (e.g. a country, village, town, or neighborhood) or in virtual space through communication platforms. Durable, good relations that extend beyond immediate genealogical ties also define a sense of community, important to people's identity, practice, and roles in social institutions such as family, home, work, government, TV network, society, or humanity at large. Although communities are usually small relative to personal social ties, "community" may also refer to large-group affiliations such as national communities, international communities, and virtual communities.
In terms of sociological categories, a community can seem like a sub-set of a social collectivity.
In developmental views, a community can emerge out of a collectivity.
The English-language word "community" derives from the Old French comuneté (Modern French: communauté), which comes from the Latin communitas "community", "public spirit" (from Latin communis, "common").
Human communities may have intent, belief, resources, preferences, needs, and risks in common, affecting the identity of the participants and their degree of cohesiveness.
Perspectives of various disciplines
Archaeology
Archaeological studies of social communities use the term "community" in two ways, mirroring usage in other areas. The first meaning is an informal definition of community as a place where people used to live. In this literal sense it is synonymous with the concept of an ancient settlement—whether a hamlet, village, town, or city. The second meaning resembles the usage of the term in other social sciences: a community is a group of people living near one another who interact socially. Social interaction on a small scale can be difficult to identify with archaeological data. Most reconstructions of social communities by archaeologists rely on the principle that social interaction in the past was conditioned by physical distance. Therefore, a small village settlement likely constituted a social community and spatial subdivisions of cities and other large settlements may have formed communities. Archaeologists typically use similarities in material culture—from house types to styles of pottery—to reconstruct communities in the past. This classification method relies on the assumption that people or households will share more similarities in the types and styles of their material goods with other members of a social community than they will with outsiders.
Sociology
Early sociological studies identified communities as fringe groups at the behest of local power elites. Such early academic studies include Who Governs? by Robert Dahl as well as the papers by Floyd Hunter on Atlanta. At the turn of the 21st century the concept of community was rediscovered by academics, politicians, and activists. Politicians hoping for a democratic election started to realign with community interests.
Ecology
In ecology, a community is an assemblage of populations—potentially of different species—interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect community structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism:
Competition typically results in a double negative—that is both species lose in the interaction.
Predation involves a win/lose situation, with one species winning.
Mutualism sees both species co-operating in some way, with both winning.
The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. Moreover, we can establish other non-taxonomic subdivisions of biocenosis, such as guilds.
Semantics
The concept of "community" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness—veering towards an almost-achievable utopian community.
In contrast, the epidemiological term "community transmission" can have negative implications, and instead of a "criminal community" one often speaks of a "criminal underworld" or of the "criminal fraternity".
Key concepts
Gemeinschaft and Gesellschaft
In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as "community") and Gesellschaft ("society" or "association"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaft stresses personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaft stresses indirect interactions, impersonal roles, formal values, and beliefs based on such interactions.
Sense of community
In a seminal 1986 study, McMillan and Chavis identify four elements of "sense of community":
membership: feeling of belonging or of sharing a sense of personal relatedness,
influence: mattering, making a difference to a group and of the group mattering to its members
reinforcement: integration and fulfillment of needs,
shared emotional connection.
A "sense of community index" (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities.
Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging.
Socialization
The process of learning to adopt the behavior patterns of the community is called socialization. The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors.
Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important "habits of the heart", as de Tocqueville put it, in an individual's involvement in community.
Community development
Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development.
Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the Harvard Kennedy School are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners.
At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other organizations that can help in community building. The Institute focuses on helping communities develop by "mobilizing neighborhood assets" – building from the inside out rather than the outside in. In the disability field, community building was prevalent in the 1980s and 1990s with roots in John McKnight's approaches.
Community building and organizing
In The Different Drum: Community-Making and Peace (1987) Scott Peck argues that the almost accidental sense of community that exists at times of crisis can be consciously built. Peck believes that conscious community building is a process of deliberate design based on the knowledge and application of certain rules. He states that this process goes through four stages:
Pseudocommunity: When people first come together, they try to be "nice" and present what they feel are their most personable and friendly characteristics.
Chaos: People move beyond the inauthenticity of pseudo-community and feel safe enough to present their "shadow" selves.
Emptiness: The group moves beyond the chaos stage's attempts to fix, heal, and convert, and all people become capable of acknowledging their own woundedness and brokenness, common to human beings.
True community: Deep respect and true listening for the needs of the other people in this community.
In 1991, Peck remarked that building a sense of community is easy but maintaining this sense of community is difficult in the modern world (interview with M. Scott Peck by Alan Atkisson, In Context #29, p. 26).
The three basic types of community organizing are grassroots organizing, coalition building, and "institution-based community organizing" (also called "broad-based community organizing", an example of which is faith-based or congregation-based community organizing).
Community building can use a wide variety of practices, ranging from simple events (e.g., potlucks, small book clubs) to larger-scale efforts (e.g., mass festivals, construction projects that involve local participants rather than outside contractors).
Community building that is geared toward citizen action is usually termed "community organizing". In these cases, organized community groups seek accountability from elected officials and increased direct representation within decision-making bodies. Where good-faith negotiations fail, these constituency-led organizations seek to pressure the decision-makers through a variety of means, including picketing, boycotting, sit-ins, petitioning, and electoral politics.
Community organizing can focus on more than just resolving specific issues. Organizing often means building a widely accessible power structure, often with the end goal of distributing power equally throughout the community. Community organizers generally seek to build groups that are open and democratic in governance. Such groups facilitate and encourage consensus decision-making with a focus on the general health of the community rather than a specific interest group.
If communities are developed based on something they share in common, whether location or values, then one challenge for developing communities is how to incorporate individuality and differences. Rebekah Nathan suggests in her book My Freshman Year that we are drawn to developing communities based entirely on sameness, despite stated commitments to diversity, such as those found on university websites.
Types of community
A number of ways to categorize types of community have been proposed. One such breakdown is as follows:
Location-based Communities: range from the local neighbourhood, suburb, village, town or city, region, nation or even the planet as a whole. These are also called communities of place.
Identity-based Communities: range from the local clique, sub-culture, ethnic group, religious, multicultural or pluralistic civilisation, or the global community cultures of today. These may include communities of need or identity, such as disabled persons or frail aged people.
Organizationally-based Communities: range from communities organized informally around family or network-based guilds and associations to more formal incorporated associations, political decision-making structures, economic enterprises, or professional associations at a small, national or international scale.
Intentional Communities: a mix of all three previous types, these are highly cohesive residential communities with a common social or spiritual purpose, ranging from monasteries and ashrams to modern ecovillages and housing cooperatives.
The usual categorizations of community relations have a number of problems: (1) they tend to give the impression that a particular community can be defined as just this kind or another; (2) they tend to conflate modern and customary community relations; (3) they tend to take sociological categories such as ethnicity or race as given, forgetting that different ethnically defined persons live in different kinds of communities—grounded, interest-based, diasporic, etc.
In response to these problems, Paul James and his colleagues have developed a taxonomy that maps community relations, and recognizes that actual communities can be characterized by different kinds of relations at the same time:
Grounded community relations. This involves enduring attachment to particular places and particular people. It is the dominant form taken by customary and tribal communities. In these kinds of communities, the land is fundamental to identity.
Life-style community relations. This involves giving primacy to communities coming together around particular chosen ways of life, such as morally charged or interest-based relations or just living or working in the same location. Hence the following sub-forms:
community-life as morally bounded, a form taken by many traditional faith-based communities.
community-life as interest-based, including sporting, leisure-based and business communities which come together for regular moments of engagement.
community-life as proximately-related, where neighbourhood or commonality of association forms a community of convenience, or a community of place (see below).
Projected community relations. This is where a community is self-consciously treated as an entity to be projected and re-created. It can be projected as thinly as through an advertising slogan, for example a gated community, or can take the form of ongoing associations of people who seek political integration, communities of practice based on professional projects, or associative communities which seek to enhance and support individual creativity, autonomy and mutuality. A nation is one of the largest forms of projected or imagined community.
In these terms, communities can be nested and/or intersecting; one community can contain another—for example a location-based community may contain a number of ethnic communities. Both lists above can be used in a cross-cutting matrix in relation to each other.
Internet communities
In general, virtual communities value knowledge and information as currency or social resource. What differentiates virtual communities from their physical counterparts is the extent and impact of "weak ties", which are the relationships acquaintances or strangers form to acquire information through online networks. Relationships among members in a virtual community tend to focus on information exchange about specific topics. A survey conducted by Pew Internet and The American Life Project in 2001 found those involved in entertainment, professional, and sports virtual-groups focused their activities on obtaining information.
An epidemic of bullying and harassment has arisen from the exchange of information between strangers, especially among teenagers, in virtual communities. Despite attempts to implement anti-bullying policies, Sheri Bauman, professor of counselling at the University of Arizona, claims the "most effective strategies to prevent bullying" may cost companies revenue.
Virtual Internet-mediated communities can interact with offline real-life activity, potentially forming strong and tight-knit groups such as QAnon.
See also
Circles of Sustainability
Communitarianism
Community theatre
Community wind energy
Engaged theory
Outline of community
Wikipedia community
Notes
References
Barzilai, Gad. 2003. Communities and Law: Politics and Cultures of Legal Identities. Ann Arbor: University of Michigan Press.
Beck, U. 1992. Risk Society: Towards a New Modernity. London: Sage.
Beck, U. 2000. What is Globalization? Cambridge: Polity Press.
Chavis, D.M., Hogge, J.H., McMillan, D.W., & Wandersman, A. 1986. "Sense of community through Brunswick's lens: A first look." Journal of Community Psychology, 14(1), 24–40.
Chipuer, H.M., & Pretty, G.M.H. (1999). A review of the Sense of Community Index: Current uses, factor structure, reliability, and further development. Journal of Community Psychology, 27(6), 643–658.
Christensen, K., et al. (2003). Encyclopedia of Community. 4 volumes. Thousand Oaks, CA: Sage.
Cohen, A. P. 1985. The Symbolic Construction of Community. Routledge: New York.
Durkheim, Émile. 1950 [1895] The Rules of Sociological Method. Translated by S.A. Solovay and J.H. Mueller. New York: The Free Press.
Cox, F., J. Erlich, J. Rothman, and J. Tropman. 1970. Strategies of Community Organization: A Book of Readings. Itasca, IL: F.E. Peacock Publishers.
Effland, R. 1998. The Cultural Evolution of Civilizations. Mesa Community College.
Giddens, A. 1999. "Risk and Responsibility" Modern Law Review 62(1): 1–10.
Lenski, G. 1974. Human Societies: An Introduction to Macrosociology. New York: McGraw-Hill, Inc.
Long, D.A., & Perkins, D.D. (2003). Confirmatory Factor Analysis of the Sense of Community Index and Development of a Brief SCI. Journal of Community Psychology, 31, 279–296.
Lyall, Scott, ed. (2016). Community in Modern Scottish Literature. Brill | Rodopi: Leiden | Boston.
Nancy, Jean-Luc. La Communauté désœuvrée – philosophical questioning of the concept of community and the possibility of encountering a non-subjective concept of it
Newman, D. 2005. Sociology: Exploring the Architecture of Everyday Life, Chapter 5. "Building Identity: Socialization" Pine Forge Press. Retrieved: 2006-08-05.
Putnam, R.D. 2000. Bowling Alone: The collapse and revival of American community. New York: Simon & Schuster
Sarason, S.B. 1974. The psychological sense of community: Prospects for a community psychology. San Francisco: Jossey-Bass.
Sarason, S.B. 1986. "Commentary: The emergence of a conceptual center." Journal of Community Psychology, 14, 405–407.
Smith, M.K. 2001. Community. Encyclopedia of informal education. Last updated: January 28, 2005. Retrieved: 2006-07-15.
Types of organization
Economic sector
One classical breakdown of economic activity distinguishes three sectors:
Primary: involves the retrieval and production of raw-material commodities, such as corn, coal, wood or iron. Miners, farmers and fishermen are all workers in the primary sector.
Secondary: involves the transformation of raw or intermediate materials into goods, as in steel into cars, or textiles into clothing. Builders and dressmakers work in the secondary sector.
Tertiary: involves the supplying of services to consumers and businesses, such as babysitting, cinemas or banking. Shopkeepers and accountants work in the tertiary sector.
In the 20th century, economists began to suggest that traditional tertiary services could be further distinguished from "quaternary" and quinary service sectors. Economic activity in the hypothetical quaternary sector comprises information- and knowledge-based services, while quinary services include industries related to human services and hospitality.
Economic theories divide economic sectors further into economic industries.
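Because the sector labels form a simple classification scheme, they can be encoded directly. Below is a minimal sketch in Python using the five-sector extension described above; the occupation-to-sector assignments are illustrative examples, not an official industry classification.

```python
# Minimal sketch: mapping illustrative occupations onto the five sectors.
from enum import Enum

class Sector(Enum):
    PRIMARY = "raw-material production"
    SECONDARY = "manufacturing and construction"
    TERTIARY = "services"
    QUATERNARY = "information and knowledge services"
    QUINARY = "human services and hospitality"

OCCUPATION_TO_SECTOR = {
    "farmer": Sector.PRIMARY,
    "miner": Sector.PRIMARY,
    "builder": Sector.SECONDARY,
    "dressmaker": Sector.SECONDARY,
    "shopkeeper": Sector.TERTIARY,
    "accountant": Sector.TERTIARY,
    "data analyst": Sector.QUATERNARY,
    "hotel manager": Sector.QUINARY,
}

def classify(occupation):
    """Return the Sector for a known occupation (KeyError if unknown)."""
    return OCCUPATION_TO_SECTOR[occupation.lower()]

print(classify("Farmer"))  # Sector.PRIMARY
```

Real statistical agencies use far finer-grained schemes such as ISIC or NAICS (see the industry classification entries below), which nest thousands of industries under broad sections in the same spirit.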
Historic evolution
An economy may include several sectors that evolved in successive phases:
The ancient economy was built mainly on the basis of subsistence farming.
The Industrial Revolution lessened the role of subsistence farming, converting land-use to more extensive and monocultural forms of agriculture over the last three centuries. Economic growth took place mostly in the mining, construction and manufacturing industries.
In the economies of modern consumer societies, services, finance, and technology—the knowledge economy—play an increasingly significant role.
Even in modern times, developing countries tend to rely more on the first two sectors, in contrast to developed countries.
By ownership
An economy can also be divided along different lines:
Public sector or state sector
Private sector or privately run businesses
Voluntary sector
See also
Three-sector theory
Jean Fourastié
Industry classification
International Standard Industrial Classification
Industry Classification Benchmark
North American Industry Classification System – a sample application of sector-oriented analysis
Division of labour
Economic development
References
Business analysis
Business management
Social development theory
Social development theory attempts to explain qualitative changes in the structure and framework of society that help a society better realize its aims and objectives. Development can be defined in a manner applicable to all societies at all historical periods as an upward ascending movement featuring greater levels of energy, efficiency, quality, productivity, complexity, comprehension, creativity, mastery, enjoyment and accomplishment. Development is a process of social change, not merely a set of policies and programs instituted for some specific results. During the last five centuries this process has picked up in speed and intensity, and during the last five decades it has witnessed a marked surge in acceleration.
The basic mechanism driving social change is increasing awareness leading to better organization. When society senses new and better opportunities for progress it develops new forms of organization to exploit these new openings successfully. The new forms of organization are better able to harness the available social energies and skills and resources to use the opportunities to get the intended results.
Development is governed by many factors that influence the results of developmental efforts. There must be a motive that drives the social change and essential preconditions for that change to occur. The motive must be powerful enough to overcome obstructions that impede that change from occurring. Development also requires resources such as capital, technology, and supporting infrastructure.
Development is the result of society's capacity to organize resources to meet challenges and opportunities. Society passes through well-defined stages in the course of its development. They are nomadic hunting and gathering, rural agrarian, urban, commercial, industrial, and post-industrial societies. Pioneers introduce new ideas, practices, and habits that conservative elements initially resist. At a later stage, innovations are accepted, imitated, organized, and used by other members of the community. Organizational improvements introduced to support the innovations can take place simultaneously at four different levels—physical, social, mental, and psychological. Moreover, four different types of resources are involved in promoting development. Of these four, physical resources are most visible, but least capable of expansion. Productivity of resources increases enormously as the quality of organization and level of knowledge inputs rise.
Development pace and scope varies according to the stage society is in. The three main stages are physical, vital (vital refers to the dynamic and nervous social energies of humanity that propel individuals to accomplish), and mental.
Terminology
Though the term development usually refers to economic progress, it can apply to political, social, and technological progress as well. These various sectors of society are so intertwined that it is difficult to neatly separate them. Development in all these sectors is governed by the same principles and laws, and therefore the term applies uniformly.
Economic development and human development need not mean the same thing. Strategies and policies aimed at greater growth may produce greater income in a country without improving the average living standard. This happened in oil-producing Middle Eastern countries—a surge in oil prices boosted their national income without much benefit to poorer citizens. Conversely, people-oriented programs and policies can improve health, education, living standards, and other quality-of-life measures with no special emphasis on monetary growth. This occurred in the 30 years of socialist and communist rule in Kerala in India.
Four related but distinct terms and phenomena form successive steps in a graded series: survival, growth, development, and evolution. Survival refers to a subsistence lifestyle with no marked qualitative change in living standards. Growth refers to horizontal expansion on the existing plane, characterized by quantitative increase, such as a farmer bringing more area under cultivation or a retailer opening more stores. Development refers to a vertical shift in the level of operations that causes qualitative changes, such as a retailer turning into a manufacturer or an elementary school turning into a high school.
Human development
Development is a human process, in the sense that human beings, not material factors, drive development. The energy and aspiration of people who seek development form the motive force that drives development. People's awareness may decide the direction of development. Their efficiency, productivity, creativity, and organizational capacities determine the level of people's accomplishment and enjoyment. Development is the outer realization of latent inner potentials. The level of people's education, intensity of their aspiration and energies, quality of their attitudes and values, skills and information all affect the extent and pace of development. These factors come into play whether it is the development of the individual, family, community, nation, or the whole world.
Process of emergence of new activities in society
Unconscious vs. conscious development
Human development normally proceeds from experience to comprehension. As society develops over centuries, it accumulates the experience of countless pioneers, and the essence of that experience becomes the formula for accomplishment and success. Because experience precedes knowledge, development is first carried out unconsciously and only later becomes conscious knowledge. Unconscious here refers to activities that people carry out without knowing what the end results will be or where their actions will lead; they act without knowing the conditions required for success.
Role of pioneering individuals
Society's accumulating knowledge matures and breaks out on the surface in the form of new ideas, espoused by pioneers who take new initiatives to give expression to those ideas. Those initiatives may call for new strategies and new organizations, which conservative elements may resist. If the pioneer's initiatives succeed, they encourage imitation and slow propagation in the rest of the community. Later, growing success leads society to assimilate the new practice, which becomes regularized and institutionalized. The process can thus be viewed in three distinct phases: social preparedness, initiative of pioneers, and assimilation by the society.
The pioneer as such plays an important role in the development process—since through that person, unconscious knowledge becomes conscious. The awakening comes to the lone receptive individual first, and that person spreads the awakening to the rest of the society. Though pioneers appear as lone individuals, they act as conscious representatives of society as a whole, and their role should be viewed in that light (Cleveland, Harlan and Jacobs, Garry, "The Genetic Code for Social Development", in Human Choice, World Academy of Art & Science, USA, 1999, p. 7).
Imitation of the pioneer
Though a pioneer comes up with innovative ideas, very often the initial response is one of indifference, ridicule, or even outright hostility. If the pioneer persists and succeeds in an initiative, that person's efforts may eventually win the endorsement of the public. That endorsement tempts others to imitate the pioneer. If they also succeed, news spreads and brings wider acceptance. Conscious efforts to lend organizational support to the new initiative help institutionalize the innovation.
Organization of new activities
Organization is the human capacity to harness all available information, knowledge, resources, technology, infrastructure, and human skills to exploit new opportunities and to face challenges and hurdles that block progress. Development comes through improvements in this capacity for organization: in other words, through the emergence of better organizations that enhance society's ability to make use of opportunities and face challenges.
The development of organizations may come through the formulation of new laws and regulations or new systems. Each new step of progress brings a corresponding new organization. Increasing European international trade in the 16th and 17th centuries demanded corresponding development in the banking industry and new commercial laws and civil arbitration facilities. New types of business ventures were formed to attract the capital needed to finance expanding trade. As a result, a new business entity appeared—the joint-stock company, which limited the investors' liability to the extent of their personal investment without endangering other properties.
Each new developmental advance is accompanied by new or more suitable organizations that facilitate that advance. Often, existing inadequate organizations must change to accommodate new advances.
Many countries have introduced scores of new facilities and procedures—such as business directories, franchising, lease purchase, servicing, credit rating, collection agencies, industrial estates, free trade zones, and credit cards—and a diverse range of internet services has since emerged. Each new facility improves the effective use of available social energies for productive purposes. The importance of these facilities for speeding development is most apparent when they are absent: when Eastern European countries sought to transition to market-type economies, they were seriously hampered by the absence of such supportive systems and facilities.
Organization matures into institution
At a particular stage, organizations mature into institutions that become part of society. Beyond this point, an organization does not need laws or agencies to foster growth or ensure a continued presence. The transformation of an organization into an institution signifies society's total acceptance of that new organization.
The income tax office is an example of an organization that is actively maintained by the enactment of laws and the formation of an office for procuring taxes. Without active governmental support, this organization would disappear, as it does not enjoy universal public support. On the other hand, the institution of marriage is universally accepted, and would persist even if governments withdrew regulations that demand registration of marriage and impose age restrictions. The institution of marriage is sustained by the weight of tradition, not by government agencies and legal enactments.
Cultural transmission by the family
Families play a major role in the propagation of new activities once they win the support of the society. A family is a miniature version of the larger society—acceptance by the larger entity is reflected in the smaller entity. The family educates the younger generation and transmits social values like self-restraint, responsibility, skills, and occupational training. Though children do not follow in their parents' footsteps as much as they once did, parents still mold their children's attitudes and thoughts regarding careers and future occupations. When families propagate a new activity, it signals that the new activity has become an integral part of the society.
Education
One of the most powerful means of propagating and sustaining new developments is the educational system in a society. Education transmits society's collective knowledge from one generation to the next. It equips each new generation to face future opportunities and challenges with knowledge gathered from the past. It shows the young generation the opportunities ahead for them, and thereby raises their aspiration to achieve more. Information imparted by education raises the level of expectations of youth, as well as aspirations for higher income. It also equips youth with the mental capacity to devise ways and means to improve productivity and enhance living standards.
Society can be conceived as a complex fabric that consists of interrelated activities, systems, and organizations. Development occurs when this complex fabric improves its own organization. That organizational improvement can take place simultaneously in several dimensions.
Quantitative expansion in the volume of social activities
Qualitative expansion in the content of all those elements that make up the social fabric
Geographic extension of the social fabric to bring more of the population under the cover of that fabric
Integration of existing and new organizations so the social fabric functions more efficiently
Such organizational innovations occur all the time, as a continuous process. New organizations emerge whenever a new developmental stage is reached, and old organizations are modified to suit new developmental requirements. The impact of these new organizations may be powerful enough to make people believe they are powerful in their own right—but it is society that creates the new organizations required to achieve its objectives.
The direction that the developmental process takes is influenced by the population's awareness of opportunities. Increasing awareness leads to greater aspiration, which releases greater energy that helps bring about greater accomplishment.
Resources
Since the time of the English economist Thomas Malthus, some have thought that capacity for development is limited by availability of natural resources. Resources can be divided into four major categories: physical, social, mental, and human. Land, water, minerals and oil, etc. constitute physical resources. Social resources consist of society's capacity to manage and direct complex systems and activities. Knowledge, information and technology are mental resources. The energy, skill and capacities of people constitute human resources.
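Malthus's concern can be made concrete with the standard textbook formulation (a worked illustration; the notation is not drawn from this text): population, if unchecked, grows geometrically while subsistence grows at best arithmetically,

\[ P(t) = P_0 (1 + r)^t, \qquad F(t) = F_0 + ct . \]

For any positive growth rate \(r\), the ratio \(P(t)/F(t)\) eventually grows without bound, which is why physical resource limits appeared binding so long as development was equated with physical inputs.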
The science of economics is much concerned with the scarcity of resources. Though physical resources are limited, social, mental, and human resources are not subject to inherent limits: even where they appear limited, the limits are not fixed, and these resources continue to expand over time. That expansion can be accelerated by the use of appropriate strategies, and in recent decades the growth of these three resources has accelerated dramatically.
The role of physical resources tends to diminish as society moves to higher developmental levels, and the role of non-material resources correspondingly increases. One of the most important non-material resources is information, which has become a key input; it is not exhausted by distribution or sharing. Greater access to information increases the pace of development: ready access to information about economic factors, for instance, helps investors transfer capital to sectors and areas where it fetches a higher return. The greater input of non-material resources helps explain the rising productivity of societies in spite of a limited physical resource base.
Application of higher non-material inputs also raises the productivity of physical inputs. Modern technology has helped increase the proven reserves of oil by 50% in recent years, while reducing the cost of search operations by 75%. Technology also shows it is possible to reduce the amount of physical inputs in a wide range of activities: scientific agricultural methods demonstrated that soil productivity could be raised through synthetic fertilizers, and Dutch farm scientists have demonstrated that as little as 1.4 liters of water is enough to raise a kilogram of vegetables, compared to the thousand liters that traditional irrigation methods normally require.
Henry Ford's assembly line techniques reduced the labor time required to assemble a car from 783 minutes to 93 minutes. These examples show that greater input of higher non-material resources can raise the productivity of physical resources and thereby extend their limits.
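The scale of these gains can be made explicit. Taking the figures cited above at face value (illustrative arithmetic only):

\[ \frac{783}{93} \approx 8.4, \qquad \frac{1000}{1.4} \approx 714 . \]

The assembly line thus implied a more than eightfold rise in labor productivity, and the Dutch horticultural result a roughly seven-hundredfold reduction in water use per kilogram of produce.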
Technological development
When the mind engages in pure creative thinking, it comes up with new thoughts and ideas. When it applies itself to society it can come up with new organizations. When it turns to the study of nature, it discovers nature's laws and mechanisms. When it applies itself to technology, it makes new discoveries and practical inventions that boost productivity. Technical creativity has had an erratic course through history, with some intense periods of creative output followed by some dull and inactive periods. However, the period since 1700 has been marked by an intense burst of technological creativity that is multiplying human capacities exponentially.
Though many reasons can be cited for the accelerating pace of technological inventions, a major cause is the role played by mental creativity in an increasing atmosphere of freedom. Political freedom and liberation from religious dogma had a powerful impact on creative thinking during the Age of Enlightenment. Dogmas and superstitions greatly restricted mental creativity. For example, when the astronomer Copernicus proposed a heliocentric view of the world, the church rejected it because it did not conform to established religious doctrine. When Galileo used a telescope to view the planets, the church condemned the device as an instrument of the devil, as it seemed so unusual. The Enlightenment shattered such obscurantist fetters on freedom of thought. From then on, the spirit of experimentation thrived.
Though technological inventions have increased the pace of development, the tendency to view developmental accomplishments as mainly powered by technology misses the bigger picture. Technological innovation was spurred by general advances in the social organization of knowledge. In the Middle Ages, efforts at scientific progress were few, mainly because there was no effective system to preserve and disseminate knowledge. Since there was no organized protection for patent rights, scientists and inventors were secretive about observations and discoveries. Establishment of scientific associations and scientific journals spurred the exchange of knowledge and created a written record for posterity.
Technological development depends on social organizations. Nobel laureate economist Arthur Lewis observed that the mechanization of factory production in England—the Industrial Revolution—was a direct result of the reorganization of English agriculture. Enclosure of common lands in England generated surplus income for farmers. That extra income generated additional raw materials for industrial processing, and produced greater demand for industrial products that traditional manufacturing processes could not meet.
The opening of sea trade further boosted demand for industrial production for export. Factory production increased many times when production was reorganized to use steam energy, combined with moving assembly lines, specialization, and division of labor. Thus, technological development was both a result of and a contributing factor to the overall development of society.
Individual scientific inventions do not spring out of the blue. They build on past accomplishments in an incremental manner, and give a conscious form to the unconscious knowledge that society gathers over time. As pioneers are more conscious than the surrounding community, their inventions normally meet with initial resistance, which recedes over time as their inventions gain wider acceptance. If opposition is stronger than the pioneer, then the introduction of an invention gets delayed.
In medieval times, when guilds tightly controlled their members, medical progress was slow mainly because physicians were secretive about their remedies. When Denis Papin demonstrated his steam engine, German naval authorities refused to accept it, fearing it would lead to increased unemployment. John Kay, who developed a flying shuttle textile loom, was physically threatened by English weavers who feared the loss of their jobs. He fled to France where his invention was more favorably received.
The widespread use of computers and the application of biotechnology raise similar resistance among the public today. Whether the public receives an invention readily or resists it depends on their awareness and willingness to entertain rapid change. Regardless of the response, technological invention occurs as part of overall social development, not as an isolated field of activity.
Limits to development
The concept of inherent limits to development arose mainly because past development was determined largely by availability of physical resources. Humanity relied more on muscle-power than thought-power to accomplish work. That is no longer the case. Today, mental resources are the primary determinant of development. Where people drove a simple bullock cart, they now design ships and aircraft that carry huge loads across immense distances. Humanity has tamed rivers, cleared jungles and even turned arid desert lands into cultivable lands through irrigation.
By using intelligence, society has turned sand into powerful silicon chips that carry huge amounts of information and form the basis of computers. Since there is no inherent limit to the expansion of society's mental resources, the notion of limits to growth cannot be ultimately binding.
Three stages of development
Society's developmental journey is marked by three stages: physical, vital, and mental. These are not clear-cut stages, but overlapping ones: all three are present in any society at all times, with one predominant while the other two play subordinate roles. The term 'vital' denotes the emotional and nervous energies that empower society's drive towards accomplishment and express themselves most directly in the interactions between human beings. Before the full development of mind, these vital energies predominate in the human personality, gradually yielding ground as the mental element becomes stronger. The speed and circumstances of social transition from one stage to another vary.
Physical stage
The physical stage is characterized by the predominance of the physical element of the human personality: technological advances are minimal and agriculture depends on manual labour. During this phase, society is preoccupied with bare survival and subsistence. Societal structures are rigid, with little room for social mobility or advancement beyond one's inherited status or position. Land is the main asset and productive resource, and wealth is measured by the size of land holdings. This is the agrarian and feudal phase of society, in which inherited wealth and position rule the roost and there is very little upward mobility. Economic transactions revolve around barter and the exchange of goods rather than money, and commerce and money play a relatively minor role. Feudal lords and military chiefs function as the leaders of the society. As innovative thinking and experimental approaches are discouraged, people follow tradition unwaveringly and show little inclination to think outside established guidelines. Occupational skills are passed down from parent to child through a long process of apprenticeship. Despite its limitations, the physical stage lays the foundation for subsequent phases of development, serving as the starting point for societal evolution and progress.
Guilds restrict the dissemination of trade secrets and technical knowledge. The Church controls the spread of new knowledge and tries to smother new ideas that do not agree with established dogmas. The physical stage comes to an end when the reorganization of agriculture gives scope for commerce and industry to expand. This happened in Europe during the 18th century, when political revolutions abolished feudalism and the Industrial Revolution gave a boost to factory production. The shift to the vital and mental stages helps break the bonds of tradition and inject new dynamism into social life.
Vital stage
The vital stage of society is infused with dynamism and change. The vital activities of society expand markedly. Society becomes curious, innovative and adventurous. During the vital stage emphasis shifts from interactions with the physical environment to social interactions between people. Trade supplants agriculture as the principal source of wealth.
The dawning of this phase in Europe led to exploratory voyages across the seas leading to the discovery of new lands and an expansion of sea trade. Equally important, society at this time began to more effectively harness the power of money. Commerce took over from agriculture, and money replaced land as the most productive resource. The center of life shifted from the countryside to the towns where opportunities for trade and business were in greater abundance.
The center of power shifted from the aristocracy to the business class, which employed the growing power of money to gain political influence. During the vital stage, the rule of law becomes more formal and binding, providing a secure and safe environment for business to flourish. Banks, shipping companies and joint-stock companies increase in numbers to make use of the opportunities. Fresh innovative thinking leads to new ways of life that people accept as they prove beneficial. Science and experimental approaches begin to make headway as the hold of tradition and dogma weakens. Demand for education rises.
As the vital stage matures through the expansion of the commercial and industrial complex, surplus income arises, which prompts people to spend more on items so far considered out of reach. People begin to aspire for luxury and leisure that was not possible when life was at a subsistence level.
Mental stage
This stage has three essential characteristics: practical, social, and political application of mind. The practical application of mind generates many inventions. The social application of mind leads to new and more effective types of social organization. The political application leads to changes in the political systems that empower the populace to exercise political and human rights in a free and democratic manner. These changes began in the Renaissance and Enlightenment, and gained momentum in the Reformation, which proclaimed the right of individuals to relate directly to God without the mediation of priests. The political application of mind led to the American and French Revolutions, which produced writing that first recognized the rights of the common man and gradually led to the actual enjoyment of these rights.
Organization is a mental invention. Therefore, it is not surprising that the mental stage of development is responsible for the formulation of a great number of organizational innovations. Huge business corporations have emerged that make more money than even the total earnings of some small countries. Global networks for transportation and communication now connect the nations of the world within a common unified social fabric for sea and air travel, telecommunications, weather reporting and information exchange.
In addition to spurring technological and organizational innovation, the mental phase is also marked by the increasing power of ideas to change social life. Ethical ideals have been with humanity since the dawn of civilization. But their practical application in daily social life had to wait for the mental stage of development to emerge. The proclamation of human rights and the recognition of the value of the individual have become effective only after the development of mind and spread of education. The 20th century truly emerged as the century of the common man. Political, social, economic and many other rights were extended to more and more sections of humanity with each succeeding decade.
The relative duration of these three stages and the speed of transition from one to another varies from one society to another. However broadly speaking, the essential features of the physical, vital and mental stages of development are strikingly similar and therefore quite recognizable even in societies separated by great distance and having little direct contact with one another.
Moreover, societies also learn from those who have gone through these transitions before and, therefore, may be able to make the transitions faster and better. When the Netherlands introduced primary education in 1618, it was a pioneering initiative. When Japan did the same thing late in the 19th century, it had the advantage of the experience of the US and other countries. When many Asian countries initiated primary education in the 1950s after winning independence, they could draw on the vast experience of more developed nations. This is a major reason for the quickening pace of progress.
Natural vs. planned development
Natural development is distinct from development driven by government initiatives and planning. It is the spontaneous, unconscious process of development that normally occurs: it results from the behavior of countless individuals acting on their own rather than from the conscious intention of the community, and it is shaped by historical legacies, economic circumstances, and cultural norms. It is also unconscious in the sense that society achieves its results without being fully aware of how it did so. Planned development, by contrast, is the result of deliberate, conscious initiatives by government to speed the developing process through specific policies and programmes.
The natural development of democracy in Europe over the past few centuries can be contrasted with the conscious effort to introduce democratic forms of government in former colonial nations after World War II. Even planned development is largely unconscious: the goals may be conscious, but the most effective means for achieving them may remain poorly understood. Planned development can become fully conscious only when the process of development itself is fully understood; its success relies on a comprehensive understanding of fundamental social dynamics and a sophisticated implementation strategy. While in planned development the government is the initiator, in natural development private individuals or groups take the initiative. Whoever initiates, the principles and policies are the same, and success is assured only when the right conditions and principles are followed.
Summary
Social development theory offers a comprehensive framework for understanding the qualitative changes in society over time. It highlights the role of increasing awareness and better organization in driving progress. Through stages of physical, vital, and mental development, societies evolve, embracing innovation and adapting to change.
See also
Idea of Progress
Social change
World systems theory
References
Jacobs, Garry, et al., Kamadhenu: The Prosperity Movement, Southern Publications, India, 1988.
Asokan, N., History of USA, The Mother's Service Society, 2006.
Sociological theories
Economic development
Human development
International development
Technology development
Community development

The United Nations defines community development as "a process where community members come together to take collective action and generate solutions to common problems." It is a broad concept, applied to the practices of civic leaders, activists, involved citizens, and professionals to improve various aspects of communities, typically aiming to build stronger and more resilient local communities.
Community development is also understood as a professional discipline, and is defined by the International Association for Community Development as "a practice-based profession and an academic discipline that promotes participative democracy, sustainable development, rights, economic opportunity, equality and social justice, through the organisation, education and empowerment of people within their communities, whether these be of locality, identity or interest, in urban and rural settings".
Community development seeks to empower individuals and groups of people with the skills they need to effect change within their communities. These skills are often created through the formation of social groups working for a common agenda. Community developers must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions.
Community development as a term has taken root widely in anglophone countries such as the United States, United Kingdom, Australia, Canada, New Zealand, and other countries of the Commonwealth of Nations. It is also used in some countries in Eastern Europe, with active community development associations in Hungary and Romania. The Community Development Journal, published by Oxford University Press since 1966, aims to be the major forum for research and dissemination of international community development theory and practice.
Community development approaches are recognised internationally. These methods and approaches have been acknowledged as significant for local social, economic, cultural, environmental and political development by such organisations as the UN, WHO, OECD, World Bank, Council of Europe and EU. A number of institutions of higher education offer community development as an area of study and research, such as the University of Toronto, Leiden University, SOAS University of London, and the Balsillie School of International Affairs.
Definitions
There are complementary definitions of community development.
The United Nations defines community development broadly as "a process where community members come together to take collective action and generate solutions to common problems," and the International Association for Community Development defines it as both a practice-based profession and an academic discipline. Following the adoption of the IACD definition in 2016, the association has gone on to produce International Standards for Community Development Practice. The values and ethos that should underpin practice can be expressed as: commitment to rights, solidarity, democracy, equality, environmental and social justice. The purpose of community development is understood by IACD as being to work with communities to achieve participative democracy, sustainable development, rights, economic opportunity, equality and social justice. This practice is carried out by people in different roles and contexts, including people explicitly called professional community workers (and people taking on essentially the same role but with a different job title), together with professionals in other occupations, ranging from social work, adult education, youth work, the health disciplines, environmental education and local economic development to urban planning, regeneration and architecture, who seek to apply community development values and adopt community development methods. Community development practice also encompasses a range of occupational settings and levels, from development roles working with communities through to managerial and strategic community planning roles.
The Community Development Challenge report, which was produced by a working party comprising leading UK organizations in the field including the (now defunct) Community Development Foundation, the (now defunct) Community Development Exchange and the (now defunct) Federation for Community Development Learning defines community development as:
A set of values and practices which plays a special role in overcoming poverty and disadvantage, knitting society together at the grass roots and deepening democracy. There is a community development profession, defined by national occupational standards and a body of theory and experience going back the best part of a century. There are active citizens who use community development techniques on a voluntary basis, and there are also other professions and agencies which use a community development approach or some aspects of it.
Community Development Exchange defines community development as:
both an occupation (such as a community development worker in a local authority) and a way of working with communities. Its key purpose is to build communities based on justice, equality and mutual respect.
Community development involves changing the relationships between ordinary people and people in positions of power, so that everyone can take part in the issues that affect their lives. It starts from the principle that within any community there is a wealth of knowledge and experience which, if used in creative ways, can be channeled into collective action to achieve the communities' desired goals.
Community development practitioners work alongside people in communities to help build relationships with key people and organizations and to identify common concerns. They create opportunities for the community to learn new skills and, by enabling people to act together, help to foster social inclusion and equality.
Different approaches
There are numerous overlapping approaches to community development. Some focus on processes, some on outcomes/objectives. They include:
Arts, Culture, and Development; focuses on the role of arts and culture in community development and social transformation.
Community Engagement; focuses on relationships at the core of facilitating "understanding and evaluation, involvement, exchange of information and opinions, about a concept, issue or project, with the aim of building social capital and enhancing social outcomes through decision-making" (p. 173).
Women's self-help groups; focusing on the contribution of women in settlement groups.
Community capacity building; focusing on helping communities obtain, strengthen, and maintain the ability to set and achieve their own development objectives.
Large Group Capacitation; an adult education and social psychology approach grounded in the activity of the individual and the social psychology of the large group, focusing on large groups of unemployed or semi-employed participants, many of whom have lower levels of literacy.
Social capital formation; focusing on benefits derived from the cooperation between individuals and groups.
Nonviolent direct action; when a group of people take action to reveal an existing problem, highlight an alternative, or demonstrate a possible solution to a social issue which is not being addressed through traditional societal institutions (governments, religious organizations or established trade unions) to the satisfaction of the direct action participants.
Economic development, focusing on the "development" of developing countries as measured by their economies, although it includes the processes and policies by which a nation improves the economic, political, and social well-being of its people.
Community economic development (CED); an alternative to conventional economic development which encourages using local resources in a way that enhances economic outcomes while improving social conditions. For example, CED involves strategies which aim to improve access to affordable housing, medical, and child care.
Worker cooperatives are a progressive CED strategy: businesses both managed and owned by their employees. They are beneficial for their potential to create jobs and to provide a route for grassroots political action. Challenges worker cooperatives face include reconciling their dual identity as businesses and as democratic humanitarian organizations, and their limited resources and scale.
Sustainable development; which seeks to achieve, in a balanced manner, economic development, social development and environmental protection outcomes.
Community-driven development (CDD), an economic development model which shifts reliance from central governments to local communities.
Asset-based community development (ABCD); is a methodology that seeks to uncover and use the strengths within communities as a means for sustainable development.
Faith-based community development; which utilizes faith-based organizations to bring about community development outcomes.
Community-based participatory research (CBPR); a partnership approach to research that equitably involves, for example, community members, organizational representatives, and researchers in all aspects of the research process and in which all partners contribute expertise and share decision making and ownership, which aims to integrate this knowledge with community development outcomes.
Community organizing; an approach that generally assumes that social change necessarily involves conflict and social struggle in order to generate collective power for the powerless.
Participatory planning including community-based planning (CBP); involving the entire community in the strategic and management processes of urban planning; or, community-level planning processes, urban or rural.
Town-making; or machizukuri (まちづくり) refers to a Japanese concept which is "an umbrella term generally understood as citizen participation in the planning and management of a living environment". It can include redevelopment, revitalization, and post-disaster reconstruction, and usually emphasizes the importance of local citizen participation. In recent years, cooperation between local communities and contents tourism (such as video games, anime, and manga) has also become a key driver of machizukuri in some local communities, such as the tie-up between CAPCOM's Sengoku Basara and the city of Shiroishi.
Language revitalization focuses on the use of a language so that it serves the needs of a community. This may involve the creation of books, films and other media in the language. These actions help a small language community to preserve their language and culture.
Methodologies focusing on the educational component of community development, including the community-wide empowerment that increased educational opportunity creates.
Methodologies addressing the issues and challenges of the digital divide, providing affordable training and access to computers and the Internet, and addressing the marginalisation of local communities that cannot connect to and participate in the global online community. In the United States, nonprofit organizations such as Per Scholas seek to "break the cycle of poverty by providing education, technology and economic opportunities to individuals, families and communities" as a path to development for the communities they serve.
There are a myriad of job titles for community development workers, and their employers include public authorities and voluntary or non-governmental organisations, funded by the state and by independent grant-making bodies. Since the 1970s the prefix 'community' has also been adopted by several other occupations, from police and health workers to planners and architects, who have been influenced by community development approaches.
History
Amongst the earliest community development approaches were those developed in Kenya and British East Africa during the 1930s. Community development practitioners have over many years developed a range of approaches for working within local communities, and in particular with disadvantaged people. Since the anti-poverty programmes of the 1960s and 1970s in both developed and developing countries, practitioners have been influenced by structural analyses of the causes of disadvantage and poverty, that is, inequalities in the distribution of wealth, income, land and especially political power, and by the need to mobilise people power to effect social change. Hence the influence of educators such as Paulo Freire and his focus on this work. Other key people who have influenced the field are Saul Alinsky (Rules for Radicals) and E. F. Schumacher (Small Is Beautiful). A number of international organisations support community development; for example, Oxfam, UNICEF, The Hunger Project and Freedom from Hunger run community development programs based upon community development initiatives for the relief and prevention of malnutrition. Since 2006 the Dragon Dreaming project management techniques have spread to 37 countries and have been engaged in an estimated 3,250 projects worldwide.
In the global North
In the 19th century, the work of the Welsh early socialist thinker Robert Owen (1771–1851) sought to develop a more perfect community. At New Lanark and at later communities such as Oneida in the USA and the New Australia Movement in Australia, groups of people came together to create utopian or intentional communities, with mixed success. Such communities, formed ex nihilo, contrast with the later concept of developing an existing community.
United States
In the United States in the 1960s, the term "community development" began to complement and generally replace the idea of urban renewal, which typically focused on physical development projects, often at the expense of working-class communities. One of the earliest proponents of the term in the United States was the social scientist William W. Biddle (1900-1973). In the late 1960s, philanthropies such as the Ford Foundation and government officials such as Senator Robert F. Kennedy took an interest in local nonprofit organizations. A pioneer was the Bedford Stuyvesant Restoration Corporation in Brooklyn, which attempted to apply business and management skills to the social mission of uplifting low-income residents and their neighborhoods. Eventually such groups became known as "community development corporations" or CDCs. Federal laws, beginning with the 1974 Housing and Community Development Act, provided a way for state and municipal governments to channel funds to CDCs and to other nonprofit organizations.
National organizations such as the Neighborhood Reinvestment Corporation (founded in 1978 and known since 2005 as NeighborWorks America), the Local Initiatives Support Corporation (LISC) (founded in 1980), and the Enterprise Foundation (founded in 1981) have built extensive networks of affiliated local nonprofit organizations to which they help provide financing for numerous physical- and social-development programs in urban and rural communities. The CDCs and similar organizations have been credited by some with starting the process that stabilized and revived seemingly hopeless inner-city areas such as the South Bronx in New York City.
United Kingdom
In the UK, community development has had two main traditions. The first was as an approach for preparing for the independence of countries from the former British Empire in the 1950s and 1960s. Domestically, community development first came into public prominence with the Labour Government's anti-deprivation programmes of the late 1960s and 1970s. The main example of this activity, the Community Development Programme (CDP), piloted local area-based community development. This influenced a number of largely urban local authorities, in particular in Scotland with Strathclyde Region's major community development programme (the largest at the time in Europe).
The Gulbenkian Foundation was a key funder of commissions and reports which influenced the development of community development in the UK from the late 1960s to the 1980s. This included recommending that there be a national institute or centre for community development, able to support practice and to advise government and local authorities on policy. This resulted in the formal establishment in 1991 of the Community Development Foundation. In 2004 the Carnegie UK Trust established a commission of inquiry into the future of rural community development, examining such issues as land reform and climate change. Carnegie funded over sixty rural community-development action-research projects across the UK and Ireland, as well as national and international communities of practice to exchange experiences. This included the International Association for Community Development (IACD).
In 1999 the Labour Government established a UK-wide organisation responsible for setting professional-training standards for all education and development practitioners working within local communities. This organisation, PAULO – the National Training Organisation for Community Learning and Development, was named after Paulo Freire (1921-1997). It was formally recognised by David Blunkett, the Secretary of State for Education and Employment. Its first chair was Charlie McConnell, the Chief Executive of the Scottish Community Education Council, who had played a lead role in bringing together a range of occupational interests under a single national-training standards body, including community education, community development and development education. The inclusion of community development was significant as it was initially uncertain as to whether it would join the National Training Organisation (NTO) for Social Care. The Community Learning and Development NTO represented all the main employers, trades unions, professional associations and national-development agencies working in this area across the four nations of the UK.
The new body used the wording "community learning and development" to acknowledge that all of these occupations worked primarily within local communities, and that this work encompassed not just providing less formal learning support but also a concern for the wider holistic development of those communities – socio-economically, environmentally, culturally and politically. By bringing together these occupational groups this created for the first time a single recognised employment-sector of nearly 300,000 full- and part-time paid staff within the UK, approximately 10% of these staff being full-time. The NTO continued to recognise the range of occupations within it, for example specialists who work primarily with young people, but all agreed that they shared a core set of professional approaches to their work. In 2002 the NTO became part of a wider Sector Skills Council for lifelong learning.
The UK currently hosts the only global network of practitioners and activists working towards social justice through community development approach, the International Association for Community Development (IACD). IACD, formed in the USA in 1953, moved to Belgium in 1978 and was restructured and relaunched in Scotland in 1999.
Canada
Community development in Canada has roots in the development of co-operatives, credit unions and caisses populaires. The Antigonish Movement which started in the 1920s in Nova Scotia, through the work of Doctor Moses Coady and Father James Tompkins, has been particularly influential in the subsequent expansion of community economic development work across Canada.
Australia
Community development in Australia has often focused on Aboriginal Australian communities. During the period from the 1980s to the early 21st century, funds channelled through the Community Development Employment Projects (CDEP) scheme, under which Aboriginal people could be employed in a work-for-the-dole arrangement, gave non-government organisations the chance to apply for a full- or part-time worker funded by the Department for Social Security. Dr Jim Ife, formerly of Curtin University, wrote a ground-breaking textbook on community development.
In the "Global South"
Community planning techniques drawing on the history of utopian movements became important in the 1920s and 1930s in East Africa, where community development proposals were seen as a way of helping local people improve their own lives with indirect assistance from colonial authorities.
Mohandas K. Gandhi adopted African community development ideals as a basis of his South African ashram, and then introduced them as a part of the Indian Swaraj movement, aiming at establishing economic interdependence at village level throughout India. With Indian independence, despite the continuing work of Vinoba Bhave in encouraging grassroots land reform, India under its first Prime Minister Jawaharlal Nehru adopted a mixed-economy approach, mixing elements of socialism and capitalism. During the 1950s and 1960s, India ran a massive community development programme focused on rural development activities through government support. This was later expanded in scope and renamed the Integrated Rural Development Programme (IRDP). A large number of initiatives that fall under the community development umbrella have emerged in recent years.
The main objective of community development in India remains to develop villages and to help villagers help themselves in the fight against poverty, illiteracy, malnutrition and other deprivations. The strength of the Indian model of community development lies in the homogeneity of villagers and their high level of participation.
Community development became a part of the Ujamaa Villages established in Tanzania by Julius Nyerere, where it had some success in assisting with the delivery of education services throughout rural areas, but has elsewhere met with mixed success. In the 1970s and 1980s, community development became a part of "Integrated Rural Development", a strategy promoted by United Nations Agencies and the World Bank. Central to these policies of community development were:
Adult literacy programs, drawing on the work of Brazilian educator Paulo Freire and the "Each One Teach One" adult literacy teaching method conceived by Frank Laubach.
Youth and women's groups, following the work of Patrick van Rensburg's Serowe Brigades in Botswana.
Development of community business ventures and particularly cooperatives, drawing in part on the examples of José María Arizmendiarrieta and the Mondragon Cooperatives of the Basque region of Spain.
Compensatory education for those missing out in the formal education system, drawing on the work of Open Education as pioneered by Michael Young.
Dissemination of alternative technologies, based upon the work of E. F. Schumacher as advocated in his book Small Is Beautiful: A Study of Economics As If People Mattered
Village nutrition programs and permaculture projects, based upon the work of Australians Bill Mollison and David Holmgren.
Village water supply programs
In the 1990s, following critiques of the mixed success of "top down" government programs, and drawing on the work of Robert Putnam in the rediscovery of social capital, community development internationally became concerned with social capital formation. In particular, the outstanding success of the work of Muhammad Yunus in Bangladesh with the Grameen Bank from its inception in 1976 has led to attempts to spread microenterprise credit schemes around the world. Yunus saw that social problems like poverty and disease were not being solved by the market system on its own. He therefore established a banking system that lends to the poor at very little interest, allowing them access to entrepreneurship. This work was honoured by the 2006 Nobel Peace Prize.
Another alternative to "top down" government programs is the participatory government institution. Participatory governance institutions are organizations which aim to facilitate the participation of citizens within larger decision making and action implementing processes in society. A case study done on municipal councils and social housing programs in Brazil found that the presence of participatory governance institutions supports the implementation of poverty alleviation programs by local governments.
The "human scale development" work of Right Livelihood Award-winning Chilean economist Manfred Max Neef promotes the idea of development based upon fundamental human needs, which are considered to be limited, universal and invariant to all human beings (being a part of our human condition). He considers that poverty results from the failure to satisfy a particular human need, it is not just an absence of money. Whilst human needs are limited, Max Neef shows that the ways of satisfying human needs is potentially unlimited. Satisfiers also have different characteristics: they can be violators or destroyers, pseudosatisfiers, inhibiting satisfiers, singular satisfiers, or synergic satisfiers. Max-Neef shows that certain satisfiers, promoted as satisfying a particular need, in fact inhibit or destroy the possibility of satisfying other needs: e.g., the arms race, while ostensibly satisfying the need for protection, in fact then destroys subsistence, participation, affection and freedom; formal democracy, which is supposed to meet the need for participation often disempowers and alienates; commercial television, while used to satisfy the need for recreation, interferes with understanding, creativity and identity. Synergic satisfiers, on the other hand, not only satisfy one particular need, but also lead to satisfaction in other areas: some examples are breastfeeding; self-managed production; popular education; democratic community organizations; preventative medicine; meditation; educational games.
India
Community development in India was initiated by the Government of India through the Community Development Programme (CDP) in 1952. The focus of the CDP was on rural communities, but professionally trained social workers concentrated their practice in urban areas. Thus, although the focus of community organization was rural, the major thrust of social work gave the program an urban character, balancing its services between the two settings.
Vietnam
International organizations apply the term community in Vietnam to the local administrative unit, each with an identity based on tradition, culture, and kinship relations. Community development strategies in Vietnam aim to organize communities in ways that increase their capacity to partner with institutions, the participation of local people, transparency and equality, and unity within local communities.
Social and economic development planning (SDEP) in Vietnam uses top-down centralized planning methods and decision-making processes which do not consider local context and local participation. The plans created by SDEP are ineffective and serve mainly for administrative purposes. Local people are not informed of these development plans. The participatory rural appraisal (PRA) approach, a research methodology that allows local people to share and evaluate their own life conditions, was introduced to Vietnam in the early 1990s to help reform the way that government approaches local communities and development. PRA was used as a tool for mostly outsiders to learn about the local community, which did not effect substantial change.
The village/commune development (VDP/CDP) approach was developed as a better fit than PRA for analyzing local context and addressing the needs of rural communities. VDP/CDP participatory planning is centered on Ho Chi Minh's saying that "People know, people discuss and people supervise." VDP/CDP is often useful in Vietnam for shifting centralized management toward decentralization, helping develop local governance at the grassroots level. Local people use their knowledge to solve local issues, creating mid-term and yearly plans that improve existing community development plans with the support of government organizations. Although VDP/CDP has been tested in many regions of Vietnam, it has not been fully implemented, for several reasons. The methods applied in VDP/CDP are intensive in human resources and capacity building, especially at the early stages, and they require local people to have an "initiative-taking" attitude; people in the remote areas where VDP/CDP has been tested mostly have passive attitudes because they already receive assistance from outsiders. There are also insufficient monitoring practices to ensure effective plan implementation. Integrating VDP/CDP into the governmental system is difficult because the Communist Party and Central government's policies on decentralization are not enforced in practice.
Non-governmental organizations (NGOs) in Vietnam, legalized in 1991, have claimed goals of developing civil society, which was essentially nonexistent prior to the Đổi Mới economic reforms. NGO operations in Vietnam do not entirely live up to these claimed goals, mainly because NGOs in Vietnam are mostly donor-driven, urban, elite-based organizations that employ staff with ties to the Communist Party and Central government. NGOs are also overseen by the Vietnam Fatherland Front, an umbrella organization that reports its observations directly to the Party and Central government. Since NGOs in Vietnam are not entirely non-governmental, they have instead been coined 'VNGOs'. Most VNGOs originated either from the state, from hospital or university groups, or from individuals not previously associated with any group. VNGOs have not yet reached those most in need, such as the rural poor, because entrenched power networks oppose lobbying on issues such as the rural poor's land rights. Authoritarianism is prevalent in nearly all Vietnamese civic organizations, and authoritarian practices are more present in inner-organizational functioning than in organization leaders' worldviews; these leaders often reveal both authoritarian and libertarian values in contradiction. Representatives of Vietnam's NGOs stated that disagreements are normal but that conflicts within an organization should be avoided, demonstrating the one-party "sameness" mentality of authoritarian rule.
See also
Community building
Complete communities
Community education
Community engagement
Community practice
Organization workshop
Rural community development
Urbanism
References
Further reading
Briggs, Xavier de Souza, Elizabeth Mueller, and Mercer Sullivan, From Neighborhood to Community: Evidence on the Social Effects of Community Development Corporation. Community Development Research Center, 1997.
Ferguson, Ronald F. and William T. Dickens, eds., Urban Problems and Community Development. Brookings Institution Press, 1999.
Grogan, Paul and Tony Proscio, Comeback Cities: A Blueprint for Urban Neighborhood Revival. Westview Press, 2002.
von Hoffman, Alexander, House by House, Block by Block: The Rebirth of America's Urban Neighborhoods. Oxford University Press, 2003; paperback ed., 2004.
McConnell, Charlie, Community Learning and Development: The Making of an Empowering Profession. Community Learning Scotland/PAULO, 2002.
Towards Shared International Standards for Community Development Practice. IACD, 2018.
External links
The Citizens' Handbook – A large collection of practices and activities for citizens' groups
National Civic League – US organization that promotes partnerships between government and citizens' groups
Shelterforce – A nonprofit magazine on community development, affordable housing, and neighborhood stabilization.
Community building
Citizen science models
Start with Why
Start with Why: How Great Leaders Inspire Everyone to Take Action is a book by Simon Sinek.
Overview
The book opens by comparing the two main ways to influence human behaviour: manipulation and inspiration. Sinek argues that inspiration is the more powerful and sustainable of the two. The book primarily discusses the significance of leadership and purpose in succeeding in life and business. Sinek highlights the importance of taking risks and going against the status quo to find solutions to global problems. He believes leadership holds the key to inspiring a nation to come together and advance a common interest to make a nation, or the planet, a more civilised place. He turns to Dr. Martin Luther King Jr., John F. Kennedy, Steve Jobs and the Apple culture as examples of how a purpose can be created to inspire a culture, away from what he describes as today's manipulative society – a concern he ties to the amount of time people spend on their phones and other devices.
The golden circle
Sinek says people are inspired by a sense of purpose (or "Why"), and that this should come first when communicating, before "How" and "What". Sinek calls this triad the golden circle, a diagram of a bullseye (or concentric circles or onion diagram) with "Why" in the innermost circle (representing people's motives or purposes), surrounded by a ring labelled "How" (representing people's processes or methods), enclosed in a ring labelled "What" (representing results or outcomes). He speculates about the biological factors behind this structure, such as the limbic system.
Reception
Lindsay McGregor and Neel Doshi, co-authors of the book Primed to Perform: How to Build the Highest Performing Cultures Through the Science of Total Motivation, came to a similar conclusion: "Why we work determines how well we work."
Ken Krogue, in a blog post for Forbes, argued that it is far more important, especially for salespeople, to find the right person (which Krogue called "starting with Who") before "starting with Why".
Sales
According to NPD BookScan's ranking of printed book sales from mid-June 2016 to mid-June 2017, Start with Why was the "bestselling leadership book" of that period (the geographical region covered was not disclosed), selling 171,000 paperback copies.
See also
Five Ws
Onion model
The Infinite Game
References
Business books
2009 non-fiction books
Scenario planning
Scenario planning, scenario thinking, scenario analysis, scenario prediction and the scenario method all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence.
In the most common application of the method, analysts generate simulation games for policy makers. The method combines known facts, such as demographics, geography and mineral reserves, with military, political, and industrial information, and key driving forces identified by considering social, technical, economic, environmental, and political ("STEEP") trends.
In business applications, the emphasis on understanding the behavior of opponents has been reduced while more attention is now paid to changes in the natural environment. At Royal Dutch Shell for example, scenario planning has been described as changing mindsets about the exogenous part of the world prior to formulating specific strategies.
Scenario planning may involve aspects of systems thinking, specifically the recognition that many factors may combine in complex ways to create sometimes surprising futures (due to non-linear feedback loops). The method also allows the inclusion of factors that are difficult to formalize, such as novel insights about the future, deep shifts in values, and unprecedented regulations or inventions. Systems thinking used in conjunction with scenario planning leads to plausible scenario storylines because the causal relationship between factors can be demonstrated. These cases, in which scenario planning is integrated with a systems thinking approach to scenario development, are sometimes referred to as "dynamic scenarios".
Critics of using a subjective and heuristic methodology to deal with uncertainty and complexity argue that the technique has not been examined rigorously, nor influenced sufficiently by scientific evidence. They caution against using such methods to "predict" based on what can be described as arbitrary themes and "forecasting techniques".
A challenge and a strength of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". As a consequence, societal predictions can become self-destructing. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue.
Principle
Crafting scenarios
Combinations and permutations of fact and related social changes are called "scenarios". Scenarios usually include plausible, but unexpectedly important, situations and problems that exist in some nascent form in the present day. Any particular scenario is unlikely. However, futures studies analysts select scenario features so they are both possible and uncomfortable. Scenario planning helps policy-makers and firms anticipate change, prepare responses, and create more robust strategies.
Scenario planning helps a firm anticipate the impact of different scenarios and identify weaknesses. When anticipated years in advance, those weaknesses can be avoided or their impacts reduced more effectively than when similar real-life problems are considered under the duress of an emergency. For example, a company may discover that it needs to change contractual terms to protect against a new class of risks, or collect cash reserves to purchase anticipated technologies or equipment. Flexible business continuity plans with "PREsponse protocols" can help cope with similar operational problems and deliver measurable future value.
Zero-sum game scenarios
Strategic military intelligence organizations also construct scenarios. The methods and organizations are almost identical, except that scenario planning is applied to a wider variety of problems than merely military and political problems.
As in military intelligence, the chief challenge of scenario planning is to find out the real needs of policy-makers, when policy-makers may not themselves know what they need to know, or may not know how to describe the information that they really want.
Good analysts design wargames so that policy makers have great flexibility and freedom to adapt their simulated organisations. Then these simulated organisations are "stressed" by the scenarios as a game plays out. Usually, particular groups of facts become more clearly important. These insights enable intelligence organisations to refine and repackage real information more precisely to better serve the policy-makers' real-life needs. Usually the games' simulated time runs hundreds of times faster than real life, so policy-makers experience several years of policy decisions, and their simulated effects, in less than a day.
The chief value of scenario planning is that it allows policy-makers to make and learn from mistakes without risking career-limiting failures in real life. Further, policy-makers can make these mistakes in a safe, unthreatening, game-like environment, while responding to a wide variety of concretely presented situations based on facts. This is an opportunity to "rehearse the future", an opportunity that does not present itself in day-to-day operations where every action and decision counts.
How military scenario planning or scenario thinking is done
Decide on the key question to be answered by the analysis. By doing this, it is possible to assess whether scenario planning is preferred over the other methods. If the question is based on small changes or a very small number of elements, other more formalized methods may be more useful.
Set the time and scope of the analysis. Take into consideration how quickly changes have happened in the past, and try to assess to what degree it is possible to predict common trends in demographics and product life cycles. A usual timeframe is five to ten years.
Identify major stakeholders. Decide who will be affected by and have an interest in the possible outcomes. Identify their current interests, and whether and why these interests have changed over time.
Map basic trends and driving forces. This includes industry, economic, political, technological, legal, and societal trends. Assess to what degree these trends will affect your research question, and describe how and why each trend will affect the organisation. In this step of the process, brainstorming is commonly used, where all trends that can be thought of are presented before they are assessed, to counter possible groupthink and tunnel vision.
Find key uncertainties. Map the driving forces on two axes, assessing each force on an uncertain/(relatively) predictable and important/unimportant scale (a minimal code sketch of this ranking, and of the scenario grid it leads to, follows this list). All driving forces that are considered unimportant are discarded. Important driving forces that are relatively predictable (e.g. demographics) can be included in any scenario, so the scenarios should not be based on these. This leaves you with a number of important and unpredictable driving forces. At this point, it is also useful to assess whether any linkages between driving forces exist, and rule out any "impossible" scenarios (e.g. full employment and zero inflation).
Check for the possibility of grouping the linked forces and, if possible, reduce the forces to the two most important (to allow the scenarios to be presented in a neat x–y diagram).
Identify the extremes of the possible outcomes of the two driving forces and check the dimensions for consistency and plausibility. Three key points should be assessed:
Time frame: are the trends compatible within the time frame in question?
Internal consistency: do the forces describe uncertainties that can construct probable scenarios?
Versus the stakeholders: are any stakeholders currently in disequilibrium compared to their preferred situation, and will this evolve the scenario? Is it possible to create probable scenarios when considering the stakeholders? This is most important when creating macro-scenarios where governments, large organisations et al. will try to influence the outcome.
Define the scenarios, plotting them on a grid if possible. Usually, two to four scenarios are constructed. The current situation does not need to be in the middle of the diagram (inflation may already be low), and possible scenarios may keep one (or more) of the forces relatively constant, especially if using three or more driving forces. One approach can be to create all positive elements into one scenario and all negative elements (relative to the current situation) in another scenario, then refining these. In the end, try to avoid pure best-case and worst-case scenarios.
Write out the scenarios. Narrate what has happened and what the reasons can be for the proposed situation. Try to include good reasons why the changes have occurred as this helps the further analysis. Finally, give each scenario a descriptive (and catchy) name to ease later reference.
Assess the scenarios. Are they relevant for the goal? Are they internally consistent? Are they archetypical? Do they represent relatively stable outcome situations?
Identify research needs. Based on the scenarios, assess where more information is needed. Where needed, obtain more information on the motivations of stakeholders, possible innovations that may occur in the industry and so on.
Develop quantitative methods. If possible, develop models to help quantify consequences of the various scenarios, such as growth rate, cash flow, etc. This step does of course require a significant amount of work compared to the others, and may be left out in back-of-the-envelope analyses.
Converge towards decision scenarios. Retrace the steps above in an iterative process until you reach scenarios which address the fundamental issues facing the organization. Try to assess upsides and downsides of the possible scenarios.
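The ranking and grid-building steps above lend themselves to a simple computational sketch. The following Python fragment is purely illustrative and is not part of the method as described here: the driving forces, their scores, and the cutoff are hypothetical assumptions chosen for the example.

from itertools import product

# Hypothetical driving forces scored 0-10 on importance and uncertainty.
forces = [
    ("demographics",      8, 2),   # important but relatively predictable
    ("oil price",         9, 9),
    ("regulatory regime", 7, 8),
    ("colour fashions",   2, 9),   # uncertain but unimportant: discard
]
CUTOFF = 5

important = [f for f in forces if f[1] >= CUTOFF]
# Predictable-but-important forces go into every scenario, not on the axes.
predictable = [f for f in important if f[2] < CUTOFF]
uncertain = [f for f in important if f[2] >= CUTOFF]

# The two most important uncertain forces become the scenario axes.
axes = sorted(uncertain, key=lambda f: f[1], reverse=True)[:2]

# Cross the two extremes of each axis to enumerate the raw scenario grid;
# implausible combinations would then be struck out by hand.
extremes = [(f"low {name}", f"high {name}") for name, _, _ in axes]
for combo in product(*extremes):
    print(" / ".join(combo))

With the illustrative scores above, "oil price" and "regulatory regime" become the axes and the loop prints a four-cell grid, while "demographics" would instead be written into the common background of every scenario.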
Use by managers
The basic concepts of the process are relatively simple. In terms of the overall approach to forecasting, they can be divided into three main groups of activities (which are, generally speaking, common to all long range forecasting processes):
Environmental analysis
Scenario planning
Corporate strategy
The first of these groups quite simply comprises the normal environmental analysis. This is almost exactly the same as that which should be undertaken as the first stage of any serious long-range planning. However, the quality of this analysis is especially important in the context of scenario planning.
The central part represents the specific techniques – covered here – which differentiate the scenario forecasting process from the others in long-range planning.
The final group represents all the subsequent processes which go towards producing the corporate strategy and plans. Again, the requirements are slightly different but in general they follow all the rules of sound long-range planning.
Applications
Business
In the past, strategic plans have often considered only the "official future", which was usually a straight-line graph of current trends carried into the future. Often the trend lines were generated by the accounting department, and lacked discussions of demographics, or qualitative differences in social conditions.
These simplistic guesses are surprisingly good most of the time, but fail to consider qualitative social changes that can affect a business or government. Paul J. H. Schoemaker offered a strong managerial case for the use of scenario planning in business, and it had wide impact.
The approach may have had more impact outside Shell than within, as many other firms and consultancies also started to benefit from scenario planning. Scenario planning is as much art as science, and prone to a variety of traps (both in process and content), as enumerated by Schoemaker. More recently, scenario planning has been discussed as a tool to improve strategic agility by cognitively preparing not only multiple scenarios but also multiple consistent strategies.
Military
Scenario planning is also extremely popular with military planners. Most states' defence ministries maintain a continuously updated series of strategic plans to cope with well-known military or strategic problems. These plans are almost always based on scenarios, and often the plans and scenarios are kept up to date by war games, sometimes played out with real troops. This process was first carried out by (and the method was arguably invented by) the Prussian general staff of the mid-19th century.
Finance
In economics and finance, a financial institution might use scenario analysis to forecast several possible scenarios for the economy (e.g. rapid growth, moderate growth, slow growth) and for financial returns (for bonds, stocks, cash, etc.) in each of those scenarios. It might consider sub-sets of each of the possibilities. It might further seek to determine correlations and assign probabilities to the scenarios (and sub-sets if any). Then it will be in a position to consider how to distribute assets between asset types (i.e. asset allocation); the institution can also calculate the scenario-weighted expected return (which figure will indicate the overall attractiveness of the financial environment). It may also perform stress testing, using adverse scenarios.
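As a minimal illustration of the scenario-weighted expected return mentioned above (the scenario names, probabilities, and returns are hypothetical, not drawn from any institution's actual figures):

scenarios = {
    # name: (probability, portfolio return under that scenario)
    "rapid growth":    (0.25, 0.09),
    "moderate growth": (0.50, 0.06),
    "slow growth":     (0.25, 0.01),
}

# Probabilities must sum to one across the scenario set.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected = sum(p * r for p, r in scenarios.values())
print(f"scenario-weighted expected return: {expected:.2%}")
# 0.25*0.09 + 0.50*0.06 + 0.25*0.01 = 0.055, i.e. 5.50%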
Depending on the complexity of the problem, scenario analysis can be a demanding exercise. It can be difficult to foresee what the future holds (e.g. the actual future outcome may be entirely unexpected), i.e. to foresee what the scenarios are, and to assign probabilities to them; and this is true of the general forecasts, let alone the implied financial market returns. The outcomes can be modeled mathematically/statistically, e.g. taking account of possible variability within single scenarios as well as possible relationships between scenarios. In general, one should take care when assigning probabilities to different scenarios, as this could invite a tendency to consider only the scenario with the highest probability.
Geopolitics
In politics or geopolitics, scenario analysis involves reflecting on the possible alternative paths of a social or political environment and possibly diplomatic and war risks.
History of use by academic and commercial organizations
Most authors attribute the introduction of scenario planning to Herman Kahn through his work for the US military in the 1950s at the RAND Corporation, where he developed a technique of describing the future in stories as if written by people in the future. He adopted the term "scenarios" to describe these stories. In 1961 he founded the Hudson Institute, where he expanded his scenario work to social forecasting and public policy. One of his most controversial uses of scenarios was to suggest that a nuclear war could be won. Though Kahn is often cited as the father of scenario planning, at the same time Kahn was developing his methods at RAND, Gaston Berger was developing similar methods at the Centre d'Etudes Prospectives, which he founded in France. His method, which he named 'La Prospective', was to develop normative scenarios of the future to be used as a guide in formulating public policy. During the mid-1960s various authors from the French and American institutions began to publish scenario planning concepts, such as 'La Prospective' by Berger in 1964 and 'The Next Thirty-Three Years' by Kahn and Wiener in 1967. By the 1970s scenario planning was in full swing, with a number of institutions now established to provide support to business, including the Hudson Institute, the Stanford Research Institute (now SRI International), and the SEMA Metra Consulting Group in France. Several large companies also began to embrace scenario planning, including DHL Express, Royal Dutch Shell and General Electric.
Possibly as a result of these very sophisticated approaches, and of the difficult techniques they employed (which usually demanded the resources of a central planning staff), scenarios earned a reputation for difficulty (and cost) in use. Even so, the theoretical importance of the use of alternative scenarios, to help address the uncertainty implicit in long-range forecasts, was dramatically underlined by the widespread confusion which followed the Oil Shock of 1973. As a result, many of the larger organizations started to use the technique in one form or another. By 1983 Diffenbach reported that 'alternate scenarios' were the third most popular technique for long-range forecasting – used by 68% of the large organizations he surveyed.
Practical development of scenario forecasting, to guide strategy rather than for the more limited academic uses which had previously been the case, was started by Pierre Wack in 1971 at the Royal Dutch Shell group of companies – and it, too, was given impetus by the Oil Shock two years later. Shell has, since that time, led the commercial world in the use of scenarios – and in the development of more practical techniques to support these. Indeed, as the use of scenarios has – in common with most forms of long-range forecasting, and during the depressed trading conditions of the last decade – been reduced to only a handful of private-sector organisations, Shell remains almost alone amongst them in keeping the technique at the forefront of forecasting.
Only anecdotal evidence has been offered in support of the value of scenarios, even as aids to forecasting; and most of this has come from one company – Shell. In addition, with so few organisations making consistent use of them – and with the timescales involved reaching into decades – it is unlikely that any definitive supporting evidence will be forthcoming in the foreseeable future. For the same reasons, though, a lack of such proof applies to almost all long-range planning techniques. In the absence of proof, but taking account of Shell's well-documented experiences of using it over several decades (where, in the 1990s, its then CEO ascribed its success to its use of such scenarios), there may be significant benefit to be obtained from extending the horizons of managers' long-range forecasting in the way that the use of scenarios uniquely does.
Process
The part of the overall process which is radically different from most other forms of long-range planning is the central section, the actual production of the scenarios. Even this, though, is relatively simple, at its most basic level. As derived from the approach most commonly used by Shell, it follows six steps:
Decide drivers for change/assumptions
Bring drivers together into a viable framework
Produce 7–9 initial mini-scenarios
Reduce to 2–3 scenarios
Draft the scenarios
Identify the issues arising
Step 1 – decide assumptions/drivers for change
The first stage is to examine the results of environmental analysis to determine which are the most important factors that will decide the nature of the future environment within which the organisation operates. These factors are sometimes called 'variables' (because they will vary over the time being investigated, though the terminology may confuse scientists who use it in a more rigorous manner). Users tend to prefer the term 'drivers' (for change), since this terminology is not laden with quasi-scientific connotations and reinforces the participant's commitment to search for those forces which will act to change the future. Whatever the nomenclature, the main requirement is that these will be informed assumptions.
This is partly a process of analysis, needed to recognise what these 'forces' might be. However, it is likely that some work on this element will already have taken place during the preceding environmental analysis. By the time the formal scenario planning stage has been reached, the participants may have already decided – probably in their sub-conscious rather than formally – what the main forces are.
In the ideal approach, the first stage should be to carefully decide the overall assumptions on which the scenarios will be based. Only then, as a second stage, should the various drivers be specifically defined. Participants, though, seem to have problems in separating these stages.
Perhaps the most difficult aspect, though, is freeing the participants from the preconceptions they take into the process with them. In particular, most participants will want to look at the medium term, five to ten years ahead, rather than the required longer term, ten or more years ahead. However, a time horizon of anything less than ten years often leads participants to extrapolate from present trends, rather than consider the alternatives which might face them. When, however, they are asked to consider timescales in excess of ten years they almost all seem to accept the logic of the scenario planning process, and no longer fall back on that of extrapolation. There is a similar problem with expanding participants' horizons to include the whole external environment.
Brainstorming
In any case, the brainstorming which should then take place, to ensure that the list is complete, may unearth more variables – and, in particular, the combination of factors may suggest yet others.
A very simple technique which is especially useful at this – brainstorming – stage, and in general for handling scenario planning debates, is derived from Shell, where this type of approach is often used. An especially easy approach, it requires only a conference room with a bare wall and copious supplies of 3M Post-It Notes.
The six to ten people ideally taking part in such face-to-face debates should be in a conference room environment which is isolated from outside interruptions. The only special requirement is that the conference room has at least one clear wall on which Post-It notes will stick. At the start of the meeting itself, any topics which have already been identified during the environmental analysis stage are written (preferably with a thick magic marker, so they can be read from a distance) on separate Post-It Notes. These Post-It Notes are then, at least in theory, randomly placed on the wall. In practice, even at this early stage the participants will want to cluster them in groups which seem to make sense. The only requirement (which is why Post-It Notes are ideal for this approach) is that there is no bar to taking them off again and moving them to a new cluster.
A similar technique – using 5" by 3" index cards – has also been described (as the 'Snowball Technique') by Backoff and Nutt for grouping and evaluating ideas in general.
As in any form of brainstorming, the initial ideas almost invariably stimulate others. Indeed, everyone should be encouraged to add their own Post-It Notes to those on the wall. However it differs from the 'rigorous' form described in 'creative thinking' texts, in that it is much slower paced and the ideas are discussed immediately. In practice, as many ideas may be removed, as not being relevant, as are added. Even so, it follows many of the same rules as normal brainstorming and typically lasts the same length of time – say, an hour or so only.
It is important that all the participants feel they 'own' the wall – and are encouraged to move the notes around themselves. The result is a very powerful form of creative decision-making for groups, which is applicable to a wide range of situations (but is especially powerful in the context of scenario planning). It also offers a very good introduction for those who are coming to the scenario process for the first time. Since the workings are largely self-evident, participants very quickly come to understand exactly what is involved.
Important and uncertain
This step is, though, also one of selection – since only the most important factors will justify a place in the scenarios. The 80:20 Rule here means that, at the end of the process, management's attention must be focused on a limited number of most important issues. Experience has proved that offering a wider range of topics merely allows them to select those few which interest them, and not necessarily those which are most important to the organisation.
In addition, as scenarios are a technique for presenting alternative futures, the factors to be included must be genuinely 'variable'. They should be subject to significant alternative outcomes. Factors whose outcome is predictable, but important, should be spelled out in the introduction to the scenarios (since they cannot be ignored). The Important Uncertainties Matrix, as reported by Kees van der Heijden of Shell, is a useful check at this stage.
At this point it is also worth pointing out that a great virtue of scenarios is that they can accommodate the input from any other form of forecasting. They may use figures, diagrams or words in any combination. No other form of forecasting offers this flexibility.
Step 2 – bring drivers together into a viable framework
The next step is to link these drivers together to provide a meaningful framework. This may be obvious, where some of the factors are clearly related to each other in one way or another. For instance, a technological factor may lead to market changes, but may be constrained by legislative factors. On the other hand, some of the 'links' (or at least the 'groupings') may need to be artificial at this stage. At a later stage more meaningful links may be found, or the factors may then be rejected from the scenarios. In the most theoretical approaches to the subject, probabilities are attached to the event strings. This is difficult to achieve, however, and generally adds little – except complexity – to the outcomes.
This is probably the most (conceptually) difficult step. It is where managers' 'intuition' – their ability to make sense of complex patterns of 'soft' data which more rigorous analysis would be unable to handle – plays an important role. There are, however, a range of techniques which can help; and again the Post-It-Notes approach is especially useful:
Thus, the participants try to arrange the drivers which have emerged from the first stage into groups which seem to make sense to them. Initially there may be many small groups. The intention should, therefore, be to gradually merge these (often having to reform them from new combinations of drivers to make the bigger groups work). The aim of this stage is eventually to make 6–8 larger groupings: 'mini-scenarios'. Here the Post-It Notes may be moved dozens of times over the length – perhaps several hours or more – of each meeting. While this process is taking place the participants will probably want to add new topics – so more Post-It Notes are added to the wall. In the opposite direction, the unimportant ones are removed (possibly to be grouped, again as an 'audit trail', on another wall). More important, the 'certain' topics are also removed from the main area of debate – in this case they must be grouped in a clearly labelled area of the main wall.
As the clusters – the 'mini-scenarios' – emerge, the associated notes may be stuck to each other rather than individually to the wall; which makes it easier to move the clusters around (and is a considerable help during the final, demanding stage to reducing the scenarios to two or three).
The great benefit of using Post-It Notes is that there is no bar to participants changing their minds. If they want to rearrange the groups – or simply to go back (iterate) to an earlier stage – then they strip them off and put them in their new position.
Step 3 – produce initial mini-scenarios
The outcome of the previous step is usually between seven and nine logical groupings of drivers. This is usually easy to achieve. The 'natural' reason for this may be that it represents some form of limit as to what participants can visualise.
Having placed the factors in these groups, the next action is to work out, very approximately at this stage, what is the connection between them. What does each group of factors represent?
Step 4 – reduce to two or three scenarios
The main action, at this next stage, is to reduce the seven to nine mini-scenarios/groupings detected at the previous stage to two or three larger scenarios.
There is no theoretical reason for reducing to just two or three scenarios, only a practical one. It has been found that the managers who will be asked to use the final scenarios can only cope effectively with a maximum of three versions! Shell started, more than three decades ago, by building half a dozen or more scenarios – but found that the outcome was that their managers selected just one of these to concentrate on. As a result, the planners reduced the number to three, which managers could handle easily but could no longer so easily justify the selection of only one! This is the number now recommended most frequently in most of the literature.
Complementary scenarios
As used by Shell, and as favoured by a number of the academics, two scenarios should be complementary; the reason being that this helps avoid managers 'choosing' just one 'preferred' scenario – and lapsing once more into single-track forecasting (negating the benefits of using 'alternative' scenarios to allow for alternative, uncertain futures). This is, however, a potentially difficult concept to grasp, where managers are used to looking for opposites: a good and a bad scenario, say, or an optimistic one versus a pessimistic one – and indeed this is the approach (for small businesses) advocated by Foster. In the Shell approach, the two scenarios are required to be equally likely, and between them to cover all the 'event strings'/drivers. Ideally they should not be obvious opposites, which might once again bias their acceptance by users, so the choice of 'neutral' titles is important. For example, Shell's two scenarios at the beginning of the 1990s were titled 'Sustainable World' and 'Global Mercantilism'. In practice, this requirement, perhaps surprisingly, posed few problems for the great majority – 85% – of those in one survey, who easily produced 'balanced' scenarios; the remaining 15% mainly fell into the expected trap of 'good versus bad'. Even relatively complex scenarios can be made complementary to each other without any great effort from the teams involved, with both resulting scenarios developed further by all involved, without unnecessary focusing on one or the other.
Testing
Having grouped the factors into these two scenarios, the next step is to test them, again, for viability. Do they make sense to the participants? This may be in terms of logical analysis, but it may also be in terms of intuitive 'gut-feel'. Once more, intuition often may offer a useful – if academically less respectable – vehicle for reacting to the complex and ill-defined issues typically involved. If the scenarios do not intuitively 'hang together', why not? The usual problem is that one or more of the assumptions turns out to be unrealistic in terms of how the participants see their world. If this is the case then you need to return to the first step – the whole scenario planning process is above all an iterative one (returning to its beginnings a number of times until the final outcome makes the best sense).
Step 5 – write the scenarios
The scenarios are then 'written up' in the most suitable form. The flexibility of this step often confuses participants, for they are used to forecasting processes which have a fixed format. The rule, though, is that you should produce the scenarios in the form most suitable for use by the managers who are going to base their strategy on them. Less obviously, the managers who are going to implement this strategy should also be taken into account. They will also be exposed to the scenarios, and will need to believe in these. This is essentially a 'marketing' decision, since it will be very necessary to 'sell' the final results to the users. On the other hand, a not inconsiderable consideration may be to use the form the author also finds most comfortable. If the form is alien to him or her the chances are that the resulting scenarios will carry little conviction when it comes to the 'sale'.
Most scenarios will, perhaps, be written in word form (almost as a series of alternative essays about the future), especially since they will almost inevitably be qualitative – hardly surprising, as managers and their audiences will probably use words in their day-to-day communications. Some, though, use an expanded series of lists, and some enliven their reports by adding fictional 'characters' to the material – perhaps taking literally the idea that they are stories about the future – though they are still clearly intended to be factual. On the other hand, they may include numeric data and/or diagrams – as those of Shell do (and in the process gain by the acid test of more measurable 'predictions').
Step 6 – identify issues arising
The final stage of the process is to examine these scenarios to determine the most critical outcomes: the 'branching points' relating to the 'issues' which will have the greatest impact (potentially generating 'crises') on the future of the organisation. The subsequent strategy will have to address these – since the normal approach to strategy deriving from scenarios is one which aims to minimise risk by being 'robust' (that is, it will safely cope with all the alternative outcomes of these 'life and death' issues), rather than aiming for performance (profit) maximisation by gambling on one outcome.
Use of scenarios
Scenarios may be used in a number of ways:
a) Containers for the drivers/event strings
Most basically, they are a logical device, an artificial framework, for presenting the individual factors/topics (or coherent groups of these) so that these are made easily available for managers' use – as useful ideas about future developments in their own right – without reference to the rest of the scenario. It should be stressed that no factors should be dropped, or even given lower priority, as a result of producing the scenarios. In this context, which scenario contains which topic (driver), or issue about the future, is irrelevant.
b) Tests for consistency
At every stage it is necessary to iterate, to check that the contents are viable and make any necessary changes to ensure that they are; here the main test is to see if the scenarios seem to be internally consistent – if they are not then the writer must loop back to earlier stages to correct the problem. Though it has been mentioned previously, it is important to stress once again that scenario building is ideally an iterative process. It usually does not just happen in one meeting – though even one attempt is better than none – but takes place over a number of meetings as the participants gradually refine their ideas.
c) Positive perspectives
Perhaps the main benefit deriving from scenarios, however, comes from the alternative 'flavours' of the future their different perspectives offer. It is a common experience, when the scenarios finally emerge, for the participants to be startled by the insight they offer – as to what the general shape of the future might be. At this stage it is no longer a theoretical exercise but becomes a genuine framework (or rather a set of alternative frameworks) for dealing with that future.
Scenario planning compared to other techniques
Scenario planning differs from contingency planning, sensitivity analysis and computer simulations.
Contingency planning is a "what if" tool that takes into account only one uncertainty. Scenario planning, by contrast, considers combinations of uncertainties in each scenario. Planners also try to select especially plausible but uncomfortable combinations of social developments.
Sensitivity analysis analyzes changes in one variable only, which is useful for simple changes, while scenario planning tries to expose policy makers to significant interactions of major variables.
While scenario planning can benefit from computer simulations, scenario planning is less formalized, and can be used to make plans for qualitative patterns that show up in a wide variety of simulated events.
In recent years, computer-supported morphological analysis has been employed as an aid in scenario development by the Swedish Defence Research Agency in Stockholm. This method makes it possible to create a multi-variable morphological field which can be treated as an inference model – thus integrating scenario planning techniques with contingency analysis and sensitivity analysis.
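A morphological field of this kind can be sketched in a few lines of code. The following is a toy illustration of the general idea – enumerate every combination of parameter values, then strike out combinations containing a pair judged inconsistent – and is not a description of the Swedish Defence Research Agency's actual tooling; the parameters and the exclusion are invented for the example.

from itertools import product

field = {
    "economy":    ["boom", "stagnation"],
    "regulation": ["strict", "lax"],
    "energy":     ["cheap", "expensive"],
}

# Pairs of values judged mutually inconsistent in a cross-consistency assessment.
inconsistent = {frozenset({"boom", "expensive"})}

def is_consistent(combo):
    # A combination survives if none of its value pairs is excluded.
    return all(
        frozenset({a, b}) not in inconsistent
        for i, a in enumerate(combo)
        for b in combo[i + 1:]
    )

for combo in filter(is_consistent, product(*field.values())):
    print(dict(zip(field, combo)))  # each surviving cell of the field

Of the eight raw combinations here, the two containing both "boom" and "expensive" are struck out, leaving six internally consistent candidate scenarios.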
Scenario analysis
Scenario analysis is a process of analyzing future events by considering alternative possible outcomes (sometimes called "alternative worlds"). Thus, scenario analysis, which is one of the main forms of projection, does not try to show one exact picture of the future. Instead, it presents several alternative future developments. Consequently, a scope of possible future outcomes is observable – and not only the outcomes, but also the development paths leading to them. In contrast to prognoses, scenario analysis is not based on extrapolation of the past or the extension of past trends. It does not rely on historical data and does not expect past observations to remain valid in the future. Instead, it tries to consider possible developments and turning points, which may only be connected to the past. In short, several scenarios are fleshed out in a scenario analysis to show possible future outcomes. Each scenario normally combines optimistic, pessimistic, and more and less probable developments; however, all aspects of the scenarios should be plausible. Although the number is much debated, experience has shown that around three scenarios are most appropriate for further discussion and selection; more scenarios risk making the analysis overly complicated. Scenarios are often confused with other tools and approaches to planning, such as contingency planning and sensitivity analysis (see the comparison above).
Principle
Scenario-building is designed to allow improved decision-making by allowing deep consideration of outcomes and their implications.
A scenario is a tool used during requirements analysis to describe a specific use of a proposed system; scenarios capture the system as viewed from the outside.
Scenario analysis can also be used to illuminate "wild cards." For example, analysis of the possibility of the earth being struck by a meteor suggests that whilst the probability is low, the damage inflicted is so high that the event is much more important (threatening) than the low probability (in any one year) alone would suggest. However, this possibility is usually disregarded by organizations using scenario analysis to develop a strategic plan since it has such overarching repercussions.
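The arithmetic behind such a "wild card" is worth making explicit. In this back-of-the-envelope sketch, all probabilities and damage figures are invented for illustration only:

events = {
    # name: (annual probability, damage if it occurs)
    "major meteor strike": (1e-7, 1e13),   # rare but catastrophic
    "supplier failure":    (0.05, 1e7),    # routine and modest
}

for name, (p, damage) in events.items():
    print(f"{name}: expected annual loss = {p * damage:,.0f}")
# The meteor's 1e-7 * 1e13 = 1,000,000 exceeds the supplier's
# 0.05 * 1e7 = 500,000: the "negligible" event dominates once
# weighted by its damage, which is the point made in the text above.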
Combination of Delphi and scenarios
Scenario planning concerns planning based on the systematic examination of the future by picturing plausible and consistent images of that future. The Delphi method, by contrast, attempts to develop systematically an expert opinion consensus concerning future developments and events. It is a judgmental forecasting procedure in the form of an anonymous, written, multi-stage survey process, where feedback of group opinion is provided after each round.
Numerous researchers have stressed that both approaches are best suited to be combined. Due to their process similarity, the two methodologies can be easily combined. The output of the different phases of the Delphi method can be used as input for the scenario method and vice versa. A combination makes a realization of the benefits of both tools possible. In practice, usually one of the two tools is considered the dominant methodology and the other one is added on at some stage.
The variant that is most often found in practice is the integration of the Delphi method into the scenario process (see, e.g., Rikkonen, 2005; von der Gracht, 2008). Authors refer to this type as Delphi-scenario (writing), expert-based scenarios, or Delphi-panel-derived scenarios. Von der Gracht (2010) is a scientifically valid example of this method. Since scenario planning is "information hungry", Delphi research can deliver valuable input for the process. There are various types of Delphi output that can be used as input for scenario planning. Researchers can, for example, identify relevant events or developments and, based on expert opinion, assign probabilities to them. Moreover, expert comments and arguments provide deeper insights into relationships of factors that can, in turn, be integrated into scenarios afterwards. Delphi also helps to identify extreme opinions and dissent among the experts. Such controversial topics are particularly suited for extreme scenarios or wildcards.
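As a minimal sketch of how one Delphi round can feed a scenario exercise (the events and expert estimates below are hypothetical): aggregate each event's probability estimates across the panel and flag high-dissent items, which, as noted above, are particularly suited to extreme scenarios or wild cards.

from statistics import median, quantiles

estimates = {
    # event: one probability estimate per expert from a Delphi round
    "carbon tax doubled by 2035": [0.20, 0.30, 0.25, 0.70, 0.30],
    "fusion power commercial":    [0.05, 0.10, 0.05, 0.10, 0.08],
}

for event, probs in estimates.items():
    q1, _, q3 = quantiles(probs, n=4)   # quartiles of the panel's answers
    dissent = q3 - q1                   # interquartile range as a dissent measure
    label = "high dissent: extreme-scenario candidate" if dissent > 0.2 else "consensus"
    print(f"{event}: median={median(probs):.2f}, IQR={dissent:.2f} -> {label}")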
In his doctoral thesis, Rikkonen (2005) examined the utilization of Delphi techniques in scenario planning and, concretely, in construction of scenarios. The author comes to the conclusion that the Delphi technique has instrumental value in providing different alternative futures and the argumentation of scenarios. It is therefore recommended to use Delphi in order to make the scenarios more profound and to create confidence in scenario planning. Further benefits lie in the simplification of the scenario writing process and the deep understanding of the interrelations between the forecast items and social factors.
Critique
While there is utility in weighting hypotheses and branching potential outcomes from them, reliance on scenario analysis without reporting some parameters of measurement accuracy (standard errors, confidence intervals of estimates, metadata, standardization and coding, weighting for non-response, error in reportage, sample design, case counts, etc.) is a poor second to traditional prediction. Especially in “complex” problems, factors and assumptions do not correlate in lockstep fashion. Once a specific sensitivity is undefined, it may call the entire study into question.
It is faulty logic to think, when arbitrating results, that a better hypothesis will render empiricism unnecessary. In this respect, scenario analysis tries to defer statistical laws (e.g., Chebyshev's inequality), because the decision rules occur outside a constrained setting. Outcomes are not permitted to "just happen"; rather, they are forced to conform to arbitrary hypotheses ex post, and therefore there is no footing on which to place expected values. In truth, there are no ex ante expected values, only hypotheses, and one is left wondering about the roles of modeling and data decision. In short, comparisons of "scenarios" with outcomes are biased by not deferring to the data; this may be convenient, but it is indefensible.
"Scenario analysis" is no substitute for complete and factual exposure of survey error in economic studies. In traditional prediction, given the data used to model the problem, with a reasoned specification and technique, an analyst can state, within a certain percentage of statistical error, the likelihood of a coefficient being within a certain numerical bound. This exactitude need not come at the expense of very disaggregated statements of hypotheses. The R packages WhatIf (and, in this context, MatchIt and Zelig) have been developed for causal inference and to evaluate counterfactuals. These programs have fairly sophisticated treatments for determining model dependence, in order to state with precision how sensitive the results are to models not based on empirical evidence.
Critique of Shell's use of scenario planning
In the 1970s, many energy companies were surprised by both environmentalism and the OPEC cartel, and thereby lost billions of dollars of revenue through mis-investment. The dramatic financial effects of these changes led at least one organization, Royal Dutch Shell, to implement scenario planning. Analysts of this company publicly estimated that this planning process made their company the largest in the world. However, other observers of Shell's use of scenario planning have suggested that few if any significant long-term business advantages accrued to Shell from the use of scenario methodology. While the intellectual robustness of Shell's long-term scenarios was seldom in doubt, their actual practical use was seen as minimal by many senior Shell executives. A Shell insider has commented: "The scenario team were bright and their work was of a very high intellectual level. However neither the high level 'Group scenarios' nor the country level scenarios produced with operating companies really made much difference when key decisions were being taken".
The use of scenarios was audited by Arie de Geus's team in the early 1980s, and they found that the decision-making processes following the scenarios, rather than the scenarios themselves, were the primary cause of the lack of strategic implementation. Many practitioners today spend as much time on the decision-making process as on creating the scenarios themselves.
See also
Decentralized planning (economics)
Hoshin Kanri#Hoshin planning
Futures studies
Futures techniques
Global Scenario Group
Jim Dator (Hawaii Research Center for Futures Studies)
Resilience (organizational)
Robust decision-making
Scenario (computing)
Similar terminology
Feedback loop
System dynamics (also known as Stock and flow)
System thinking
Analogous concepts
Delphi method, including Real-time Delphi
Game theory
Horizon scanning
Morphological analysis
Rational choice theory
Stress testing
Twelve leverage points
Examples
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Dynamic Analysis and Replanning Tool
Energy modeling – the process of building computer models of energy systems
Pentagon Papers
References
Additional Bibliography
D. Erasmus, The future of ICT in financial services: The Rabobank ICT scenarios (2008).
M. Godet, Scenarios and Strategic Management, Butterworths (1987).
M. Godet, From Anticipation to Action: A Handbook of Strategic Prospective. Paris: Unesco, (1993).
Adam Kahane, Solving Tough Problems: An Open Way of Talking, Listening, and Creating New Realities (2007)
H. Kahn, The Year 2000, Calmann-Lévy (1967).
Herbert Meyer, Real World Intelligence, Weidenfeld & Nicolson, 1987.
National Intelligence Council (NIC), Mapping the Global Future, 2005.
M. Lindgren & H. Bandhold, Scenario Planning – The Link Between Future and Strategy, Palgrave Macmillan, 2003.
G. Wright & G. Cairns, Scenario Thinking: Practical Approaches to the Future, Palgrave Macmillan, 2011.
A. Schuehly, F. Becker & F. Klein, Real Time Strategy: When Strategic Foresight Meets Artificial Intelligence, Emerald, 2020.
A. Ruser, Sociological Quasi-Labs: The Case for Deductive Scenario Development, Current Sociology Vol63(2): 170-181, https://journals.sagepub.com/doi/pdf/10.1177/0011392114556581
Scientific journals
Foresight
Futures
Futures & Foresight Science
Journal of Futures Studies
Technological Forecasting and Social Change
External links
Wikifutures wiki, Scenario page – the wiki also includes several scenarios (GFDL licensed)
ScenarioThinking.org – more than 100 scenarios developed on various global issues, on a wiki for public use
Shell Scenarios Resources – resources on what scenarios are, Shell's new and old scenarios, an explorer's guide, and other scenario resources
Learn how to use Scenario Manager in Excel to do Scenario Analysis
Systems Innovation (SI) courseware
Further reading
"Learning from the Future: Competitive Foresight Scenarios", Liam Fahey and Robert M. Randall, Published by John Wiley and Sons, 1997, , Google book
"Shirt-sleeve approach to long-range plans.", Linneman, Robert E, Kennell, John D.; Harvard Business Review; Mar/Apr77, Vol. 55 Issue 2, p141
Business models
Futures techniques
Military strategy
Risk analysis
Risk management
Strategic management
Systems thinking
Systems engineering
Types of marketing
Cognitive development
Cognitive development is a field of study in neuroscience and psychology focusing on a child's development in terms of information processing, conceptual resources, perceptual skill, language learning, and other aspects of the developed adult brain and cognitive psychology. Qualitative differences between how a child processes their waking experience and how an adult processes their waking experience are acknowledged (such as object permanence, the understanding of logical relations, and cause-effect reasoning in school-age children). Cognitive development is defined as the emergence of the ability to consciously cognize, understand, and articulate one's understanding in adult terms. Cognitive development is how a person perceives, thinks, and gains understanding of their world through the interaction of genetic and learning factors. There are four stages to cognitive information development: reasoning, intelligence, language, and memory. These stages begin when a baby is about 18 months old: playing with toys, listening to their parents speak, watching TV – anything that catches their attention helps build their cognitive development.
Jean Piaget was a major force establishing this field, forming his "theory of cognitive development". Piaget proposed four stages of cognitive development: the sensorimotor, preoperational, concrete operational, and formal operational periods. Many of Piaget's theoretical claims have since fallen out of favor, but his description of the most prominent changes in cognition with age is generally still accepted today (e.g., early perception depends on concrete, external actions; later, an abstract understanding of observable aspects of reality can be captured, leading to the discovery of underlying abstract rules and principles, usually starting in adolescence).
In recent years, however, alternative models have been advanced, including information-processing theory, neo-Piagetian theories of cognitive development (which aim to integrate Piaget's ideas with more recent models and concepts in developmental and cognitive science), theoretical cognitive neuroscience, and social-constructivist approaches. Another such model is Bronfenbrenner's ecological systems theory. A major controversy in cognitive development has been "nature versus nurture", i.e., the question of whether cognitive development is mainly determined by an individual's innate qualities ("nature") or by their personal experiences ("nurture"). However, most experts now recognize that this is a false dichotomy: there is overwhelming evidence from the biological and behavioral sciences that, from the earliest points in development, gene activity interacts with events and experiences in the environment. While naturalists are convinced of the power of genetic mechanisms, knowledge from disciplines such as comparative psychology, molecular biology, and neuroscience supports an ecological component in launching cognition (see the section "Beginning of Cognition" below).
History
Jean Piaget is inextricably linked to cognitive development, as he was the first to systematically study developmental processes. However, although Piaget was the first to develop a systematic study of cognitive development, he was not the first to theorize about it.
Jean-Jacques Rousseau wrote Emile, or On Education in 1762. He discusses childhood development as happening in three stages. In the first stage, up to age 12, the child is guided by their emotions and impulses. In the second stage, ages 12–16, the child's reason starts to develop. In the third and final stage, age 16 and up, the child develops into an adult.
James Sully wrote several books on childhood development, including Studies of Childhood in 1895 and Children's Ways in 1897. He used a detailed observational study method with children. Contemporary research in child development repeats observations and observational methods summarized by Sully in Studies of Childhood, such as the mirror technique.
Sigmund Freud developed the theory of psychosexual development, which indicates children must pass through several stages as they develop their cognitive skills.
Maria Montessori began her career working with mentally disabled children in 1897, then conducted observation and experimental research in elementary schools. She wrote The Discovery of the Child in 1950, which described the Montessori method of education. She discussed four planes of development: birth to 6 years, 6 to 12, 12 to 18, and 18 to 24. The Montessori method now has three developmentally meaningful age groups: 2–2.5 years, 2.5–6, and 6–12. She was working on human behavior in older children but only published lecture notes on the subject.
Arnold Gesell was the creator of the maturational theory of development. Gesell said that development occurs due to biological hereditary features such as genetics and children will reach developmental milestones when they are ready to do so in a predictable sequence. Because of his theory of development, he devised a developmental scale that is used today called the Gesell Developmental Schedule (GDS) that provides parents, teachers, doctors, and other pertinent people with an overview of where an infant or child falls on the developmental spectrum.
Erik Erikson was a neo-Freudian who focused on how children develop personality and identity. Although a contemporary of Freud, Erikson placed a larger focus on the social experiences that occur across the lifespan, as opposed to childhood exclusively, in explaining how personality and identity emerge. His framework uses eight systematic stages that all children must pass through.
Urie Bronfenbrenner devised the ecological systems theory, which identifies various levels of a child's environment. The theory focuses primarily on the quality and context of a child's environment. Bronfenbrenner suggested that as a child grows older, their interaction among the various levels of their environment grows more complex as cognitive abilities expand.
Lawrence Kohlberg wrote the theory of stages of moral development, which extended Piaget's findings of cognitive development and showed that they continue through the lifespan. Kohlberg's six stages follow Piaget's constructivist requirements in that stages cannot be skipped and regression to an earlier stage is very rare. Notable works: Moral Stages and Moralization: The Cognitive-Developmental Approach (1976) and Essays on Moral Development (1981).
Lev Vygotsky's theory is based on social learning as the most important aspect of cognitive development. In Vygotsky's theory, adults are very important for young children's development. They help children learn through mediation, which is modeling and explaining concepts. Together, adults and children master concepts of their culture and activities. Vygotsky believed we acquire complex mental activities through social learning. A significant part of Vygotsky's theory is based on the zone of proximal development, where he believed the most effective learning takes place. The zone of proximal development is what a child cannot accomplish alone but can accomplish with the help of a more knowledgeable other (MKO). Vygotsky also believed culture is a very important part of cognitive development, including the language, writing, and counting systems used in that culture. Another aspect of Vygotsky's theory is private speech, which is when a person talks to themselves in order to help themselves problem-solve. Scaffolding, or providing support to a child and then slowly removing it to allow the child to do more on their own over time, is also an aspect of Vygotsky's theory.
Beginning of Cognition
In cognitive development, the essential issue in beginning cognition is how the nervous system grasps perception and shapes intentionality in the sensorimotor stage (or before), when organisms demonstrate only simple reflexes (see the articles on perception, cognition, the binding problem, and multisensory integration). The significance of this knowledge is that the mode of cognizing at a stage without communication and abstract thinking, a prerequisite of social reality formation, determines the development of everything from cooperative interactions and knowledge assimilation to moral identity and cultural evolution, which together underpin the building of societies (see also social cognition and collective behaviour). The contemporary academic discussion of the controversy in cognitive development (whether cognitive development is mainly determined by an individual's innate qualities or by personal experiences) is still in progress.
Many influential scientists argue that the genetic code is no more than a rule of causal specificity, based on the fact that cells use nucleic acids as templates for the primary structure of proteins; on this view, it does not follow that DNA contains the information for phenotypic design. The epigenetic approach to human psychological development, according to which cascading phenotypic effects are not encoded directly in the genes, contrasts sharply with many so-called nativist approaches. Opponents of innate knowledge discuss four problems in the appearance of the perception of objects.
The binding problem – According to cognitive psychologist Anne Treisman, the binding problem can be divided into three separate problems. (1) How are relevant elements that should be related as a whole selected and separated from elements that belong to other objects, ideas, or events? (2) How is the binding encoded so it can be transferred to other brain systems and used? (3) How are the correct relationships between related elements within the same object defined? This problem is also connected to the problem of multisensory integration in perception.
The perception stability problem – According to Igor Val Danilov, research professor at Liepaja University, newborns and infants cannot capture the same picture of the environment as adults because of their immature sensory systems. They cannot sense environmental stimuli from social phenomena to the same extent as adults. The outcomes of processing similar sensory stimuli in immature and mature organisms differ, so the corresponding holistic representations of objects can hardly occur in these organisms.
The excitatory inputs problem – According to the received view in the cognitive sciences, cognition develops due to experience-dependent neuronal plasticity. Neuronal plasticity refers to the capacity of the nervous system to modify itself, functionally and structurally, in response to experience and injury. However, the structural organization of excitatory inputs supporting spike-timing-dependent plasticity remains unknown. How is the relation between a specific sensory stimulus and the appropriate structural organization of the excitatory inputs in specific neurons formed?
The problem of morphogenesis – Cell actions during embryo formation, including shape changes, cell contact remodeling, cell migration, cell division, and cell extrusion, need control over cell mechanics. This complex dynamical process is associated with protrusive, contractile, and adhesive forces and hydrostatic pressure, as well as material properties of cells that dictate how cells respond to active stresses. Precise coordination of all cells is a necessary condition. Moreover, such a complex dynamical process likely requires clear parameters of the final biological structure – a complete developmental program with a template for accomplishing it. Collinet and Lecuit (2021) pose a question: what forces or mechanisms at the cellular level manage four very general classes of tissue deformation, namely tissue folding and invagination, tissue flow and extension, tissue hollowing, and, finally, tissue branching? They challenge the nativist notion that shape is fully encoded and determined by genes: how are cell mechanics and associated cell behaviors robustly organized in space and time during tissue morphogenesis? They argue that not only gene expression and the resulting biochemical cues but also mechanics and geometry act as sources of morphogenetic information to ultimately define the time and length scales of the cell behaviors driving morphogenesis. Thus, it is not only the interaction of gene activity with events and experiences in the environment that contributes to the formation of tissues in morphogenesis. Because the structures of the nervous system underlie everything that makes us human, the formation of neural tissues in a particular way is essential for shaping cognitive functions. According to research professor Igor Val Danilov, such a complex process of shaping the determined structure of the nervous system requires a complete developmental program with a template for accomplishing the final biological structure of the nervous system. Indeed, because even the processes of cell coupling that shape a nervous system during embryonic development challenge the naturalistic approach, how the nervous system grasps perception and shapes intentionality independently, i.e., without any template, seems even more complicated.
So, the fact that gene activity interacts with events and experiences in the environment (as noted above) may not fully explain the integrative complexity of intentionality-perception development at the beginning of cognitive development. At present, the Shared intentionality hypothesis is the only one that attempts to explain the neurophysiological processes at the beginning of cognitive development at different levels of interaction, from interpersonal dynamics to neuronal interactions; it also addresses the problems noted above. Professor of psychology Michael Tomasello hypothesized that social bonds between children and caregivers gradually increase through the essential motive force of shared intentionality, beginning from birth. The notion of shared intentionality, introduced by Tomasello, was developed further by research professor Igor Val Danilov, who extended it to the intrauterine period. The shared intentionality approach also argues that the "innate sensitivity to specific patterns of information" mentioned in the section "Core Knowledge Theory" below is itself an outcome of shared intentionality with caregivers, who obviously participated in the experiments.
Jean Piaget
Jean Piaget was the first psychologist and philosopher to brand this type of study as "cognitive development". Other researchers, in multiple disciplines, had studied development in children before him, but Piaget is often credited as the first to make a systematic study of cognitive development and as having given the field its name. His main contribution is the stage theory of child cognitive development. He also published observational studies of cognition in children and created a series of simple tests to reveal different cognitive abilities. Piaget believed that people move through stages of development that allow them to think in new, more complex ways.
Criticism
Many of Piaget's claims have fallen out of favor. For example, he claimed that young children cannot conserve numbers, arguing that a child cannot conserve number without understanding one-to-one correspondence. However, further experiments showed that children did not really understand what was being asked of them. When the experiment is done with candies, and the children are asked which set they want rather than having to tell an adult which is more, they show no confusion about which group has more items.
Piaget's theory of cognitive development ends at the formal operational stage, which is usually developed in early adulthood. It does not take into account later stages of adult cognitive development as described by, for example, Harvard University professor Robert Kegan.
Additionally, Piaget largely ignored the effects of social and cultural upbringing on stages of development, because he only examined children from Western societies. This matters, as different societies and cultures provide different early childhood experiences. For example, individuals in nomadic tribes may struggle with number counting and object counting, and certain cultures have specific activities and events that are common at a younger age, which can affect aspects such as object permanence. This means that children in one society may achieve a stage such as the formal operational stage while children of the same age in another society remain in the concrete operational stage.
Stages
Sensorimotor stage
Piaget believed that infants entered a sensorimotor stage which lasts from birth until age 2. In this stage, individuals use their senses to investigate and interact with their environment. Through this they develop coordination between the sensory input and motor responses. Piaget also theorized that this stage ended with the acquisition of object permanence and the emergence of symbolic thought.
This view collapsed in the 1980s, when research showed that infants as young as five months are able to represent out-of-sight objects, as well as their properties, such as number and rigidity.
Preoperational stage
Piaget believed that children entered a preoperational stage from roughly age 2 until age 7. This stage involves the development of symbolic thought, which manifests in children's increased ability to play pretend. The stage also involves language acquisition, but children at this stage are held to be unable to understand complex logic or to manipulate information.
Subsequent work, suggesting that preschoolers are indeed capable of taking others' perspectives into account and of reasoning about abstract relationships, including causal relationships, marked the demise of this aspect of stage theory as well.
Concrete operational stage
Piaget believed that the concrete operational stage spanned roughly from age 6 through age 12. This stage is marked by the development and achievement of skills such as conservation, classification, seriation, and spatial reasoning.
Work suggesting that much younger children reason about abstract ideas including kinds, logical operators, and causal relationships rendered this aspect of stage theory obsolete.
Formal operational stage
Piaget believed that the formal operational stage spans roughly from age 12 through adulthood, and is marked by the ability to apply mental operations to abstract ideas.
Erik Erikson
Erikson worked in the Freudian tradition but, unlike Freud, focused on biological, psychological, and social factors in human development. Each stage is rooted in some kind of competence, or perceived ability to do things.
Each stage is defined by two conflicting psychological tendencies, and the traits that develop in a stage depend on how much of each tendency is experienced. Virtues develop in healthy circumstances and maladaptations develop in unhealthy circumstances. The model consists of eight stages. While the conflicting tendencies may appear to be good versus bad, they can be considered a balance, and most healthy individuals experience some of each.
Stage 1 – Infancy: Trust Versus Mistrust
A baby has very little ability to do anything for itself. As such, infants develop according to whether they learn to trust or mistrust the world around them. The virtue that arises during this stage is hope, and the maladaptation is withdrawal.
Stage 2 – Early Childhood: Autonomy Versus Shame
As a child starts to explore the world, the conflict they experience is between autonomy, a feeling of being able to do things themselves, and shame or doubt, a feeling of being unable to do things themselves and a fear of making mistakes. The virtue that arises during this period is will, suggesting control over one's actions. The maladaptation for this stage is compulsion, or lack of control over one's actions.
Stage 3 – Play Age: Initiative Versus Guilt
As a child grows out of the stage of autonomy versus shame, they experience the conflict of initiative versus guilt: initiative, the ability to act in a situation, against guilt, feeling bad about one's actions or feeling incapable of acting. The virtue that develops in this stage is purpose, and the maladaptation is inhibition.
Stage 4 – School Age: Industry Versus Inferiority
As a child's awareness of their effect on the world grows, they come to the conflict of industry versus inferiority: industry, meaning the ability and willingness to interact proactively with the world, and inferiority, meaning incapability, or perceived incapability, of interacting with the world. The virtue that is learned in this stage is competence, and the maladaptation is inertia, or passivity.
Stage 5 – Adolescence: Identity Versus Identity Confusion
As a child grows into adolescence, their ability to interact with the world starts to interact with their perceptions of who they are, and they find themselves in a conflict between identity and identity confusion. Identity means knowing who they are and developing their own sense of right and wrong; identity confusion means confusion over who they are and what right and wrong mean to them. The virtue that is developed is fidelity, and the maladaptation is repudiation.
Stage 6 – Young Adulthood: Intimacy Versus Isolation
During young adulthood, people find themselves in a place where they are looking for belonging in a small number of close relationships. Intimacy suggests finding very close relationships with other people and isolation is a lack of such a connection. The virtue that can arise from this is love and the maladaptation is distantiation.
Stage 7 – Adulthood: Generativity Versus Stagnation
In this stage of life, people find that, along with accomplishing personal goals, they either give to the next generation, whether as a mentor or a parent, or they turn toward themselves and keep a distance from others. The virtue that arises in this stage is care, and the maladaptation is rejectivity.
Stage 8 – Old Age: Integrity Versus Despair
Those in the twilight of their lives look back and either are satisfied with their life's work or feel great regret. This satisfaction or regret becomes a large part of their identity at the end of their lives. The virtue that develops is wisdom, and the maladaptation is disdain.
Current Theories of Cognitive Development
Core Knowledge Theory
Research suggests that children have an innate sensitivity to specific patterns of information, referred to as core domains, while empiricists study how such skills may be learned in so short a time. The debate is over whether these systems are learned by general-purpose learning devices or by domain-specific cognition. Moreover, many modern cognitive developmental psychologists, recognizing that the term "innate" does not square with modern knowledge about epigenesis, neurobiological development, or learning, favor a non-nativist framework. Researchers who discuss "core systems" often speculate about differences in thinking and learning between the proposed domains. The discussion of "core knowledge" theory focuses on a few main systems, including agents, objects, numbers, and navigation.
Agents
It is speculated that a piece of infants' core knowledge lies in their ability to abstractly represent agents. Agents are actors, human or otherwise, who process events and situations and select actions based on goals and beliefs. Children expect the actions of agents to be goal-directed and efficient, and understand that actions have costs, such as time, energy, or effort. Importantly, children are able to differentiate between agents and inanimate objects, demonstrating a deeper understanding of the concept of an agent.
Objects
Within the theorized systems, infants’ core knowledge of objects has been one of the most extensively studied. These studies suggest that young infants appear to have an early expectation of object solidity, namely understanding that objects cannot pass through one another. Similarly, they demonstrate an awareness of object continuity, expecting objects to move on continuous paths rather than teleporting or discontinuously changing their locations. They also expect objects to follow the laws of gravity.
Numbers
Evidence suggests that humans utilize two core systems for number representation: approximate representations and precise representations. The approximate number system helps to capture the relationship between quantities by estimating numerical magnitudes. This system becomes more precise with age. The second system helps to precisely monitor small groups (limited to around 3 for infants) of individual objects and accurately represent those numerical quantities.
Place
Very young children appear to have some skill in navigation. This basic ability to infer the direction and distance of unseen locations develops in ways that are not entirely clear. However, there is some evidence that it involves the development of complex language skills between 3 and 5 years. Also, there is evidence that this skill depends importantly on visual experience, because congenitally blind individuals have been found to have impaired abilities to infer new paths between familiar locations.
One of the original nativist versus empiricist debates was over depth perception. There is some evidence that children less than 72 hours old can perceive such complex things as biological motion. However, it is unclear how visual experience in the first few days contributes to this perception. There are far more elaborate aspects of visual perception that develop during infancy and beyond.
Shared Intentionality
This approach integrates externalism (a group of positions in the philosophy of mind: embodied cognition, embodied embedded cognition, enactivism, extended mind, and situated cognition) with the empiricist idea that cognition begins only with learning in the environment. According to the externalist approach, communicative symbols are encoded into the local topological properties of neuronal maps, which reflect a dynamical action pattern. The sensorimotor neuronal network enables pairing a relevant cue with a particular symbol saved in the sensorimotor structures and processes, which reveals embodied meanings. In this sense, the Shared intentionality theory does not contradict the Core Knowledge Theory but rather complements it.
Based on evidence of child cognitive development, experimental data from research on child behavior in the prenatal period, and advances in inter-brain neuroscience research, research professor at Liepaja University Igor Val Danilov introduced the notion of non-local neuronal coupling of the mother's and fetus's neuronal networks. The term non-local neuronal coupling refers to pre-perceptual communication provided by one biological system copying adequate ecological dynamics from another, both indwelling one environmental context. The naive actor (fetus) replicates information from the experienced agent (mother) due to the synchronization of the intrinsic processes of these dynamic systems (embodied information). This non-local neuronal coupling succeeds due to a low-frequency oscillator (the mother's heartbeats) that coordinates relevant local neuronal networks in specific subsystems of the two organisms, which already exhibit gamma activity (similar embodied information in both). The cooperative neuronal activity registered in inter-brain research, so-called mirror neurons, is probably a manifestation of this non-local neuronal coupling. In this manner, the experienced agent ensures one-directional conveyance of information about an actual cognitive event toward an organism at the simple-reflexes stage of cognitive development, without interacting through sensory signals; any sensory communication between the mother and fetus is impossible. Therefore, non-local neuronal coupling mediates environmental learning early in cognition.
The notion of non-local neuronal coupling filled a gap in knowledge both in the Core Knowledge Theory and in the externalist positions about the very beginning of cognition, a gap also exposed by the binding problem, the perception stability problem, the excitatory inputs problem, and the problem of morphogenesis. The nervous system of the young organism at the prenatal stage of development cannot alone solve the complexity of intentionality-perception development at the beginning of cognitive development. For the innate sensitivity to specific patterns of information (referred to as core domains in the Core Knowledge Theory), or for pairing a relevant cue with a particular symbol saved in the sensorimotor structures (embodied information, in externalist terms), an organism capable only of reflex responses would have to distinguish the relevant stimulus (an informative cue) from an environment filled with a cacophony of stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations. The notion of non-local neuronal coupling explains the neurophysiological processes of shared intentionality at the cellular level that reveal innate sensitivity and/or embodied meanings in young organisms during cognition. The shared intentionality approach shows how, at different levels of interaction, from interpersonal dynamics to neuronal coupling, collaborative interaction emerges in mother-child pairs for sharing the essential sensory stimulus of the actual cognitive event. Finally, research has already shown that the magnitude of shared intentionality can be assessed by emulating the mother-fetal communication model in dyads of mothers and children from 2 to 10 years old.
Key Topics of Study in Cognitive Development
Language Acquisition
A major, well-studied process and consequence of cognitive development is language acquisition. The traditional view was that this is the result of deterministic, human-specific genetic structures and processes. Other traditions, however, have emphasized the role of social experience in language learning. The relation of gene activity, experience, and language development is now recognized as incredibly complex and difficult to specify. Language development is sometimes separated into learning of phonology (the systematic organization of sounds), morphology (the structure of linguistic units: root words, affixes, parts of speech, intonation, etc.), syntax (the rules of grammar within sentence structure), semantics (the study of meaning), and discourse or pragmatics (the relations between sentences). However, all of these aspects of language knowledge, which the linguist Noam Chomsky originally posited to be autonomous or separate, are now recognized to interact in complex ways.
It was not until 1962 that bilingualism was accepted as a contributing factor to cognitive development. A number of studies have shown how bilingualism contributes to the executive function of the brain, a main center of cognitive development. According to Bialystok in "Bilingualism and the Development of Executive Function: The Role of Attention", children who are bilingual have to actively filter between their two languages to select the one they need to use, which in turn strengthens development in this center.
Other theories
Whorf's hypothesis
While working as a student of Edward Sapir, Benjamin Lee Whorf posited that a person's thinking depends on the structure and content of their social group's language. Per Whorf, language determines our thoughts and perceptions. For example, it used to be thought that the Greeks, who wrote left to right, thought differently than the Egyptians, who wrote right to left. Whorf's theory was so strict that he believed that if a word is absent in a language, then the individual is unaware of the object's existence. A version of this idea is played out in George Orwell's novel Nineteen Eighty-Four, in which the rulers progressively eliminate words from citizens' vocabulary so that they become incapable of realizing what they are missing. The Whorfian hypothesis fails to recognize that people can still be aware of a concept or item even though they lack efficient coding to quickly identify the target information.
Quine's bootstrapping hypothesis
Willard Van Orman Quine argued that there are innate conceptual biases that enable the acquisition of language, concepts, and beliefs. Quine's theory follows nativist philosophical traditions, such as the European rationalist philosophers, for example Immanuel Kant.
Neo-Piagetian theories
Neo-Piagetian theories of cognitive development emphasize the role of information-processing mechanisms in cognitive development, such as attention control and working memory. They suggest that progression along Piagetian stages or other levels of cognitive development is a function of the strengthening of these control mechanisms and of increases in working-memory capacity, rather than of the stages themselves.
Neuroscience
During development, especially the first few years of life, children show interesting patterns of neural development and a high degree of neuroplasticity. Neuroplasticity, as explained by the World Health Organization, can be summed up in three points.
Any adaptive mechanism used by the nervous system to repair itself after injury.
Any means by which the nervous system can repair individually damaged central circuits.
Any means by which the capacity of the central nervous system can adapt to new physiological conditions and environment.
The relation of brain development and cognitive development is extremely complex and, since the 1990s, has been a growing area of research.
Cognitive development and motor development may also be closely interrelated. When a person experiences a neurodevelopmental disorder and their cognitive development is disturbed, adverse effects are often seen in motor development as well. The cerebellum, the part of the brain most responsible for motor skills, has been shown to have significant importance in cognitive functions, in the same way that the prefrontal cortex has important duties not only in cognitive abilities but also in the development of motor skills. Supporting this, there is evidence of close co-activation of the neocerebellum and dorsolateral prefrontal cortex in functional neuroimaging, as well as abnormalities seen in both the cerebellum and prefrontal cortex in the same developmental disorders. In this way, motor development and cognitive development are closely interrelated, and neither can operate at full capacity when the other is impaired or delayed.
Cultural influences
From cultural psychologists' view, minds and culture shape each other. In other words, culture can influence brain structures which then influence our interpretation of the culture. These examples reveal cultural variations in neural responses:
Figure-line task
Behavioral research has shown that one's strength in independent tasks (tasks focused on influencing others or oneself) or interdependent tasks (tasks in which one changes one's own behavior to favor others) differs based on cultural context. In general, East Asian cultures are more interdependent, whereas Western cultures are more independent. Hedden et al. assessed functional magnetic resonance imaging (fMRI) responses of East Asians and Americans while they performed independent (absolute) or interdependent (relative) tasks. The study showed that participants used regions of the brain associated with attentional control when they had to perform culturally incongruent tasks. In other words, the neural paths used for the same task differed for Americans and East Asians.
Transcultural neuroimaging studies
Recent transcultural neuroimaging studies have demonstrated that one's cultural background can influence the neural activity that underlies both high-level (for example, social cognition) and low-level (for example, perception) cognitive functions. Studies demonstrated that groups that come from different cultures, or that have been exposed to culturally different stimuli, show differences in neural activity. For example, differences were found in the activity of the premotor cortex during mental calculation and of the ventromedial prefrontal cortex (VMPFC) during trait judgments of one's mother among people with different cultural backgrounds. In conclusion, since differences were found in both high-level and low-level cognition, one can assume that the brain's activity is strongly and, at least in part, constitutionally shaped by its sociocultural context.
Understanding of others' intentions
Kobayashi et al. compared American-English monolingual and Japanese-English bilingual children's brain responses in understanding others' intentions through false-belief story and cartoon tasks. They found universal activation of the bilateral ventromedial prefrontal cortex in theory of mind tasks. However, American children showed greater activity in the left inferior frontal gyrus during the tasks, whereas Japanese children showed greater activity in the right inferior frontal gyrus during the Japanese theory of mind tasks. In conclusion, these examples suggest that the brain's neural activities are not universal but are culture-dependent.
In underrepresented groups
Deaf and hard-of-hearing
Being deaf or hard-of-hearing has been noted to impact cognitive development, as hearing loss affects social development, language acquisition, and how the culture reacts to a deaf child. Cognitive development in academic achievement, reading development, language development, performance on standardized measures of intelligence, visual-spatial and memory skills, development of conceptual skills, and neuropsychological function depends upon the child's primary language of communication, either American Sign Language or English, as well as whether the child is able to communicate and use the communication modality as a language. Some research points to deficits in the development of theory of mind in children who are deaf and hard-of-hearing, which may be due to a lack of early conversational experience. Other research points to lower scores on the Wechsler Intelligence Scale for Children, especially on the Verbal Comprehension Index, due to differences in cultural knowledge acquisition.
Transgender people
Since the 2010s there has been an increase in research into how transgender people fit into cognitive development theory. At the earliest, transgender children can begin to socially transition during identity exploration. In 2015, Kristina Olson and colleagues studied transgender youth in comparison to their cisgender siblings and unrelated cisgender children. The children participated in the Implicit Association Test (IAT), a response-time measure that, in this context, assesses a child's gender identification and preference. The results showed that the transgender children's responses correlated with their expressed gender, and the behaviors of the children also corresponded to their results; for instance, the transgender boys enjoyed foods and activities typically associated with and enjoyed by cisgender boys. The researchers reported that the children were not confused, deceptive, or oppositional about their gender identity, and responded in ways typical of their gender identity.
See also
References
Further reading
Klausmeier, Herbert J. & Allen, Patricia S. "Cognitive Development of Children and Youth: A Longitudinal Study". 1978. pp. 3–5, 83, 91–93, 95–96
McShane, John. "Cognitive Development: an information processing approach". 1991. pp. 22–24, 140, 141, 156, 157
Begley, Sharon. (1996). "Your Child's Brain". Newsweek.
Cherry, Kendra. (2012). Erikson's Theory of Psychosocial Development. Psychosocial Development in Infancy and Early Childhood.
Freund, Lisa (10/05/2010). Developmental Cognitive Psychology, Behavioral Neuroscience, and Psychobiology Program. Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Davies, Kevin. (4/17/2001). Nature vs. Nurture Revisited. NOVA.
Cognitive psychology
Neuroscience
Developmental psychology | 0.764843 | 0.995995 | 0.76178 |
Learning by teaching | In the field of pedagogy, learning by teaching is a method of teaching in which students are made to learn material and prepare lessons to teach it to the other students. There is a strong emphasis on acquisition of life skills along with the subject matter.
Background
The method of having students teach other students has been present since antiquity, most often due to a lack of resources. For example, the Monitorial System was an education method that became popular on a global scale during the early 19th century. It was developed in parallel by the Scotsman Andrew Bell, who had worked in Madras, and by Joseph Lancaster, who worked in London; each attempted to educate masses of poor children with scant resources by having older children teach younger children what they had already learned.
Systematic research into intentionally improving education, by having students learn by teaching began in the middle of the 20th century.
In the early 1980s, Jean-Pol Martin systematically developed the concept of students teaching one another in the context of learning French as a foreign language, and he gave it a theoretical background in numerous publications; the method is thus referred to in German as Lernen durch Lehren, shortened to LdL. The method was originally resisted, as the German educational system generally emphasized discipline and rote learning. However, the method became widely used in German secondary education, and in the 1990s it was further formalized and began to be used in universities as well. By 2008 Martin had retired, and although he remained active, Joachim Grzega took the lead in developing and promulgating LdL.
LdL method
After preparation by the teacher, students become responsible for their own learning and teaching. The new material is divided into small units and student groups of not more than three people are formed.
Students are then encouraged to experiment to find ways to teach the material to the others. Along with ensuring that students learn the material, another goal of the method is to teach students life skills such as respect for other people, planning, problem solving, taking chances in public, and communication. The teacher remains actively involved, stepping in to further explain or provide support if the teaching students falter or the learning students do not seem to understand the material.
The method is distinct from tutoring in that LdL is done in class, supported by the teacher, and distinct from student teaching, which is a part of teacher education.
Plastic platypus learning
A related method is plastic platypus learning, or the platypus learning technique. This technique is based on evidence showing that teaching an inanimate object improves understanding and knowledge retention of a subject.
The advantage of this technique is that the learner does not need the presence of another person in order to teach the subject. The concept is similar to the software engineering technique of rubber duck debugging, in which a programmer can find bugs in their code without the help of others, simply by explaining what the code does, line by line, to an inanimate object such as a rubber duck.
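As a minimal sketch of the technique (the function, its bug, and the narration below are hypothetical, invented purely for illustration), narrating each line aloud is what surfaces the error:

    def average(scores):
        """Return the mean of a list of numeric scores."""
        total = 0
        for s in scores:
            total += s
        # Narrating to the duck: "Now I divide the total by the number of
        # scores..." Saying this aloud exposes the bug: if the list is empty,
        # len(scores) is 0 and this line raises ZeroDivisionError.
        return total / len(scores)

    def average_fixed(scores):
        """Same computation, with the empty-list case handled explicitly."""
        if not scores:   # "First I check whether there are any scores at all..."
            return 0.0   # "...and return 0.0 so I never divide by zero."
        return sum(scores) / len(scores)

    print(average_fixed([]))         # 0.0
    print(average_fixed([3, 4, 5]))  # 4.0

The code itself is unremarkable; the point is that producing a complete, line-by-line verbal account of what the program does, even for a listener that cannot respond, forces hidden assumptions into the open.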
A similar process is the Feynman technique, named after physicist Richard Feynman, in which a person attempts to write an explanation of some information in a way that a child could understand, developing original analogies where necessary. When the writer reaches an area which they are unable to comfortably explain, they go back and re-read or research the topic until they are able to do so.
Flipped learning + teaching
Classes taught in a traditional instructor-led style can be mixed with or transformed into flipped teaching. Before and after each (traditional or flipped) lecture, anonymized evaluation items on a Likert scale can be recorded from the students for continuous monitoring and dashboarding. In planned flipped teaching lessons, the teacher hands out the lesson teaching material one week before the lesson is scheduled so the students can prepare talks. Small student groups work on the lecture chapters instead of homework, and then give the lecture in front of their peers. The professional lecturer then discusses, complements, and provides feedback at the end of the group talks. Here, the professional lecturer acts as a coach who helps students with preparation and live performance.
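A minimal sketch of the monitoring step described above (the lecture labels, response values, and the 1–5 scale are assumptions chosen for illustration, not taken from any particular course):

    from statistics import mean

    # Anonymized Likert responses (1 = strongly disagree ... 5 = strongly agree),
    # recorded before and after each lecture for continuous monitoring.
    responses = {
        ("lecture_03", "before"): [4, 3, 5, 4, 2],
        ("lecture_03", "after"):  [5, 4, 5, 4, 4],
        ("lecture_04", "before"): [3, 3, 4, 2, 3],
        ("lecture_04", "after"):  [4, 4, 4, 3, 5],
    }

    # One dashboard row per (lecture, time point): mean rating and sample size.
    for (lecture, when), scores in sorted(responses.items()):
        print(f"{lecture} ({when}): mean = {mean(scores):.2f}, n = {len(scores)}")

Comparing the before and after means for each lecture gives the lecturer a simple running signal of how a flipped session was received relative to the traditional baseline.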
See also
Peer mentoring
Peer-led team learning
Rubber duck debugging – A code debugging technique which involves explaining code to a rubber duck
References
Further reading
Adamson, Timothy; Ghose, Debasmita; Yasuda, Shannon C.; Jehu, Lucas; Shepard, Silva; Michal, A.; Duan, Jyoce; Scassellati, Brian: "Why We Should Build Robots That Both Teach and Learn". 2021. https://scazlab.yale.edu/sites/default/files/files/hrifp1028-adamsonA.pdf
Kabache, Taieb (2022): Probing the Impact of Learning-by-teaching Method to Boost EFL Learners’ Engagement during the Grammar Session: The case of first-year PEM students at Taleb Abdurrahman ENS Laghouat. Algeria.
Kolbe, Simon (2021): Learning by Teaching – a Resource Orientated Approach Towards Modern Inclusive Education. In: Mevlüt Aydogmus (ed.): New Trends and Promising Directions in Modern Education. New Perspectives 2021. Meram/Konya: Palet Yayinlari, 234-255.
Serholt, Sofia; Ekström, Sara; Künster, Dennis; Ljungblad, Sara; Pareto, Lena (2022): Comparing a Robot Tutee to a Human Tutee in a Learning-By-Teaching Scenario with Children. Front. Robot. AI, 21 February 2022. https://doi.org/10.3389/frobt.2022.836462
External links
Lernen durch Lehren website, archived 12/2018
Online course (Video): Learning by teaching, Nellie Deutsch, 2017
Video: Protege effect: Learning by teaching, Ontario 2014
Video: Learning by teaching, Germany 2004
Video: Learning by teaching. Teaching Methodology, ELT under Cover, 2022
Jean-Pol Martin - English Language Teacher Interview #16, ELT Under The Covers Podcast, 2022
Martin, J. P. & Kolbe, S. (2023, May 21). Learning by Teaching [Interview]. Seitwerk. https://www.youtube.com/playlist?list=PL4hofRriFR15Xa0fr4VFvhj-_bMExnnbm
Applied learning
Alternative education
Progressivism
Educational practices
Learning methods
Pedagogy | 0.773616 | 0.984696 | 0.761776 |
Indigenous education | Indigenous education specifically focuses on teaching Indigenous knowledge, models, methods, and content within formal or non-formal educational systems. The growing recognition and use of Indigenous education methods can be a response to the erosion and loss of Indigenous knowledge through the processes of colonialism, globalization, and modernity. Indigenous education also refers to the teaching of the history, culture, and languages of Indigenous peoples of a region.
Indigenous peoples' right to education is recognized in Article 14 of the United Nations Declaration on the Rights of Indigenous Peoples, which makes particular reference to the educational rights of Indigenous peoples. It emphasizes the responsibility of states to provide adequate access to education for Indigenous people, particularly children, and, when possible, for education to take place within their own culture and to be delivered in their own language.
Cultural context of Indigenous learning in the Americas
A growing body of scientific literature has described Indigenous ways of learning in different cultures and countries. Learning in Indigenous communities is a process that involves all members of the community.
The learning styles that children use in their Indigenous schooling are the same ones that occur in their community context. These Indigenous learning styles often include observation, imitation, use of narrative/storytelling, collaboration, and cooperation, as seen among American Indian, Alaska Native, and Latin American communities. This is a hands-on approach that emphasizes direct experience and learning through inclusion. The child feels that they are a vital member of the community, and they are encouraged to participate in a meaningful way by community members. Children often effectively learn skills through this system without being taught explicitly or in a formal manner. This differs from Western learning styles, which tend to include methods such as explicit instruction, in which a figure of authority directs the learner's attention, and testing/quizzing. Creating an educational environment for Indigenous children that is consistent with their upbringing, rather than an education that follows a traditionally Western format, allows children to retain knowledge more easily, because they are learning in a way that was encouraged from infancy within their family and community.
Robinson further said that traditional Western methods of education generally disregard the importance of Indigenous cultures and environmental contributions, which results in a lack of relevance for students of Aboriginal backgrounds. Modern schools have a tendency to teach skills stripped of context, which has a detrimental impact on Indigenous students, because they thrive in educational environments in which their cultures and languages are respected and infused into learning. Various aspects of Indigenous culture need to be considered when discussing Indigenous learning, such as content (how culture is portrayed in text and through language), social culture/interactions (relations between classroom interactions and interactions within Indigenous communities), and cognitive culture (differences in worldview, spiritual understandings, practical knowledge, etc.).
Classroom structure
According to Akhenoba Robinson (2019), a structure of Indigenous American classrooms that reflects the organization of Indigenous communities eliminates the distinction between the community and the classroom and makes it easier for students to assimilate the material. Effective classrooms modeled on the social structure of Indigenous communities are typically focused on group or cooperative learning that provides an inclusive environment, bridging traditional Aboriginal education and the Western system of education. A key factor for successful Indigenous education practices is the student-teacher relationship. Classrooms are socially constructed in such a way that the teacher shares control of the classroom with the students. Rather than taking an authoritative role, the teacher is viewed as a co-learner, and they maintain a balance between personal warmth and demand for academic achievement. In Mexico, teachers have been observed to let their students move freely about the classroom while working, in order to consult with other students, as well as using their instructors for occasional guidance.
Teachers in Indigenous classrooms in a community in Alaska rely on group work, encourage students to watch each other as a way to learn, and avoid singling out students for praise, criticism, or recitation. Praise, by Western standards, is minimal in Indigenous classrooms, and when it is given it is for effort, not for providing a correct answer to a question. Classroom discourse in Indigenous classrooms is an example of how the teacher shares control with the students. Observations in the Yup'ik and Mazahua communities show that Indigenous teachers are less likely to solicit an answer from an individual student, and instead encourage all of the students to participate in classroom discourse. In the Yup'ik classroom, direct questions are posed to the group as a whole, and the flow of the discussion is not the sole responsibility of the teacher. Classrooms in Indigenous communities that incorporate Indigenous ways of learning utilize open-ended questioning, inductive/analytical reasoning, and student participation and verbalization in group settings.
Escuela Unitaria (one-room one-teacher)
In 2019, A. Robinson wrote that Escuela Unitaria is a one-room, one-teacher style of schooling used in some rural communities, which utilizes ways of learning common in some Indigenous and Indigenous-heritage communities in the Americas. The school serves up to six grades in a single classroom, with smaller groups (divided by grade level) within it. Community involvement is strongly implemented in the management of the school. Learning activities take place not just inside the classroom but also outside, in the agricultural environment. Children are self-instructed, and the content involves the students' rural community and family participation. The school is structured to meet cultural needs and match available resources. This classroom setting allows for a collaborative learning environment that includes the teacher, the students, and the community. Integration of cultural knowledge within the curriculum allows students to participate actively and to have a say in the responsibilities for classroom activities.
Spirituality
Indigenous students make meaning of what they learn through spirituality. Spirituality in learning involves students making connections between morals, values and intellect rather than simply acquiring knowledge. Knowledge to Indigenous people is personal and involves emotions, culture, traditional skills, nature, etc. For this reason, Indigenous students need time to make connections in class, and often benefit from a safe and respectful environment that encourages discussions among students.
Gilliard and Moore (2007) presented the experiences of eight Native American educators, focusing on the impact of including family and community culture in the curriculum. Typically, tribal K-12 schools on reservations have a majority of European American teachers. This study differs in that sense by studying educators who are all of Native American background and their interactions with students and families. These educators reported that their interactions with families stem from respect and understanding. Three categories surfaced in understanding and defining culture: (1) respect for children, families, and community; (2) building a sense of belonging and community through ritual; and (3) the importance of family values and beliefs.
Respect for children, families, and community: educators approached interactions in a reflective and respectful way when talking with children, families, and the community. Educators accepted individual families' practices concerning death. Educators made it a point to be aware of curricular activities that might offend certain tribes. Lastly, educators spoke in a soft, quiet, and gentle way to the children.
Building a sense of belonging and community through ritual: specific to the tribe on the Flathead Reservation, powwows are a community ritual that brings together families and community. Educators worked with families and their children to make moccasins, ribbon shirts and dresses, and shawls prior to the powwow, and included elements of the powwow in their classroom. For example, they keep a drum in the classroom to use for drumming, singing, and dancing.
Importance of family values and beliefs: educators give parents the opportunity to be involved in day-to-day activities in and around the classroom, such as meal times, play time, holidays, and celebrations. Educators collaborate with parents on curriculum around holidays and cultural celebrations, reinforce the importance of speaking tribal languages, clarify with parents what their home language is, and hold respectful discussions around traditional values and beliefs that lead to compromise, not isolation or separation.
The educators in this study worked on a daily basis to respect, plan around, and learn about parents' beliefs and values so that they could create a community culture linked to the school curriculum.
Similar to the previous study, Vaughn (2016) conducted a multiple case study of four Native American teachers and two European American teachers at Lakeland Elementary. The participants were asked to draw from influences, relationships, and resources of the local tribe, local and state practices, and knowledge of effective pedagogies to co-construct knowledge.
At the time the study was conducted, Lakeland Elementary was failing to meet No Child Left Behind's adequate yearly progress targets in reading. State officials would come to observe teachers, unannounced, to make sure they were teaching the mandated literacy curriculum. This required the teachers to follow the literacy program even though the curriculum seldom met the individual and specific linguistic and cultural needs of the majority of Native American students at the school.
The researcher therefore focused on two questions. The first was: "In what ways did these teachers approach developing a curriculum to support their students' social, cultural, and linguistic needs?" One theme that came up was "pedagogical re-envisioning": pedagogies and understandings of culturally responsive teaching that address writing and recognize that each student has individual needs. With this understanding, teachers are able to give students the opportunity to include oral storytelling, so students have their own personal twist on their learning. The second question was: "What shifts in teachers' pedagogical practices resulted from this collaboration?" Four themes came up: cultural resources, working with community, multimodal approaches, and integrating students' experiences and interests from their lives outside of school into the curriculum. By addressing these four themes, teachers were able to re-envision how curriculum can meet the individual needs of many Native American students without leaving out their interests, culture, or resources.
Holistic approach to learning
Holistic education focuses on the "whole picture" and how concepts and ideas are interrelated, and then analyzes and makes meaning of those ideas. This form of education is beneficial for all students, and especially for Indigenous students, since Indigenous forms of learning have traditionally been holistic in nature, focusing on interconnections with context (especially culture, nature, and experience).
A study by Stevenson et al. (2014) found only a weak relationship between time spent outdoors and environmental knowledge and behavior among middle school aged students in North Carolina. This weak relationship may be due to a change in the relationship between children and nature: instead of interacting with nature freely, children's outdoor activities are increasingly organized around sport or technology.
Arts education, an activity-based and experiential subject, also constitutes a significant part of student learning.
Middle school aged Native American students reported higher levels of environmental behavior than Caucasian students, a finding that urges environmental education professionals to continue closing achievement gaps in classrooms and to ensure that the same factors creating inequity do not affect environmental knowledge. Alongside building classrooms that include environmental knowledge, promoting outdoor activities and direct interaction with nature gives Native American students a chance to voice their knowledge to the teacher and to their peers.
Another form of holistic approach to learning involves parental and community advocacy. As reported by Pedro (2015), parents expressed concern that the high school their children attended neglected their children's voices, knowledge, and perspectives. The school district's diversity specialist sought advice to construct a curriculum that would validate, teach, and support the perspectives of Native American peoples of the Southwest United States. The team constructed a curriculum based on three ideas: (1) Native American students are harmed when their curriculum is void of knowledge that reflects their identity, culture, and heritage; (2) students who are not Native American are harmed when they learn only narrowed and historicized depictions of Indigenous peoples of the United States; and (3) teaching knowledge from a variety of perspectives should be fundamental to any learning environment.
Building on the parents' values, Pedro suggested that students can engage in conversation in their minds through critical, dialogic listening in silence. That students were not engaging verbally in a discussion did not mean they were not receptive to the points being made by students who were. Students can share their beliefs and identities through meta-conversations connected to the voiced realities of other students: after hearing different sides of other students' stories, they were able to construct their own identities and understandings within the debate, silently.
To validate the silence, the teacher in this instance wrote down quotes and questions students had raised in small and whole group conversations. At the end of each unit, the teacher used these quotes and questions to ask students to reflect on their writings, drawing on the notes they took and the readings and handouts given to them. Through this option, students were able to contribute their identities, knowledge, and understandings to the classroom space. This process was called Literacy Events: students were given the opportunity to absorb and make sense of different perspectives and ideas from verbal discussions in class and from readings. Silence helped the students relate internally, and through writing their perspectives became known. In the end, their stories were in their minds and contributing to the conversation as they chose which ideas to accept, to reject, or to combine. The parents advocated for their children so that the next time a student chooses silence, it is not taken to mean disengagement or lack of interest; instead, the student is given another avenue to express their thoughts.
Indigenous American ways of learning
Indigenous education involves oral traditions (such as listening, watching, and imitating), group work, apprenticeship, and high levels of cultural context. Additionally, for Indigenous peoples knowledge is sacred, centers on the idea that each student constructs knowledge individually, and is rooted in experience and culture. Learning is believed to be life-long, involves a unique sense of self-identity and passion, and focuses on the importance of community survival and contributions to life and community sustainability. Indigenous ways of learning occur when diverse perspectives are interconnected through spiritual, holistic, experiential, and transformative methods. The optimal learning environment for Indigenous students incorporates the land (and traditional skills), Indigenous languages, traditions, cultures, people (self, family, elders, and community), and spirituality.
Active participation
In many Indigenous communities of the Americas, children often begin to learn through their eagerness to be active participants in their communities. Through this, children feel incorporated as valued members when given the opportunity to contribute to everyday social and cultural activities. For example, in a traditional village in Yucatán, Mexico, great importance is placed on engaging in mature activities to help children learn how to participate and contribute appropriately. Adults rarely force children to contribute; rather, they give children a great range of independence in deciding what to do with their time. Children are therefore likely to demonstrate that they want to be productive members of the community, because they have been part of a social, collaborative culture that views everyday work as something everyone can partake in and help with.
A main model of learning is to incorporate children into various activities in which they are expected to be active contributors. These activities can range from momentary interactions to broad societal foundations and the ways those complement a community's traditions. In Maya communities in Belize, girls as young as four can work alongside their mothers when washing clothes in the river: rather than being given verbal instructions, they observe keenly, imitate to the best of their ability, and understand that their inclusion is crucial to the community. Rather than being separated and directed away from mature work and their Indigenous heritage, children are expected to observe and pitch in.
Indigenous communities in the Americas emphasize the ability of community members of all ages to collaborate. In this kind of environment, children not only learn how to participate alongside others, but are also likely to demonstrate an eagerness to contribute as part of their community. The integration of younger and older children provides opportunities for different levels of observation, listening, and participation to occur (Rogoff et al., 2010). Soon after, or even during, an activity, children often take it upon themselves to participate in the same social and cultural activities they observed. By encouraging children's immersion in activities, rather than specifically asking for their participation, children have the freedom to construct their own knowledge, with self-motivation to continue cultural practices alongside others.
Children in many Indigenous cultures of the Americas actively participate in and contribute to their community and family activities by observing and pitching in, informally learning to socialize and gaining a sense of responsibility among other skills. One mother reported that being an active participant in everyday activities gives children an opportunity to gain direction in learning and working that other environments may not provide. For instance, 15-year-old Josefina and her family own a small restaurant in an Indigenous community in Nocutzepo, Mexico, where the entire family collaborates to keep the restaurant running smoothly: everyone from the grandmother, who tends the cooking fire, to 5-year-old Julia, who contributes by carrying pieces of firewood. Josefina is one of seven family members who pitch in at the family food stand. Through observation and listening, she learned that the food stand was the family's main source of income. Over time, Josefina took it upon herself to pitch in and take over the food stand, learning responsibility, cooperation, and commitment. Nobody instructed or demanded that she help with the family business; she learned the community's expectations and way of living. The inclusive and welcoming environment of the marketplace encourages children to participate in everyday social practices and take the initiative to learn about their culture, facilitating communal collaboration.
Motivation
In Indigenous American communities, the inclusion of children in communal activities motivates them to engage with their social world, helping them to develop a sense of belonging. Active participation involves children undertaking initiative and acting autonomously.
Similarly, Learning by Observing and Pitching In (LOPI) supports informal learning which generates self-sovereignty. The combination of children's inclusion, development of independence, and initiative for contribution are common elements identified in Indigenous American ways of learning.
Education in Indigenous communities is primarily based on joint engagement, in which children are motivated to "pitch in" to collective activities by developing solidarity within the family, resulting in reciprocal bonds. Learning is viewed as an act of meaningful and productive work, not as a separate activity. When asked to self-report on their individual contributions, Indigenous Mexican heritage children placed emphasis on the community rather than on their individual roles; their contributions emphasized collaboration and mutual responsibility within the community. In a study of children who had immigrated from Indigenous communities in rural Mexico, the children were less likely to view activities that Westernized culture regards as "chores" as a type of work. These children felt that activities such as taking care of siblings, cooking, and assisting in cleaning were activities that help the family. When asked how they viewed participation in household work, children from two Mexican cities reported that they contribute because it is a shared responsibility of everyone in the family. They further reported that they want to pitch in because helping and contributing allows them to be more integrated in ongoing family and community activities. Many Mexican-heritage children also reported being proud of their contributions, and their families reported that the children's contributions are valued by everyone involved.
Learning through collaborative work is often correlated with children learning responsibility. Many children in Indigenous Yucatec families attempt, and are expected, to help around their homes with household tasks, and it is common to see children offer their help of their own accord. For example, Mari, an 18-month-old child from an Indigenous family, watched her mother clean the furniture with a designated cleaning leaf; Mari then took it upon herself to pick a leaf from a nearby bush and attempt to scrub the furniture as well. Although Mari was not using the proper type of leaf, by attempting to assist in cleaning the furniture she demonstrated that she wanted to help in a household activity. Mari's mother supported and encouraged her participation by creating an environment where she was able to pitch in, even if not in a completely accurate manner. In Indigenous American cultures, parents often offer guidance and support when the child needs it, as they believe this encourages children to be self-motivated and responsible.
Children from Indigenous communities of the Americas are likely to pitch in and collaborate freely without being asked or instructed to do so. For example, P'urepecha children whose mothers followed more traditional Indigenous ways of living demonstrated significantly more independent collaboration when playing Chinese checkers than middle-class children whose mothers had less involvement in Indigenous practices. Similarly, when mothers from the Mayan community of San Pedro were asked to construct a 3-D jigsaw puzzle with their children, mothers who practiced traditional Indigenous culture showed more cooperative engagement with their children than mothers with less traditional practices. These studies exemplify the idea that children from families that practice traditional Indigenous American cultures are likely to exhibit a motivation to collaborate without instruction. Being in an environment where collaboration is emphasized therefore serves as an example for children in Indigenous American communities, who pitch in out of their own self-motivation and eagerness to contribute.
Assessment
In many Indigenous communities of the Americas, children rely on assessment to master a task. Assessment can include evaluation of oneself as well as evaluation by external influences such as parents, family members, or community members. Assessment involves feedback given to learners by their supporters, whether through acceptance, appreciation, or correction. The purpose of assessment is to assist learners as they actively participate in an activity. While contributing to the activity, children constantly evaluate their learning progress based on this feedback and modify their behavior as they master the task.
In the Mexican Indigenous heritage community of Nocutzepo, feedback is available to learners through observing the results of their contributions and seeing whether their supporters accepted or corrected them. For example, when a 5-year-old girl shaping and cooking tortillas with her mother made irregular tortilla shapes, her mother would focus the girl's attention on an aspect of her own shaping. The girl would then imitate her mother's movements and improve her own skills. The mother's feedback helped the girl evaluate and correct her own work.
In traditional Chippewa culture, assessment and feedback are offered in a variety of ways. Generally, Chippewa children are not given much praise for their contributions, though on occasion parents offer assessment through rewards given for work well done, such as a toy carved out of wood, a doll of grass, or maple sugar. When children do not meet expectations and fail in their contributions, Chippewa parents make sure not to use ridicule as a means of assessment. The Chippewa also recognize the harmful effects of excessive scolding on a child's learning process: parents believe that scolding a child too much would "make them worse" and hold back the child's ability to learn.
In the Chillihuani community in Peru, parents bring up children in a manner that allows them to grow up with values like responsibility and respect, values that ultimately influence how children in the community learn. Parents in Chillihuani offer assessment of their children through praise, even if the child's contribution is not perfect. Feedback can also come in the form of responsibility for a difficult task with less supervision; this responsibility is an important aspect of the learning process for children in Chillihuani because it allows them to advance their skills. At only five years old, children are expected to herd sheep, alpacas, and llamas with the assistance of an older sibling or adult relative. By age 8, children take on the responsibility of herding alone, even in unfavorable weather conditions. Children are evaluated on their ability to handle difficult tasks and then complimented on a job well done by their parents. This supports the development of the child's skills and encourages continued contributions.
Criticisms of the Western educational model
Omitting indigenous knowledge from education amounts to cultural assimilation: governments have stigmatized indigenous learning, culture, and language to assimilate indigenous peoples and create a more homogenized country. A study of Malaysian postsecondary students found that indigenous students struggled with social and academic adaptation as well as self-esteem, and had much more difficulty transitioning to university and other new programs than non-indigenous students. These challenges are rooted in the fact that indigenous students are underrepresented in higher education and face psychological challenges such as low self-esteem.
Globally, there is a large gap in educational attainment between indigenous and non-indigenous people. A study in Canada found that this gap has been widened by the residential school system and by traditionally Eurocentric curricula and teaching methods. Stemming from the negative psychological impacts of attending residential schools, which were established in 1883 and heavily influenced by Christian missionaries and European ideals and customs, a feeling of distrust towards Canadian schools has been passed down through generations: as a result of racism, neglect, and forced assimilation, the cycle of distrust has carried to children, grandchildren, and beyond. There remains a continued lack of teaching of indigenous knowledge, perspectives, and history.
As mentioned above, there has been a modern-day global shift towards recognizing the importance of Indigenous education. One reason for this awareness is the rapid spread of Western educational models throughout the world. Critics of the Western educational model believe that, because of colonial histories and lingering cultural ethnocentrism, the Western model cannot substitute for an Indigenous education. Throughout history, Indigenous Peoples have experienced, and continue to experience, many negative interactions with Western society (for example, the Canadian residential school system), which have led to the oppression and marginalization of Indigenous people. The film "Schooling the World: The White Man's Last Burden" addresses this issue of modern education and its destruction of unique Indigenous cultures and individuals' identities. Shot in the Buddhist culture of Ladakh in the northern Indian Himalayas, the film fuses the voices of the Ladakhi people with commentary from an anthropologist/ethnobotanist, a National Geographic Explorer-in-Residence, and an architect of education programs. In essence, the film examines the definitions of wealth and poverty, or in other words, of knowledge and ignorance. It reveals the effects of trying to institute a global education system or central learning authority, which can ultimately result in the demolition of "traditional sustainable agricultural and ecological knowledge, in the breakup of extended families and communities, and in the devaluation of ancient spiritual traditions." Finally, the film promotes a deeper dialogue between cultures, suggesting that there is no single way to learn: no two human beings are alike, because they develop under different circumstances, learning, and education.
The director and editor of the film, Carol Black, writes, "One of the most profound changes that occurs when modern schooling is introduced into traditional societies around the world is a radical shift in the locus of power and control over learning from children, families, and communities to ever more centralized systems of authority." Black explains that in many non-modernized societies, children learn in a variety of ways, including free play, interaction with multiple children, immersion in nature, and directly helping adults with work and communal activities: "They learn by experience, experimentation, trial and error, by independent observation of nature and human behavior, and through voluntary community sharing of information, story, song, and ritual." Most importantly, local elders and traditional knowledge systems are autonomous in comparison to a strict Western education model, and adults have little control over children's "moment-to-moment movements and choices." Once learning is institutionalized, both the freedom of the individual and the individual's respect for the elders' wisdom are undermined: "Family and community are sidelined…The teacher has control over the child, the school district has control over the teacher, the state has control over the district, and increasingly, systems of national standards and funding create national control over states." When Indigenous knowledge is seen as inferior to a standard school curriculum, an emphasis is placed on an individual's success in a broader consumer culture instead of on an ability to survive in one's own environment. Black concludes, "We assume that this central authority, because it is associated with something that seems like an unequivocal good – 'education' – must itself be fundamentally good, a sort of benevolent dictatorship of the intellect." From a Western perspective, centralized control over learning is natural and consistent with the principles of freedom and democracy; and yet it is this same centralized system, which does not take the individual into account, that in the end stamps out local cultures.
Colonialism and Western methods of learning
The education system in the Americas reinforces Western culture, prior knowledge, and learning experiences, which leads to the marginalization and oppression of other cultures. Teaching students primarily through European perspectives leads non-European students to believe that their cultures have not contributed to the knowledge of societies. Indigenous students often resist learning because they do not want to be oppressed or labeled as 'incapable of learning' through neo-colonial knowledge and teaching. Decolonization would greatly benefit Indigenous students and other marginalized students because it involves deconstructing engagement with European values, beliefs, and habits.
Pedagogical approaches to Indigenous education
Decentralization requires a shift in education away from Western practices. The following pedagogical approaches aim to empower Indigenous students and Indigenous communities through education that does not rely on Western culture.
Culturally relevant pedagogy
Culturally relevant pedagogy (CRP) involves curriculum tailored to the cultural needs of the students and participants involved. Culture is at the core of CRP, and teachers and educators aim for all students to achieve academic success, develop cultural competence, and develop the critical consciousness needed to challenge current social structures of inequality that affect Indigenous communities in particular. Culturally relevant pedagogy also extends to culturally sustaining and revitalizing pedagogy, which actively works to challenge power relations and colonization by reclaiming, through education, what has been displaced by colonization, and by recognizing the importance of community engagement in such efforts.
Critical Indigenous pedagogy
Critical Indigenous pedagogy (CIP) focuses on resisting colonization and oppression through education practices that privilege Indigenous knowledge and promote Indigenous sovereignty. Beyond schooling and instruction, CIP is rooted in thinking critically about social injustices and challenging them through education systems that empower youth and teachers to create social change. Under CIP, the goal of teachers and educators is to guide Indigenous students in developing critical consciousness by creating a space for self-reflection and dialogue, as opposed to mere instruction. This form of pedagogy empowers Indigenous youth to take charge of, and responsibility for, transforming their own communities.
Under critical Indigenous pedagogy, schools are considered sacred landscapes because they offer a sacred place for growth and engagement. Western-style schooling engages Indigenous knowledge and languages only in limited ways, but schools that embrace critical Indigenous pedagogy recognize Indigenous knowledge and epistemologies, which is why such schools should be considered sacred landscapes.
Land-based pedagogy
Land-based pedagogy recognizes colonization as dispossession and thus aims to achieve decolonization through education practices that connect Indigenous people to their native land and to the social relations that arise from those lands. It encourages Indigenous people to center love for the land and for each other at the core of education, in order to contest the oppression and colonialism that aim to sever Indigenous people from their land.
Land-based pedagogy has no specific curriculum, because education and knowledge come from what the land gives. Unlike Western practices with a standard curriculum, land-based pedagogy is based on the idea of abstaining from imposing an agenda on another living being. Intelligence is considered a consensual engagement in which children consent to learning, whereas a set curriculum is thought to normalize dominance and non-consent within schooling, which is then inevitably extended to societal norms. Western-style education is seen as coercive because, in order to achieve something, one must follow the guidelines and curriculum set by educators. Under land-based pedagogy, individuals show interest and commitment on their own, achieving self-actualization and sharing their knowledge with others through modeling and "wearing their teachings." The values of land-based pedagogy are important to Indigenous peoples who believe that "raising Indigenous children in a context where their consent, physically and intellectually, is not just required but valued, goes a long way to undoing the replication of colonial gender violence" (Simpson, 31).
Community-based pedagogy
Community-based education is central to the revival of Indigenous cultures and diverse languages. This form of pedagogy allows community members to participate in and influence the learning environment in local schools. Community-based education embraces the ideas of Paulo Freire, who called for individuals to "become active participants in shaping their own education" (May, 10).
The main effects of instilling community-based pedagogy in schools are as follows:
Parent involvement in decision making encourages children to become closer to their teachers
Indigenous parents gain confidence, which positively impacts their children's learning
Teacher-parent collaboration eliminates stereotypes non-Indigenous teachers may have about Indigenous people.
Communities collectively gain self-respect and achieve political influence as they take responsibility for their local schools
A community-based education system requires communication and collaboration between the school and the community. The community must share leadership within the schools and be involved in decision-making, planning, and implementation. Children learn through guidance rather than through the dictates of their teachers or elders, and are taught skills of active participation. Out of community-based education arises community-based participatory research (CBPR), an approach to research that facilitates co-learning and partnership between researchers and community members to promote community capacity building. CBPR involves youth-researcher partnerships, youth action groups, and local committees made up of youth, tribal leaders, and elders. This approach to research builds strength and empowers community members.
Culturally sustaining and revitalizing pedagogy
McCarty and Lee (2014) argue that tribal sovereignty (the standing of Indigenous peoples as peoples, not as populations or national minorities) must include education sovereignty. The authors report that culturally sustaining and revitalizing pedagogy (CSRP) is necessary in education for three reasons: (1) asymmetrical power relations and the goal of transforming legacies of colonization, (2) the need to reclaim and revitalize what has been disrupted and displaced by colonization, and (3) the need for community-based accountability.
CSRP is meant to counterbalance dominant policy dialogue. This research follows case studies at two different schools, one in Arizona and one in New Mexico. Tiffany Lee reports on the Native American Community Academy (NACA) in Albuquerque, New Mexico. The school's core values include respect, responsibility, community service, culture, perseverance, and reflection, values that also reflect the surrounding tribal communities. NACA offers three languages, Navajo, Lakota, and Tiwa, and also seeks outside resources to teach local languages. This case study emphasizes that teaching language is culturally sustaining and revitalizing: it creates a sense of belonging and strengthens cultural identities, pride, and knowledge. At NACA, teachers know they possess inherent power as Indigenous education practitioners, and they make a difference in revitalizing Native languages through culturally sustaining practices. The second case study, reported by Teresa McCarty, concerns Puente de Hozho (PdH), where language plays a different role for members of the various cultural communities. At PdH, educators reflect the influence of Diné and Latino/a parents in providing culturally sustaining and revitalizing education. The goal is to heal forced linguistic wounds and convey important cultural and linguistic knowledge connected to the school's curriculum and pedagogy.
Balancing academic, linguistic, and cultural interests is grounded in accountability to Indigenous communities. The authors describe the need for linguistic teaching as a "fight for plurilingual and pluricultural education," in which educators attempt to balance state and federal requirements with the needs of local communities and Indigenous nations.
Criticisms of indigenous education
Lack of Timely Reading Instruction
Questions as to the Meaning of Indigenous
Lack of Rigorous Basis
Language revitalization efforts
Many Native American and Indigenous communities in the United States are working to revitalize their Indigenous languages. These language revitalization efforts often take place in schools, via language immersion programs. In Guatemala, teachers have exercised a sense of agency by teaching students the Indigenous language, as well as about Indigenous culture, in order to prevent language loss and maintain cultural identity.
Importance
Researchers have emphasized the importance of language revitalization efforts for preserving Native culture. The extinction of Native languages is one reason such efforts are necessary: McCarty, Romero, and Zepeda note that "84% of all Indigenous languages in the United States and Canada have no new speakers to pass them on." Native language is seen as a path to preserving Native heritage, such as "knowledge of medicine, religion, cultural practices and traditions, music, art, human relationships and child-rearing practices, as well as Indigenous ways of knowing about the sciences, history, astronomy, psychology, philosophy, and anthropology." "Duane Mistaken Chief, a member of the Blackfeet tribe, explains that American Indians use words and phrases to reconstruct their cultures and to heal themselves. By studying the Indian words, they learn to respect themselves. From the Indian point of view, the traditional language is a sacred gift, the symbol of one's identity, the embodiment of one's culture and traditions, a means for expressing inner thoughts and feelings, and the source of ancestral wisdom." Additionally, linguists and community members believe in the importance of revitalizing Native languages because "it is at once a direction for research, action, and documentation." Finally, it has been suggested that recognizing Native languages in school settings is especially important because it leads teachers to recognize the people, which in turn fosters self-esteem and academic success among students.
School Based Language-Immersion Models
Aguilera and LeCompte (2007) compared case studies of three different language-immersion programs in schools in Alaska, Hawaii, and the Navajo Nation. They examined evidence from prior research studies, reviewed descriptive documents from the study participants, conducted phone interviews and email exchanges with executive directors and school district administrators, and drew on other research on language-immersion models. In addition to qualitative evidence, they analyzed quantitative data such as school test scores and demographics.
Through their comparison of test data, Aguilera and LeCompte found an increase in performance on state benchmark exams by the Ayaprun and Diné immersion students, though these schools performed lower on norm-referenced tests. The researchers note, however, that such tests are often biased, negatively impacting Indigenous students. Ultimately, the researchers did not find that any one immersion model had a greater impact on Native students' academic achievement than the others, but they "agree with language experts that total immersion is a more effective approach to achieving proficiency in a Native language."
Through their study, Aguilera and LeCompte (2007) examined the language nest and two-way immersion models. Another researcher, Lee (2007), examined "compartmentalizing" through both quantitative and qualitative measures: quantitatively, Lee examined language levels, language usage, and lifespan experiences of Navajo students; qualitatively, Lee interviewed Navajo students to learn more about their feelings and opinions on learning the Navajo language. Below are descriptions of the three school models used in the studies by Aguilera, LeCompte, and Lee.
Language nest – This model is used by the Native Hawaiian Aha Punana Leo consortium and begins in preschools. "In the language nest preschools, the Indigenous language is considered the student's first language, and children converse and study in that language, every day and all day." These students are taught in English only after they are literate in their Indigenous language.
Two-Way Language-Immersion Model – In this model, maintenance of the Native language is promoted, while students also learn a second language. This model typically lasts from five to seven years. One form of a two-way language immersion model is the 50–50 model, in which students use English half of the class time and the target Native language the other half of the class time. The other model is a 90–10 model, in which students use the target Native language 90% of the time beginning in kindergarten. These students then increase the use of English "by 10% annually until both languages are used equally—a 50–50 split by fourth grade."
Compartmentalizing – Schools that do not have full immersion programs often use compartmentalizing, in which the Indigenous language is taught as a separate subject of study rather than having students instructed in the Native language across their academic content areas. According to Lee (2007), compartmentalizing is the most common approach for teaching the Navajo language in schools today.
Through her study, Lee (2007) concluded that "Navajo-language use in the home was the strongest influence over the students' current Navajo-language level and Navajo-language use." She noted that "schools need to become more proactive in language revitalization" and shared that she found the compartmentalizing language-immersion programs in her study "modest" and that "the language was mostly taught as though all the students were monolingual English speakers." Ultimately, she asserts that in order for language-immersion programs to be done well, schools need to invest in more resources, improved teaching pedagogy, and the development of students' critical thinking and critical consciousness skills.
Difficulties of Implementation
Despite much interest in language revitalization efforts in Native communities, program implementation can be challenging. Research suggests several factors in the United States that make it difficult to implement language immersion programs in schools.
Aguilera and LeCompte (2007) found the following difficulties in their study:
An "overwhelming pressure to teach English, especially due to the "recent emphasis on high-stakes testing in English"
"Lack of importance given to cultural aspects of language by non-native educators and policymakers"
Lack of family participation, due to parents' fears that their children will not learn English or be successful if they participate in an immersion program
Securing long-term funding to sustain programs
Other studies found additional difficulties in implementation:
Hostile Policies: McCarty and Nicholas (2014) conducted qualitative research on language revitalization efforts for the Mohawk, Navajo, Hawaiian, and Hopi people and found one difficulty in implementation was hostile policies toward bi/multilingual education efforts.
Scarcity of Indigenous Staff and Resources: Mary Hermes opened Waadookodaading, a language immersion school centered on the Ojibwe language. The school is located near a reservation of about 3,000 enrolled members, but as of 2007 there were only approximately 10 fluent speakers. Because of massive language loss among Indigenous groups, it can be difficult to find fluent native speakers, and high language proficiency is necessary to teach in an immersion school. Immersion teachers must not only be fluent in the language but also skilled in pedagogy, which presents additional challenges. NCLB requirements state that paraprofessionals must have at least an associate degree, and that those working in the primary grades must have early childhood education coursework; often, the people who would serve in these positions in language immersion schools are elders who do not meet these requirements. Additionally, a lack of materials in Indigenous languages places a demand on educators to produce materials along the way.
Conflicting Perspectives: Ngai (2008) conducted qualitative research on Salish language revitalization efforts, speaking with 89 participants in 101 interviews across three school districts on the Flathead Indian Reservation. The goal of the research was to produce a framework that could be used for Native language education in districts with a mix of Native and non-Native students. Ngai found that "[l]anguage revitalization is particularly challenging in school districts with a mix of AI/AN and non-Native populations because of the co-existence of diverse and often conflicting perspectives."
Helpful Factors in Implementation
Despite the challenges of creating and maintaining immersion programs, there are many schools in existence today. Researchers suggest the following factors as helpful in leading to implementation of immersion models.
Leadership and community activism – Aguilera and LeCompte (2007) noted in their study that having Indigenous leaders who are invested in implementing these models is critical. In another study, Ngai (2008) notes that, "In public schools, the continuation of Salish language instruction since the 1970s can be attributed to the efforts of Salish-language teachers who are willing to step into a traditionally hostile setting in order to pass the language on to the young."
School Autonomy – Many schools have applied for charter status to protect language-immersion programs from being closed by those who object to them. Charter status also gives schools the flexibility to obtain more funding.
Partnerships with higher education systems – To implement a language immersion model, schools must have trained teachers. Several of the communities where language immersion models have succeeded are "situated in communities where there is access to higher education degree programs, and some of these postsecondary institutions offer Native language classes."
Benefits
For Indigenous learners and instructors, the inclusion of these methods into schools often enhances educational effectiveness by providing an education that adheres to an Indigenous person's own inherent perspectives, experiences, language, and customs, thereby making it easier for children to transition into the realm of adulthood. For non-Indigenous students and teachers, such an education often has the effect of raising awareness of individual and collective traditions surrounding Indigenous communities and peoples, thereby promoting greater respect for and appreciation of various cultural realities.
In terms of educational content, the inclusion of Indigenous knowledge within curricula, instructional materials, and textbooks has largely the same effect on preparing students for the greater world as other educational systems, such as the Western model.
There is value in including Indigenous knowledge and education in the public school system. Students of all backgrounds can benefit from being exposed to Indigenous education, as it can contribute to reducing racism in the classroom and increase the sense of community in a diverse group of students. There are a number of sensitive issues about what can be taught (and by whom) that require responsible consideration by non-Indigenous teachers who appreciate the importance of interjecting Indigenous perspectives into standard mainstream schools. Concerns about misappropriation of Indigenous ways of knowing without recognizing the plight of Indigenous Peoples and "giving back" to them are legitimate. Since most educators are non-Indigenous, and because Indigenous perspectives may offer solutions for current and future social and ecological problems, it is important to refer to Indigenous educators and agencies to develop curriculum and teaching strategies while at the same time encouraging activism on behalf of Indigenous Peoples. One way to bring authentic Indigenous experiences into the classroom is to work with community elders. They can help facilitate the incorporation of authentic knowledge and experiences into the classroom. Teachers must not shy away from bringing controversial subjects into the classroom. The history of Indigenous people should be delved into and developed fully. There are many age appropriate ways to do this, including the use of children's literature, media, and discussion. Individuals are recommended to reflect regularly on their teaching practice to become aware of areas of instruction in need of Indigenous perspectives.
21st century skills
Incorporating Indigenous ways of learning into educational practice has the potential to benefit both Indigenous and non-Indigenous learners. The 21st century skills needed in modern curricula include collaboration, creativity, innovation, problem-solving, inquiry, and multicultural literacy, among others; Indigenous ways of learning incorporate all of these skills through experiential and holistic methods. Additionally, aboriginal education styles align with 21st century skills by involving teachers and students as co-constructors of education, and by valuing the interconnectedness of content and context.
Educational gap
Some Indigenous people view education as an important tool for improving their situation through economic, social, and cultural development; it provides them with individual empowerment and self-determination. Education is also a means to employment and a way for socially marginalized people to raise themselves out of poverty. However, some education systems and curricula lack knowledge about Indigenous peoples' ways of learning, causing an educational gap for Indigenous people. Factors in this gap include lower school enrollments, poor school performance, low literacy rates, and higher dropout rates. Some schools teach Indigenous children to be "socialized" and to become a national asset to society by assimilating: "Schooling has been explicitly and implicitly a site of rejection of Indigenous knowledge and language; it has been used as a means of assimilating and integrating Indigenous peoples into a 'national' society and identity at the cost of their Indigenous identity and social practices." Intercultural learning is one example of how to bridge the educational gap.
Other factors that contribute to the education gap in Indigenous cultures are socioeconomic disadvantage, which includes access to healthcare, employment, incarceration rates, and housing. According to the Australian Government Department of the Prime Minister and Cabinet in their 2015 Closing the Gap Report, the country was not on track to halve the gap in reading, writing and numeracy achievements for Indigenous Australian students. The government reported that there had been no overall improvement in Indigenous reading and numeracy since 2008.
Importance
Indigenous knowledge is particularly important to modern environmental management in today's world. Environmental and land management strategies traditionally used by Indigenous peoples have continued relevance. Indigenous cultures usually live in a particular bioregion for many generations and have learned how to live there sustainably. In modern times, this ability often puts truly Indigenous cultures in a unique position of understanding the interrelationships, needs, resources, and dangers of their bioregion. This is not true of Indigenous cultures that have been eroded through colonialism or genocide or that have been displaced.
The promotion of Indigenous methods of education and the inclusion of traditional knowledge also enables those in Western and post-colonial societies to re-evaluate the inherent hierarchy of knowledge systems. Indigenous knowledge systems were historically denigrated by Western educators; however, there is a current shift towards recognizing the value of these traditions. The inclusion of aspects of Indigenous education requires us to acknowledge the existence of multiple forms of knowledge rather than one, standard, benchmark system.
A prime example of how Indigenous methods and content can be used to promote the above outcomes is demonstrated within higher education in Canada. Due to certain jurisdictions' focus on enhancing academic success for Aboriginal learners and promoting the values of multiculturalism in society, the inclusion of Indigenous methods and content in education is often seen as an important obligation and duty of both governmental and educational authorities.
Many scholars in the field assert that Indigenous education and knowledge has a "transformative power" for Indigenous communities that can be used to foster "empowerment and justice." The shift to recognizing Indigenous models of education as legitimate forms is therefore important in the ongoing effort for Indigenous rights, on a global scale.
Implications for teachers
Educators need to foster a respectful learning environment that promotes confidence, openness, and authentic dialogue, helping students come to understand content through spirituality and cultural infusion. It is also important for educators to realize that students need time to connect intellect, spirituality, and their understanding of the physical world. Many educators have stated that their training programs did not prepare them with enough support and materials to teach Indigenous students effectively. It is therefore important for educators to seek out ongoing teacher development programs directed toward improving their teaching so that marginalized groups do not suffer.
Challenges (as seen with the Na)
There are numerous practical challenges to the implementation of Indigenous education. Incorporating Indigenous knowledge into formal Western education models can prove difficult. However, the discourse surrounding Indigenous education and knowledge suggests that integrating Indigenous methods into traditional modes of schooling is an "ongoing process of 'cultural negotiation.'"
Indigenous education often takes different forms than a typical Western model, as the practices of the Na ethnic group of southwest China illustrate. Because Na children learn through example, traditional Na education is less formal than the standard Western model. In contrast to structured hours and a classroom setting, learning takes place throughout the day, both in the home and in adults' workplaces. Based on the belief that children are "fragile, soulless beings," Na education focuses on nurturing children rather than punishing them. Children develop an understanding of cultural values, such as speech taboos and the "reflection" of individual actions "on the entire household." Playing games teaches children about their natural surroundings and builds physical and mental acuity. Forms of Indigenous knowledge, including weaving, hunting, carpentry, and the use of medicinal plants, are passed from adult to child in the workplace, where children assist their relatives or serve as apprentices for several years.
However, increasing modernity is a challenge to such modes of instruction. Some types of Indigenous knowledge are dying out because of decreased need for them and a lack of interest from youth, who increasingly leave the village for jobs in the cities. Furthermore, formal Chinese state schooling "interferes with informal traditional learning." Children must travel a distance from their villages to attend state schools, removing them from traditional learning opportunities in the home and workplace. The curriculum in state schools is standardized across China and holds little relevance to the lives of the Na. Na children are required to learn Mandarin Chinese, Chinese and global history, and Han values, as opposed to their native language, local history, and Indigenous values. Methods of instruction rely on rote learning rather than experiential learning, as employed in Na villages.
Several individuals and organizations pay for children's school fees and build new schools in an attempt to increase village children's access to education. Yet such well-intended actions do not affect the schools' curriculum, which means there is no improvement in the sustainability of the children's native cultures. As a result, such actions may actually "be contributing to the demise of the very culture" they are trying to preserve.
Organizations
Many organizations work to promote Indigenous methods of education. Indigenous peoples have founded and actively run several of these organizations. On a global scale, many of these organizations engage in active knowledge transfer in an effort to protect and promote Indigenous knowledge and education modes.
One such organization, the Indigenous Education Institute (IEI), aims to apply Indigenous knowledge and tradition to a contemporary context, with a particular focus on astronomy and other science disciplines.
Another such organization is the World Indigenous Nations Higher Education Consortium (WINHEC), which was launched during the World Indigenous Peoples Conference on Education (WIPCE) at Delta Lodge, Kananaskis, in Alberta, Canada, in August 2002. The founding members were Australia, Hawai'i, Alaska, the American Indian Higher Education Consortium of the United States, Canada, the Wānanga of Aotearoa (New Zealand), and Saamiland (North Norway). The stated aims of WINHEC include the provision of an international forum for Indigenous peoples to pursue common goals through higher education.
See also
Contemporary Native American issues in the United States#Education
Alternative education
Bilingual education
Traditional knowledge
Indigenous language
Indigenous peoples
Traditional ecological knowledge
Indigenous rights
Critical pedagogy of place
References
Anthroposophy is a spiritual new religious movement, founded in the early 20th century by the esotericist Rudolf Steiner, which postulates the existence of an objective, intellectually comprehensible spiritual world accessible to human experience. Followers of anthroposophy aim to engage in spiritual discovery through a mode of thought independent of sensory experience. Though proponents claim to present their ideas in a manner verifiable by rational discourse, and say that they seek precision and clarity comparable to that obtained by scientists investigating the physical world, many of these ideas have been termed pseudoscientific by experts in epistemology and debunkers of pseudoscience.
Anthroposophy has its roots in German idealism, Western and Eastern esoteric ideas, various religious traditions, and modern Theosophy. Steiner chose the term anthroposophy (from Greek ἄνθρωπος anthropos, 'human', and σοφία sophia, 'wisdom') to emphasize his philosophy's humanistic orientation. He defined it as "a scientific exploration of the spiritual world"; others have variously called it a "philosophy and cultural movement", a "spiritual movement", a "spiritual science", "a system of thought", or "a spiritualist movement".
Anthroposophical ideas have been applied in a range of fields, including education (both in Waldorf schools and in the Camphill movement), environmental conservation, and banking, with additional applications in agriculture, organizational development, the arts, and more.
The Anthroposophical Society is headquartered at the Goetheanum in Dornach, Switzerland. Anthroposophy's supporters include the writers Saul Bellow and Selma Lagerlöf, painters Piet Mondrian, Wassily Kandinsky and Hilma af Klint, filmmaker Andrei Tarkovsky, child psychiatrist Eva Frommer, music therapist Maria Schüppel, Romuva religious founder Vydūnas, and former president of Georgia Zviad Gamsakhurdia. While critics and proponents alike acknowledge Steiner's many anti-racist statements, "Steiner's collected works...contain pervasive internal contradictions and inconsistencies on racial and national questions."
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history". Many scientists, physicians, and philosophers, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh, have criticized anthroposophy's applications in medicine, biology, agriculture, and education as dangerous and pseudoscientific. Ideas of Steiner's that are unsupported or disproven by modern science include racial evolution, clairvoyance (Steiner claimed to be clairvoyant), and the Atlantis myth.
History
The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his Philosophy of Freedom (also translated as The Philosophy of Spiritual Activity and Intuitive Thinking as a Spiritual Path). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought. "Steiner was a moral individualist".
By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to draw the attention of others interested in spiritual ideas; among these was the Theosophical Society. From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges.
By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections.
By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development.
In 1912, Steiner broke away from the Theosophical Society to found an independent group, which he named the Anthroposophical Society. After World War I, members of the young society began applying Steiner's ideas to create cultural movements in areas such as traditional and special education, farming, and medicine.
By 1923, a schism had formed between older members, focused on inner development, and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for Spiritual Science. As a spiritual basis for the reborn movement, Steiner wrote a Foundation Stone Meditation which remains a central touchstone of anthroposophical ideas.
Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party. In reality, anthroposophy had both enemies and loyal supporters in the upper echelons of the Nazi regime. Staudenmaier speaks of a "polycratic party-state apparatus", meaning that Nazism's approach to Anthroposophy was not characterized by monolithic ideological unity. When Hess flew to the UK and was imprisoned, the Anthroposophists' most powerful protector was gone, but they were still not left without supporters among higher-placed Nazis.
The Third Reich had banned almost all esoteric organizations, claiming that these were controlled by Jews. In practice, while Anthroposophists complained of bad press, they were tolerated to a surprising extent by the Nazi regime, "including outspokenly supportive pieces in the Völkischer Beobachter". Ideological purists from the Sicherheitsdienst argued largely in vain against Anthroposophy. According to Staudenmaier, "The prospect of unmitigated persecution was held at bay for years in a tenuous truce between pro-anthroposophical and anti-anthroposophical Nazi factions."
In other words, Anthroposophy itself was not the real stake of that dispute; it was rather a matter of powerful Nazis wanting to get rid of other powerful Nazis. By comparison, Jehovah's Witnesses were treated much more aggressively than Anthroposophists.
Kurlander stated that "the Nazis were hardly ideologically opposed to the supernatural sciences themselves"—rather they objected to the free (i.e. non-totalitarian) pursuit of supernatural sciences.
According to Hans Büchenbacher, an anthroposophist, the Secretary General of the General Anthroposophical Society, Guenther Wachsmuth, as well as Steiner's widow, Marie Steiner, were "completely pro-Nazi." Marie Steiner-von Sivers, Guenther Wachsmuth, and Albert Steffen had publicly expressed sympathy for the Nazi regime since its beginnings; led by these sympathies of their leadership, the Swiss and German Anthroposophical organizations chose a path that blended accommodation with collaboration. As a result, while the Nazi regime hunted esoteric organizations generally, Gentile Anthroposophists in Nazi Germany and the countries it occupied were left alone to a surprising extent. They suffered setbacks from the enemies of Anthroposophy in the upper echelons of the Nazi regime, but they also had loyal supporters there, so overall Gentile Anthroposophists were not badly hit by the regime.
Staudenmaier's overall argument is that "there were often no clear-cut lines between theosophy, anthroposophy, ariosophy, astrology and the völkisch movement from which the Nazi Party arose."
By 2007, national branches of the Anthroposophical Society had been established in fifty countries and about 10,000 institutions around the world were working on the basis of anthroposophical ideas.
Etymology and earlier uses of the word
Anthroposophy is an amalgam of the Greek terms ἄνθρωπος (anthropos, 'human') and σοφία (sophia, 'wisdom'). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man."
The first known use of the term anthroposophy occurs within Arbatel de magia veterum, summum sapientiae studium, a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. In 1648, the Welsh philosopher Thomas Vaughan published his Anthroposophia Theomagica, or a discourse of the nature of man and his state after death.
The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term anthroposophy to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term anthroposophy to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work Anthropology: The Study of the Human Soul. In 1872, the philosopher of religion Gideon Spicker used the term anthroposophy to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy."
In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication.
In the early 1900s, Steiner began using the term anthroposophy (i.e. human wisdom) as an alternative to the term theosophy (i.e. divine wisdom).
Central ideas
Spiritual knowledge and freedom
Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. The steps of this process of inner development he identified as consciously achieved imagination, inspiration, and intuition. Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science.
Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself.
Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses.
Nature of the human being
In Theosophy, Steiner suggested that human beings unite a physical body of substances gathered from and returning to the inorganic world; a life body (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings.
Anthroposophy describes a broad evolution of human consciousness. In early stages of human evolution, people possessed an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has progressively evolved an increasing reliance on intellectual faculties and a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. To go further, however, requires new capacities that combine the clarity of intellectual thought with consciously achieved imagination, inspiration, and intuitive insight.
Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life.
Steiner described some conditions that determine the interdependence of a person's lives, or karma.
Evolution
The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals devolve from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth.
Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture.
Ethics
The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold.
Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the Representative of Humanity, also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes.
Claimed applications
Steiner/Waldorf education
There is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada.
The schools have been founded in a variety of communities, from the favelas of São Paulo to the wealthy suburbs of major cities, and in countries including India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions.
Biodynamic agriculture
Biodynamic agriculture is a form of alternative agriculture based on pseudo-scientific and esoteric concepts. It was also the first intentional form of organic farming: it began in 1924, when Rudolf Steiner gave a series of lectures published in English as The Agriculture Course. Steiner is considered one of the founders of the modern organic farming movement.
"And Himmler, Hess, and Darré all promoted biodynamic (anthroposophic) approaches to farming as an alternative to industrial agriculture." "'[...] with the active cooperation of the Reich League for Biodynamic Agriculture' [...] Pancke, Pohl, and Hans Merkel established additional biodynamic plantations across the eastern territories as well as Dachau, Ravensbrück, and Auschwitz concentration camps. Many were staffed by anthroposophists."
"Steiner’s 'biodynamic agriculture' based on 'restoring the quasi-mystical relationship between earth and the cosmos' was widely accepted in the Third Reich (28)."
Anthroposophical medicine
Anthroposophical medicine is a form of alternative medicine based on pseudoscientific and occult notions rather than on science-based medicine.
Most anthroposophic medical preparations are highly diluted, like homeopathic remedies. While harmless in themselves, using them in place of conventional medicine to treat illness is ineffective and risks adverse consequences.
One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit.
Special needs education and services
In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely, and there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Karl König, Thomas Weihs, and others have written extensively on the ideas underlying this approach to special education.
Architecture
Steiner designed around thirteen buildings in an organic, expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects.
Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary, Hans Scharoun and Joachim Eble in Germany, Erik Asmussen in Sweden, Kenji Imai in Japan, Thomas Rau, Anton Alberts and Max van Huut in the Netherlands, Christopher Day and Camphill Architects in the UK, Thompson and Rose in America, Denis Bowman in Canada, and Walter Burley Griffin and Gregory Burgess in Australia.
ING House in Amsterdam is a contemporary building by an anthroposophical architect which has received awards for its ecological design and approach to a self-sustaining ecology as an autonomous building and example of sustainable architecture.
Eurythmy
Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music.
Social finance and entrepreneurship
A number of banks, companies, charities, and schools around the world today develop co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the Gemeinschaftsbank für Leihen und Schenken in Bochum, Germany, founded in 1974.
Socially responsible banks founded out of anthroposophy include Triodos Bank, founded in the Netherlands in 1980 and also active in the UK, Germany, Belgium, Spain and France. Other examples include Cultura Sparebank, which dates from 1982, when a group of Norwegian anthroposophists began an initiative for ethical banking, though it began operating as a savings bank in Norway only in the late 1990s; La Nef in France; and RSF Social Finance in San Francisco.
Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture and sustainable finance.
Organizational development, counselling and biography work
Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries.
Speech and drama
There are also anthroposophical movements to renew speech and drama, the most important of which are based in the work of Marie Steiner-von Sivers (speech formation, also known as Creative Speech) and the Chekhov Method originated by Michael Chekhov (nephew of Anton Chekhov).
Art
Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with color, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and often these are derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre tall sculpture titled The Representative of Humanity, on display at the Goetheanum.
Other
Phenomenological approaches to science: pseudo-scientific ideas based on Goethe's philosophy of nature.
John Wilkes' fountain-like flowforms, sculptural forms that guide water into rhythmic movement for the purposes of decoration.
The Fellowship Community in Chestnut Ridge, New York, United States, which includes a retirement community and other anthroposophic projects.
The Harduf kibbutz in Israel.
Social goals
For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Herman Hesse, among others) was widely circulated. His main book on social reform is Toward Social Renewal.
Anthroposophy continues to aim at reforming society through maintaining and strengthening the independence of the spheres of cultural life, human rights and the economy. It emphasizes a particular ideal in each of these three realms of society:
Liberty in cultural life
Equality of rights, the sphere of legislation
Fraternity in the economic sphere
According to Cees Leijenhorst, "Steiner outlined his vision of a new political and social philosophy that avoids the two extremes of capitalism and socialism."
Steiner did influence Italian Fascism, which exploited "his racial and anti-democratic dogma." The fascist ministers Giovanni Antonio Colonna di Cesarò (nicknamed "the Anthroposophist duke"; he became an antifascist after taking part in Benito Mussolini's government) and Ettore Martinoli openly expressed their sympathy for Rudolf Steiner. Most members of the occult pro-fascist UR Group were Anthroposophists.
According to Egil Asprem, "Steiner’s teachings had a clear authoritarian ring, and developed a rather crass polemic against 'materialism', 'liberalism', and cultural 'degeneration'. [...] For example, anthroposophical medicine was developed to contrast with the 'materialistic' (and hence 'degenerate') medicine of the establishment."
Esoteric path
Paths of spiritual development
According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book How To Know Higher Worlds. The aim of these exercises is to develop higher levels of consciousness through meditation and observation. Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science.
Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered were true and false paths of spiritual investigation.
In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality.
Prerequisites to and stages of inner development
Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities:
Control over one's own thinking
Control over one's will
Composure
Positivity
Impartiality
Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling or intention the meditant seeks to arrive at pure thinking, a state exemplified by but not confined to pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress.
By focusing on symbolic patterns, images, and poetic mantras, the meditant can achieve consciously directed Imaginations that allow sensory phenomena to appear as the expression of underlying beings of a soul-spiritual nature.
By transcending such imaginative pictures, the meditant can become conscious of the meditative activity itself, which leads to experiences of expressions of soul-spiritual beings unmediated by sensory phenomena or qualities. Steiner calls this stage Inspiration.
By intensifying the will-forces through exercises such as a chronologically reversed review of the day's events, the meditant can achieve a further stage of inner independence from sensory experience, leading to direct contact, and even union, with spiritual beings ("Intuition") without loss of individual awareness.
Spiritual exercises
Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality.
Place in Western philosophy
Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes. "The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception."
Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking.
Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work, Philosophy of Freedom. In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis).
Christian and Jewish mystical thought have also influenced the development of anthroposophy.
Union of science and spirit
Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world. Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience.
Anthroposophy regards mainstream science as Ahrimanic.
Relationship to religion
Christ as the center of earthly evolution
Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution, whereby:
Christianity has evolved out of previous religions;
The being which manifests in Christianity also manifests in all faiths and religions, and each religion is valid and true for the time and cultural context in which it was born;
All historical forms of Christianity need to be transformed considerably to meet the continuing evolution of humanity.
Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term.
Divergence from conventional Christian thought
Steiner's views of Christianity diverge from conventional Christian thought in key places, and include gnostic elements:
One central point of divergence is Steiner's views on reincarnation and karma.
Steiner differentiated three contemporary paths by which he believed it possible to arrive at Christ:
Through heart-felt experiences of the Gospels; Steiner described this as the historically dominant path, but becoming less important in the future.
Through inner experiences of a spiritual reality; this Steiner regarded as increasingly the path of spiritual or religious seekers today.
Through initiatory experiences whereby the reality of Christ's death and resurrection are experienced; Steiner believed this is the path people will increasingly take.
Steiner also believed that there were two different Jesus children involved in the Incarnation of the Christ: one child descended from Solomon, as described in the Gospel of Matthew, the other child from Nathan, as described in the Gospel of Luke. (The genealogies given in the two gospels diverge some thirty generations before Jesus' birth, and 'Jesus' was a common name in biblical times.)
His view of the second coming of Christ is also unusual; he suggested that this would not be a physical reappearance, but that the Christ being would become manifest in non-physical form, visible to spiritual vision and apparent in community life for increasing numbers of people beginning around the year 1933.
He emphasized his belief that in the future humanity would need to be able to recognize the Spirit of Love in all its genuine forms, regardless of what name would be used to describe this being. He also warned that the traditional name of the Christ might be misused, and the true essence of this being of love ignored.
According to Jane Gilmer, "Jung and Steiner were both versed in ancient gnosis and both envisioned a paradigmatic shift in the way it was delivered."
As Gilles Quispel put it, "After all, Theosophy is a pagan, Anthroposophy a Christian form of modern Gnosis."
Maria Carlson stated "Theosophy and Anthroposophy are fundamentally Gnostic systems in that they posit the dualism of Spirit and Matter."
R. McL. Wilson in The Oxford Companion to the Bible agrees that Steiner and Anthroposophy are under the influence of gnosticism.
Robert A. McDermott says Anthroposophy belongs to Christian Rosicrucianism. According to Nicholas Goodrick-Clarke, Rudolf Steiner "blended modern Theosophy with a Gnostic form of Christianity, Rosicrucianism, and German Naturphilosophie".
Geoffrey Ahern states that Anthroposophy belongs to neo-gnosticism broadly conceived, which he identifies with Western esotericism and occultism.
According to Catholic scholars, Anthroposophy belongs to the New Age movement.
Judaism
Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction.
Steiner financed the publication of the book Die Entente-Freimaurerei und der Weltkrieg (1919) and also wrote the foreword for it, which was partly based upon his own ideas. The book advanced a conspiracy theory according to which World War I was a consequence of a collusion of Freemasons and Jews – still favorite scapegoats of conspiracy theorists – whose purpose was the destruction of Germany. Steiner in fact spent a large sum of money to publish "a now classic work of anti-Masonry and anti-Judaism". The writing was later enthusiastically received by the Nazi Party.
In his later life, Steiner was accused by the Nazis of being Jewish, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists sent to concentration camps.
Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy.
There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, forty Waldorf kindergartens and seventeen Waldorf schools (as of 2018). A number of these organizations are striving to foster positive relationships between the Arab and Jewish populations: The Harduf Waldorf school includes both Jewish and Arab faculty and students, and has extensive contact with the surrounding Arab communities, while the first joint Arab-Jewish kindergarten was a Waldorf program in Hilf near Haifa.
Christian Community
Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the Movement for Religious Renewal, now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination.
Reception
Anthroposophy's supporters include Saul Bellow, Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan, and Ibrahim Abouleish, and child psychiatrist Eva Frommer.
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However, authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Others, including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern, have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific.
Scientific basis
Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries.
Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy Geisteswissenschaft (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences.
Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy. As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world."
Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and neither reproducible nor testable. Carlo Willmann points out that as, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content.
Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience and its subsuming of natural science under "spiritual science." Hammer also asserts that the development of what he calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture is justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis.
Though Steiner saw that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them directly to apply his methods to achieve comparable results.
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation."
According to Dan Dugan, Steiner advocated the following pseudoscientific claims, which are also promoted by Waldorf schools:
a flawed theory of color;
ill-founded criticism of the theory of relativity;
idiosyncratic ideas about the motions of the planets;
support for vitalism;
doubts about germ theory;
an idiosyncratic approach to physiological systems;
the claim that "the heart is not a pump".
Religious nature
Two German scholars have called Anthroposophy "the most successful form of 'alternative' religion in the [twentieth] century." Other scholars have stated that Anthroposophy is "aspiring to the status of religious dogma". According to Maria Carlson, anthroposophy is a "positivistic religion" "offering a seemingly logical theology based on pseudoscience."
According to Swartz, Brandt, Hammer, and Hansson, Anthroposophy is a religion. They also call it a "settled new religious movement", while Martin Gardner called it a cult. Another scholar likewise calls it a new religious movement or a new spiritual movement; as early as 1924, Anthroposophy was labeled a "new religious movement" and an "occultist movement", and other scholars agree that it is a new religious movement. Both the theory and the practice of Anthroposophy have been said to display the characteristics of religion, and, according to Zander, Rudolf Steiner would plead no contest. According to Zander, Steiner's book Geheimwissenschaft [Occult Science] contains Steiner's mythology of cosmogenesis. Hammer notes that Anthroposophy is a synthesis which includes occultism, and that Steiner's occult doctrines bear a strong resemblance to post-Blavatskyan Theosophy (e.g. Annie Besant and Charles Webster Leadbeater). According to Helmut Zander, Steiner's clairvoyant insights always developed according to the same pattern: he took revised texts from theosophical literature and then passed them off as his own higher insights. Because he wanted to be not an occult storyteller but a (spiritual) scientist, he adapted his reading, which he claimed to have seen supernaturally in the world's memory, to the current state of technology. When, for example, the Wright brothers began flying with gliders and eventually with motorized aircraft in 1903, Steiner transformed the ponderous gondola airships of his Atlantis story into airplanes with elevators and rudders in 1904.
As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998, People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and that several California school districts therefore should not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show anthroposophy was a religion. A 2012 paper in legal science reports this verdict as provisional and disagrees with its result, arguing that anthroposophy was declared "not a religion" due to an outdated legal framework. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory. The French governmental anti-cults agency MIVILUDES reported that it remains vigilant about Anthroposophy, especially because of its deviant medical applications and its work with underage persons, and that the works of Grégoire Perra which lambast anthroposophical medicine do not constitute defamation. According to Perra, anthroposophical physicians hold that diseases are caused primarily by karma and demons rather than material causes, and treat the Gospel of Luke as their main handbook of medical science, leading them to believe they have magical powers and that medicine is essentially a form of magic. The professional French organization of anthroposophic physicians sued Perra over such claims; it was ordered to pay 25,000 euros in damages for abusively suing him.
Scholars state that Anthroposophy is influenced by Christian Gnosticism. In 1919, the Catholic Church issued an edict classifying Anthroposophy as "a neognostic heresy", despite the fact that Steiner "very well respected the distinctions on which Catholic dogma insists".
Some Baptist and mainstream academic heresiologists still appear inclined to agree with the narrower 1919 edict on dogma, and the Lutheran (Missouri Synod) apologist and heresiologist Eldon K. Winker quoted Ron Rhodes to the effect that Steiner's Christology is very similar to that of Cerinthus. Steiner did perceive "a distinction between the human person Jesus, and Christ as the divine Logos", which could be construed as Gnostic but not Docetic, since "they do not believe the Christ departed from Jesus prior to the crucifixion". "Steiner's Christology is discussed as a central element of his thought in Johannes Hemleben, Rudolf Steiner: A Documentary Biography, trans. Leo Twyman (East Grinstead, Sussex: Henry Goulden, 1975), pp. 96–100. From the perspective of orthodox Christianity, it may be said that Steiner combined a docetic understanding of Christ's nature with the Adoptionist heresy." Older scholarship holds that Steiner's Christology is Nestorian. According to Egil Asprem, "Steiner's Christology was, however, quite heterodox, and hardly compatible with official church doctrine."
Statements on race
Rudolf Steiner was an extreme pan-German nationalist and never disavowed that stance.
Some anthroposophical ideas challenged the National Socialist racialist and nationalistic agenda. In contrast, some American educators have criticized Waldorf schools for failing to equally include the fables and myths of all cultures, instead favoring European stories over African ones.
From the mid-1930s on, National Socialist ideologues attacked the anthroposophical worldview as being opposed to Nazi racist and nationalistic principles; anthroposophy considered "Blood, Race and Folk" as primitive instincts that must be overcome.
An academic analysis of the educational approach in public schools noted that "[A] naive version of the evolution of consciousness, a theory foundational to both Steiner's anthroposophy and Waldorf education, sometimes places one race below another in one or another dimension of development. It is easy to imagine why there are disputes [...] about Waldorf educators' insisting on teaching Norse tales and Greek myths to the exclusion of African modes of discourse."
In response to such critiques, the Anthroposophical Society in America published in 1998 a statement clarifying its stance:
We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race.
Tommy Wieringa, a Dutch writer who grew up among Anthroposophists, commented upon an essay by an Anthroposophist: "It was a meeting of old acquaintances: Nazi leaders such as Rudolf Hess and Heinrich Himmler already recognized a kindred spirit in Rudolf Steiner, with his theories about racial purity, esoteric medicine and biodynamic agriculture."
The racism of Anthroposophy has been characterized as spiritual and paternalistic (i.e. benevolent), while the racism of fascism is materialistic and often malign. Olav Hammer, a university professor and expert in new religious movements and Western esotericism, confirms that the racist and anti-Semitic character of Steiner's teachings can no longer be denied, even if it is "spiritual racism".
According to Munoz, Anthroposophy is racist from a materialist perspective (in which there are no reincarnations), but not from a spiritual perspective (in which reincarnation is presupposed).
Reception by Nazi regime in Germany
Though several prominent members of the Nazi Party were supporters of anthroposophy and its movements (including an agriculturalist, SS colonel Hermann Schneider, and Gestapo chief Heinrich Müller), anti-Nazis such as Traute Lafrenz, a member of the White Rose resistance movement, were also followers. Rudolf Hess, the Deputy Führer, was a patron of Waldorf schools and a staunch defender of biodynamic agriculture. "Before 1933, Himmler, Walther Darré (the future Reich Agriculture Minister), and Rudolf Höss (the future commandant of Auschwitz) had studied ariosophy and anthroposophy, belonged to the occult-inspired Artamanen movement, [...]"
"One of the most insightful contributions to this area is Peter Staudenmaier's case study of Anthroposophy, which has demonstrated the ambiguous role of Anthroposophists in fascist Italy and Nazi Germany." According to Staudenmaier, the fascist and Nazi authorities saw occultism not as deviant, but as deeply familiar.
See also
Esotericism in Germany and Austria
Pneumatosophy
Spiritual but not religious
References
Notes
Citations
External links
Rudolf Steiner Archive (Steiner's works online)
Steiner's complete works in German
Rudolf Steiner Handbook (PDF; 56 MB)
Goetheanum
Societies
General Anthroposophical Society
Anthroposophical Society in America
Anthroposophical Society in Great Britain
Anthroposophical Initiatives in India
Anthroposophical Society in Australia
Anthroposophical Society in New Zealand
Paradigm
In science and philosophy, a paradigm is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field. The word paradigm is Greek in origin, meaning "pattern".
Etymology
Paradigm comes from Greek παράδειγμα (paradeigma); "pattern, example, sample"; from the verb παραδείκνυμι (paradeiknumi); "exhibit, represent, expose"; and that from παρά (para); "beside, beyond"; and δείκνυμι (deiknumi); "to show, to point out".
In classical (Greek-based) rhetoric, a paradeigma aims to provide an audience with an illustration of a similar occurrence. The illustration is not meant to carry the audience to a conclusion by itself; rather, it is used to help guide them there.
One way a paradeigma can guide an audience is exemplified by the role of a personal accountant. It is not the job of a personal accountant to tell a client exactly what (and what not) to spend money on, but to guide the client in how money should be spent based on the client's financial goals. Anaximenes defined paradeigma as "actions that have occurred previously and are similar to, or the opposite of, those which we are now discussing".
The original Greek term παράδειγμα (paradeigma) was used by scribes in Greek texts (such as Plato's dialogues Timaeus [c. 360 BCE] and Parmenides) as one possibility for the model or the pattern that the demiurge supposedly used to create the cosmos.
The English-language term paradigm has technical meanings in the fields of grammar (as applied, for example, to declension and conjugation – the 1900 Merriam-Webster dictionary defines the technical use of paradigm only in the context of grammar) and of rhetoric (as a term for an illustrative parable or fable). In linguistics, Ferdinand de Saussure (1857–1913) used paradigm to refer to a class of elements with similarities (as opposed to syntagma, a class of elements expressing relationships).
The Merriam-Webster Online dictionary defines one usage of paradigm as "a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated; broadly: a philosophical or theoretical framework of any kind."
The Oxford Dictionary of Philosophy (2008) attributes the following description of the term in the history and philosophy of science to Thomas Kuhn's 1962 work The Structure of Scientific Revolutions:
Kuhn suggests that certain scientific works, such as Newton's Principia or John Dalton's New System of Chemical Philosophy (1808), provide an open-ended resource: a framework of concepts, results, and procedures within which subsequent work is structured. Normal science proceeds within such a framework or paradigm. A paradigm does not impose a rigid or mechanical approach, but can be taken more or less creatively and flexibly.
Scientific paradigm
The Oxford English Dictionary defines a paradigm as "a pattern or model, an exemplar; a typical instance of something, an example". The historian of science Thomas Kuhn gave the word its contemporary meaning when he adopted the word to refer to the set of concepts and practices that define a scientific discipline at any particular period of time. In his book, The Structure of Scientific Revolutions (first published in 1962), Kuhn defines a scientific paradigm as: "universally recognized scientific achievements that, for a time, provide model problems and solutions to a community of practitioners, i.e.,
what is to be observed and scrutinized
the kind of questions that are supposed to be asked and probed for answers in relation to this subject
how these questions are to be structured
what predictions are made by the primary theory within the discipline
how the results of scientific investigations should be interpreted
how an experiment is to be conducted, and what equipment is available to conduct the experiment.
In The Structure of Scientific Revolutions, Kuhn saw the sciences as going through alternating periods of normal science, when an existing model of reality dominates a protracted period of puzzle-solving, and revolution, when the model of reality itself undergoes sudden drastic change. Paradigms have two aspects. Firstly, within normal science, the term refers to the set of exemplary experiments that are likely to be copied or emulated. Secondly, underpinning this set of exemplars are shared preconceptions, made prior to – and conditioning – the collection of evidence. These preconceptions embody both hidden assumptions and elements that Kuhn describes as quasi-metaphysical. The interpretations of the paradigm may vary among individual scientists.
Kuhn was at pains to point out that the rationale for the choice of exemplars is a specific way of viewing reality: that view and the status of "exemplar" are mutually reinforcing. For well-integrated members of a particular discipline, its paradigm is so convincing that it normally renders even the possibility of alternatives unconvincing and counter-intuitive. Such a paradigm is opaque, appearing to be a direct view of the bedrock of reality itself, and obscuring the possibility that there might be other, alternative imageries hidden behind it. The conviction that the current paradigm is reality tends to disqualify evidence that might undermine the paradigm itself; this in turn leads to a build-up of unreconciled anomalies. It is the latter that is responsible for the eventual revolutionary overthrow of the incumbent paradigm, and its replacement by a new one. Kuhn used the expression paradigm shift (see below) for this process, and likened it to the perceptual change that occurs when our interpretation of an ambiguous image "flips over" from one state to another. (The rabbit-duck illusion is an example: it is not possible to see both the rabbit and the duck simultaneously.) This is significant in relation to the issue of incommensurability (see below).
An example of a currently accepted paradigm would be the standard model of physics. The scientific method allows for orthodox scientific investigations into phenomena that might contradict or disprove the standard model; however, grant funding would be proportionately more difficult to obtain for such experiments, depending on the degree of deviation from the accepted standard model theory that the experiment would test for. To illustrate the point, an experiment to test for the mass of neutrinos or the decay of protons (small departures from the model) is more likely to receive money than experiments that look for the violation of the conservation of momentum, or ways to engineer reverse time travel.
Mechanisms similar to the original Kuhnian paradigm have been invoked in various disciplines other than the philosophy of science. These include: the idea of major cultural themes, worldviews (see below), ideologies, and mindsets. They have somewhat similar meanings that apply to smaller- and larger-scale examples of disciplined thought. In addition, Michel Foucault used the terms episteme and discourse, mathesis, and taxinomia, for aspects of a "paradigm" in Kuhn's original sense.
Paradigm shifts
In The Structure of Scientific Revolutions, Kuhn wrote that "the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12).
Paradigm shifts tend to appear in response to the accumulation of critical anomalies as well as in the form of the proposal of a new theory with the power to encompass both older relevant data and explain relevant anomalies. New paradigms tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, a statement generally attributed to physicist Lord Kelvin famously claimed, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." Five years later, Albert Einstein published his paper on special relativity, which challenged the set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years. In this case, the new paradigm reduces the old to a special case in the sense that Newtonian mechanics is still a good model for approximation for speeds that are slow compared to the speed of light. Many philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited.
Some examples of contemporary paradigm shifts include:
In medicine, the transition from "clinical judgment" to evidence-based medicine
In social psychology, the transition from p-hacking to replication
In software engineering, the transition from the Rational Paradigm to the Empirical Paradigm
In artificial intelligence, the transition from classical AI to data-driven AI
Kuhn's idea was itself revolutionary in its time. It caused a major change in the way that academics talk about science, and so it may be that it caused, or was part of, a "paradigm shift" in the history and sociology of science. However, Kuhn would not have recognized it as such: because these fields belong to the social sciences, scholars can continue to use earlier ideas to discuss the history of science.
Paradigm paralysis
Perhaps the greatest barrier to a paradigm shift, in some cases, is the reality of paradigm paralysis: the inability or refusal to see beyond the current models of thinking. This is similar to what psychologists term confirmation bias and the Semmelweis reflex. Examples include the rejection of the heliocentric theories of Aristarchus of Samos, Copernicus, and Galileo, and the initial rejection of inventions such as electrostatic photography (xerography) and the quartz clock.
Incommensurability
Kuhn pointed out that it could be difficult to assess whether a particular paradigm shift had actually led to progress, in the sense of explaining more facts, explaining more important facts, or providing better explanations, because the understanding of "more important", "better", and so on changed with the paradigm. The two versions of reality are thus incommensurable. Kuhn's version of incommensurability has an important psychological dimension, apparent from his analogy between a paradigm shift and the flip-over involved in some optical illusions. However, he subsequently diluted his commitment to incommensurability considerably, partly in the light of other studies of scientific development that did not involve revolutionary change. One of the examples of incommensurability that Kuhn used was the change in the style of chemical investigations that followed the work of Lavoisier on atomic theory in the late 18th century. In this change, the focus had shifted from the bulk properties of matter (such as hardness, colour, and reactivity) to studies of atomic weights and quantitative studies of reactions. He suggested that it was impossible to make the comparison needed to judge which body of knowledge was better or more advanced. However, this change in research style (and paradigm) eventually, after more than a century, led to a theory of atomic structure that accounts well for the bulk properties of matter; see, for example, Brady's General Chemistry. According to P. J. Smith, this ability of science to back off, move sideways, and then advance is characteristic of the natural sciences, but contrasts with the position in some social sciences, notably economics.
This apparent ability does not guarantee that the account is veridical at any one time, of course, and most modern philosophers of science are fallibilists. However, members of other disciplines do see the issue of incommensurability as a much greater obstacle to evaluations of "progress"; see, for example, Martin Slattery's Key Ideas in Sociology.
Subsequent developments
Opaque Kuhnian paradigms and paradigm shifts do exist. A few years after the discovery of mirror neurons, which provide a hard-wired basis for the human capacity for empathy, the scientists involved were unable to identify the incidents that had directed their attention to the issue. Over the course of the investigation, their language and metaphors had changed so much that they themselves could no longer interpret all of their own earlier laboratory notes and records.
Imre Lakatos and research programmes
However, many instances exist in which change in a discipline's core model of reality has happened in a more evolutionary manner, with individual scientists exploring the usefulness of alternatives in a way that would not be possible if they were constrained by a paradigm. Imre Lakatos suggested (as an alternative to Kuhn's formulation) that scientists actually work within research programmes. In Lakatos' sense, a research programme is a sequence of problems, placed in order of priority. This set of priorities, and the associated set of preferred techniques, is the positive heuristic of a programme. Each programme also has a negative heuristic; this consists of a set of fundamental assumptions that – temporarily, at least – takes priority over observational evidence when the two appear to conflict.
This latter aspect of research programmes is inherited from Kuhn's work on paradigms and represents an important departure from the elementary account of how science works. According to that account, science proceeds through repeated cycles of observation, induction, hypothesis-testing, and so on, with the test of consistency with empirical evidence imposed at each stage. Paradigms and research programmes allow anomalies to be set aside where there is reason to believe that they arise from incomplete knowledge (about either the substantive topic or some aspect of the theories implicitly used in making observations).
Larry Laudan: Dormant anomalies, fading credibility, and research traditions
Larry Laudan has also made two important contributions to the debate. Laudan believed that something akin to paradigms exists in the social sciences (Kuhn had contested this; see below); he referred to these as research traditions. Laudan noted that some anomalies become "dormant" if they survive a long period during which no competing alternative has shown itself capable of resolving them. He also presented cases in which a dominant paradigm had withered away because it lost credibility when viewed against changes in the wider intellectual milieu.
In the social sciences
Kuhn himself did not consider the concept of paradigm appropriate for the social sciences. He explains in his preface to The Structure of Scientific Revolutions that he developed the concept of paradigm precisely to distinguish the social from the natural sciences. While visiting the Center for Advanced Study in the Behavioral Sciences in 1958 and 1959, surrounded by social scientists, he observed that they were never in agreement about the nature of legitimate scientific problems and methods. He explains that he wrote the book precisely to show that there can never be any paradigms in the social sciences. Mattei Dogan, a French sociologist, in his article "Paradigms in the Social Sciences", develops Kuhn's original thesis that there are no paradigms at all in the social sciences, since the concepts are polysemic and the disciplines are marked by deliberate mutual ignorance between scholars and by the proliferation of schools. Dogan provides many examples of the non-existence of paradigms in the social sciences in his essay, particularly in sociology, political science and political anthropology.
However, both Kuhn's original work and Dogan's commentary are directed at disciplines that are defined by conventional labels (such as "sociology"). While it is true that such broad groupings in the social sciences are usually not based on a Kuhnian paradigm, each of the competing sub-disciplines may still be underpinned by a paradigm, research programme, research tradition, and/or professional imagery. These structures motivate research, provide it with an agenda, define what is and is not anomalous evidence, and inhibit debate with other groups that fall under the same broad disciplinary label. (A good example is provided by the contrast between Skinnerian radical behaviourism and personal construct theory (PCT) within psychology. The most significant of the many ways these two sub-disciplines of psychology differ concerns meanings and intentions: in PCT, they are seen as the central concern of psychology; in radical behaviourism, they are not scientific evidence at all, as they cannot be directly observed.)
Such considerations explain the conflict between the Kuhn/Dogan view and the views of others (including Larry Laudan, see above) who do apply these concepts to the social sciences.
M. L. Handa (1986) introduced the idea of the "social paradigm" in the context of the social sciences and identified its basic components. Like Kuhn, Handa addressed the question of paradigm change, the process popularly known as a "paradigm shift". In this respect, he focused on the social circumstances that precipitate such a shift and its effects on social institutions, including the institution of education. This broad shift in the social arena, in turn, changes the way the individual perceives reality.
Another use of the word paradigm is in the sense of "worldview". For example, in social science, the term is used to describe the set of experiences, beliefs and values that affect the way an individual perceives reality and responds to that perception. Social scientists have adopted the Kuhnian phrase "paradigm shift" to denote a change in how a given society goes about organizing and understanding reality. A "dominant paradigm" refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community's cultural background and by the context of the historical moment. Hutchin outlines some conditions that help a system of thought become an accepted dominant paradigm:
Professional organizations that give legitimacy to the paradigm
Dynamic leaders who introduce and promote the paradigm
Journals and editors who write about the system of thought, both disseminating information essential to the paradigm and giving it legitimacy
Government agencies who give credence to the paradigm
Educators who propagate the paradigm's ideas by teaching it to students
Conferences devoted to discussing ideas central to the paradigm
Media coverage
Lay groups, or groups based around the concerns of lay persons, that embrace the beliefs central to the paradigm
Sources of funding to further research on the paradigm
Other uses
The word paradigm is also still used to indicate a pattern or model, or an outstandingly clear or typical example or archetype. The term is frequently used in this sense in the design professions: design paradigms, or archetypes, comprise functional precedents for design solutions. The best-known references on design paradigms are Design Paradigms: A Sourcebook for Creative Visualization, by Wake, and Design Paradigms, by Petroski.
The term is also used in cybernetics. Here it means (in a very wide sense) a conceptual protoprogram for reducing a chaotic mass to some form of order. Note the similarities to the concept of entropy in chemistry and physics: a paradigm there would be a sort of prohibition against proceeding with any action that would increase the total entropy of the system. To create a paradigm requires a closed system that accepts changes; thus a paradigm can only apply to a system that is not in its final stage.
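For readers unfamiliar with the physical analogy, the relevant results can be stated compactly (standard thermodynamics, added here for illustration only). Boltzmann's relation

S = k_B \ln W

connects the entropy S to the number of accessible microstates W (a measure of disorder), and the second law of thermodynamics states that

\Delta S_{\text{total}} \ge 0

for an isolated system. On this reading, the cybernetic "paradigm" described above plays the role of a constraint that forbids, within the system it governs, any action for which \Delta S > 0, that is, any action that would increase disorder.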
Beyond its use in the physical and social sciences, Kuhn's paradigm concept has been analysed for its applicability in identifying "paradigms" in worldviews at specific points in history. One example is Matthew Edward Harris' book The Notion of Papal Monarchy in the Thirteenth Century: The Idea of Paradigm in Church History. Harris stresses the primarily sociological importance of paradigms, pointing towards Kuhn's second edition of The Structure of Scientific Revolutions. Although obedience to popes such as Innocent III and Boniface VIII was widespread, even written testimony from the time showing loyalty to the pope does not demonstrate that the writer had the same worldview as the Church, with the pope at its centre. The difference between paradigms in the physical sciences and in historical organisations such as the Church is that the former, unlike the latter, require technical expertise rather than the repetition of statements. In other words, after scientific training through what Kuhn calls "exemplars", one could not genuinely believe that, to take a trivial example, the earth is flat; whereas a thinker such as Giles of Rome in the thirteenth century, having written in favour of the pope, could then easily write similarly glowing things about the king. A writer such as Giles would have wanted a good job from the pope; he was a papal publicist. However, Harris writes that "scientific group membership is not concerned with desire, emotions, gain, loss and any idealistic notions concerning the nature and destiny of humankind...but simply to do with aptitude, explanation, [and] cold description of the facts of the world and the universe from within a paradigm".
See also
Basic beliefs
Concept
Conceptual framework
Conceptual model
Conceptual schema
Contextualism
Dogma
Flying geese paradigm
Heuristic
Ideology
Mental model
Mental representation
Metanarrative
Methodology
Mindset
Perspectivism
Point of view (philosophy)
Poststructuralism
Programming paradigm
Schema (psychology)
School of thought
Set (psychology)
Triune continuum paradigm
World view
The history of the various paradigms in evolutionary biology (Wikiversity)
Footnotes
References
Clarke, Thomas and Clegg, Stewart (eds). Changing Paradigms. London: HarperCollins, 2000.
Dogan, Mattei. "Paradigms in the Social Sciences", in International Encyclopedia of the Social and Behavioral Sciences, Volume 16, 2001.
Handa, M. L. (1986). "Peace Paradigm: Transcending Liberal and Marxian Paradigms". Paper presented at the International Symposium on Science, Technology and Development, New Delhi, India, March 20–25, 1987. Mimeographed at O.I.S.E., University of Toronto, Canada.
Harris, Matthew Edward. The Notion of Papal Monarchy in the Thirteenth Century: The Idea of Paradigm in Church History. Lewiston, New York: Edwin Mellen Press, 2010.
Hutchin, Ted. The Right Choice : Using Theory of Constraints for Effective Leadership, Hoboken : Taylor and Francis, 2013.
Kuhn, Thomas S. The Structure of Scientific Revolutions, 3rd Ed. Chicago and London: Univ. of Chicago Press, 1996. – Google Books Aug. 2011
Masterman, Margaret, "The Nature of a Paradigm", pp. 59–89 in Imre Lakatos and Alan Musgrave. Criticism and the Growth of Knowledge. Cambridge: Cambridge Univ. Press, 1970.
Popper, Karl. The Logic of Scientific Discovery, 1934 (as Logik der Forschung; English translation 1959).
The Fourth Paradigm: Data-Intensive Scientific Discovery, Microsoft Research, 2009, http://fourthparadigm.org
Encyclopædia Britannica, Univ. of Chicago, 2003.
Cristianini, Nello. "On the Current Paradigm in Artificial Intelligence". AI Communications 27 (1): 37–43, 2014.